Hacker News | crizzlenizzle's comments

I remember seeing VRML as a kid and I was mind-blown. If I recall correctly, it ran in IE 4 with an add-on. I really thought it was only a few more weeks until I could build games like Doom myself, running in a browser.

Well, my assessment was off by over a decade lol.

Last week I went to some meetup and met a young guy raving that every website is going to be VR in about two years. He didn’t even know about VRML.


I built an entire website in VRML. It was fun.

I recall the biggest hindrance to VRML, aside from requiring a plugin, was that most computers didn’t have hardware acceleration for 3D rendering. So performance was too poor to do anything useful for wide audiences.


Still using Google Workspace in our company to collaborate and send/receive emails, still using Google Maps for recommendations and routes, still using a Google phone. I guess the answer is no.


Are you usually an early or a late adopter of tech?


Nice, I like lightweight and modern terminal emulators. Just installed kitty and compared it in a sloppy way to foot [0] (by running `xxd /dev/urandom` side-by-side) and foot appears to be faster.

[0] https://codeberg.org/dnkl/foot
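For anyone reproducing the sloppy comparison: run the same output-heavy command in each terminal and compare wall-clock time (the 100 MB cap is arbitrary, just so the run terminates):

```shell
# Run this in each terminal emulator and compare the `real` time.
# -l caps the input at 100 MB so the test actually finishes.
time xxd -l 100000000 /dev/urandom
```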


kitty is written in Python, which instantly lowers the ceiling for performance by an order of magnitude. I discarded it as an option in the past: the most important requirement for me is that the terminal cannot crash, and I can't trust a Python program not to.


Much of kitty is written in C, particularly the (very fast) rendering pipeline.


I like the Kitty graphics protocol, but never used it. Didn't know Kitty was Python; I always assumed it was compiled, given the reported speed benefits. Maybe I'd benefit from switching to foot since my setup is mostly Wayland nowadays; the extra startup speed would be great. Foot also seems to have working terminal clipboard integration with `micro`.


Foot is really great. I often open terminals to execute single commands, so I appreciate its short startup time.


This was almost three years ago, but the last time we used the database as the engine for Laravel’s queue subsystem, it exploded due to database table locks under high load. We switched to Redis and things just worked.
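For anyone hitting the same locks, the driver swap itself is a one-line change in a stock Laravel setup (assuming Redis is already running and configured):

```ini
# .env: switch Laravel's queue driver from the database to Redis
QUEUE_CONNECTION=redis
```

(On older Laravel versions the variable was `QUEUE_DRIVER`.)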


> Plain old files on the file system

…and accessed over SFTP.

I worked for a company in the health industry, and one of the labs we integrated refused to call an HTTPS endpoint whenever a result was ready, so we had to poll every _n_ mins to fetch results. That worked well until covid happened and there were so many test results, causing all sorts of issues like reading empty files (because they were still being written to disk) and things of that nature.
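The empty-file race has a standard fix on the writer's side: write to a temporary name, then rename into place. A sketch in Python (not the lab's actual code, obviously):

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Write to a temp file in the same directory, then rename it into place,
    so a reader polling the directory never sees a half-written file
    (rename within a filesystem is atomic on POSIX)."""
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # data is on disk before the rename
        os.replace(tmp, path)     # atomic rename onto the final name
    except BaseException:
        os.unlink(tmp)
        raise
```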


Sounds like Germany’s company register[1].

You can search for companies, select the documents you’d like to see (like shareholder lists), then you go through a checkout process and pay 0 EUR (it used to be a few euros years ago), and then you can finally download your file. Still a super tedious process, but at least it's free nowadays.

[1] https://www.unternehmensregister.de/ureg/


> Disable IPv6: this approach relies on ARP, which IPv6 doesn't use

I wish more people would care about IPv6.


Especially when the reality is… you can almost go IPv6 only nowadays if you wanted to.

I went down the rabbit hole recently, switching my network to IPv6 primary with IPv4 as the fallback. The ultimate test was disabling IPv4 for a weekend to see what, if anything, broke.

I had set up DNS64, NAT64 and 464XLAT. The only weirdness is how Windows clients handle IPv6 literals in UNC paths, which is super ugly, and how some applications (like Discord calls) will actually embed IPv4 literals. Discord apparently does that for the relay servers for calls.

Those things (and the rare website not supporting it) aside, I could actually be IPv6 only. I have IPv4 enabled as a fallback now, but it’s no longer primary on my network.


464XLAT should work fine with ipv4 literals, no? At least on macOS, this will get routed to a local 192.0.0.2 interface, which does the CLAT, translates it to an ipv6 64:ff9b::<ipv4> address, and relays it to your nat64 server. The ipv4-only software doesn't know any different, and the only traffic going on your LAN is ipv6.

I'm not sure if windows works the same way though...

(Edit: Looks like windows can do this, but it only configures it for WWAN interfaces, go figure: https://techcommunity.microsoft.com/t5/windows-os-platform/c...)
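For reference, the address synthesis the CLAT does is just embedding the v4 address in a /96 prefix (RFC 6052); a Python sketch:

```python
import ipaddress

def nat64_synthesize(v4: str, prefix: str = "64:ff9b::/96") -> ipaddress.IPv6Address:
    """Embed an IPv4 address in a NAT64 /96 prefix, RFC 6052 style.
    64:ff9b::/96 is the well-known prefix; deployments may use their own."""
    net = ipaddress.ip_network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int(ipaddress.IPv4Address(v4)))

print(nat64_synthesize("192.0.2.1"))  # 64:ff9b::c000:201
```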


Discord calls on Windows were the only exception I ran into because, as you found yourself, they functionally had to fall back on DNS64. In general, peer-to-peer situations are likely one of the few cases where things end up falling back onto DNS64 for resolution. This isn’t something I entirely realized myself — I’m apparently far from the only person to discover this behavior (Discord embedding IPv4 literals for its relay servers); Discord has had tickets open about it for years.

I appreciated my son being patient on that one, and he appreciated how seriously I dug into everything. We have IPv4 enabled again as a fallback, but the family agreed to an IPv6-only weekend as a test, and that was the only thing (outside of one website) that failed.

Everything else worked perfectly, including tons of legacy devices and software, some of which had no concept of IPv6.

For 464XLAT on clients, phones are actually the leaders here. It’s worked perfectly on at least iOS (and I assume Android) for a LONG time because of its built-in automatic tunneling. macOS had some recent improvements in Ventura to make things easier. Windows absolutely has some quirks, the biggest being IPv6 literals in UNC paths ending up using a domain Microsoft doesn’t actually own (a potentially huge future attack vector).


> you can almost go IPv6 only nowadays if you wanted to.

I think that is the reason why many aren't enthusiastic about it.


Again, I disabled IPv4 purely for testing reasons, to guarantee nothing was using IPv4 without me knowing it. There’s no reason to actually disable IPv4 as a fallback.

Over 99.999% of traffic through my home network is IPv6 now. The tiny remainder that has to use IPv4 does, without issue. 5 months on, and nary a complaint. Just works.

The world is ready for IPv6 as your primary, with IPv4 as the fallback.


Not really. When I worked at a networking company, we observed that connections between hosts via IPv6 were often worse than with IPv4. By connection, I mean between two hops en route to your destination.


I understand that. I'm just saying that the people who don't want to try IPv6 aren't motivated by that. For many the calculus is simply functionality divided by effort. They're not configuring IPv6 because it's hard or because it doesn't work. They're not configuring IPv6 because they already use IPv4 and it still works for them.


I wish ISPs would care. CGNAT sucks.


I’ve been behind CGNAT for the last 1.5 years without issue. What am I missing? I actually prefer my router not being bombarded by connection attempts all day.


Try connecting to an SSH server for more than a few hours without passing traffic and then have the server be the one to send a message. Oops! Your ISP tore down the NAT association and you have no idea the server isn't sending anything until you try to communicate with the server and get a timeout / RST.

NAT breaks TCP, but at least with consumer NAT you're in control of the timeouts on your router. With CGNAT you're at the mercy of an ISP that likely optimizes for HTTP and has low timeouts that you can't control.


I actually used to have that issue years ago at work. To work around that I just enabled a keepalive (ServerAliveInterval maybe?) setting in my ssh config. I don’t connect to any ssh servers outside my house for long periods of time, so I haven’t encountered that. Thanks for the heads up, good info!
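For reference, it is indeed ServerAliveInterval; a client-side keepalive that survives most NAT timeouts looks like this in ~/.ssh/config:

```text
# ~/.ssh/config: send a protocol-level keepalive every 60s,
# give up after 3 unanswered probes (~3 minutes)
Host *
    ServerAliveInterval 60
    ServerAliveCountMax 3
```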


OpenSSH does enable TCP keepalives by default so that it can detect and close dead connections, but the keepalive interval is far too high to work around bad NATs.

Kind of related to the OP, I spent a decent amount of time trying to have my SSH sessions survive a sleep on Windows. With keepalive disabled, proper Wi-Fi adapter sleep behavior and long enough DHCP leases, I was able to put my PC to sleep and come back the next day and still have my sessions active on resume. Unfortunately it wasn't too practical to disable keepalive as sessions that really do crash never get cleaned up.


Especially when NDP covers all of the features of ARP.


> Especially when NDP covers all of the features of ARP

... and more.

and lots of options with varying levels of support. Too many switches and flags to fiddle with.

Someone at the IETF needs to publish a Best Current Practice list and deprecate all the other options.


I wish I could care about IPv6, but I've never used it: my ISP doesn't provide it, and I've never once seen it deployed in a business environment.


Seems unlikely. The world has made its peace with NAT, and IPv4 is simpler and therefore easier to understand. IPv6 isn't happening.


Roughly 40% (and rising) of Google users use IPv6.

https://www.google.com/intl/en/ipv6/statistics.html


Without realising and without having had to set it up themselves.


So? The vast majority of IPv4 users also do not realize and did not have to set it up themselves.


No, but the vast majority of people who understand enough IPv4 to set up a home or small office network still don't understand IPv6.


There's basically nothing to configure with IPv6.

With IPv4 you need an address, a gateway, netmasks, DNS.

On v6, as long as you have a working router sending RA (router advertisement) packets, clients will self-configure via SLAAC. Granted, same with IPv4 and DHCP.

If you don't have a router, most things should work thanks to link-local + mDNS.

You can easily pop a second router on the network to bridge two LANs, no need to reconfigure the DHCP. Gateways self-advertise, etc.

The point I'm trying to make is that most people trying to configure their IPv4 network have a functional IPv6 network the moment they plug the cables in (on Linuxes at least, not sure about other platforms).


Is that mostly mobile phones while on cellular data, perhaps?


I am sure a large percentage of it is. That being said, in my area I only have 2 choices of ISP, and both have supported 'dynamic' IPv6 for 3-4 years.


Or is it just happening extremely slowly? I don't think we can count IPv6 out yet.


I hope you're right, if only so that cgNAT goes away someday. But I'm pessimistic on that front. cgNAT is too easy and works just well enough to make adopting something better too low a priority to ever happen.


The increasing prices of IPv4 address blocks will probably drive adoption of IPv6. The increased complexity will be outweighed by the elimination of scarcity that IPv6 brings. If we are still using IPv4 in 2100 that would be tragic. IPv4 block pricing: https://ipv4marketgroup.com/ipv4-pricing/


Who cares about IPv6 on a home server that you might not even want to expose to the public internet anyway?


I care. My ISP is deploying IPv6-only with NAT64 translation, and some of my servers hosted elsewhere don't even have a public IPv4 address to SSH into.


I do (and I guess I'm not alone...) -- I have IPv6-exposed machines over an HE.net tunnel. Some of the things are ONLY accessible over IPv6 (because nobody needs them over IPv4, so that's enough).


I came to the comment page just to write the same.


Yes, you contact support, provide them a description of what is faulty (the disk with the exact serial number), and they'll replace it, usually within 30-60 mins.

Provisioning of servers was always quite fast. Same day or the next business day.

My experience is a little dated; I used to order a bunch of dedicated boxes from them for our clients, and with Hetzner we always had the best experience. Also the most bang for the buck.


Is it correct though?

I’ve been toying around with ChatGPT for a few weeks now and I encountered a few situations in which ChatGPT was like 90% accurate at best. Things like suggesting snippets of configuration files or plugin research. It’s good to get an idea and get started somewhere, but I certainly cannot trust it blindly.


What I've been telling everyone is that you cannot (should not) ask ChatGPT a question whose answer you cannot independently verify yourself.

This is kind of what makes it good for generating code, because everything it generates can be pretty quickly verified and validated by another machine (interpreter/compiler).

Makes it not so great for writing essays on books you didn't read, and especially for doing math you don't understand... because it can't do math AT ALL.
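The cheapest machine-check of that kind for generated Python is syntax-only; it rules out nothing but parse errors, but it is fast:

```python
def parses(source: str) -> bool:
    """Cheapest possible validation of generated Python: does it compile?
    Passing this check only rules out syntax errors, not wrong logic."""
    try:
        compile(source, "<generated>", "exec")
        return True
    except SyntaxError:
        return False

print(parses("def f(x): return x + 1"))  # True
print(parses("def f(x) return x + 1"))   # False
```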


I was kind of thinking about this.

Let's hypothetically assume we have some sort of AGI and we can ask it to write programs and text and nothing else.

Is there anyone on this planet who would think that they don't need to look at the generated code? I mean imagine a manager simply feeding in tickets and getting a finished application out without ever knowing how it was produced.

The application is business critical, and any kind of mistake could ruin his business, which puts the manager at the complete mercy of the AI.

Now you might say that this happens with humans as well but when humans cause problems we let other humans review and test their code.

AI causes problems? Let's add more humans. Wait a minute...


> everything it generates can be pretty quickly verified and validated by another machine

It can be verified in a sense that it builds, but that doesn't mean that it actually does what you asked it to do, or that it does it on all valid inputs. The worst bugs to track down are silent logic bugs.


For math, I'm kind of surprised that it can't recognize "this is math" and then handle that with normal calculations instead of the language model. I assume we'll see that before long.


I really want Wolfram|Alpha to be integrated into this... that'd be nice. Also, if they could make W|A any faster than a glacier while they're at it, that'd be great.


A good trick is to ask it to translate the request into commands of your choosing, like asking it to generate Python code to do the calculation. Another thing that works well is to turn it into a command extraction problem: give it examples of the kinds of commands you want, and build an interpreter for those commands.

I agree, we’re not far from that, or we’re there now.
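A minimal sketch of the command-extraction idea (the command names are made up and the model call is left out; the point is that the arithmetic runs in your interpreter, not the model):

```python
import ast
import operator

# Tiny safe interpreter for arithmetic "commands" the model is asked to emit,
# e.g. "add(3, mul(4, 5))" instead of free-form prose.
OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def run_command(text: str):
    """Parse and evaluate a restricted call expression like add(1, mul(2, 3))."""
    node = ast.parse(text, mode="eval").body

    def ev(n):
        if isinstance(n, ast.Call) and isinstance(n.func, ast.Name) and n.func.id in OPS:
            return OPS[n.func.id](*[ev(a) for a in n.args])
        if isinstance(n, ast.Constant) and isinstance(n.value, (int, float)):
            return n.value
        raise ValueError(f"disallowed expression: {ast.dump(n)}")

    return ev(node)

# e.g. the model answers "add(3, mul(4, 5))" for "what is 3 + 4 * 5?"
print(run_command("add(3, mul(4, 5))"))  # 23
```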


I'm leaning towards using it for things I already know exactly how to do -- including a very clear idea of the result. In these contexts, it can save some mental workload / time.


Don't think so. There's clearly the beginning of a while loop near the top of the obfuscated version, and there are no loops at all in the 'de-obfuscated' version.


Here's GPT's own explanation of what the purpose of that while loop is:

---

This code uses JavaScript's `eval` function to obfuscate the code by looping over an array of strings and passing them as arguments to `eval` to create a variable. It also uses an anonymous function to obfuscate the code. The code is deobfuscated by replacing the `eval` function and the anonymous function with their respective strings.


That explanation is also not correct! The while loop is an obfuscation gadget (of sorts), but it doesn't use eval, it uses push and shift to rotate the array. The only use of eval is 'eval("find")', which is not top grade obfuscation.


That's not generally a good enough indicator; plenty of obfuscation involves loops that otherwise aren't hit.


Yeah, pretty sure that loop is there to modify the string lookup table at runtime so you can't just statically replace it in the code. It's not strong obfuscation, but in properly deobfuscated code that loop wouldn't exist.


The while loop does not come from the original code. Probably part of the obfuscation https://twitter.com/AlexAlexandrius/status/16178998824000839...


Yeah, an example I was shown was Python code to process some data. It was 30 lines of correct-looking trivial boilerplate code, except for one regex to do the actual processing. The regex was hopelessly wrong.

Clearly if you didn't know how to write the other 29 lines of code there's no way you are going to be able to debug the regex.


The optimistic way to look at it though is that it wrote the boring 29 lines that you didn't want to write and got you straight to the actual problem that needs solving.


We recently had an ad-hoc experiment like this as well: "give us basic config management code to download a service, add a systemd service for it, deploy a config and set up reloading of the service"

And it had some funny mistakes in there: something called "Reload service XYZ" that was actually a hard restart of the service, rather silly file locations, and such, sure.

But at the same time, it saved us an hour or two of boilerplate setup and even dug up a somewhat smart way to validate the configuration for this very specific service. This allowed us to jump more into understanding the service, tuning the config, and setting up good tests for the setup, instead of writing the same boring 20 resources in a config management tool.

I guess I could also ask if we could have some better form of service or config management which eliminates this boilerplate... but ChatGPT made our current day-to-day work a little easier there.


Yes, and honestly I think this is the actual potential win here, especially in boilerplate-heavy languages (Java, I'm looking at you in particular). So if this turns out to be the case it could be good for programmer productivity while skewing the dev landscape towards tools, frameworks, languages etc that the prevailing AI models work well with.


90% accurate sounds impressive, and it is, but it's still 100% incorrect almost always.


But does it follow the 80/20 rule ?

In this case, 80% of the answer for 20% of the effort ?


It has this really amazing and terrifying quality of being a really good bullshitter. I asked it an AWS question once and it gave me 4 very convincing-sounding answers. I went to try them. 2 of them were complete bullshit, as in the commands don't even exist. The only good answer was the one I already had. It's in this uncanny valley of bullshitting. It can be quite dangerous in some situations, especially if one is lulled into trusting it.


I recognize what you are describing and I actually think that its predisposition to doing this has become worse in the past week or so.


In my experience, ChatGPT often comes up with pseudo syntax.


It often happens that ChatGPT will confidently give you something that _looks_ like what you're asking for despite it being awfully wrong - sometimes you can make it "understand" its mistake and correct it, sometimes not. It's usually not that far off, but trusting it blindly is just out of the question.


I was having ChatGPT give me wildly wrong answers, and when I asked for a source it provided me fake websites and confidently quoted information from sites that have never existed.


It's good enough. I know zero powershell, but I know other languages enough to understand the common grammar. With ChatGPT I'm a fairly rapid powershell programmer right off the bat - as evidenced by the script I've been writing this afternoon. I don't know any of the (overcomplicated) syntax, but now I don't have to.


This is the original code https://twitter.com/AlexAlexandrius/status/16178998824000839...

It's very similar to the deobfuscated version, but ChatGPT wrote the code in the first place


Is it ready for production? Maybe not. Is it amazing and inevitably going to get better? Yes. Does it make a lot of human labor redundant in the very foreseeable future? Also yes.


Define accurate?

It's just like any other AI system: it returns best-effort results with a confidence that doesn't map well to binary outcomes.

So yes, it can be accurate. But there are scenarios where correctness is strict and binary, and it's not great at those.


I’ve had it confidently tell me to use Python libraries that don’t exist, pass parameters to methods that aren’t in the method signature, and to write code that had to be debugged and fixed.

I’m still excited to use it, but you have to know enough about coding to ensure correctness. It’s nowhere near possible for a non-coder to build a complicated app with (so far).


I've got one self-hosted setup: Postfix, Dovecot, both IPv4 and IPv6. It has been working for 15+ years and is still going strong. Never had any major deliverability issues.

In my new company we are using Gmail though. Easier to manage for non-tech people and it’s fully managed.

