seek3r00's comments

“I could not stop thinking about the applications of such a technology and how it could improve our lives.

I was thinking of how cool it would be to build a Twitter-like service where the only posts are GPT-3 outputs.”

This could have been either the output of GPT-3 or someone who doesn’t know what they’re saying.


The title is obviously clickbait-y, but it’s fine: they’re trying to sell a product (Google Colab).

IMO if you’re interested in AI research or ML engineering, you already know that, in order to avoid getting people killed, you have to understand how it works under the hood. You’re doing yourself, your employer and your fellow humans a favour.

Just keep up the good work, and ignore the bullshit. If an AI winter comes, you’ll be well prepared to migrate to another engineering role.


It reminds me of Coders at Work: https://en.m.wikipedia.org/wiki/Coders_at_work


This is a must-read, and I’d argue it draws a very different conclusion about programmers: they are all very different.


I can't believe I've never seen that. I'll definitely check it out. Douglas Crockford, Brendan Eich: those guys are my heroes!


I read it 10 years ago. The only thing I remember is that the best programmers use printf to debug :-)



That looks like a super nice book. Thanks for sharing!


There is also "Programmers at Work": much older, but also a great read from the 80s.

https://www.amazon.com/Programmers-work-Interviews-Susan-Lam...


Woooooow!


Thanks, bought the book.


“To put it simply: the overestimation of technology is closely connected with the underestimation of humans.”


Exactly, that’s what I thought. That said, they should be able to look at the set of instructions that generated the output, which is basically an algorithm in itself. Then they could try to prove whether that algorithm would really generalise.


The model is given the input and a set of instructions (e.g. swap) for producing the output. Essentially, it picks a sequence of instructions that minimises the running time, based on some underlying patterns in the data.
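
To make the idea concrete, here is a toy Python sketch (mine, not the paper's setup): a "program" is just a list of swap instructions, and once you have extracted it you can run it on bigger instances to probe whether it really generalises.

    # Toy illustration (hypothetical): a "program" is a list of swap
    # instructions; executing it on the input should produce the output.
    def run_program(program, data):
        data = list(data)
        for i, j in program:              # each instruction swaps positions i and j
            data[i], data[j] = data[j], data[i]
        return data

    # A hand-written program that happens to sort this specific 3-element input.
    program = [(0, 2), (1, 2)]
    print(run_program(program, [2, 3, 1]))     # [1, 2, 3]

    # Probing generalisation: the same program does NOT sort a larger input,
    # which is exactly the kind of property you'd want to check or prove.
    print(run_program(program, [4, 3, 2, 1]))  # [2, 4, 3, 1] -- not sorted
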


Some interesting extracts:

“The instruction set together with the input representations jointly determine the class of algorithms that are learnable by the neural controller, and we see this as a fruitful avenue for future research akin to how current instruction sets shaped microprocessors.”

“The generalization or correctness we aim for is mostly about generalization to instances of arbitrary sizes.”

“[...] computing a = f(s) can be more expensive on current CPUs than executing typical computation employed in the algorithms studied here. We thus hope that this research will motivate future CPUs to have “Neural Logic Units” to implement such functions f fast and efficiently, effectively extending their instruction set, and making such approaches feasible.”


It makes sense to me. You can use WSL and run PHP on Linux.


tl;dr: Training learners is becoming cheaper every year, thanks to big tech companies pushing hardware and software.


Yeah, the thing is that most of us don’t have the patience to figure out how Nix works, write derivations, and so on.

Docker and Compose look pretty straightforward in comparison, at least for local development.


Nix may not be straightforward to learn, but Docker isn't straightforward to use. Container orchestration is a pain to deal with, for instance.

