The title is obviously clickbait-y, but it’s fine: they’re trying to sell a product (Google Colab).
IMO if you’re interested in AI research or ML engineering, you already know that, in order to avoid getting people killed, you have to understand how it works under the hood. You’re doing yourself, your employer and your fellow humans a favour.
Just keep up the good work, and ignore the bullshit. If an AI winter comes, you’ll be well prepared to migrate to another engineering role.
Exactly, that’s what I thought. That said, they should be able to look at the set of instructions that generated the output, which is basically an algorithm in itself, and then try to prove whether that algorithm really generalises.
The model is given the input and a set of instructions (e.g. swap) for producing the output. Essentially, it’s searching for a permutation of the instructions that minimises the running time, exploiting underlying patterns in the data.
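A minimal sketch of that idea, under the caricature assumption that the instruction set is just conditional swaps and the target task is sorting. The names `run_program`, `bubble_network` and `check_generalisation` are made up for illustration, not the paper’s code:

```python
import random

def run_program(program, data):
    """Execute a list of (i, j) conditional-swap instructions on a copy of data.

    Each instruction swaps positions i and j iff data[i] > data[j], i.e. a
    comparator. This "swap" instruction set is a toy stand-in for the
    paper's richer instruction set.
    """
    data = list(data)
    for i, j in program:
        if data[i] > data[j]:
            data[i], data[j] = data[j], data[i]
    return data

def bubble_network(n):
    """One candidate "algorithm": the comparator sequence of bubble sort."""
    return [(i, i + 1) for k in range(n - 1) for i in range(n - 1 - k)]

def check_generalisation(make_program, sizes, trials=200):
    """Empirically test the extracted program on random inputs of many sizes.

    Passing is evidence, not proof; an actual proof would reason about the
    generator itself (e.g. the zero-one principle for sorting networks).
    """
    for n in sizes:
        program = make_program(n)
        for _ in range(trials):
            xs = [random.randint(0, 99) for _ in range(n)]
            if run_program(program, xs) != sorted(xs):
                return False, n
    return True, None

ok, bad_size = check_generalisation(bubble_network, sizes=range(2, 12))
print("passed all tested sizes" if ok else f"failed at size {bad_size}")
```

The point of `check_generalisation` is exactly the distinction above: testing on ever-larger instances gives evidence of generalisation, while a proof would have to reason about the instruction sequence itself.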
“The instruction set together with the input representations jointly determine the class of algorithms that are learnable by the neural controller, and we see this as a fruitful avenue for future research akin to how current instruction sets shaped microprocessors.”
“The generalization or correctness we aim for is mostly about generalization to instances of arbitrary sizes.”
“[...] computing a = f(s) can be more expensive on current CPUs than executing typical computation employed in the algorithms studied here. We thus hope that this research will motivate future CPUs to have “Neural Logic Units” to implement such functions f fast and efficiently, effectively extending their instruction set, and making such approaches feasible.”
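To see why a single a = f(s) can dwarf the native work, here is a rough, assumed cost comparison: the controller’s f is modelled as a tiny MLP choosing among eight toy instructions, versus a one-comparison native swap. The layer sizes and the 8-way action space are invented for illustration:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
# Assumed toy controller: a = f(s) as a one-hidden-layer MLP over a
# 64-dim state vector, picking one of 8 "instructions". Sizes are arbitrary.
W1 = rng.standard_normal((64, 64))
W2 = rng.standard_normal((64, 8))

def neural_step(s):
    # a = f(s): hidden layer with ReLU, then argmax over the action logits
    return int(np.argmax(np.maximum(s @ W1, 0.0) @ W2))

s = rng.standard_normal(64)

t0 = time.perf_counter()
for _ in range(10_000):
    neural_step(s)
t_neural = time.perf_counter() - t0

x, y = 3, 5
t0 = time.perf_counter()
for _ in range(10_000):
    if x > y:  # the "typical computation" the quote refers to
        x, y = y, x
t_native = time.perf_counter() - t0

print(f"10k neural steps: {t_neural:.4f}s   10k native swaps: {t_native:.4f}s")
```

On a current CPU the neural step loses by orders of magnitude, which is the gap a hypothetical “Neural Logic Unit” would be meant to close.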
“I was thinking of how cool it would be to build a Twitter-like service where the only posts are GPT-3 outputs.”
This could have been either the output of GPT-3 or someone who doesn’t know what they’re saying.