
I'm half expecting to see "AI model" appearing as a stand-in for "linear regression" at this point in the cycle.

> I'm half expecting to see "AI model" appearing as a stand-in for "linear regression" at this point in the cycle.

Already the case with consulting companies; I've seen it myself.


Some career do-nothing-but-make-noise in my organization hired a firm to 'Do AI' on some shitty data and the outcome was basically linear regression. It turns out that you can impress executives with linear regression if you deliver it enthusiastically enough.

Tbh, often enough, linear regression is exactly what is needed.

Yes, and we do it every day and call it 'linear regression' and don't need a data center full of expensive toys to do it

I'm half expecting to see "AI model" appearing as a stand-in for "if > 0" at this point in the cycle.

This is why I am now programming in OCaml: the files themselves are AI (.ml).

I am sure you did not forget the pattern matching.

This is essentially what any ReLU-based neural network approximately looks like (smoother variants have since replaced the original ramp function). AI models, even LLMs, essentially reduce to a bunch of code like

    let v0 = 0                                // bias (zero here)
    let v1 = 0.40978399*(0.616*u + 0.291*v) + v0
    let v2 = if 0 > v1 then 0 else v1         // ReLU gate

    let v3 = 0
    let v4 = 0.377928*(0.261*u + 0.468*v) + v3
    let v5 = if 0 > v4 then 0 else v4
    ...

That's a bit far. ReLU does check x > 0, but that's just the one non-linearity in the linear/non-linear sandwich that the universal approximation theorem is about. It's more complex than just x > 0.
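
Schematically, the sandwich for an $L$-layer network with nonlinearity $\sigma$ is

$$f(x) = W_L\,\sigma(W_{L-1}\,\sigma(\cdots\,\sigma(W_1 x + b_1)\cdots) + b_{L-1}) + b_L,$$

and the universal approximation results are about that whole alternating stack, not about any single x > 0 check.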

The ReLU/if-then-else is in fact centrally important: it enables computations with complex control flow (more exactly, conditional signal flow, or gating), particularly as you add more layers.
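
A toy sketch of what I mean (my own illustration, OCaml-flavored like the snippets above):

    let relu x = if x < 0. then 0. else x

    (* two ReLUs already glue together the piecewise-linear |x| *)
    let abs_via_relu x = relu x +. relu (-. x)

    (* and a hinge that only "switches on" past a threshold t *)
    let hinge t x = relu (x -. t)

Stack enough of these and you get arbitrarily intricate piecewise-linear gating.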

Multiply-accumulate, then clamp negative values to zero. Every even-numbered variable is a weighted sum plus a bias (an affine transformation), and every odd-numbered variable is the ReLU gate (max(0, x)). Layer 2 feeds on the ReLU outputs of layer 1, and the final output is a plain linear combination of the last ReLU outputs, squashed by a sigmoid.

    // inputs: u, v
    // --- hidden layer 1 (3 neurons) ---
    let v0  = 0.616*u + 0.291*v - 0.135
    let v1  = if 0 > v0 then 0 else v0
    let v2  = -0.482*u + 0.735*v + 0.044
    let v3  = if 0 > v2 then 0 else v2
    let v4  = 0.261*u - 0.553*v + 0.310
    let v5  = if 0 > v4 then 0 else v4
    // --- hidden layer 2 (2 neurons) ---
    let v6  = 0.410*v1 - 0.378*v3 + 0.528*v5 + 0.091
    let v7  = if 0 > v6 then 0 else v6
    let v8  = -0.194*v1 + 0.617*v3 - 0.291*v5 - 0.058
    let v9  = if 0 > v8 then 0 else v8
    // --- output layer (binary classification) ---
    let v10 = 0.739*v7 - 0.415*v9 + 0.022
    // sigmoid squashing v10 into the range (0, 1)
    let out = 1 / (1 + exp(-v10))
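
If you want something you can actually run, here's that same network as real OCaml (same made-up weights as above; `classify` is just a name I picked):

    let relu x = if x < 0. then 0. else x
    let sigmoid x = 1. /. (1. +. exp (-. x))

    let classify u v =
      (* hidden layer 1 (3 neurons): affine, then ReLU *)
      let v1 = relu (0.616 *. u +. 0.291 *. v -. 0.135) in
      let v3 = relu ((-0.482) *. u +. 0.735 *. v +. 0.044) in
      let v5 = relu (0.261 *. u -. 0.553 *. v +. 0.310) in
      (* hidden layer 2 (2 neurons) *)
      let v7 = relu (0.410 *. v1 -. 0.378 *. v3 +. 0.528 *. v5 +. 0.091) in
      let v9 = relu ((-0.194) *. v1 +. 0.617 *. v3 -. 0.291 *. v5 -. 0.058) in
      (* output: affine, squashed into (0, 1) *)
      sigmoid (0.739 *. v7 -. 0.415 *. v9 +. 0.022)

    let () = Printf.printf "%f\n" (classify 1.0 0.5)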

There is a HIGGS dataset [1]. As the name suggests, it is designed for applying machine learning to recognize the Higgs boson.

[1] https://archive.ics.uci.edu/ml/datasets/HIGGS

In my experiments, linear regression with extended attributes (adding the squared values) is very competitive, in accuracy terms, with the reported MLP accuracy.
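
Concretely, the expansion I mean is just squaring each attribute and appending it (an OCaml sketch to match the thread, not the code I actually used):

    (* raw attributes plus their squares, with a constant column for the intercept *)
    let expand x =
      Array.concat [ [| 1.0 |]; x; Array.map (fun v -> v *. v) x ]

    (* once the weights w are fitted, prediction is still just a dot product *)
    let predict w x =
      Array.fold_left ( +. ) 0. (Array.map2 ( *. ) w (expand x))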


The LHC has moved on a bit since then. Here's an open dataset that one collaboration used to train a transformer:

https://opendata-qa.cern.ch/record/93940

If you can beat it with linear regression, we'd be happy to know.


I'm sure I've seen basic hill climbing (and other optimisation algorithms) described as AI, and then used as evidence of AI solving real-world science/engineering problems.

Historically this was very much in the field of AI, which is such a massive field that saying something uses AI is about as useful as saying it uses mathematics. Since the term was first coined it's been constantly misused to refer to much more specific things.

From around when the term was first coined: "artificial intelligence research is concerned with constructing machines (usually programs for general-purpose computers) which exhibit behavior such that, if it were observed in human activity, we would deign to label the behavior 'intelligent.'" [1]

[1]: https://doi.org/10.1109/TIT.1963.1057864


That definition moves the goalposts almost by definition: people only stopped thinking that chess demonstrated intelligence when computers started doing it.

The term artificial intelligence has always been just a buzzword designed to sell whatever it needed to. IMHO, it has no meaningful value outside of a good marketing term. John McCarthy is usually the person who is given credit for coming up with the name and he has admitted in interviews that it was just to get eyeballs for funding.

I am somewhat cynically waiting for the AI community to rediscover the last half a century of linear algebra and optimisation techniques.

At some point someone will realise that backpropagation and adjoint solves are the same thing.
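
For anyone who hasn't seen it spelled out (standard adjoint-method notation; sign conventions vary): for a loss $J(u(p))$ whose state $u$ is pinned down by a residual $R(u, p) = 0$,

$$\left(\frac{\partial R}{\partial u}\right)^{T} \lambda = \left(\frac{\partial J}{\partial u}\right)^{T}, \qquad \frac{dJ}{dp} = \frac{\partial J}{\partial p} - \lambda^{T} \frac{\partial R}{\partial p}.$$

One transposed solve yields the gradient with respect to every parameter at once. Backprop is the special case where $R$ says "each layer's output equals its activation", so $\partial R / \partial u$ is block-triangular and the transposed solve is just back-substitution from the loss, layer by layer.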


There are plenty of smart people in the "AI community" who already know this. Smugly commenting does not replace actual work. If you have real insight and can make something perform better, I guarantee you that many people will listen (I don't mean twitter influencers but the actual field). If you don't know any serious researcher in AI, I have my doubts that you have any insight to offer.

I am sure they are aware...

And why not? When linear regression works, it works so well it's basically magic: better than intelligence, artificial or otherwise.

Having worked with people who do that, I can guarantee that’s not the case. See https://ssummers.web.cern.ch/conifer/ and hls4ml; these run BDTs and CNNs.

That works well to get around patents btw :)

Wait, does that mean that you could make an ASCII representation of a QR code that points to the URL that the QR code was made of?


Does that count as a quine?


How shocking!


Shouldn't 523 in that list of "other numbers" actually be 522?


You're right


https://en.wikipedia.org/wiki/Fast_inverse_square_root for those who just want the info without AI filler
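
Or, for those who just want the code: a quick OCaml transcription of the classic C routine from that page (an untested sketch; `Int32.bits_of_float` works on the single-precision representation):

    (* the 0x5f3759df trick: reinterpret the float's bits as an integer *)
    let fast_inv_sqrt x =
      let i = Int32.bits_of_float x in
      let i = Int32.sub 0x5f3759dfl (Int32.shift_right i 1) in
      let y = Int32.float_of_bits i in
      y *. (1.5 -. 0.5 *. x *. y *. y)   (* one Newton-Raphson step *)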


> I wrote my first line of code in 1983. I was seven years old, typing BASIC into a machine that had less processing power than the chip in your washing machine

I think there may be a counterpoint hiding in plain sight here: back in 1983 the washing machine didn't have a chip in it. Now there are more low-level embedded CPUs and microcontrollers to develop for than before, but maybe it's all the same now. Unfathomable levels of abstraction, uniformly applied by language models?


The opposite of down is up, so it wouldn't be completely illogical.


It does sound like a good outcome for automation. Though I suppose an investigation into the matter would arguably have to look at whether a competent human driver would be driving at 17mph (27km/h) under those circumstances to begin with, rather than just comparing the relative reaction speeds, taking the hazardous situation for granted.

What I would like to see is a full-scale vehicle simulator where humans are tested against virtual scenarios that faithfully recreate autonomous driving accidents, to see how "most people" would have acted in the minutes leading up to the event as well as during the accident itself.


>Though I suppose an investigation into the matter would arguably have to look at whether a competent human driver would be driving at 17mph (27km/h) under those circumstances to begin with, rather than just comparing the relative reaction speeds, taking the hazardous situation for granted.

Sure, but also throw in whether that driver is staring at their phone, distracted by something else, etc. I have been a skeptic of all this stuff for a while, but riding in a Waymo in heavy fog changed my mind; I had to wonder how well I or another driver would've done at that time of day and in those conditions.


How would that help in the investigation?


17 mph is pretty slow unless it’s a school zone


Indeed, 15 or 25 mph (24 or 40 km/h) are the speed limits in school zones (when in effect) in CA, for reference. But depending on the general movement and density and category of pedestrians around the road it could be practically reckless to drive that fast (or slow).


If my experience driving through a school zone on my way to work is anything to go off of, I rarely see people actually respecting it. 17 mph would be a major improvement over what I'm used to seeing.


> a full-scale vehicle simulator

The UK itself is such a simulator, and this vehicle would have failed a driving test there.


Indeed, then this design will truly have come full circle.


I remember the pitch for Julia early on being MATLAB-like syntax with C-like performance. When I've heard Julia mentioned more recently, the main feature that gets highlighted is multiple dispatch.

https://www.youtube.com/watch?v=kc9HwsxE1OY

I think it seems pretty interesting.


Julia is actually faster than C for some things.

