> If students want to type notes in class or papers in the library, they can use digital typewriters, which have word processing but nothing else.
Only, replacing the guts of such a machine to run a local LLM is damn easy today. Right now the battery mass required to power the device would be a giveaway, but inference is getting energetically cheaper.
> Colleges that are especially committed to maintaining this tech-free environment could require students to live on campus, so they can’t use AI tools at home undetected.
Just like my on-campus classmates never smoked weed or drank underage, I'm sure.
Just return to old-fashioned styles of university: tutorials, lectures, offline handwritten exams, and viva voce.
It's very hard to hide the fact that someone else did an assignment when you have to defend it in front of your tutor and a small group of fellow students, and it's next to impossible to pass a final viva without knowing and understanding what you are talking about.
The problem is we have all become addicted to cheap 'education', and the traditional methods are expensive.
But I think the institutions and the students need to ask themselves what the university is for. Is it to hand out diplomas or is it there so that the students can learn? A student who only wants the diploma has an incentive to cheat, one who wants to learn does not because the only person cheated is themself.
I think your last point is precisely why universities shouldn't limit access to LLMs beyond reasonable means. Make it hard enough that the half-hearted are deterred, but accessible to the dedicated. The ones who go to real effort to cheat aren't there to learn anyway.
There's always going to be ways to cheat, the idea is to make it hard. I think secretly replacing a computer's internals such that no one else will notice is pretty hard.
When I looked at this a decade ago, I concluded that if bugs can't get popular as a source of protein powder, they aren't getting popular in the US and Canada. Since then, not a single gym rat I've mentioned this to has liked my concept product, Pretty Fly for a White Powder.
> I sent 70 emails. Personalised. Researched. Each one had a PS line referencing something specific about that person — a career pivot, a published article, a podcast appearance. I spent real time on every single one.
> Zero replies.
[...]
> Hermann Simon — the founder of Simon-Kucher, one of the world’s most respected pricing consultancies — was one of the few people who did reply.
I know when I'm being lied to. I stopped reading here.
Thinking about speed like this used to be necessary in C and C++ but these days you should feel free to write the most legible thing (Horner's form) and let the compiler find the optimal code for it (probably similar to Horner's form but broken up to have a shallower dependency chain).
But if you're writing in an interpreted language that doesn't have a good JIT, or for a platform with a custom compiler, it might be worth hand-tweaking expressions with an eye towards performance and precision.
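For concreteness, a minimal sketch in C of the two forms being compared (the cubic and the coefficient names A..D are made up for illustration):

```c
/* Expanded form: as written, strict left-to-right evaluation
   repeats the multiplications by x. */
double poly_naive(double x, double A, double B, double C, double D) {
    return A * x * x * x + B * x * x + C * x + D;
}

/* Horner's form: three multiplies, three adds, but one serial
   dependency chain -- each step waits on the previous result. */
double poly_horner(double x, double A, double B, double C, double D) {
    return ((A * x + B) * x + C) * x + D;
}
```

Both give, e.g., poly(2, 1, 2, 3, 4) == 26; Horner does fewer operations but is harder for the CPU to pipeline, which is the dependency-chain trade-off mentioned above.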
You should never assume the compiler is allowed to reorder floating-point computations like it does with integers. Integer math is exact, within its domain. Floating-point math is not. The IEEE-754 standard knows this, and the compiler knows this.
Ah, fair point, it has been a while since I've needed fast inexact math.
Though... they are allowed to cache common subexpressions, and my point about dependency chains is quite relevant on modern hardware. So x*x, x*x*x, etc. may each be computed once. And since arithmetic operators are left-to-right associative, the rather ugly code, as written, is fast and not as wasteful as it appears.
> And since arithmetic operators are left-to-right associative, the rather ugly code, as written, is fast and not as wasteful as it appears.
This is incorrect, for exactly the reason you are citing: A * x * x * x * x = (((A * x) * x) * x) * x, which means that (x * x) is nowhere to be seen in the expression and cannot be factored out. Now, if you wrote x * x * x * x * A instead, _then_ the compiler could have done partial CSE against the term with B, although still not as much as you'd like.
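A sketch of the workaround (the quartic and the names A..D are hypothetical): write the grouping out yourself so the shared term exists in the source:

```c
/* Left-to-right, A*x*x*x*x parses as (((A*x)*x)*x)*x, so no (x*x)
   subexpression is available to share with B*x*x under strict IEEE
   semantics. Explicit grouping creates it: */
double poly_grouped(double x, double A, double B, double C, double D) {
    double x2 = x * x;   /* computed once, reused by both terms */
    double x4 = x2 * x2;
    return A * x4 + B * x2 + C * x + D;
}
```

Note this grouping can round differently from the left-to-right original, which is exactly why the compiler won't make the transformation for you without -ffast-math.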
The compiler is often not allowed to rearrange such operations because doing so changes intermediate results. So one would have to activate something like -ffast-math for this code, but that's probably not desired for all code, so one has to introduce a small library, and so on. Debug builds may use different compilation flags, and suddenly performance can become terrible while debugging. Performance can also tank because a new compiler version optimizes differently, etc. So in general I don't think this advice holds.
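If relaxed math really is wanted in one hot spot only, one option (assuming GCC; the attribute is GCC-specific and other compilers may ignore it) is to scope it to a single function rather than the whole build. The function and constants below are illustrative:

```c
/* GCC-specific: permit fast-math reassociation inside this one
   function; the rest of the translation unit keeps strict IEEE
   evaluation order and normal debug-build behaviour. */
__attribute__((optimize("-ffast-math")))
double poly_relaxed(double x) {
    return 3.0 * x * x * x * x + 2.0 * x * x + 1.0;
}
```

GCC also accepts the same thing as a pragma (`#pragma GCC optimize`) bracketed by `#pragma GCC push_options` / `pop_options`, which avoids touching global flags at all.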
Probably for ints unconditionally. For floats in Sesse__'s example without `-ffast-math`, I count 10 muls, 2 muladds, 1 add. With `-ffast-math`, 1 mul, 3 muladds. <https://godbolt.org/z/dPrbfjzEx>
Are you selling insights from chat logs too? Until you're monetizing my health, sex life and snitching to any government agency with a shiny nickel, you're playing in the shallows.