Even worse was Google Wave. It was totally unusable when I tried it at launch, which I did because of all the hype (from Google itself). I suspect it was too JavaScript-heavy; I remember reading reports at the time that confirmed my guess. I was on an average machine. I bet the Google devs had far more powerful ones and, in their infinite wisdom (not!), didn't bother to test, or even think of testing, on the average machines most of the world would have.
Google Wave worked fine for me, as I recall. I remember being really impressed on one level, but I also couldn't figure out what to actually use it for.
A couple of weeks ago, I thought about digging up old demos to reacquaint myself with it. It's possible that it was ahead of its time. Or maybe it was a solution in search of a problem.
> And this has to be at the White House specifically because location determines oversight. When infrastructure is part of the Executive Office of the President, when it exists at 1600 Pennsylvania Avenue, it can be classified under executive privilege. The East Wing sits directly above the Presidential Emergency Operations Center, the bunker where Dick Cheney sheltered during September 11th. By demolishing the entire East Wing, you create space to expand that existing secure facility, integrate new infrastructure, go deeper underground. All protected by the classification that covers anything related to presidential security.
I used to regularly do work on my laptop on the subway, and then the PATH, after I moved to NJ. You don't see a whole lot of people doing it but it's very doable. I tether to get online via my phone.
This is incredible. I once assembled a collection of 100,000 tracks for research on exploration of large music libraries. Essentially vector search. I was limited in storage and processing power to a single machine.
If I were to do it today, I could get so much farther with hyperscaler products and this dataset.
Are there thoughts on getting to something more like a "single window dev workflow"? The code editing and reviewing experiences are very disjoint, generally speaking.
My other question is whether stacked PRs are the endpoint of presenting changes or a waypoint to a bigger vision? I can't get past the idea that presenting changes as diffs in filesystem order is suboptimal, rather than as stories of what changed and why. Almost like literate programming.
I really like all these ideas - very similar to what we discuss internally! We need to iterate our way there, but working with Cursor makes some of these visions much more possible
You can definitely understand backpropagation, you just gotta find the right explainer.
On a basic level, it's kind of like if you had a calculation for aiming a cannon, and someone was giving you targets to shoot at one by one, and each time you miss the target, they tell you how much you missed by and in what direction. You could tweak your calculation each time, and it should get more accurate if you do it right.
Backpropagation is based on a mathematical solution for how exactly you make those tweaks, taking advantage of some calculus. If you're comfortable with calculus you can probs understand it. If not, you might have some background knowledge to pick up first.
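The cannon analogy above can be sketched as a tiny gradient-descent loop. This is an illustrative sketch, not anyone's actual implementation: a one-parameter model `y = w * x`, where backpropagation proper generalizes the same "tweak after each miss" loop to many layers via the chain rule. All names and constants (`train`, `lr`, the data) are made up for the example.

```python
def train(pairs, lr=0.01, steps=200):
    """Fit y = w * x by nudging w after every miss."""
    w = 0.0  # initial guess for the "aiming" parameter
    for _ in range(steps):
        for x, target in pairs:
            pred = w * x          # fire the cannon
            miss = pred - target  # how much we missed by (signed)
            grad = 2 * miss * x   # derivative of miss**2 w.r.t. w (chain rule)
            w -= lr * grad        # tweak in the direction that shrinks the miss
    return w

# Data generated by the "true" rule y = 3x; training should recover w close to 3.
pairs = [(1, 3), (2, 6), (3, 9)]
w = train(pairs)
print(round(w, 2))  # → 3.0
```

The `grad` line is the whole calculus content at this scale: it says exactly how much a small change in `w` changes the squared miss, which is the question backprop answers for every weight in a deep network.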
> it's a language made by academics for academics to play with language design. It was a little weird it blew up in industry for a while.
Yep. They have always been pretty honest about this.
I think that it blew up in industry because it really was ahead of its time. Type systems were pretty uncool before Scala. It proved that you could get OO and FP in a single type system.
Actually, a big part of the reason for doing Scala 3 was rebasing the language on a more rigorous basis for unifying OO and FP. They felt that, for all their other big ideas, it was time to rethink the fundamentals.
I’m not up on programming language engineering as much as I should be at 37, could you elaborate a bit here? (To my untrained ear, it sounds like you’re saying Scala was one of the first languages that helped types break through? And I’m thinking that means, like, having int x = 42; or Foo y = new Foo().)
Not types, type-safety. Things like covariant and contravariant type declarations, implicits (values resolved by type instead of by name), and other things that you need to build a type-safe system/service/application. The problem is that this feature of a language is massively oversold. It's nice, but to pretend it prevents bugs or is even a great design goal is questionable and not backed up by research (as they claim).
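For anyone unfamiliar with the variance jargon, here's a rough sketch using Python's typing module (Scala marks these directly on type parameters as `Source[+T]` and `Sink[-T]`; the Python below is just an illustration of the same idea, not Scala code, and the class names are invented for the example):

```python
from typing import Generic, TypeVar

T_co = TypeVar("T_co", covariant=True)            # like Scala's Source[+T]
T_contra = TypeVar("T_contra", contravariant=True)  # like Scala's Sink[-T]

class Animal: ...
class Cat(Animal): ...

class Source(Generic[T_co]):
    """A producer: a Source[Cat] is safely usable wherever a Source[Animal] is expected."""
    def __init__(self, value: T_co) -> None:
        self._value = value
    def get(self) -> T_co:
        return self._value

class Sink(Generic[T_contra]):
    """A consumer: a Sink[Animal] is safely usable wherever a Sink[Cat] is expected."""
    def put(self, value: T_contra) -> None:
        self._last = value

def describe(source: Source[Animal]) -> str:
    # A type checker accepts Source(Cat()) here because Source is covariant.
    return type(source.get()).__name__

print(describe(Source(Cat())))  # → Cat
```

Whether encoding these relationships in the type system pays for its complexity is exactly the dispute in this thread; the sketch just shows what the feature does.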
> It's nice, but to pretend it prevents bugs or is even a great design goal is questionable and not backed up by research (as they claim).
That's why people use JavaScript instead of Rust for critical systems, right?
Claiming in the year 2025 that strong static types don't provide massive advantages is almost laughable, TBH. This was settled long ago, and the whole industry now understands that type safety is essential for building reliable and scalable systems.