Hacker News | nrr's comments

dang seems to have outright banned him just over a fortnight ago. https://news.ycombinator.com/item?id=42653007

On Hacker News, banned accounts can still comment, but those comments are immediately dead until vouched.


"There's no foresight. There's no planning." Couple that with "as an expression of agility," and it really rings true to me. I've worked in enough shops where the contractual obligations preclude any ability to slow down and put together a plan. A culture where you're forced to go from an angry phone call from the suits to something running in production in mere hours is a culture that finds building bookcases out of mashed potatoes acceptable.

The best environment I've ever worked in was, ironically enough, fully invested in Scrum, but it wasn't what's typical in the industry. Notably, we had no bug tracker[0], and for the most part, everyone was expected to work on one thing together[1]. We also spent an entire quarter out of the year doing nothing but planning, roleplaying, and actually working in the business problem domain. Once we got the plan together, the expectation was to proceed with it, with the steps executed in the order we agreed to, until we had to re-plan[2].

With rituals built in for measuring and re-assessing whether our plan was the right one, e.g., sprint retrospectives, we were generally able to work tomorrow's opportunity into the plan that we had. Since successfully delivering everything we'd promised by the end of a sprint was understood to be a coin toss, a run of successful sprints gave us the budget to blow a sprint or two on chasing FOMO and documenting what we learned.

0: How did we address bugs without a bug tracker? We had a support team that could pull our andon cord for us whenever they couldn't come up with a satisfactory workaround (judged by how agonizing it was for everyone involved) for behavior that was causing someone a problem. Their workarounds got added to the product documentation, and we got a product backlog item, usually put at the top of the backlog so it'd be addressed in the next sprint, to make sure that the workaround was, e.g., tested enough that it wouldn't break in subsequent revisions of the software. Bad enough bugs killed the sprint and sent us to re-plan. We tracked the product backlog with Excel.

1: Think pairing but scaled up. It's kinda cheesy at first, but with everyone working together like this, you really do get a lot done in a day, and mentoring comes for free.

2: As it went: Re-planning is re-work, and re-work is waste.


Sounds amazing! Do you still work there now?


No, I left the industry.


I've had a lot of experiences like this, and I wound up ducking out of the industry entirely in 2021 after having had my skillset reduced to dogmatic use of the infrastructure buzzword of the day.


What are you doing for money now?


Nothing.


That sounds nice


A lot of newer[0] US domestic market manual transmission cars do, in fact, have an interlock that prevents the starter motor from getting power without the clutch pedal also being depressed. Of particular note, my 1984 Ford Bronco II, 1991 Mitsubishi Galant, and 2004 Honda Accord all had such an interlock.

0: This is basically everything after the three-on-the-tree/four-on-the-floor era. I have yet to drive anything with an overdrive gear that didn't require popping the clutch to crank the starter.


My 1987 Toyota 4x4 pickup had such an interlock. It also had a switch to disable the interlock, allowing you to start the truck in gear. A very useful feature when you were stalled on a very steep hill offroad. Starting in 1st gear, low range basically turned it into an electric car for a few seconds :-)



Indeed, in my country, and despite the scaremongering about boats, 95% of illegal immigrants are in the country illegally because of visa overstays, and I'd bet the US numbers are around that.


I'm also on team Keep An Eye On Things™, and approximately none of it really feeds into an anxiety loop. (There's a tiny sliver of the pie that does, but it's easy enough to talk myself off that ledge and go engage in a Weltschmerzspaziergang[0].)

I know it's stuff I can't control, and that's sort of the point. I want to know what I can't control so that I can know what I can control, if that makes sense.

0: Otherwise known as "touching grass."


Spot on.

Closely tracking things you cannot control may provide a sense of control to some.

Or the other way round: Crunching enough data and building reasonable predictions based on that takes away the element of surprise, and the element of surprise for some translates to anxiety.

For me the only things that scare me are in the "I have no data on that" category.


It's all part of the actuarial mindset. The entire point of the exercise is to arrive at a model of reality that has some degree of predictive power.

> For me the only things that scare me are in the "I have no data on that" category.

I feel exactly the same way. It means that I have no idea what those things will wind up costing me, and that's the anxiety trigger as far as I'm concerned.


That's a fair point. I just want to call out that no one except you can tell you what you can control, since you're able to just "do things."


I think it's probably worth mentioning that the principal concern for tests should be proving out the application's logic, and unless you're really leaning on your database to be, e.g., a source of type and invariant enforcement for your data, any sort of database-specific testing can be deferred to integration and UAT.

I use both the mocked and real database approaches illustrated here because they ultimately focus on different things: the mocked approach validates that the model is internally consistent with itself, and the real database approach validates that the same model is externally consistent with the real world.

It may seem like a duplication of effort to do that, but tests are where you really should Write Everything Twice in a world where it's expected that you Don't Repeat Yourself.
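
To make that concrete, here's a minimal Go sketch of the mocked half of the approach; the names (UserStore, memStore, registerUser) are invented for illustration, and the real-database half would replay the same scenario against an actual schema in an integration test.

    package app_test

    import (
        "errors"
        "testing"
    )

    // UserStore is the seam the application logic depends on.
    type UserStore interface {
        SaveUser(name string) error
    }

    // memStore is the in-memory fake used by the fast unit tests.
    type memStore struct{ names map[string]bool }

    func (m *memStore) SaveUser(name string) error {
        if m.names[name] {
            return errors.New("duplicate user")
        }
        m.names[name] = true
        return nil
    }

    // registerUser is the application logic under test.
    func registerUser(s UserStore, name string) error {
        if name == "" {
            return errors.New("name required")
        }
        return s.SaveUser(name)
    }

    func TestRegisterUser(t *testing.T) {
        s := &memStore{names: map[string]bool{}}
        if err := registerUser(s, "alice"); err != nil {
            t.Fatal(err)
        }
        if err := registerUser(s, "alice"); err == nil {
            t.Fatal("expected an error for the duplicate name")
        }
    }

The interface is the point: the unit test only proves the logic is consistent with its own model of the store, which is exactly the internal consistency described above.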


The database is often the thing that enforces the most critical application invariants, and is the primary source of errors when those invariants are violated. For example, "tenant IDs are unique" or "updates to the foobars are strictly serializable". The only thing enforcing these invariants in production is the interplay between your database schema and the queries you execute against it. So unless you exercise these invariants and the error cases against the actual database (or a lightweight containerized version thereof) in your test suite, it's your users who are actually testing the critical invariants.
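
As a hedged sketch of what exercising such an invariant against the actual database might look like in Go (the tenants table, the TEST_DATABASE_URL convention, and the pgx driver are all assumptions for illustration, not a prescription):

    package tenants_test

    import (
        "database/sql"
        "os"
        "testing"

        _ "github.com/jackc/pgx/v5/stdlib" // assumption: Postgres via the pgx stdlib driver
    )

    // openTestDB connects to a throwaway database named by TEST_DATABASE_URL,
    // or skips the test when none is configured.
    func openTestDB(t *testing.T) *sql.DB {
        t.Helper()
        dsn := os.Getenv("TEST_DATABASE_URL")
        if dsn == "" {
            t.Skip("no test database configured")
        }
        db, err := sql.Open("pgx", dsn)
        if err != nil {
            t.Fatal(err)
        }
        t.Cleanup(func() { db.Close() })
        return db
    }

    func TestTenantIDsAreUnique(t *testing.T) {
        db := openTestDB(t)
        // The tenants schema is invented for illustration; id is assumed to
        // carry a UNIQUE/PRIMARY KEY constraint.
        if _, err := db.Exec(`INSERT INTO tenants (id, name) VALUES ($1, $2)`, 1, "acme"); err != nil {
            t.Fatal(err)
        }
        if _, err := db.Exec(`INSERT INTO tenants (id, name) VALUES ($1, $2)`, 1, "acme again"); err == nil {
            t.Fatal("expected a unique-constraint violation, got nil")
        }
    }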

I'm pretty sure "don't repeat yourself" thinking has led to the vast majority of the bad ideas I've seen so far in my career. It's a truly crippling brainworm, and I wish computer schools wouldn't teach it.


> The only thing enforcing these invariants in production is the interplay between your database schema and the queries you execute against it.

I'm unsure that I agree. The two examples you gave, establishing that IDs are unique and that updates to entities in the system are serializable (and linearizable while we're here), are plenty doable without having to touch the real database. (In fact, as far as the former is concerned, this dual approach to testing is what made me adopt having a wholly separate "service"[0] in my applications for doling out IDs to things. I used to work in a big Kafka shop that you've almost certainly heard of, and they taught me how to deal with the latter.)

That said, I'd never advocate for just relying on one approach over the other. Do both. Absolutely do both.

> I'm pretty sure "don't repeat yourself" thinking has led to the vast majority of the bad ideas I've seen so far in my career. It's a truly crippling brainworm, and I wish computer schools wouldn't teach it.

I brought up WET mostly to comment that, if there's one place in software development where copying and pasting is to be encouraged, testing is it. I'd like to shelve the WET vs. DRY debate as firmly out of scope for this thread if that's alright.

0: It's a service inasmuch as an instance of a class implementing an interface can be a service, but it opens up the possibility of more easily refactoring to cross over into running against multiple databases later.
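
For illustration, a minimal Go sketch of that kind of ID "service": an interface the application owns plus a deterministic in-memory implementation for tests. The names (Allocator, Counter) are invented; a production implementation, say one that reserves blocks from a sequence or talks to a dedicated service, would sit behind the same interface.

    package ids

    import "sync/atomic"

    // Allocator is the application-owned seam for handing out identifiers.
    type Allocator interface {
        Next() (int64, error)
    }

    // Counter is a deterministic, in-memory allocator for unit tests:
    // strictly increasing, no database round-trip.
    type Counter struct{ last atomic.Int64 }

    func (c *Counter) Next() (int64, error) { return c.last.Add(1), nil }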


I've often been tempted to make an "id service" also because you can potentially get compact integer ids that are globally unique. That'll likely save you more than a factor of 2 in your ID fields given varint encoding, which could be very significant in overall throughput depending on what your data look like. Never actually tried it IRL though.
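
A quick Go sketch of the size argument using encoding/binary's varints; the specific ID values are arbitrary:

    package main

    import (
        "encoding/binary"
        "fmt"
    )

    func main() {
        buf := make([]byte, binary.MaxVarintLen64)

        // Compact, densely allocated IDs varint-encode very small...
        fmt.Println(binary.PutUvarint(buf, 42))        // 1 byte
        fmt.Println(binary.PutUvarint(buf, 1_000_000)) // 3 bytes

        // ...while a value with high bits set (e.g. a random 64-bit ID) does not.
        fmt.Println(binary.PutUvarint(buf, 0xDEADBEEFCAFEF00D)) // 10 bytes
    }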

I agree both approaches are important, and it's totally ok if they overlap. If your unit tests have some overlap on your integration tests, that's nbd especially seeing as you can run your unit tests in parallel.

EDIT: actually I'll make a much bolder claim: even if your unit tests are making flawed assumptions about the underlying dependencies, it's still pretty much fine so long as you also exercise those dependencies in integration tests. That is, even somewhat bit-rotted unit tests with flawed mocks and assertions are still valuable because they exercise the code. More shots on goal is a great thing even if they're not 100% reliable.


> If your unit tests have some overlap on your integration tests, that's nbd especially seeing as you can run your unit tests in parallel.

Exactly.

Another upside I've run into while doing things this way is that it gets me out of being relational database-brained. Sometimes, you really do not need the full-blown relational data model when a big blob of JSON will work just fine.


There's something very, very wrong in the way we write programs nowadays.

Because yeah, the database is your main source of invariants. But there is no good reason for your application environment not to query the invariants from there and test or prove your code around them.

We do DRY very badly, and the most vocal proponents are the worst... But I don't think this is a good example of the principle failing.


> There's something very, very wrong in the way we write programs nowadays.

I largely agree, but...

> ... the database is your main source of invariants.

I guess my upbringing through strict typing discipline leaves me questioning this in particular. I'm able to encode these things in my types without consulting my database at build time and statically verify that my data are as they should be as they traverse my system with not really any extra ceremony.

Encoding that in the database is nice (and necessary), but in the interest of limiting network round-trips (particularly in our cloud-oriented world), I really would prefer that my app can get its act together first before crossing the machine boundary.
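
A minimal sketch of that discipline in Go, assuming a hypothetical TenantID type: the only way to obtain a value is through the validating constructor, so anything holding one is already known to be well-formed before a query is ever issued.

    package tenant

    import "errors"

    // TenantID can only be constructed through NewTenantID, so any value of
    // this type that the rest of the program sees has already been validated.
    type TenantID struct{ value string }

    var ErrEmptyTenantID = errors.New("tenant id must be non-empty")

    func NewTenantID(raw string) (TenantID, error) {
        if raw == "" {
            return TenantID{}, ErrEmptyTenantID
        }
        return TenantID{value: raw}, nil
    }

    func (t TenantID) String() string { return t.value }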


> no good reason for your application environment not to query the invariants from there and test or prove your code around them

As a developer who primarily builds backend web applications in high-level languages like Go and Java, I run the risk of sounding ignorant talking like this, but... I'm led to believe lower-level systems and embedded software have a lot more invariant-preserving runtime asserts and such in them. The idea being that if an invariant is violated, it's better to fail hard and fast than to attempt to proceed as if everything is alright.


Hum... I'm not sure we are talking about the same thing. Of course system and embedded software won't have invariants stored in a database, the comment isn't about them.

But, there isn't a faster way to fail to an invariant than to prove statically that your code fails it, or to test it before deploying. I don't really understand your criticism.


Starlark is Turing-incomplete, which makes it somewhat unusual among embeddable languages. It's definitely a draw for me for something I'm working on.
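
As a hedged sketch, embedding it from Go via the go.starlark.net package looks roughly like this; the config contents and the "replicas" key are made up for illustration.

    package main

    import (
        "fmt"
        "log"

        "go.starlark.net/starlark"
    )

    func main() {
        // Starlark disallows recursion and unbounded loops by default, so
        // evaluating a config like this is guaranteed to terminate.
        const src = `replicas = 3 * 2`

        thread := &starlark.Thread{Name: "config"}
        globals, err := starlark.ExecFile(thread, "config.star", src, nil)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("replicas =", globals["replicas"]) // replicas = 6
    }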


The primitive-recursive property of Cue (https://cuelang.org) is a big draw for me, and it may be an alternative worth checking out. The authors have paid a great deal of attention to the type system (they learned a lot of lessons from previous config language designs that did not take lattice theory and unification into account).


The tl;dr is that inheritance is bad in config, whether it comes from OOP or from layering YAML files as Helm does: it is hard to understand where a value is coming from and where one must make an edit to correct it in high-stress SRE situations like downtime. Marcel worked on both major config languages at Google, and IIRC Starlark is based on GCL ideas.

The Logic of CUE is a great read: https://cuelang.org/docs/concept/the-logic-of-cue/


Dhall is another configuration language that's deliberately Turing-incomplete, though its Haskell-inspired syntax turns off people who aren't already Haskell programmers. It's based on the calculus of constructions.


I'll push back in defense of Scrum, but it probably bears a little explanation because my conceptualization of that framework is very likely a lot different from yours. (As something of a bonus: I'll bring in the military given the whole "wartime" trope.)

In particular, Scrum is only there to establish rituals that enable empiricism in decision-making. A sprint is a reporting period to keep the team from spending too much time in the weeds. A standup is there to keep the team working together. The andon cord (which is often missing, I find) is there for when the facts have changed so utterly profoundly that everyone needs to regroup.

Anyone who's been through RTC ("boot camp"), and even some who haven't but have lived vicariously through others, understands that being constantly yelled at by RDCs ("drill instructors") about how you make your racks and fold your clothes is all about building certain habits and only tenuously related to what you'll be doing after A-school. It all has more to do with building trust that the rest of the folks in your ship will help carry you when the going gets tough. Scrum, at its core, is kinda like that.

I really dislike the term "Scrum Master." They're a team captain. The more military-minded might be keener to use "gunnery sergeant" or "chief petty officer": they're just the most senior person in the rating group^W^W^W^Won the team. (Though I'd probably take more inspiration from the Marines than the Navy here: a culture of servant leadership seems to bring out the best in people.)

The most popular implementations of Scrum tend to come with a ridiculous amount of meeting and tool baggage, and it's so unnecessary.

Use Excel. Hold your standups at the close of the day so people can go home. Write your product backlog items in delivery order so that sprint planning is less about sitting in one room playing poker and more about just getting valuable shit done.

What isn't unnecessary, however, is kneecapping command a little: the engineering officer of the watch has comparatively little understanding of the actual operation of the machine. They just know that they want operational excellence, and that excellence sometimes comes with the watch supervisor, a subordinate, publicly calling out mistakes that the watch officer makes.


> I really dislike the term "Scrum Master." They're a team captain.

Originally, that was supposed to be a temporary role that someone (rotated each time) would take on during a scrum meeting, and it referred to their being charged with keeping the meeting on track.


and... it would keep everyone on their toes a bit more, vs just having a group of folks that nod and say 'yes', 'no' or '3 points' a few times at the same time every day.


It was! I've, however, gotten way more use out of saddling whoever is most senior with the role of making sure the team as a whole is on track. This way, it sits a little more comfortably with how Western management hierarchies operate without turning things too much on their head. It's something of a leadership billet without removing the ability to be technical, which is important for a lot of folks.

It tends to work pretty well in an environment that both lacks a bug tracker (so that individual people aren't assigned things) and has a culture of pairing or mobbing.


cmd.exe is largely still concerned with starting processes and hooking them together in much the same vein as Bourne shell, so I tend to use it for that.

(I actually use Yori[0], but it's pretty much to tcsh what cmd.exe is to csh.)

PowerShell leaned a little too hard into the structured data to be useful for me as a command shell. It's a pretty decent competitor to the Python REPL though.

0: https://github.com/malxau/yori

