WAFs have a few valid uses in my opinion: "virtual patching" and the ability to create custom rules such as blocking/challenging/rate limiting obviously bad traffic. But the giant rulesets are actively harmful IMO. "Defense in depth" is not a valid justification for doing something actively harmful to both your users and the time budget of your security team.
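For concreteness, here's a rough sketch of the narrow, hand-written kind of rule I have in mind, written as Flask middleware rather than in any real WAF's rule language; the vulnerable path, probe patterns, and thresholds are all made up for illustration.

```python
# Sketch of narrow, hand-written "WAF-style" rules as Flask middleware.
# The paths, patterns, and limits below are illustrative only.
import time
from collections import defaultdict, deque

from flask import Flask, abort, request

app = Flask(__name__)

RATE_LIMIT = 30        # max probe requests per client per window (arbitrary)
WINDOW_SECONDS = 60
recent_hits = defaultdict(deque)  # client IP -> timestamps of recent probe requests


@app.before_request
def virtual_patch():
    # "Virtual patch": block a known-vulnerable endpoint until the real fix ships.
    if request.path.startswith("/legacy/export") and "cmd" in request.args:
        abort(403)


@app.before_request
def rate_limit_obvious_probes():
    # Rate limit traffic that is obviously probing (e.g. /wp-login.php on a site
    # that has never run WordPress).
    if request.path not in ("/wp-login.php", "/xmlrpc.php"):
        return
    now = time.time()
    hits = recent_hits[request.remote_addr]
    while hits and now - hits[0] > WINDOW_SECONDS:
        hits.popleft()
    hits.append(now)
    if len(hits) > RATE_LIMIT:
        abort(429)


@app.route("/")
def index():
    return "ok"
```

The point is that each rule is something you wrote, understand, and can reason about when it fires, unlike a thousand-entry vendor ruleset.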
I welcome a k8s replacement! Just as there are better compilers and better databases than we had 10-20 years ago, we need better deployment methods. I just believe those better methods came from really understanding the compilers and databases that came before, rather than dismissing them out of hand.
Author here. Yes there were many times while writing this that I wanted to insert nuance, but couldn't without breaking the format too much.
I appreciate the wide range of interpretations! I don't necessarily think you should always move to k8s in those situations. I just want people to not dismiss k8s outright for being overly-complex without thinking too hard about it. "You will evolve towards analogues of those design ideas" is a good way to put it.
That's also how I interpreted the original post about compilers. The reader is stubbornly refusing to acknowledge that compilers have irreducible complexity. They think they can build something simpler, but end up rediscovering the same path that led to the creation of compilers in the first place.
You would be horrified if you knew how much pre-‘20s or even pre-‘10s software is still running in production out there. Here we are talking about a huge enterprise and a somewhat complex migration (away from Tiller), but you can easily find outdated software without these aggravating circumstances as well.
Software from 2019 is horrifyingly outdated? If updates with security patches exist but haven't been applied, sure, but that's not really the default scenario, depending on the stack.
I’ve only used 2020 because of the example in question. Security patches might or might not have been applied, both in my imaginary example and in the real world.
> The Secrets Manager Agent provides compatibility for legacy applications that access secrets through an existing agent or that need caching for languages not supported through other solutions.
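For anyone who hasn't looked at it: the agent sits on localhost and legacy code fetches secrets over plain HTTP instead of through the AWS SDK. A rough sketch of what that call might look like, assuming the defaults I remember from the docs (port 2773 and a token file at /var/run/awssmatoken; verify against the current documentation before relying on them):

```python
# Rough sketch of fetching a secret through the locally running Secrets Manager Agent.
# Endpoint, port, and token file path are the documented defaults as I recall them;
# check the AWS docs for your version.
import json
import urllib.parse
import urllib.request

AGENT_ENDPOINT = "http://localhost:2773/secretsmanager/get"
TOKEN_FILE = "/var/run/awssmatoken"  # SSRF-protection token written by the agent


def get_secret(secret_id: str) -> dict:
    with open(TOKEN_FILE) as f:
        token = f.read().strip()
    url = f"{AGENT_ENDPOINT}?secretId={urllib.parse.quote(secret_id)}"
    req = urllib.request.Request(url, headers={"X-Aws-Parameters-Secrets-Token": token})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())  # response includes SecretString among other fields


if __name__ == "__main__":
    print(get_secret("prod/db/password")["SecretString"])
```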
Surprised FERPA wasn't mentioned explicitly. At least this version doesn't use the data for training, but I shudder to think of all the college administrators dumping student information into their personal ChatGPT accounts right now...
Seemingly, edu solutions from big tech are developed without ever asking for input from actual educators. They usually come from a "I went to school, so I know what educators need" perspective, which is often wrong. The offerings are typically half-baked, buggy, and deaf to the needs of educators. GitHub Classroom comes to mind. ChatGPT is apparently the next example, where none of the features are educator-focused; instead they're focused on pushing the product onto students.
Educators have to be especially wary of these efforts, because every tech company comes up with the bright idea of "let's give it to students for free, they'll get hooked, and then when they make money they will buy our product." So we are inundated with spam messages for edutech the way doctors are spammed by big pharma reps. I've got messages in my inbox right now offering me $$$$ to blog about some AI startup. I've got some tech rep hounding me to push some online C++ tool. Now add OpenAI to the list.
> They usually come from a "I went to school, so I know what educators need" perspective, which is often wrong.
Change it to "I know what educators would need if they were like me", and it (surprisingly) becomes mostly correct. For example, in the past I volunteered to prepare talented pupils in math for studying the subject at university, so I do claim to have some experience in education.
I think I know some things about what would make sense for educating children, but the kind of people who actually become educators are quite a different breed from me. So educators would likely not like my ideas (they don't fit the political climate and/or the desired style of education), even though I think they make sense (and I would dogfood them). The latter is evidenced by work colleagues telling me they would love to see my ideas put into textbook form. Thus, my ideas seem to fall on much more fertile ground with an audience of gifted parents who would love for their children to become gifted, too, than with the people who become educators.
Agreed. I'm biased because my wife is an educator, but so so SO many edtech companies would benefit from hiring former teachers into business development/product roles instead of only hiring Ed.D/Ph.Ds for those roles, many of whom have limited field experience.
Maybe they did, I'm eternally optimistic. I'd have more confidence if OpenAI had used educator voices to say something that resonated with me. Instead the only quote they provide is marketing gobbledygook from some c-suite admin.
Why couldn't OpenAI get a quote from some professor using their OpenAI offering in the classroom, saying how it benefits their students? All I learned from Kyle is that he's using ChatGPT to "harness", "collaborate", "transform", and "integrate"... completely meaningless to me.
Makes you realize this website is not targeted to appeal to educators, but to university CIOs.
Actually, for the heck of it I asked ChatGPT to write me marketing copy for a website for ChatGPT Edu, and it did a better job centering educators and students compared to what OpenAI released here: https://chatgpt.com/share/bd581724-273c-40d5-aa55-0cd0e4d88a...
It's especially frustrating because I firmly believe operations is a solved problem; the hard part is getting a company to adopt the practices that every other mature tech company has already figured out.