> Teams typically implement their earliest version of an access control system with a home-grown solution or an open source library. Many implement role-based access control, often with roles, attributes, and authorization logic hard coded and/or tightly coupled with their business logic.
Here's the thing: teams do this for a reason. Each one of these checks takes all of 2 minutes to add. And the next one takes 2 minutes to add, and so forth, until it's a total mess. But, as someone who has been through this cycle multiple times, that's exactly what I would do again in the future. Because, on day zero, if my options are "the 2 minute solution" or "spend hours/days/weeks? evaluating a vendor for a problem I won't have for years"... well, the choice seems pretty clear there.
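For concreteness, the "2 minute solution" usually looks something like this (a minimal Python sketch with hypothetical names, not any particular product's code):

```python
# The classic hard-coded check: role names and authorization rules
# live inline with the business logic. (Hypothetical example.)
ADMIN_ROLES = {"admin", "owner"}

def delete_project(user: dict, project: dict) -> str:
    # Check #1: takes 2 minutes to write...
    if user["role"] not in ADMIN_ROLES:
        raise PermissionError("admin role required")
    # ...and check #2, bolted on six months later, starts the ball of mud
    if project["org_id"] != user["org_id"]:
        raise PermissionError("cross-org access denied")
    return f"deleted {project['name']}"
```

Each individual check is trivial; the mess comes from dozens of them scattered across every endpoint, each with slightly different rules.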
> As a product grows in usage and complexity, this is no longer enough.
But the thing is... while it's not enough... I can add to it. Far more easily than I can refactor everything to support a vendor-provided system. And I know it'll be a big ball of mud, but at just about every decision point along the way I'm better off not switching. And every time I add something to my system, it's that much harder to adopt yours.
It feels like there's a circular dependency here. The easiest time to adopt your product (day zero) is also when I'm least likely to get value out of it. Solve that for me, and I'm very interested in your product.
>Because, on day zero, if my options are "the 2 minute solution" or "spend hours/days/weeks? evaluating a vendor for a problem I won't have for years"... well, the choice seems pretty clear there.
This is a valid point, although the goal should be creating a solution that is easier to start with and still future-proof.
That's both the problem and the solution. In a perfect world you'd have a solution you can start with in 2 minutes, without opting into the technical debt you'd otherwise encounter further down the road.[1]
I had an account on del.icio.us pre-Yahoo!-purchase, and I vaguely remember that its login was merged with or replaced by the Yahoo! login system. Sadly, I either can't remember the former credentials or they were supplanted by the latter, and the account is now inaccessible to me.
Bummer. I hope the password reset works for me when it's finished. I found my first PG essays through del.icio.us, so it's probably responsible for my being on HN now. I'd love to dig through the old links hyping "folksonomy" and AJAX. :)
Exactly. If I'm paying, I expect top-notch customer service. From what I've read here multiple times, that's not Google's strong suit. If I'm paying, I expect my customer service call to be handled by an actual human rather than a bot.
> Customer clusters are created/managed by programmatically running Terraform
I have soooo many questions about best practices doing this. I run a service that needs to dynamically provision AWS resources, and lacking a clear path to do this programmatically, I shell out to Terraform.
* I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?
* Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?
* What are you using for state storage?
* What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around the max-process sysctls in containerized environments where I have multiple Terraform processes running in the same container.
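For reference, the shell-out approach mentioned above looks roughly like this (a sketch under my own assumptions; `build_apply_cmd` and the flag choices are illustrative, not a recommended production setup):

```python
import json
import subprocess

def build_apply_cmd(variables: dict) -> list:
    """Build the `terraform apply` argv for a set of input variables."""
    var_args = [f"-var={k}={v}" for k, v in variables.items()]
    return ["terraform", "apply", "-auto-approve", "-input=false", *var_args]

def terraform_apply(workdir: str, variables: dict) -> dict:
    """Shell out to Terraform in workdir and return parsed outputs.
    Error handling is intentionally minimal for the sketch."""
    subprocess.run(["terraform", "init", "-input=false"],
                   cwd=workdir, check=True)
    subprocess.run(build_apply_cmd(variables), cwd=workdir, check=True)
    # `terraform output -json` is the machine-readable surface
    out = subprocess.run(["terraform", "output", "-json"],
                         cwd=workdir, check=True,
                         capture_output=True, text=True)
    return json.loads(out.stdout)
```

It works, but it treats an end-user CLI as an API, which is exactly why I'm curious what a team doing this at scale does differently.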
Yeah, this isn't very easy to get right at the moment, so there isn't going to be a silver bullet here. We had to iterate on our runner a lot to get this right, but we have a lot of experience since we do this for Terraform Cloud too.
Answering your questions:
> * I assume you aren't shelling out :). Do you have any additional helper libraries on top of the Terraform code base to make it more of a programmatically consumable API, as opposed to an end-user application?
> * Are you still pointing at a directory with resources defined in HCL, or are the resources defined programmatically?
HCL mixed with the JSON flavor of HCL for the programmatically generated parts. The variables are also programmatically generated, in JSON format.
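For anyone unfamiliar, the appeal of the JSON flavor is that generated inputs can be emitted with any JSON library, no HCL templating or string munging needed. A sketch of emitting a `*.tfvars.json` file (the variable names here are illustrative, not HCP's actual ones):

```python
import json

def write_tfvars(path: str, cluster_name: str, node_count: int) -> dict:
    """Emit a Terraform *.tfvars.json file. Since the format is plain
    JSON, the generator stays trivially simple. (Illustrative variables.)"""
    tfvars = {
        "cluster_name": cluster_name,
        "node_count": node_count,
    }
    with open(path, "w") as f:
        json.dump(tfvars, f, indent=2)
    return tfvars
```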
> * What are you using for state storage?
We output it to a file and handle this in an HCP microservice. We encrypt it using the customer-specific key with Vault and store it in a bucket that only the customer-specific credential has access to. If there is an RCE exploit somehow in our workflows, they can only access that customer's metadata.
> * What is the execution environment for the programmatic Terraform process? Since Terraform uses external processes for plugins, I've hit some issues with resource constraints around the max-process sysctls in containerized environments where I have multiple Terraform processes running in the same container.
Containers in HCP and VMs in Terraform Cloud due to increased isolation requirements. HCP has less strict requirements because the Terraform configs and inputs are more tightly controlled.
> Maybe this could be fixed in a json extension which allows words to be read as strings of themselves but if you extend json you lose any interoperability.
I worked on a product that did this for its storage file syntax, and the issue around interoperability was a huge drag, both on us and on our customers. By the nature of the product, customers often wanted to generate the files themselves, but generally didn't because they lacked the tools to do so.
Interesting! I especially enjoyed the insight into the evolution of the company. Though with this insight I have slightly different conclusions than the author:
1. A bias to ship and a bias to ship new things are not one and the same. A lot of the problems, such as a failure to iterate on existing products/features, sound very much like a product of the latter, not the former. If anything, the issue "Insufficient Iteration" is probably not correctable without a bias to ship.
2. A bias to ship and a bias to ship things that impact your customers are not one and the same. There is a note about an early shift to micro-services. I can't speak to this organization, but, generally speaking, spending time on internal engineering work to the detriment of obvious missing features is a common issue with early-stage companies.
3. A bias to ship and a bias to ship to the right customers are not one and the same, specifically in regard to the high-value, high-demand customers.
4. A bias to ship and... idk what to call this? "The product specs were well thought through, sometimes crafted for months." TBH, the problem with this one feels like a lack of a bias to ship.
As described, I think the real culprit was a lack of prioritization, or poor prioritization. FWIW, I suspect the author and I may actually be in violent agreement, as I did find myself nodding along with most of his lessons learned. Though I'd be careful about letting too much hindsight bleed in (e.g., do situations that'd be improved by more decision documentation justify the effort of documenting all product decisions, especially in the early phase when the product is rapidly evolving?).
I think the problem is that the leaders in the company (technical or otherwise) didn't recognize that software systems are like children: They have phases.
"Ship it constantly" is probably OK when you're small, but at some point you need to become more mature than that, both as an organization and in your approach to the software itself.
> I try to minimize it as much as possible, even when it hurts productivity. Long term maintainability is more important to me.
I don't think this needs to be an either/or choice in these situations. You can have both if you make the conscious decision to not fight your tools. Yes, sometimes you have to be more verbose in one language than another, but the productivity hit in that case always pales in comparison to the hours, days, and in some cases I've seen, weeks, lost by someone fighting their language/tools.
My favorite example of this was a developer given a 2-week feature implementation that ballooned to two months. They had minimal experience with C++, and they didn't like its looping syntax. Rather than accept that frustration and write the code in a syntax they disliked, they instead spent weeks writing a "re-usable library that abstracts away looping semantics".
Not a good example, because discontent with C++ for loops is how we got https://en.cppreference.com/w/cpp/ranges (which IMO is a good addition to the standard library).
Also, not all AWS resources follow the same deletion semantics. Example: S3 buckets. They report as being deleted somewhat quickly, but their name may not be available again for an hour or so.
In this case the delete will appear to succeed, but the recreation, if done with the same name, may fail.
Does the create during this time window return a specific enough error? This seems like the exact case where a controller that never gives up could provide value. Though I'm kind of amazed this is on the order of an hour instead of minutes.
This issue already exists when using, say, Terraform to orchestrate infra as code. The solution is to append a random hex string to the resource's unique identifier.
This AWS project will need to support a feature like that.
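Hand-rolled, the suffix trick is just something like this (a hypothetical helper; Terraform itself provides the `random_id` resource from the random provider for the same purpose):

```python
import secrets

def unique_bucket_name(prefix: str) -> str:
    """Append a random hex suffix so recreating a resource never collides
    with a recently deleted name S3 is still holding. (Hypothetical helper;
    4 random bytes -> 8 hex chars, and hex output is already lowercase,
    which S3 bucket names require.)"""
    return f"{prefix}-{secrets.token_hex(4)}"
```

The trade-off is that names are no longer deterministic, so anything that needs to find the resource later has to track the generated name rather than re-deriving it.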
Whoa boy, is it ever, but maybe not for the reason you're thinking. i.e., it isn't caused by people typing `apt-get install python`.
There are many packages that have Python as a dependency these days. For example, on my Ubuntu system:
> ~$ apt-cache rdepends python|wc -l
> 4649
I think the best illustration of how this can happen is installing postgres libraries needed to build the psycopg2 PG client. If you know to install `libpq-dev` then you're great. But if you do something that on the surface feels totally reasonable, like installing the `postgresql-client` package... guess what? You just installed another Python interpreter.
> Today’s startups have a biologist talking about wet labs on one side and an AI specialist waxing on about GPT-3 on the other, or a cryptography expert negotiating their point of view with a securities attorney. There is constant and serious translation required between these domains, translation that (I would argue mostly) prevents the fusion these fields need in order for new startups to be built.
Is that all that different from a software engineer with little customer facing experience teaming up with a non-technical cofounder who does?
As someone from an academic background, it's a bit different than how academic labs are set up. My academic lab had a number of different projects in different research areas, ranging from human health to agriculture, but the unifying theme was big data analysis. The fact that we all had this focal point meant we often overlapped on common tooling, which led to a lot of collaboration. Whether it was for Alzheimer's susceptibility or corn yields, you were working with tabular data.
Sure, we had people who worked more on the bench and people who never set foot in the lab, but everyone made sure to know exactly how their data came into their hands and its purpose. So if you were a statistician, you would learn everything about the corn sample you were given to analyze so you could make the correct considerations in your analysis. And if you were that wet-lab person and wanted to present a figure that the statistician generated, you would learn everything about the test used and all the assumptions made when choosing that method of analysis over others. Even in academia, this high level of collaborative interdisciplinary learning can be rare, but it makes you a much better scientist who has a much better grasp of the wider project and your role to play in it.
I think a lot of startups operate with a mercenary mindset. Everyone is hired to play a discrete non-overlapping role, which tends to silo ideas. Central planning from on high is also the norm, rather than collaborative discussion and solving problems from the bench up.
Depressingly, there are more and more big-name academic labs adopting this startup-oriented top-down approach, with a head professor calling the shots and giving marching orders to a few sub research professors with their own postdocs, grad students, and undergrads. I've known grad students and postdocs in these labs who are outright denied the chance to direct the research in their own projects, even if they have good ideas, simply because those ideas didn't come from the top down. Pursuing your own ideas is the whole point of grad school and postdoctoral training. On top of that, these labs siphon funding from smaller, more innovative groups by outputting higher numbers of ho-hum papers, or by affording expensive research with large, multi-institutional grants, both of which are heavily favored metrics in the grant proposal and tenure process.
The last two pairings are non-issues; both have plenty of funding. For the former, however, one misstep and you have the FDA/DHS or one of the state medical unions breathing down your neck.