When something needed does not evolve, it helps to understand why and address that.
Yes, visualizing the expected programming model helps with programming. Tech leads should convey the programming model somehow, and graphics can help. The highest-traffic use is onboarding people, but because onboarding is (we hope) rare, there's little investment in optimizing for it, so it doesn't happen.
Yes, it would be nice to have live views of the system, so we can see if the actual system is what we expect. That's hard enough that it's only done for high-traffic operational issues (typically after they bite, hard).
But that hints at the real issue.
The question is really about decision support for code/design issues, graphical or otherwise.
And like all decision-support questions in non-trivial domains, what's needed depends primarily on what you're trying to decide, not on the domain. So there is no single diagram, or even a small set of diagrams, that will support all decisions for a given code base or operational behavior. Performance tools show hot spots, not logic errors. However, knowing it's about decisions, you can start to enumerate stakeholders and use cases, looking for common or critical features as the high-value targets.
Yes, the domain makes for different results, e.g., React+JS vs. Elixir. (I'd argue that the bulk of the benefit from type/memory-safe languages and devops comes from the issues they foreclose -- the degrees of freedom and risk they remove.)
But if you're trying to track some programming model, you end up needing recognizable/literate code, i.e., metadata for whatever time slice you're working in (design, prototype, analysis, compile, runtime, first-failure data capture, historical trends...). And since the various levels of compilation/assembly often efface the original model, the problem becomes not only system-wide but stack-deep. It sounds intractable in the general case.
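As a small illustration of metadata that survives into the runtime slice (a sketch only -- the "model" tag name is invented for this example, not an established convention):

    package main

    import (
        "fmt"
        "reflect"
    )

    // Design-level names carried in struct tags, so runtime inspection or
    // first-failure capture can still recognize the original model.
    type Order struct {
        ID     string  `model:"sales.Order.id"`
        Amount float64 `model:"sales.Order.amount"`
    }

    func main() {
        t := reflect.TypeOf(Order{})
        for i := 0; i < t.NumField(); i++ {
            f := t.Field(i)
            // Print the compiled field name next to its design-model name.
            fmt.Printf("%s -> %s\n", f.Name, f.Tag.Get("model"))
        }
    }

That only covers one hop of the stack, but it shows the shape of the fix: keep the model's names attached to artifacts the later slices can actually see.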
In my experience one quick strategy provides most of the benefit: an easy stakeholder-driven interface for experiments.
That means things like a REPL for the language, code navigation, a generated REST API web page, tests, Go's quick performance-test wrapper, full-text search of the live model, span/tracing displays, etc. Reducing the cost of asking questions is the best thing you can do to support decisions of whatever type.
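To make "Go's quick performance-test wrapper" concrete, here's a minimal sketch of a benchmark; the map "cache" is a stand-in for illustration, not real project code:

    package cache

    import "testing"

    // Any Benchmark* function in a _test.go file runs under
    // `go test -bench=.` -- no extra harness needed.
    func BenchmarkMapLookup(b *testing.B) {
        cache := map[string]int{"answer": 42} // stand-in data
        b.ResetTimer() // exclude setup from the timing loop
        for i := 0; i < b.N; i++ {
            _ = cache["answer"]
        }
    }

The output is ns/op for the lookup, which is usually enough to answer "did that change make it slower?" without setting up a profiler.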
When people have different perspectives on the proverbial elephant, I wouldn't start by arguing over how to draw the different models but by adding more people/perspectives. Once the sources stabilize you can integrate them, so make sure each has common touchpoints that make integration possible.