One problem with "literate programming" is that it assumes good coders are also good writers, and that good writers are also good coders.
Another problem is that the source files for the production code have to be "touched" for documentation changes, which IMHO is an absolute no-no for production code. Once the code has been validated, no more edits! If you want to edit the docs, go ahead, just don't edit the actual source.
I would turn this around --- it acknowledges the fact that if one needs to write a complex program, and to maintain it over the long term, one will need not just the raw code, but also documentation for how that code was written, and how changes to it should be approached.
As I noted elsethread, the big thing Literate Programming has netted me is that it makes editing easier and more manageable, even for long and complex projects spread across multiple files --- having a single point of control/interaction where I can:
- make the actual change to the code to implement a new feature
- change the optional library which exposes this project to a secondary language
- update the documentation to note the new interface
- update the sample template files (one for the main implementation, the other for the secondary) to reflect the new feature
- update an on-going notes.txt file where the need for the new feature was originally noted
is _huge_ and ensures that no file is missed in the update. (A rough sketch of what that can look like is below.)
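For what it's worth, here is a minimal sketch of that single point of control, using Org-mode tangling as one concrete literate-programming tool; the file names and the "frobnicate" feature are made up for illustration, and noweb, CWEB, etc. support the same workflow:

    * Frobnicate support
    The frobnicate feature was requested in notes.txt; this heading holds
    both the documentation and the code that implements it.

    #+begin_src python :tangle src/main.py
    def frobnicate(x):
        """New feature: implemented once, documented in place."""
        return x * 2
    #+end_src

    #+begin_src python :tangle bindings/secondary.py
    # Thin wrapper exposing the same feature to the secondary library.
    from main import frobnicate  # assumes src/ ends up on the import path
    #+end_src

Running org-babel-tangle (C-c C-v t) writes both source files from this one document, and exporting it produces the documentation, so the code, the secondary-language wrapper, and the docs all change in the same diff.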
I think to remember something, you have to replay it in your mind from time to time. The more you do that, the more you remember. (Reminiscing is probably the right word).
I guess the differentiator between the "styles" of memories is your personality and what you dwell on.
It's also well established that every time you recall a memory you change it a little. In other words, your later experience can affect your memory of past events.
We are witnessing a new Eternal September, and the only way to stem the tide is to increase the amount of personal identifying information required to register, and then publicly shame these people as a warning to others. Maybe it is a good thing that I don't run any massively popular open source projects.
They were listed in the numerous other replies, and in the initial blog post. Both have been dismissed as undesirable or unworkable. What's your point? I'm not interested in a discussion of whether alternatives exist; I'm interested in discussing the merits of those alternatives.
People make this argument a lot, but you should optimize for the most common situation rather than for an edge case. I am usually at my own PC/keyboard, and I greatly enjoy my weird keyboard and layout. I can't type Qwerty as well as I used to, but I rarely have to. (If I had to do it more, I'd probably retrain to make switching back and forth less troublesome.)
It reminds me also of people who swear by using default settings in their editor or other programs so they can feel at home anywhere. Yeah, that's sort of a benefit, but I don't think it outweighs optimizing your workflow at your own machine.
I had a friend who used a split keyboard, blank keycaps, and a very odd layout (QGMLWB or BEAKL2 I think) at work. IIRC he said he kept a second normal keyboard at his desk for when someone would come by to pair program. This is sorta the inverse of your scenario. I guess he'd need to carry his keyboard to someone else's desk, or just type slower.
In general, even if it may not be "optimal," a significantly non-standard keyboard (yes, Mac isn't quite like Windows) is IMO more trouble than it's worth.
Back to the author's point: "good enough" is the bar when we are talking about a choice that affects millions of users. Changing "good enough" to "optimal" requires gigantic costs in retraining and redesign, and simply isn't worth it.
It is a problem for any well-designed application with shortcuts, since such applications tend to follow the OS HIG for their commonly used shortcuts, and to put the most-used shortcuts on easy-to-reach keys (in that order of priority). The former is just part of the learning curve (we all grew up with Qwerty, right? right?!), but the latter is an issue for any non-Qwerty layout, depending on how much it differs from Qwerty; Dvorak differs a lot, while Colemak, Workman, Azerty, and Qwertz differ much less.
So what you say applies to any non-standard keyboard: there's always a learning curve. I tried going to 60%; now I've settled on 80%/TKL, and there are situations where I miss the other 20%, but my (vertical) mouse is in a more natural position. At least with Dvorak, all the physical keys are the same size as on a standard Qwerty board, so you could just set the layout to Qwerty English-American and be done with it.
This is called "security through obscurity": no one can mess with your computer in case you forget to lock it (e.g. when you go make yourself a coffee at work).
Tech support scenarios are most frequently remote, with both the IT person and the assisted user typing on their own keyboards while sharing a screen.
In this normal scenario, the keyboard layouts do not matter.
"Tech support" when both people are in the front of the same computer happens more between friends or colleagues, when typing speed does not matter, than in professional corporate tech support.
So (in theory) you could hear the chirp of merging black holes, if they were close enough.
In fact, everyone on the planet would hear the same chirp. Someone should comb the historical records (or even mythologies) for a birdless chirp heard by many people.
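For a rough sense of why it would be audible at all: for stellar-mass black holes the gravitational-wave frequency near the end of the inspiral is tens to hundreds of hertz, inside the human hearing range. A back-of-the-envelope estimate (the 65-solar-mass total is my assumption, loosely GW150914-like):

    import math

    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    M_total = 65 * M_sun   # assumed total mass of the binary

    # Gravitational-wave frequency when the orbit reaches the innermost
    # stable circular orbit; the real signal keeps sweeping up past this
    # through the merger.
    f_gw_isco = c**3 / (6**1.5 * math.pi * G * M_total)
    print(round(f_gw_isco), "Hz")   # ~70 Hz, rising to a few hundred Hz at merger

That's a low hum sweeping upward, hence "chirp".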
Wow, weak signal... "The measured experimental attenuation was found to be of the order 10^18, corresponding to a detection of around one photon per second for a 1.2 W source."
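For a sense of scale, the quoted numbers hang together at the order-of-magnitude level. A rough check (the wavelength is my assumption; the quote doesn't give one):

    h = 6.626e-34        # Planck constant, J*s
    c = 2.998e8          # speed of light, m/s

    wavelength = 1.55e-6   # assumed near-infrared source, m
    power = 1.2            # W, from the quote
    attenuation = 1e18     # from the quote

    photon_energy = h * c / wavelength           # ~1.3e-19 J per photon
    emitted_rate = power / photon_energy         # ~9e18 photons per second
    detected_rate = emitted_rate / attenuation   # ~9 photons per second
    print(emitted_rate, detected_rate)

So a watt-class source puts out roughly 10^19 photons per second, and an attenuation of order 10^18 leaves you counting single photons, consistent with the quoted rate.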