It's unfortunate that Reddit silently went closed-source before being forced to admit it. Certainly a betrayal of trust with their long-standing userbase.
Yup. I understand the scale and microservices thing, but they could at least keep the mobile clients open (even if with a lagged release for feature launches, or just periodic tarballs).
I mean, it shouldn't. Humanity is perfectly capable of building secure web services without having to keep the way they work a secret. You don't publish your encryption keys with your source code, and that's what your security should depend on.
And what's more, Reddit themselves didn't even use that excuse in their official statement, even though the excuse they did give felt even less logical to me.
Basically, they don't want to leak the crazy features that they're developing and have such piss-poor source code management that they cannot provide tarballs of clean states of their source code.
I mean, how do they deploy new versions, if they cannot cleanly separate feature development from stable code?
When I worked there, it mostly came down to two things that leaked into every product: anti-abuse (spam, mostly), and ads/tracking.
Anti-abuse is useful to keep secret because it includes tools that make spammers think they're successful. I think it's a little more nuanced than the "open source code is more secure" argument, which I totally agree with; the anti-abuse work includes active mitigation measures that constantly evolve, and in this case, obscurity about how it all works is actually valuable.
A reddit-specific ad or metrics implementation isn't useful to anyone else, and it was a tough sell that the codebase should be made more complex only to accommodate configuration for a small handful of users. I know, because I made the argument that we should when I created the open-source reddit-mobile repo, which was originally broken up into plugins like the reddit python codebase. Eventually it just wasn't feasible to maintain as a 2-5 engineer team rebuilding a 10-year-old website in a couple of months. That's a story for another time.
Personally - I find it sad, and I think it got rid of one more thing that made Reddit special. Unfortunately, the only metric you can attach to this is "how much longer does it take us to ship shit", and thus, it died.
The origin of that statement (Kerckhoffs's principle) refers to cryptography, not to application security.
If you take a quantitative, cost-based approach to modeling security through adversarial capability, obscurity becomes a perfectly valid security measure if it's not used in isolation. We don't use it for cryptography because the tradeoffs aren't worth it. It's better to design cryptography with provable security based on mathematically rigorous computational hardness assumptions than it is to make secret algorithms.
In the context of application security, if the decision to obscure some or all of your system incurs a non-trivial cost to an adversary, it makes sense. We can't rigorously and mathematically prove the security of applications in the same way we can prove e.g. an algorithm is sub-exponential instead of polynomial time.
You often see "security through obscurity" mentioned in the same way that people cite "appeal to authority" or "ad hominem" fallacies in internet debates. The reality is more complex than that. Fundamentally, anything that increases the effort required by an adversary to successfully compromise your system is worth considering. You just shouldn't depend on it in its entirety. Closed-source software is a good example of robust security through obscurity, as basically any security engineer will tell you (I'd rather look at source code line by line to find vulnerabilities than try to find them through trial and error in a penetration test).
No, having an open source kernel means a lot more developers looking at the code and working on a fix if some bug is found, raising the probability of finding a bug and shortening the time required to fix it. How would keeping the source closed decrease the number of bugs?
In theory it shouldn't, but in practice people don't write perfect, vulnerability-free code. And a lot of people would argue that having many reviewers reduces vulnerabilities. Thus, my stance is that "it depends on the situation".
Making something closed source does not make your product more secure, it only makes it harder to look at. Determined people will still try to understand how your software works in order to accomplish their goals.
To reinforce your point, see all pre-modern crypto techniques. It cannot be argued that they worked, and they were all certainly security through obscurity.
Aren't most examples things where it didn't work? The most famous case is the German "Enigma" device from WWII (hardware- and 'software'-based, but cracked and readable for years before the Germans knew, because they believed it was both obscure and effective), but it's wholly possible that most schemes were broken eventually. Keeping an obscure system secret is really hard, especially against a motivated attacker.
Enigma wasn't hard through obscurity. The Allies had the Enigma machine long before they were able to crack it. It was hard because with the equipment of the day, it was pretty much unbreakable in the same way that prime-number based cryptography is today. It was only A. Turing developing a completely novel kind of machine (https://en.wikipedia.org/wiki/Bombe) that enabled the decryption. In the same way that quantum computers could break the current cryptography easily. It's not obscurity, it's assuming that some (mathematical) task is hard.
Don't forget about the Polish. They too broke the encryption before, but then they were invaded, and no precision machinery was available to increase the number of rotors to 10.
https://en.m.wikipedia.org/wiki/Cryptanalysis_of_the_Enigma
Turing did it too, independently.
Didn't know about that! But it seems they were able to break the system only while the Germans were sending the settings of the plugboard in the header of each message. Once that was changed in early 1940, their decrypting techniques wouldn't work anymore.
Btw, from the wikipedia article: "lazy cipher clerks often chose starting positions such as "AAA", "BBB", or "CCC"" Weak passwords were an issue already back then.
I went to Bletchley Park a couple of years ago. It's a very fascinating place. I remember hearing stories of code breakers who could infer that a piece of plaintext was all JJJJJJJJJJJ simply because, upon looking at the ciphertext, it contained no J (relying on the fact that no letter would ever encrypt to itself in Enigma, because of the reflector). Indeed the Poles don't get enough credit for their contributions. And yeah, virtually all encryption was similar to Enigma back then: the Allies too had a similar machine. I believe traitors sold secrets or Enigmas were captured on U-boats and so on, so security through obscurity wasn't really a thing back then either.
From what I know, Turing didn't do it independently: the Polish sent their work to England about two months before being invaded. What Turing did was improve on their work so it could scale (the Germans added more rotors, so the Polish decrypting machine wasn't helpful anymore).
I would consider the Enigma to be a very good counterexample to security by obscurity. Even after capturing a few of the apparatuses, it took a lot of mathematicians and engineers a lot of time and effort to build something that could decipher messages before the key became obsolete.
Enigma security didn't rely on security through obscurity. Having the machine didn't enable the allies to decrypt the messages. It relied on the secret of the... secret keys and the monthly key books.
It's also quite interesting to see that the Polish cryptanalysts were able to reproduce the Enigma machine used by the German army without even having seen one. They were able to deduce the number of rotors, the wiring, etc.
What in the end doomed the Enigma was the fact that it was more a kitchen recipe than cryptography based on solid principles. It was a smart recipe for the time, but it had flaws (like the fact that a letter could never encrypt to itself). In some regards, most of our symmetric encryption algorithms today feel a bit that way (with a lot more external scrutiny from experts, however).
Even in WWII, I don't think that security through obscurity was considered an absolute barrier. It's more in line with a "defense in depth" pattern. It gives your adversary a little more work, as he now has to figure out how your encryption works before breaking it, but it's not expected to hold for long.
The Enigma was sort of on the cusp of a modern crypto technique IMO, not to say I know that much about it. I was more referring to other techniques like wrapping a message around a dowel or the Code Talkers from WW2.
The trivial counterexample is that all modern crypto techniques rely on keeping a key, or part of a key, secret. That's security through obscurity, and you've just stated bluntly that obscurity never works under any circumstances, right?
What you want to do instead is talk about tradeoffs. Talk about how much information you need to keep secret in exchange for a given window of effectiveness, and state a preference for systems which provide longer windows of effectiveness while requiring less information (such as only a key, or part of a key, instead of a key and an algorithm) to be kept secret.
Also, take care with your argument about "pre-modern crypto techniques". Some of them remained effective for centuries after being invented, which is a far cry from your "cannot be argued that they worked", and not necessarily a favorable comparison with many modern techniques, which are lucky if they make it a couple decades before being broken.
(also, of course, all cryptographic systems eventually get broken, which is why every so often we switch to new algorithms, longer keys, etc., and you seem to be arguing that any system which eventually gets broken is a system which never worked, and that's also wrong)
In context it was fair because I was responding to a situation that was already playing with the definition, and once you allow that you have to allow taking it all the way.
Unfortunately, I started my reply to the wrong comment and didn't notice until after I'd posted it and it was too late to edit/delete.
tl;dr too many people have a knee-jerk "security through obscurity!" reflex action to things they don't like, and I have a reflex action of yelling at them about it, which sometimes misfires when I don't take care to reply at the right point in the thread.
"Security by obscurity" tries to keep the way that your encryption method works obscure, it does not try to keep a specific key obscure.
For example, if your way to encrypt works like this:
1) Shift all letters along by 5.
2) Cut out every second word and put them behind the message in order.
3) Whenever there's an f, s or y in a word, double up that word and shift the second word's letters by 7.
Then if your enemy figures out how your method works, you have to come up with a completely different method.
The opposite of security by obscurity would instead be to come up with a method that entirely depends on a key. You can then publicize that method (or not), and if your enemy finds out your key, you just choose a new key and you're fine again.
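To make it concrete, here's a toy sketch in Common Lisp (entirely hypothetical; a 26-value keyspace is trivially brute-forced, so the point is only where the secret lives, not that this is a strong cipher). The method below is public, and all of the security lives in the key:

    ;; Keyed Caesar shift: the algorithm is public, only KEY is secret.
    (defun shift-char (c key)
      (if (alpha-char-p c)
          (let ((base (if (upper-case-p c) (char-code #\A) (char-code #\a))))
            (code-char (+ base (mod (+ (- (char-code c) base) key) 26))))
          c))

    (defun encrypt (text key)
      (map 'string (lambda (c) (shift-char c key)) text))

    (defun decrypt (text key)
      (encrypt text (- key)))

If the key leaks, you call encrypt with a fresh key and move on; nothing about the method itself needs to change.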
Open source allows for the possibility of 'many eyes making all bugs shallow', but I think the open source community assumed that was a guarantee, at least in the case of mission critical software - it's important so obviously it's being scrutinized, right?
On the other hand, with closed source, people are presumably being paid to study the code: potentially fewer eyes, but a fixed number of them, as it were. But then, since it's closed source, no one really knows what's going on outside the company.
Wow. Offended much? Did I say (or even imply) I had one? All I did was provide a counter to the claim that open source means more eyes which could make your software more secure.
>All I did was provide a counter to the claim that open source means more eyes which could make your software more secure.
You really didn't say much about open source and its ability to find bugs; you just cited a particularly nasty set of bugs in an open source project as a way to condemn all open source work to being as bug-ridden as other methods.
It was more snark than a clean counter-example. Someone could just as easily point out the millions of bugs in closed source projects as an equal-caliber counter to your point; but I think it's clear to most of us that NO method we yet understand will result in bug-free code.
That violates Kerckhoffs's principle[0], a cornerstone of modern information security. I would run far, far away from anyone *cough*Telegram*cough* who claims "it's secure, don't worry about it" and otherwise refuses to expose their codebase to scrutiny.
That's just shitty hardcoding; sane human beings only build prototypes like that, not production code. Going open source would get that code reviewed and fixed, which means a positive impact on security.
I sort of get Reddit's reason for keeping the post-ranking algorithm secret: so that it's hard to game.
Until there's a way to easily 're-key' such an algorithm when someone finds a way to abuse it, keeping it secret is the best solution for them.
I currently can't think of a better way to avoid gaming of ranking algorithms than keeping them secret and changing/adapting them often. Maybe some machine learning algorithm?
What is it like maintaining a large project in such a dynamic language? I guess if you have proper unit tests and good coverage, you could quickly find any runtime bugs that you may unintentionally introduce with refactors/changes.
When in JavaScript, I'd always lean on TypeScript. Is there something similar for Python?
Never played around with Lisp, so excuse the ignorance. Is this typical to construct HTML in Lisp? This feels incredibly verbose and error prone, not to mention confusing and hard to grok. Good luck having a designer mockup/write HTML.
What you are looking at was designed by a designer.
Maybe you think another designer you know could make something that looks better if only they could use their own tools, instead of tools that this designer liked to use.
I remember an age when I would receive photoshop "designs" that I would have to re-code in HTML. Then they got dreamweaver and thought they didn't need me to code their stuff up anymore, just to debug their stuff and tell them why it didn't work in X/Y/Z browser combination.
When the lisp programmer is writing their login panel, they start out writing "code" like this,
    (defun login-panel ()
      (pbox "login/register"
then decide they need some HTML. `pbox` recognises HTML as long as it looks like lisp so:
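Something like this, perhaps (a hypothetical sketch; the exact markup pbox accepts is a guess, borrowing the (:td ...) form quoted elsewhere in this thread):

    (defun login-panel ()
      (pbox "login/register"
        ;; hypothetical continuation; PBOX and the markup are assumptions
        (:table
          (:tr (:td :colspan "2" "username:"))
          (:tr (:td (:input :type "text" :name "user"))))))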
But what is the alternative? To embed the HTML so that it still looks like HTML means making a string of some kind, and running the risk that there's some kind of variable expansion or XSS or other thing that will bite them in the butt. Maybe not in this simple example, but the need to get into the display language is very common in a web application, so there are lots of fragments like this to answer to:
Maybe this fragment, and bigger ones, we could move into a file and either read it on every request or add some extra code to save the contents into a global variable. But we haven't gained very much for all that complexity: the hypothetical "when" we get a designer who knows HTML and not Lisp, and who can contribute in a way that meaningfully moves the project forward without making more work for the other programmers, is far away at this point, and the reddit developers are just going to rewrite the whole thing in Python anyway.
Can you explain what you mean, if it's not one of the things I mentioned (that are often called "templates") and how it doesn't have the drawbacks for a small team that I mentioned?
I'm guessing they meant something like Jinja, EJS, Jade, etc., where you have your HTML in a separate file and your lisp code would just render said HTML file with the necessary variables.
I think you implied that in your last point but I'd be curious to find a designer who knew lisp better than HTML.
> I'm guessing they meant something like Jinja, EJS, Jade, etc., where you have your HTML in a separate file and your lisp code would just render said HTML file with the necessary variables
That doesn't benefit our designer who knows lisp well enough, and it doesn't benefit our programmer who knows lisp very well (or they would have done it).
> I'd be curious to find a designer who knew lisp better than HTML.
Well, if your team is a handful of lisp developers, then all of them.
Perhaps buried is the assumption that hiring/resources are free, or that there is some kind of gatekeeping for the term and title "designer". If it helps to make the point clearer: I program and I design, and I know lisp better than I know HTML.
That's why I like JSX so much. After 10+ years of trying different ways to output dynamic HTML, JSX is by far my favorite. I wanted to elaborate why, but I just figured that I don't know. I just like it better than everything else I've ever used.
It's not uncommon. An example in Racket is html-template.[0] Hiccup is one in Clojure.[1] There's a spectrum of methods for doing things like this: both Clojure and Racket have Mustache[2] implementations.
"Confusing and hard to grok"† is in the eye of the beholder. Syntax highlighting and indentation go a long way (as well as exposure), though even in black and white, the parens fade for me much more than angle brackets do. Personally, I'd wrap some of the longer lines, but that's a style thing.
It's less verbose than writing it as HTML (sexps > SGML markup for such uses). Moreover, it's the right way to go about it, because it treats HTML documents as the trees they are, instead of running free and wild with string substitution, creating both points of breakage and potential security vulnerabilities everywhere.
RE designer thing, I guess it depends on how you work. Frankly, Reddit to this day doesn't look like anything that has ever met a designer (and that's a good thing in its case). Still, if you want, you can just stick to the designer that does mockups in Photoshop, and skip the other designer that converts those into HTML.
Embedding templates in code is actually quite nice. I used to think like you, but then I gave React a try and it completely won me over.
Even without the transpilation step for JSX support, I find hyperscript [0] and friends [1] a notable improvement over other templating tools. Instead of having to jump to a separate file which magically inherits a bunch of implicit globals, you just call a function. If the system supports a component abstraction you can import them and compose views using the same programming constructs you use everywhere else.
An error in the template can be caught by your linter or compiler, since it's just regular code. I'd say it's actually safer, since you typically won't have to worry about escaping values manually. And along the way you can validate the input in order to confirm you only ever generate valid HTML.
Check out elm-css [2] as well, which lets you define your stylesheets with elm. This lets you take advantage of their type system to make sure all class names are referenced correctly. A typo or accidental deletion would cause it to fail to compile, allowing you to catch problems early.
I still don't get why non-code templating ever became popular. Especially since I saw it become popular in PHP, which is itself a better templating language than the templating languages people were using. Instead, people created a plethora of languages that slowly accrued Turing-completeness, because religious adherence to "no code in views" is stupid.
That said, glue-strings-together templates are still a problem and should not be used. An HTML document is semantically a tree, and if you don't treat it like one, bugs and security vulnerabilities follow.
(Respecting semantics applies to other languages as well. For instance, those who recognized SQL as a language, instead of some blob you concatenated out of strings, didn't have to worry about SQL injection.)
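To make that concrete, here's a hedged Common Lisp sketch using Postmodern (which comes up downthread); the table and column names are invented:

    ;; String concatenation treats SQL as a blob and invites injection:
    ;;   (format nil "SELECT id FROM users WHERE name = '~a'" name)
    ;; Parameterized queries keep the data out of the query's syntax:
    (postmodern:query "SELECT id, email FROM users WHERE name = $1" name)
    ;; S-SQL goes further and writes the query as a tree, like s-exp HTML:
    (postmodern:query (:select 'id 'email :from 'users :where (:= 'name name)))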
>I still don't get why non-code templating ever became popular.
Because if you're working on a team, not everyone on your team may be a programmer, particularly if they're just working on layout design, and the problems that templates solve in that regard don't require complete access to raw code. Separating one from the other makes it easy to focus on one versus the other. Just look at how messy a complex Wordpress template can get, and how difficult it can be, just reading it, to get a grasp of what the HTML would look like, versus a Twig template, which has a lot less noise.
>Especially since I saw it become popular in PHP, which is itself a better templating language than the templating languages people were using.
I've been down the road with PHP where everyone working on javascript and CSS also had to know just enough PHP not to break the site, because everything had raw PHP mixed into it, and then somewhere in the vast tree of includes including includes someone forgot to manually escape a variable that came from the database, or else they did, but now someone else escaped it twice, or they created some weird encoding problem because they were in a javascript context.
PHP is at best an adequate templating language, and a good base to build a sane framework on. But a framework is necessary beyond a certain level of complexity, because most projects benefit from features that raw PHP doesn't provide, such as context-aware and automatic variable escaping, template inheritance, template caching, etc. You will either wind up using an existing templating framework, or you will wind up implementing an ad-hoc, informally-specified, bug-ridden, slow implementation of an existing framework.
>That said, glue-strings-together templates are still a problem and should not be used. An HTML document is semantically a tree, and if you don't treat it like one, bugs and security vulnerabilities follow.
There's no way to get that in PHP without a template framework. PHP, unfortunately, doesn't even know that HTML exists[0], despite having the sole purpose of being an HTML preprocessor.
[0] I just remembered there are XML functions like DOMDocument that can be used to process HTML (with... effort), but I don't know whether or not you could get something like XHR out of it, and by extension, avoid the problem of HTML as concatenated strings.
> Because if you're working on a team, not everyone on your team may be a programmer
I think we're rapidly getting to the point in our civilisation that that's a bit like having an illiterate on the team. All programming is, is thinking logically & systematically about abstractions: everyone should be capable of thinking like a programmer — anyone who can't has a cognitive disability (like those poor folks who can't learn to read).
Fortunately, people who can't think logically & systematically about abstractions are pretty rare; people who won't think logically & systematically are sadly far, far too common.
> All programming is, is thinking logically & systematically about abstractions: everyone should be capable of thinking like a programmer — anyone who can't has a cognitive disability (like those poor folks who can't learn to read).
But not every aspect of web development is programming, any more than every aspect of book publishing is typesetting, or every aspect of film is camerawork. It's a lot more complex than it was when you could be a "web developer" with just Notepad++ (unfortunately.)
And we're not talking about "thinking like a programmer" in this case, but literal programming. Someone whose job it is to translate a Photoshop image or PDF into HTML and CSS can be capable of thinking in abstractions but still not need to write python or PHP or what have you.
> Is this typical to construct HTML in Lisp? This feels incredibly verbose and error prone, not to mention confusing and hard to grok. Good luck having a designer mockup/write HTML.
Do you mean that (:td :colspan "2" "username:") is verbose and error-prone compared to <td colspan="2">username:</td>, or what are you comparing it to? That code is probably denser than I would have written it, but I don't see any issue related to the fact the HTML is being generated from Lisp.
This example doesn't take advantage of it, but writing HTML in Lisp is easier and less error-prone than regular HTML, because you can abstract patterns and factor things.
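For example (a hedged sketch with CL-WHO, which appears downthread as part of the old stack; form-row is a made-up helper): once a pattern is a function, every form row on the site comes from one definition.

    (defun form-row (out label name)
      ;; one definition, reused for every row of every form
      (cl-who:with-html-output (out)
        (:tr (:td :colspan "2" (cl-who:esc label))
             (:td (:input :type "text" :name name)))))

    (cl-who:with-html-output-to-string (out)
      (:table (form-row out "username:" "user")
              (form-row out "email:" "email")))

Try doing that in raw HTML without copy-paste.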
How is it verbose? It has much LESS noise than HTML.
IMHO, Lisp is the simplest and smallest readable representation of data structure.
I will trade JSON for edn [1] any day of the week.
I share your zeal, but the answer to apparent incredulity isn't to raise the level. People have different tastes when it comes to programming languages and methods, and that's okay. There are often things we can learn from different techniques.
To our parent's points, it's more verbose if you're used to passing data into a separate template file. That can help abstract the markup from the data processing. To their point regarding designers, many are more comfortable working with HTML files than application code. That can be a benefit for many teams. Clearly some lispers agree, as they've implemented alternatives like Mustache templates.
The notion of incredulity probably comes from the fact that it's literally impossible for HTML encoded in s-expressions to be more verbose than HTML encoded in HTML!
> To our parent's points, it's more verbose if you're used to passing data into a separate template file. That can help abstract the markup from the data processing.
One does not exclude the other. The "template" can be another lisp function (in another file).
> Clearly some lispers agree, as they've implemented alternatives like Mustache templates.
Probably popular demand. Personally, I don't think they should be used, like ever. Those template languages were a doubly bad idea.
First, they work on HTML serialization to string instead of HTML semantics as a tree, making it easy to introduce both bugs and security vulnerabilities. The concepts of "escaping", "sanitization" or "injection" are only relevant when you're abusing a serialization format instead of working with the language at the appropriate level.
Second, they all seem to have started with the assumption that "views don't need code". Each of them, over time, slowly discovered that no, views actually do need to have code, and slowly gained variables, conditionals, loops, function calls... and turned into shitty Turing-complete languages they tried so hard not to become. The problem here was always the premise. Views absolutely do need code, and so they should be written in a proper programming language.
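To illustrate (again a hedged CL-WHO sketch; item-list and items are made up), the "code in the view" is just ordinary Lisp, not a bolted-on mini-language that grew loops by accident:

    (defun item-list (items)
      (cl-who:with-html-output-to-string (out)
        (:ul
         ;; a real LOOP, not a template-language imitation of one
         (loop for item in items
               do (cl-who:htm (:li (cl-who:esc item)))))))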
Writing templates in Lisp, I think, is an awful idea. Template languages have evolved for a long time to include all the things you will probably need.
For example, imagine if you want to test some HTML code in a browser. The browser doesn't understand Lisp, so you have to write it in HTML. And then you have to convert it to Lisp to use it as a template. What a waste of time.
There are all kinds of template languages around. There are editors that can highlight the syntax, there are plugins to write HTML faster. I can't imagine how you justify writing your own template engine. Probably that is just NIH syndrome.
> First, they work on HTML serialization to string instead of HTML semantics as a tree, making it easy to introduce both bugs and security vulnerabilities.
This can be easily solved by validating HTML code.
> The concepts of "escaping",
Template engines do the necessary escaping automatically. For example, in Twig you just write
<div>{{ user.getName() }}</div>
and don't have to worry about escaping. I guess in Lisp you cannot even write user.getName() because it doesn't have objects.
> Views absolutely do need code, and so they should be written in a proper programming language.
In my experience, template engines like Twig have enough features. If you need some complicated logic, you can write a helper function.
> Template languages have evolved for a long time to include all the things you will probably need.
Which is what turned them into Turing-complete languages that try to pretend they're not.
> For example, imagine if you want to test some HTML code in a browser. The browser doesn't understand Lisp, so you have to write it in HTML. And then you have to convert it to Lisp to use it as a template. What a waste of time.
Not much time is wasted, especially since s-exps are faster to write than HTML (especially with editor support). Browsers don't understand templates either, so in both cases you have to do some conversion. I'll give a point to HTML-ish template languages here, but I don't believe the time spent manually converting large chunks of template code into browser-edible HTML and back is anywhere near the critical path of the project.
> This can be easily solved by validating HTML code.
That's validating vs. making it literally impossible to emit invalid HTML.
> I guess in Lisp you cannot even write user.getName() because it doesn't have objects.
Haha. Right. No, Lisp actually has objects. You'd write it as (get-name user), since the syntax for function calls in Lisp is consistent.
This gets complicated quickly. For example, what if you need to add a CSS class, a placeholder or some custom attribute to an input field? You'll have to write code for every such case.
And you can do the same thing with template engines like Twig, using macros. Furthermore, web frameworks like Symfony already have the macros that are necessary to display a form. You don't even have to write them, and that is a strong point of Symfony compared to a weird self-made Lisp template engine.
A nice thing about Lisp is you can do whatever you want without much getting in your way, though this is a pretty common way of doing HTML for get-it-out-the-door work. You can get a whole lot fancier (https://github.com/Day8/re-frame is an example from ClojureScript land). But the context of what you're making determines the important variations, and whether you ever let a designer (or a programmer) touch this example at this level of resolution, or touch something roughly equivalent like a separated-out HTML file (or whatever template system you like) that the Lisp function just wraps. Example details that could be important in general, not just for this specific code repo:
This thing has no concept of localization, is that a constraint? Accessibility? Are there dozens of other engineers working at this level of detail you need to worry about? Is this the output of a higher level tool used by either coders or designers? Is there a coherent component architecture behind this or is it just more of a utility function someone made? Do components need to be namespaced? Live in their own files/packages? Is the thing you're building a site with several unique pages or a SPA? Where are the JavaScript implementation points for things like the on-click's register() function? Or are you going to forbid JS in your markup (because it confuses designers, or because of CSP policy) and force programmers to bind things elsewhere? How does your routing system work, is it fine to hardcode those paths like that?
You may find varying answers depending on who you're asking and if they have any experience with Lisp.
As someone who doesn't, and coming from a Rails background, the HTML example is 100% easier to read in my opinion. You don't have to go against the grain to get the output you desire; you can just write it as it is intended to be rendered.
I'll take sexps over sgml any time. Why do you think this is worse than the corresponding HTML code? You don't have the madness of the closing tag having to repeat the name of the element, the odd idiosyncrasies with self-closing tags, the arbitrary division and restrictions between attribute and children nodes...
And of course, in the context of writing lisp code, it means that you have a uniform syntax instead of having some parts of the file in HTML and others in another language (which is often a pain with code editors, in my experience).
It still baffles me that SGML and its descendants are so popular. They're quirky, terribly noisy for markup, incredibly large and slow for serialization and of course every variation introduces and removes arbitrary limitations.
Exactly. The thing that I always remember is that identifiers encoded this way simply can't get out of sync between client and server. Commands, paths, enums, ... Where a form is posted to, say, is an easy constant that defines both the routing AND the string in the form. And forms always have their fields defined in a struct, never more, never less.
And this can result in sites that are very fast to add things to, as opposed to having dozens and dozens of files everywhere.
But everyone wants "single page apps". Because
client -> server ---[html]--> client --> server --[javascript]--> client --> server --[4*ajax requests necessary to fill in the initial page]--> client --> server --[4*images necessary to fill in said javascript]--> client
Is so obviously "much faster" than:
client -> server --[html+embedded everything]--> client
It depends and is a matter of taste. I'm very much a Common Lisp aficionado, but I've come to dislike embedding HTML in code like this (also due to experiences in other languages).
It's not error prone; maybe it's even less so, since it can be checked by the compiler.
My preference is to use templating because it matches the final output more closely and is, like you said, easier to show to other people / designers. However, even templating sometimes gets turned into a confusing enterprise mess, where it becomes very complex and hard to know where values come from.
It's common (and I agree it's ugly) but it's far from the only option. I'm using a library [1] that generates HTML from Google's Closure Templates [2] which any designer can easily understand.
A designer can just write HTML or some toy language.
A programmer will translate it to something manageable eventually anyway. A trivial automatic process can convert HTML to a DSL. Although making code sensibly refactored and readable is usually a manual job for the programmer.
Heh. If you see [1], they define a function called "user-email" (spelled correctly). The lisp convention here would be to define the accessor as something like "%user-email." While the code isn't too bad, these little things betray a lack of knowledge of lisp lore. Perhaps another reason for the rewrite was that the team actually didn't have much experience writing lisp code.
Sure, sure. But while it's definitely not in the standard, I do see it all over, and it's something I picked up from reading a lot of other people's lisp. SBCL internals, for example. You might not like the convention and chose not to follow along with it, but I would be surprised if, after 18 years of lisp, you had never seen it before, and would choose to misspell an accessor to prevent a clash instead of naming it differently.
Edit: ah, I think I understand. I don't mean to imply that all accessors should be named in this way. That would be gross.
To be explicit, the old Lisp convention I know of, which is not a super common one but one I've definitely used quite a bit, is that you might name a slightly lower-level/more primitive version of a function with a %-prefix.
So if we have a user-email function that is just an object slot accessor, we could have %user-email that actually does a database query. I can't remember for sure, but I wouldn't be surprised if I'd seen a double %%-prefix used, too.
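Something like this (a hedged sketch; the database call is illustrative, not from any real codebase):

    (defclass user ()
      ((id :initarg :id :reader user-id)
       (email :initarg :email :reader user-email)))  ; plain slot reader

    (defun %user-email (user-id)
      ;; the more primitive variant: goes straight to the database
      (postmodern:query "SELECT email FROM users WHERE id = $1"
                        user-id :single))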
You don't always want to put those functions in separate packages (traditionally in Lisp, packages are relatively heavyweight--e.g. an entire web server framework might have only one or two packages).
Another similar example might be the convention of defining a macro with the name my-macro, and a function that actually implements much of the macro, or is an equivalent of the macro that takes a thunk, that is named my-macro*.
(24 years of lisp here--I wouldn't be surprised if the conventions of mainstream lisp during the earliest six years were somewhat different than the following decades.)
Edited to add: Now that I think about it, this convention may have been heavily used in the Macintosh Common Lisp community.
Only in implementation internals? I'm fairly certain I've encountered it in other unexported package symbols in library code, things like struct constructors where you want a "smarter" "make-my-struct" function. But perhaps I misremember.
> What will you do if three or more modules want the same function name? Tack on %%, %%%, ...
Certainly not!
I think your points are all very fair: it would be a smell, and is evidence of a need for improvements at a more structural level. It just smells a little less than a seemingly purposeful misspelling to my nose.
Is this really all the source code? It seems to be missing the database schema, which is pretty critical to using it.
Beyond that, this is actually fascinating to look at. It's small enough that you can actually understand it, but it's more complete than a simple 'toy' application. I'm also fascinated to see how SQL constructs are expressed in LISP. I don't know LISP at all, but it's pretty obvious how the queries in the language map directly to SQL.
The interesting thing is you can still use pretty much that entire stack today. The only one that stands out as outdated is TBNL, which changed its name to Hunchentoot.
The stack I used for a website just the other day:
Some Ediware in the dependencies (CL-PPCRE, CL-WHO and Hunchentoot from back when it was called TBNL). I guess they used CL-SQL rather than Postmodern because the latter didn't exist back then.
Should be reasonably easy to get it running on a modern CL.
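For the curious, a minimal sketch of that stack on a modern CL (assumes Quicklisp; the handler and port are made up):

    (ql:quickload '(:hunchentoot :cl-who))

    (hunchentoot:define-easy-handler (front-page :uri "/") ()
      (cl-who:with-html-output-to-string (out)
        (:html (:head (:title "reddit, circa 2005"))
               (:body (:h1 "hot links")))))

    (hunchentoot:start (make-instance 'hunchentoot:easy-acceptor :port 8080))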
Unrelated to the original post: Does anyone know if Reddit keeps or shares records of content that ends up on the (unauthenticated, uncustomized) homepage (top 30 results at any given time) for US-based users? If so, how far back does such data go?
I don't believe the mainstream archive sites would be a definitive source. Perhaps there is another?
Cheers. Looks like this most recent work was primarily front-end:
> "When we set out to rewrite our code to solve these problems, we wanted to make sure we weren't just fixing small, isolated issues but creating a new, more modern frontend stack..."