There's a Star Trek TOS episode, "Errand of Mercy", where the simple, primitive inhabitants of Organia are caught in the middle of a war between the Federation and the Klingons. Kirk is baffled because the Organians refuse his help to defend them against the Klingons. Finally, when the Organians have had enough, they simply make all the guns disappear, revealing themselves to be a much more advanced species than either of the fighting races.
Lisp reminds me of the Organians. It seems simple and kind of boring until you try something you know is impossible and it just works. It spoils you for other languages.
Impossible might not be the right word. When I was much younger, I would describe it as seeing things done easily in Lisp that I had no idea how to do at all.
Lisp could easily express ideas that I didn't know how to express in other languages. Just as Pascal had made it possible for me to express ideas that I had no idea how to express in BASIC.
Lisp gave me new tools. Just as earlier Pascal had given me new tools.
The Organians told Kirk and the Klingons that they would one day be friends and rise to their level. Fittingly, languages of the '90s and 2000s borrowed more and more from Lisp, starting with garbage collection, which most languages of the last quarter century now have.
This effect is real for some people, even though a lot of people don't like it.
It's like some post-war syndromes, where people feel hollowness and pointlessness when they see others living normal, meaningful lives, or when they have to go back to the village after the big city.
It's like a red pill: something always bugs me, like "I wish I had macros here", or "type classes here", "Erlang processes here", "Hindley-Milner or structural typing here"... Those languages are not even that outstanding on their own; the trouble is that the comparison defeats the motivation to make small in-the-box improvements like "I'm going to use some OO pattern to make this more SOLID".
You can also try hard to slowly push the industry forward, but judging from the history you have read, it still takes decades for such ideas to become "professional", which sounds very defeating for mortals, too.
Imagine being thrown back to an early-'90s project where the mantra is still "if it ain't broke, don't fix it", and everyone endlessly patches everything while rejecting new technologies because no one else is using them.
Some people have a psychological or emotional reaction to code, similar to the way others do to poetry or prose, and sometimes the shape and structure of code gives people a sense of order and centeredness, in a similar way to OCD. The relative simplicity of Lisp compared to other languages probably makes these effects more acute.
I don't know if that's necessarily a trait of the paradigm or of some of its adherents. I've felt that same calming effect but with other languages - and honestly I've found lisp code to be masochistically terse and opaque a lot of the time, although I've only messed with Arc and dabbled with Clojure and Racket. I guess that just means I haven't found enlightenment yet.
> Some people have a psychological or emotional reaction to code, similar to the way others do to poetry or prose, and sometimes the shape and structure of code gives people a sense of order and centeredness, in a similar way to OCD. The relative simplicity of Lisp compared to other languages probably makes these effects more acute.
I would really dislike working with such people in a professional setting. Can you imagine doing PR review with someone who thinks code should inspire an emotional response?
You know what’s beautiful? Simple and effective things, even if you can’t fall in love with the way they look.
I think it would be hard to master programming without developing such an attachment -- emotion is what motivates people.
You've become emotionally attached to the operation of the program, it seems, in being "simple and effective".
I suppose then you have mastered, or are mastering, a kind of engineering.
It isn't clear to me why working with you would be any more or less pleasant than the other kind of a person. Sometimes a master of one craft is needed to repair/supplement the oversights of another. Arguments you cause about the "efficiency" of some approach are no doubt a problem, and may injure the long-run intelligibility of the project.
Those who master programming as such are the frontal lobes of teams who master engineering -- providing a global, planning, synthesizing, rationalizing perspective which keeps the program intelligible.
"Simple" is very subjective and could be hard to achieve (so is "easy" though). In many objective aspects Lisp is much simpler than "professional" languages in terms of spec, model, symbol.
Languages like Haskell are traditionally considered "hard", but Haskell also has a lot of simple aspects. For example, at first glance there are no variables, only constants. Then it turns out there aren't even constants, because a constant is just a zero-parameter function. In the end, almost everything is a function, including tuples, which look like a compiler primitive. There's also no such thing as multi-arity functions, because of currying. Haskell eliminates most unnecessary concepts in order to achieve "simple", yet it's infamous for its "hard" reputation among "professional" programmers.
PS: Surprisingly, Scala has a much shorter spec than Java, and Scala is one of the languages most attacked for being bloated.
I think it's less about believing code should inspire an emotional response (although what is "elegance" if not an aesthetic judgement?) and more about being uncomfortable around code that doesn't inspire that response in yourself.
For instance, Python developers' preference for significant whitespace, or the visceral disgust with which some regard javascript or OOP, go well beyond mere technical considerations.
Uh... Clojure is a Lisp and supports full-blown macros. Sure, 95% of the time functions, plain data, and a bit of polymorphism are all you need; those higher abstractions are for flattening the curve on making easy-to-use libraries. They are sharp tools. Go decides to hand you safety scissors; Clojure gives you a high-end razor-sharp knife and expects you to be educated in its proper usage.
I remember a lot of macro-based libraries early on; however, it turned out they were not composable, so they grew plain functions-and-data interfaces. New libraries don't use macros much.
AFAICT the role of macros in modern Clojure is limited to new control constructs.
This happened to me too. The more I learn about alternative programming methods, the more my brain constructs elegant solutions that completely go around the problem at hand, or even the conditions which led to the context of the problem. I've reached the point where I mainly do data-driven and declarative programming with low-code or no-code in spreadsheets or GUIs. I feel almost physical pain now when I have to write traditional imperative or object-oriented code, because 95% of it feels like a waste of time to me. As in, most of the effort expended by a programmer is in initial setup, remembering mutable state, poorly-architected class hierarchies/interfaces, and rapidly breaking dependencies that only last a few months at most now. Less than 5% of what I do is the "real work" of solving the actual problem.
Not really sure where to go with this, but I found that getting away from programming for 6 months or so this last year has really helped my psyche.
No need to kill it per se; it would have been enough to allow Bell Labs to sell it at the same price as VMS, OS/360, and other competing mainframe OSes, instead of distributing its source code for a symbolic price.
The most that would have happened is Bell Labs would have shut down the clones. There were already Unix clones in 1979. I worked on Cromix at Cromemco before I worked on Mesa/XDE at Xerox.
If I remember correctly nobody outside Xerox used Mesa or Cedar or any of the other amazing things at Xerox because Xerox never sold them. You must have worked at Xerox.
I went to Apple after Xerox and programmed in C++ using a pretty bad development environment. I certainly missed Mesa's exception handling and development environment. Mesa lacked garbage collection, though, as did C++ at the time.
I think Xerox was pretty renowned for their “we’re a photocopier company; we make and sell photocopiers” attitude, or they’d never have given such golden opportunities away. Much as Kodak is a photographic film company that makes and sells photographic film… oh, wait.
Yeah, garbage collection came into Mesa via Cedar, with reference counting alongside a cycle collector, hence Mesa/Cedar, the replacement for XDE.
Had UNIX been a commercial product from day one, there wouldn't be any clones to start with, as they were mostly bootstrapped by the free beer source code that was taken via backup tapes from Bell Labs into a couple of collaborating universities.
I'm not sure about that. What you describe was enough to bootstrap university interest. That didn't lead Sun, Apollo, SGI, et al to decide that Unix was the right OS for making workstations, though. It didn't lead Sperry to decide that they needed to have a UNIX available on their mainframes (!) back in the mid-80s. And those were commercial licenses, not university bootlegs.
What are you trying to say? Are you saying that the commercial Unix licenses (to Sun et al) were only for a nominal price? Or are you saying that they were not for a nominal price, hence the BSD suit?
I agree that "the context changed" - IIRC, when the AT&T consent decree expired. That was 1982. Sun Microsystems was also founded in 1982. It was based on BSD, which had to go through the lawsuit to be proclaimed, essentially, a cleanroom Unix.
When Sun licensed from AT&T, (early 90s?) it was (I presume) at market price.
All that detail doesn't change my point: People cloned Unix because they wanted Unix. There wasn't a Berkeley VMS, not just because VMS was expensive, but because nobody wanted VMS that badly. (Why clone the thing you're getting for close to free, anyway? Doesn't it make more economic sense to clone the thing that you can't afford (VMS or OS/360), and pay the nominal price for the almost-free thing?)
People cared about Unix in a way that nobody cared about VMS or OS/360. Perhaps the ability to see the source code was part of that. But the story you're telling seems far too simplistic.
To clone UNIX in whatever form, they had to have had access to it in the first place.
Which they got via the tapes that flew out of Bell Labs into several universities, and via the commercial licenses that were almost gratis, given the price difference from other OSes.
From personal experience, Cromix was written in 6502 assembly language. The comments were all in C, but personal computers were not fast enough to run a C OS in 1979. I doubt it was licensed, since they reverse-engineered the API (which is legal). I actually learned C by reading the comments.
Small piece of history - Cromix was written single handedly by Roy Harrington who left Cromemco to found Informix.
Having said that - Mesa/XDE was awesome. I am sure Mesa/Cedar was even better.
> People keep reinventing history as if UNIX won based on merit, when it was licensing that made all the difference.
I think you keep reinventing history as if licensing was the only thing that made the difference. I think you are badly mistaken. The licensing may have made it popular with the university crowd, but it didn't do much to move the workstation customers.
This was so interesting. The way it handles copy & paste seems better than our current convention. Its use of typography was elegant. I suppose that is a benefit of using a text editor that can also format documents for publication. I don't understand why open source projects don't build on original great ideas like these and those found in Symbolics' Genera.
I have seen this said about almost any non-mainstream language from Haskell to Perl.
I think it comes down to the fact that these languages are mostly used by enthusiasts on their own projects, without the daily realities of evolving business requirements and a legacy code base built by mediocre developers.
Perl has never been non-mainstream. It's possible the average age of HN is too young to be aware of this today, but Perl powered the web during the 1990s and early 2000s. There are plenty of legacy Perl code bases out there, even ones that have nothing to do with the web (think data munging and ETL).
I've met some of them a couple of times: people rewriting production Perl and Bash scripts into Groovy and Java shortly after being assigned maintenance roles for them, because "it is better" (aka "I have no idea and only know Java").
"Allen: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization...The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue....
Seibel: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?
Allen: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve. By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are ... basically not taught much anymore in the colleges and universities."
-- Excerpted from: Peter Seibel. Coders at Work: Reflections on the Craft of Programming
"A consequence of this principle is that every occurrence of every subscript of every subscripted variable was on every occasion checked at run time against both the upper and the lower declared bounds of the array. Many years later we asked our customers whether they wished us to provide an option to switch off these checks in the interests of efficiency on production runs. Unanimously, they urged us not to--they already knew how frequently subscript errors occur on production runs where failure to detect them could be disastrous. I note with fear and horror that even in 1980 language designers and users have not learned this lesson. In any respectable branch of engineering, failure to observe such elementary precautions would have long been against the law."
> C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine.
So C caused everybody to get a lobotomy? C caused all the existing literature to disappear? How did C do any such thing as this quote claims?
The only way the quote makes any sense is if C made people decide that automatic optimization, automatic parallelization, and automatic mapping of a high-level language to the machine were not actually the direction we should be going in. I'm deeply suspicious of the "We're on the right path; all those others are fools who have been led into temptation" view of the history of computing. Just because of how comforting it is to those on the minority path, it is suspect.
C made mediocrity and unreliability the accepted standard. As parent says, C’s a great choice for writing OS kernels. It should have stayed there.
Imagine if upon purchasing an automobile you had to agree that if the wheels fly off and the engine explodes and you all die horribly in a hurtling ball of flame, you (or your estate) have absolutely no rights to sue the manufacturer for that outcome. Look at the Ford Pinto; look at Merck’s Vioxx; look at every bit of employee and customer safety legislation written over the years and why it got written in the first place. Now go read your software EULAs, and explain to us how a culture of shameless recklessness borne on zero liability is so positive for progress again?
As I’ve said before, geeks are not the only ones to blame for this, but they are the ones who actually enjoy it this way.
There is a third kind: the ones that cause billions in monetary losses from fixing security exploits.
It was a big mistake for the US government to forbid Bell Labs from selling UNIX in its early days. C only became widespread because it had a free-beer compiler and OS to ride on.
C is not good. At least, not in the way parent means “good”.
C is this brilliant awful crazy-successful hack, yak-shaved into existence just to bootstrap an entire brand new OS (UNIX) in record time; subsequently Peter-Principled into domains it really has no business being in (basically anything userland). This is why, fifty years on, we’re still fighting memory corruption and security holes and generally doing an awful job of parallel computation and everything else C is not very good at.
Everyone should learn C, if only to understand where we’ve come from, where we are now, and where we should be trying to go in future.
Languages like Lisp and Forth and Smalltalk are fundamentally different beasts to Algol’s descendents.
Most developers, having been raised in a pure C/Algol culture, give no thought to the base complexity and limitations inherent in those languages; they just assume that is how computation is.
What the Lisps do is reveal all that complexity and limitations to be artificial constructs, imposed by the designers of those languages, whether as convenience to implementation (as in C, where features like flow control statements map closely to underlying machine instructions) or simply because that’s what they’re already used to (c.f. 90s scripting languages like Python and JavaScript, born on the back of mature, ingrained C culture).
..
Here’s a question to get you thinking: Why do Algol family languages have hardwired flow control statements? Not because that’s how programming needs to be, but because that’s how they’ve always done it. Often there’s some special-case behavior involved, e.g. deferred evaluation of operands, as in the case of conditional (`if`) statement bodies, and hard-wiring that particular special-case into the language guts is a quick way of addressing that need.
Heck, even traditional Lisps (1 & 2) are guilty of hardwiring hackery (don’t let its uniform syntax fool you: Lisp is ridden with special forms, including conditionals). You have to go to something like John Schutt’s Kernel language to see that problem solved correctly, by extracting the special-case behavior (lazy evaluation) into a general, reusable feature. Schutt calls it “vau”, but it’s just a way to dictate how and when arguments should be evaluated at the language level, instead of baking that behavior into the language implementation where it can be neither modified nor reused.
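You can approximate this in an ordinary Lisp by making callers pass thunks; what Kernel does is promote that deferral from a calling convention into a first-class language feature. A rough Common Lisp sketch (the names are mine, not Kernel’s):

    ;; A conditional as a plain function: callers must defer each
    ;; branch themselves by wrapping it in a lambda (a thunk).
    (defun if* (test then-thunk else-thunk)
      (if test (funcall then-thunk) (funcall else-thunk)))

    (if* (> 2 1)
         (lambda () (print "taken"))
         (lambda () (print "not taken")))
    ;; Only "taken" prints; the untaken branch is never evaluated.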
What’s the upshot? Well, first, you realize that 90% of what Algol languages treat as core features aren’t “core” at all. They can be pulled out, generalized, and expressed as plain old composable functions.
Suddenly your “core language” collapses from something big and complicated, with dozens of special-case features baked in, each with its own special syntax and behavioral rules, down to something ridiculously simple: an evaluator, a mechanism for composing behaviors (functions with vau support), and maybe some primitive datatypes to get you started (although even those can be pushed out into libraries if you want), … and that’s it. It is the mother of all refactorings, and it reveals the simple truth.
All those features and complexity the Algols had us believe were essential prerequisites to computation and programming… all gone, pushed out to external libraries which the user can freely import, modify, replace, or ignore as they need. Once you realize how small, simple, and elegant computation can (and should) be, you’ll wonder what the hell all that vast intricate pomp and ceremony is really for, and who it really serves.
Programming languages are big and complicated not because that’s how they need to be but because nobody tries to make them small and simple. But as Tony Hoare said:
“There are two methods in software design. One is to make the program so simple, there are obviously no errors. The other is to make it so complicated, there are no obvious errors.”
So if a language already falls in the latter camp, then Dog only knows what all the software subsequently written in it looks like!
This is where C falls outwith its original purpose, and it’s telling that subsequent popular attempts to fix it have done so by adding even more complexity on top, e.g. C++, Java, Swift. (It’s only recently with the likes of Rust that trend has started to reverse, but Rust is still a Complex language compared to a Forth or a Smalltalk.)
..
Now, there is a tendency in Lisp circles to rave about macros as if they’re the True Revelation, but they’re not. The revelation is when you realize you have the ability to reshape a language you are given into the language that you need; a language that precisely and concisely expresses the concepts and behaviors that are of interest and relevance to you in your particular problem space. Lispers describe this as “bottom-up programming”, but the term hardly does it justice.
Find yourself writing “if not TEST do ACTION” a lot? Extract that out into your own first-class “unless TEST do ACTION” command and use that going forward. Want to evaluate commands non-sequentially according to your own [e.g. priority-based] rules? Done. Don’t want loops? Leave them out.
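In ordinary Common Lisp that first one is a two-line macro (and CL in fact ships UNLESS already, so this sketch renames it):

    ;; Grow your own control construct: run BODY only when TEST is false.
    (defmacro my-unless (test &body body)
      `(if ,test nil (progn ,@body)))

    (my-unless (probe-file "/tmp/app.lock")  ; hypothetical lock file
      (print "no lock found, proceeding"))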
When you write an “Algol” program, the language always controls how your program is evaluated. If you don’t like the way it does it, tough. When you write a “Lisp 3” program, you get to control how your program is evaluated. You aren’t stuck with a fixed language-level set of first-class flow control statements that may or may not meet your needs, plus a weak second-class user-level mechanism for kind of creating your own (C-style functions, with their inherent limitations). There’s only one level: everything is a first-class feature, whether it’s provided by the language or added by you.
..
We often think of C as a “general-purpose language”, but I think this is wrong and unhelpful. We should really think of C as a DSL, one created specifically for bit-twiddling and crude algorithmic number crunching (which is what it’s good at). The true general-purpose languages are the meta-languages, because they have the power to be whatever we want them to be.
Working in C, you always speak in C. In a “Lisp 3”, you can speak in any language you like; you just have to grow that initial minimalist foundation of key building blocks into the language you want, and then off you go. It is the difference between Algorithmic Thinking and Compositional Thinking.
Yes, algorithmic thinking has achieved a lot in the last 50 years, but the software we’ve built that way has a lot of problems—safety, reliability, scalability, learnability, verifiability—that I strongly suspect are inherent to that approach. We really need to try compositional thinking to escape that trap, but we can’t even start to do that while the languages we commonly use (Algol family) are themselves fundamentally crap at compositionality.
--
Lastly, I think Nile (https://news.ycombinator.com/item?id=19844088) requires an honorary mention in parent’s list; not because of the language itself but because of how it’s constructed: not only is it a composed language, but the language it’s written in is a composed language too. If that doesn’t melt your mind at the possibilities, nothing will!
The problem with "you can roll your own language constructs" is that, er, everybody wants to roll their own language constructs -- as opposed to simply using a library of common ones that have been tested and proven stable and useful. That's what Algol-like languages give you. They free you from the pain of having to roll your own language and let you get on with developing applications. As an added bonus, developing to a single commonly accepted language spec means your boss can read the code and figure out what it's doing without having to understand the entire code base at once. (Ever been a neophyte on a Rails project? Multiply that bewilderment by like a thousand.) That's why they're so accepted in business.
I think that is much more a symptom of not having good core libraries built into the language in the first place. Programs written in modern C++ tend to make heavy use of the STL data structures. They aren't perfect, especially the maps, but they are solid and rarely a bottleneck. If you look at random numbers, time, the filesystem, and even iostreams, there are plenty of rough edges to talk about if you want, but most people don't try to do their own thing in those areas because most programs don't need it at all.
People will fill in gaps, so it depends on what gaps are left and how big they are.
Algol languages give you a set of constructs that are the expressive equivalent of banging two rocks together. I would hope that good programmers, given grown-up building blocks, would use them to express concepts a bit more sophisticated than that.
Your argument is a common one, but it’s founded on a flawed premise: that Algol-language users know what they’re doing. Sure, they can read and write code written in that language, but how many of them actually understand the problem domain in which that software ostensibly solves problems? Truth is, a lot of programmers today are simply faking it: they don’t understand the business; what it does, why it does it, why it does it the way it does, and what the problems are that it has doing it. They know how to write code, and that’s all they know; or are interested in knowing.
For that type of programmer, rolling language constructs for the sake of rolling language constructs is simply a way to make themselves appear busy and productive without actually having to produce anything of value. Because to write useful code they’d have to learn the user’s business first—and that’s exactly what they want to avoid, because they hate having to learn stuff they’re not interested in and really don’t want to do it.
Take away the Lisp and give them an Algol; won’t make a blind bit of difference. They’ll still churn out the same endless makework, only this time it’ll be expressed as vast convoluted pointless class hierarchies and reams of autogenerated boilerplate. (You mention Rails: Worst Offender Ever.) Idiots will write useless code in any language.
Business accepts that type only because it doesn’t realize how badly it’s being scammed by them; which as often as not is because the managers responsible for running that business are an absolute bunch of know-nothing bullshitters themselves.
..
The only people who will benefit from Lisp expressiveness are those who appreciate that the language itself and the code written in it is the least important part of the whole process. Because what they’re interested in is understanding and solving the users’ problems, and if they have in their toolkit the ability to construct a language that talks in the language of the business itself then so much the better for solving it.
You don’t have to be a great programmer to do this, and do it successfully too. (I’ve done it, and I’m a bear of very little brain.) You just need to understand the business itself; to get in the shoes of the folk who do those jobs day-in day-out and walk around in them till you see their world as they do. Once you can speak the language of that business well enough to “pass”, making a machine speak that language too is NBD. And once the machine speaks it, well, there you go. Cos that’s not a language that bangs two rocks together. That’s a language that lays a six-lane freeway in fresh-baked tarmacadam, all painted and ready to roll.
Code is not the product. Code is just tedious crap you have to wade through on your way to the product. As a developer, nothing pleases me more than not having to write code; or at least no more code than is absolutely unavoidable.
..
Those Nile links are good ones, I do recommend following them up. How to write a graphics pipeline in a hundred lines of code. Imagine if every program we wrote was like that, even programs that currently run to tens or hundreds of thousands of lines in those familiar “safe” Algol languages.
That’s what we should be shooting for in this profession, cos I dunno about you but I’d rather learn to read a hundred lines of code written in a custom language that’s tailored to that domain than 10,000 lines in a lowest-common-denominator crapfest like C++.
> Those Nile links are good ones, I do recommend following them up.
The work that Piumarta and Amelang et al did on STEPS is totally underrated. It has been quite sobering to watch these ideas go ignored by the larger community.
Agreed. Kay & co’s lack of follow-through pains me something awful. Great ideas, promising experiments; but without taking it all the way to finished, proven product, that’s all they’ll ever be. And definitely, if they don’t do it themselves, no-one’s going to do it for them.
For the other truth is that most people just plain don’t like change, least of all radical change. And programmers are no exception. I mean, they may fancy themselves as brave pioneering visionary techno-utopians building this wonderful new world for everyone, but truth is most are so reactionary ultra-conservative they’d make your crazy John Bircher great uncle blush, and the only future they’re building for everyone else is the one that looks exactly like their own past. Because that’s where their comfort zone is, and that’s where they intend to keep it.
Me, I don’t have a comfort zone to preserve: I’m completely uncomfortable everywhere in life. But it is, at least, liberating. :)
--
“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.” – George Bernard Shaw
All of those guys had funding trouble. Even HARC didn't last more than a couple of years, I think because the funders pulled out.
That Nile/Gezira has languished is especially tough, but I think that's partially because Amelang had some personal troubles [1]. Note that at the bottom of that Github issue there is a link to a recent talk by Amelang in which, eventually, he talks about a new language he is working on called Bert or something. I'm thinking it's a next-gen follow up to Nile.
Yeah, I’m not surprised (tell me about funding…); even so, seeing it in black&white is still a gut-punch.
I mean, any one of FAANG alone could easily run a PARC or VPRI just off loose change in their kitchen kitty. And sure, it’s totally speculative blue-sky R&D that might [probably] never pan out; yet if no-one ever splurged on such gambles now and again we’d all be here having this argument by parchment and quill pen instead!
We learn by trying and failing, not by being afraid even to try. And in learning we succeed.
I have to admit: I've never been able to fully wrap my head around Piumarta's papers for STEPS (in particular the COLA stuff), but think I "get" about 30-40% of it. Even that part I understand sounds like magic.
Probably giving myself way too much credit even with the 30%. It's this paper [1] that I've struggled with for years. There is some intuitive part of my brain saying that there's something awesome happening here, but I can't quite understand it. The object model is pretty much the only part I do understand. Would love to have someone explain this all to me like I'm 5.
TL;DR: Those who program in Lisp know what they're doing; those who program in C don't.
Spare me your smug condescension. Your position is founded on a flawed premise.
You're in the minority for a reason, and it's not because you're smarter (or because you "get it" or "understand what matters" or whatever). Lisp is not the one right way; it's the right way for some problems and some circumstances. And the same is true of C. Pick the right tool for the job, not because of a worship of one tool. C is more often closer to the right tool than Lisp. That's the reality that you're trying to ignore with your "C users don't understand" rationalization.
Now, if you wanted to argue that C is overused, that in some circumstances where C is chosen Lisp would be a better choice, that would be a reasonable argument. But "C users just don't understand what matters"? BS. You need to get over your blindness.
BTW, I think you’ve misinterpreted “don’t know what they’re doing”†. I’m not talking about code, I’m talking about the business domain. (I thought it obvious from context, but clearly not.) And yes, a depressingly large number of working programmers today have zero interest in learning anything except how to write more code. But how can you write code that solves a user’s problem if you refuse to learn what that problem is and how it was arrived at?
And then you blame the users for “not speccing the job right” when they discover your “solution” not only fails to make their lives any better but often makes it even worse. I’ve worked a decade professionally as a user-turned-developer, so please don’t say this attitude isn’t cultural right down to the bone. I’ve seen it, I’ve dealt with it, I’ve walked out because of it. And looking around me that seems pretty much par for the course.
This is why I like a philosophy of reshaping your stock language into something that much better expresses the business concepts and processes you’re dealing with, then writing your solution in that. It forces you to learn the user’s job before you get underway, because you can’t fake understanding: your new “business language” either does the job or it sucks.
Building up that vocabulary is the first test of whether or not you understand what you’re doing; and until it proves you do you’ve no business proceeding or you’re just wasting everyone’s time (including your own), and giving this industry an even worse reputation for shafting the customer by giving them shit than it already possesses.
--
† Ironically for exactly the reasons I’m talking about: you’re unable to see anything that isn’t about coding. But the code is the least significant part of the overall puzzle; and until you understand the puzzle in its entirety you’re not fit to solve any of it.
No, I got that you were talking (at least in part) about the business domain. I have had the privilege of mostly working on smaller projects (two to 20 programmers), where the connection to what the business is trying to do was pretty clear. Maybe I've also had the privilege of mostly working with grownups. Or, possibly, maybe you're too cynical. (Your experience has driven you to that cynicism; mine hasn't. I have no data on which of our experiences best reflects the wider world of programming.) But I agree that, if you don't know what the user needs, you probably can't code it.
But I think you're overselling Lisp in this context. The kind of programmer you're talking about, given Lisp, won't develop a "business language" that matches the user needs. They will at best just develop a language that matches the product manager's feature request description, and at worst they'll just start hacking away in Lisp without developing any vocabulary at all.
And I think you're misreading where I'm coming from. I can see a lot more than just the code. I totally get that delivering what the user needs is what counts. Just one example: In a hallway conversation, I said we should do X instead of Y because it would be better for the users. The better part of a decade later, I got reminded of that hallway conversation, because it turned out that a litigation-happy competitor had patented Y, and in a court case wanted to apply that patent against us, but couldn't because we didn't actually do Y. I said that the attorneys should say that, if our competitor had patented delivering a worse user experience, they were welcome to the patent on it.
(@Mr Robber: I don’t know who flagged you, but I LOLed. It was a cheeky but fair comment: if only I had a language to express myself in fewer than a thousand words. :)
Srsly, I hate the complacent acceptance—even proud embrace—of unsafe, unreliable, irresponsible tools in this profession. It’s like a badge of pride, and very unhealthy. Because while you may get away with it when your programs are small and simple and manually verifiable, it is an approach to software construction that does not—cannot—scale.
Worse, it breeds a culture of casual contempt and responsibility shirking that permeates everything we make and do. It’s not just developers—I’ve got plenty choice words for modern management too—but ultimately it’s the developers who are meant to be the domain experts in converting users’ problems into solutions. Too often they convert them into more problems.
As a user who got so hacked off at programmers’ failures to deliver the solutions I need that I eventually taught myself how to do the damn job myself, I’m well aware of all the pitfalls and mistakes on both sides—and have made most of them myself. But at least I learn by it and do my damndest to improve.
..
Now some of my bad attitude may be jealousy, as I really envy your big powerful capable Mk10 programmer brains while I’m struggling to emulate Programmer 1.0 on the wetware equivalent of a Timex 1000 with the wobbly rampack. But mostly it’s furious disbelief with a huge side of frustration as I see all you smart people squander your smarts lazily brute-forcing bloated ugly crappy not-quite solutions, instead of using your smarts to figure out much better ways of doing it and then doing that.
K&R worked under massive, crushing limitations (obsolescent 60s hardware) and turned those limitations to their advantage. Those limitations forced them to work small, smart, and fast; and that paid off in spades. Delivered UNIX in record time, which had its flaws, but by golly, unlike the “superior” MULTICS, it shipped. Creating their own dedicated kernel-implementing language just to do that job, instead of bodging it in some unsatisfactory off-the-shelf language? Yeah, the follow-through is not so great (Unix’s faults are well railed on elsewhere), but that original process of creation? Brilliant. #GreatArtists
It’s not a recognized term, hence the air-quotes. The terms “Lisp 1” and “Lisp 2” do describe concrete implementations of Lisp (1 keeps functions and data in separate namespaces; 2 puts everything in a single namespace). Many consider Lisp-2 an improvement over Lisp-1, in that it eliminates complexity and improves consistency and uniformity of behavior.
However, as even some Lispers will admit, Lisp itself is not a good Lisp.
Just like the Algols, Lisp 1s & 2s are riddled with special forms (they just hide them better). e.g. You can’t implement your own conditional operator in Lisp that replicates the behavior of Lisp’s built-in `if` operator, because `if` requires that its second and third operands be lazily evaluated, and Lisp itself always evaluates operands eagerly. Thus that special-case lazy evaluation behavior is hardcoded into Lisp’s core for just that operator.
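A quick demo of the point, in Common Lisp (FN-IF is a made-up name):

    ;; A "conditional" written as a plain function.
    (defun fn-if (test then else)
      (if test then else))

    (fn-if t (print "yes") (print "no"))
    ;; Prints BOTH "yes" and "no": being ordinary arguments, both
    ;; branches were already evaluated before FN-IF even ran.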
However, there’s also a great saying (coined by Pythonistas, not Lispers, ironically) that “Special cases aren't special enough [to break the rules].”
And indeed, with just a little bit more thought and effort, all those special cases wired into Lisp’s core can be completely eliminated and replaced with a single general-purpose compositional mechanism that works the same for everything and can be used by anyone. That’s what Schutt’s Kernel does (my own Lisp-inspired kiwi language also solves it, albeit in a different way): it allows procedure writers to control exactly how and when/if each argument to that procedure should be evaluated.
In the spirit of Lisp 1s and Lisp 2s I’d describe Kernel-like languages as “Lisp 3s”, in that a single fundamental change to the core language improves its simplicity, consistency, and uniformity. What’s really significant though is that it vastly improves language plasticity, i.e. the ability to reshape bog-standard Lisp into whatever custom language best expresses the concepts and behaviors specific to your problem space.
We talk a lot in software development about managing complexity, and indeed this is important. But the one thing that’s even better than managing complexity well is eliminating it entirely.
Lisp culture gets plenty of castles-in-the-sky brainfarts of its own, natch, but at least the Lisp philosophy offers the opportunity in a way the Algols never will. It’s the question every Lisp asks, all the way back to McCarthy:
How simple can/should computation be?
(Answer: A lot simpler than we think!)
And then, having arrived at the answer to that, figure how to simplify it even further.
Now, if only someone could find a solution to all those Irritating Stupid Parentheses… ;)
> (1 keeps functions and data in separate namespaces; 2 puts everything in a single namespace). Many consider Lisp-2 an improvement over Lisp-1, in that it eliminates complexity and improves consistency and uniformity of behavior.
No, that's wrong. The number refers to the number of namespaces. Lisp-1 is a single namespace, and usually refers to Scheme. Lisp-2 is two namespaces, one for function names and one for variables that are non-functions. Lisp-2 is typically Common Lisp.
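A minimal Common Lisp illustration (FOO is a made-up name):

    (defvar foo 10)           ; FOO in the variable namespace
    (defun foo (x) (* x x))   ; FOO in the function namespace
    (foo foo)                 ; => 100: function FOO applied to variable FOO
    (funcall #'foo 5)         ; => 25: #'FOO names the function explicitly

In a Lisp-1 like Scheme there is only one namespace, so the second definition would simply replace the first.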
To anyone trying to search online for that "Schutt's Kernel language", the spelling turns out to be Shutt. At least according to the only match I got that wasn't pointing back to this thread, at https://web.wpi.edu/Pubs/ETD/Available/etd-090110-124904/
Exposure to Lisp is useful, not in that it’ll suddenly make you want to use Lisp yourself but in how it makes you realize just how crushingly, squalidly “Plato’s Cave” the entire Algol-descended language world really is. And disaffection is the first step toward change and improvement; just don’t expect those comfortable in their chains to welcome it.
I remember in college, there was a point while learning Scheme where I felt like I cracked through to another reality. I went from feeling like "eh, I can finish this assignment" to "holy shit, I can build _anything_".
I've never written lisp professionally, but every few years I try to get back to it to see if the feeling is still there -- only to get pulled away from the endeavor by life stuff before I can dig the same depth.
"APL has been compared to a diamond, which cannot be made into a larger diamond by the addition of a second one. LISP, in this context, is a ball of clay."
I couldn't have said it better myself. I've come to think that APL and Lisp represent two diametrically opposed philosophies of structuring programs. Right now I'm leaning toward the opinion that the ball-of-clay approach has run its course and is no longer (or maybe never was) viable. The thousands of layers of abstraction are no longer viable, for both performance and productivity reasons.
Your conclusion may well be correct, but I must reject your reasons: 1) we have more (cheap!) hardware capacity than we can shake a stick at, so sub-optimal performance is very tolerable, and 2) depending on how you measure productivity, I'd argue high-level, many-layers-of-indirection languages let you deliver features and value faster than non-ball-of-clay languages, at the cost of weaker confidence in the code. Which can be mitigated to some extent by more clay :)
Personally, give me mud over diamonds any day.
A Lisp programmer does not count brackets but visualizes the linked-list world in some other way. My world consists of brown and red LEGO bricks. They are glued together in pairs: brown bricks are things and red bricks are abstract links. These greasy fake bricks were the only playthings we had in the cheap-ass post-war daycare center in 1957.
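In Lisp terms the glued pair is just the cons cell; a rough sketch of my bricks:

    (cons 'thing 'link)             ; one brown/red pair
    (cons 1 (cons 2 (cons 3 nil)))  ; pairs chained by their links: the list (1 2 3)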
Anyway, it would be interesting to see a 3D environment where this LEGO paradigm is implemented. I sometimes think about it, but I haven't found a really beautiful way to name things, except tags with letters on them.
Throughout the years, it has always been an inspiration to read up on Steven K. Roberts's adventures! If you haven't yet, I strongly recommend exploring the rest of his website: https://microship.com/
I've only written a little bit of it, but it brings me anxiety.
Any language without static types brings me a certain degree of anxiety, thinking at all times about all the ways a given piece of code might be misused and go terribly wrong.
But in Lisp you add another layer: that anything looking like a function call could actually be a macro, further blowing open the door of possibilities for what an unfamiliar piece of code might do.
Of course, parsing Lisp is an exercise in tranquility ;)
It's all about the programmer's personality, I think. Graydon Hoare expressed my feelings: "Basically I've an anxious, pessimist personality; most systems I try to build are a reflection of how terrifying software-as-it-is-made feels to me. I'm seeking peace and security amid a nightmare of chaos."
I’ve never written in lisp, and my lack of knowledge of it always makes me feel a bit of respect towards people who do. Like maybe it’s this amazing thing that if I was smart enough to understand, I would achieve transcendence, or something.
But the biggest thing I look for in a programming language, is “how many bugs can it stop me from writing”, and a lack of any compile time checking is what keeps me from even bothering to look at it.
I’ve spent many years programming in Perl, then Ruby, before finally using Go and then Java and then TypeScript and now Swift, and I feel like the period where I would tolerate a lack of compile-time type checks is gone. It’s just table stakes at this point.
Maybe Lisp is the end game for non-type-checked languages, and it’s the best a dynamic language can hope to be. But I just don’t care anymore, because that whole category of languages seems like a broken idea to me.
Not by default. And herein lies the core social problem with Lisp:
[new/non-lisp programmers]: Lisp doesn't have X!
[lisp programmers]: Just make your own X! Lisp is extensible! Or pick from the twelve implementations already out there!
Here's the thing: probably all of those implementations are going to be half-baked. And good luck finding editor support for the one you end up going with when there's no standard to rally around. And have fun re-learning the nuances of basic language features every time you join a new team.
Hello, late reply. Do you mean can we script the hot reload? Yes, here's a quick link: https://lispcookbook.github.io/cl-cookbook/web.html#hot-relo... You basically start a Swank server on the running app, connect to it from your machine, send it new code, and call the compiler.
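A minimal sketch of that setup, assuming Quicklisp and Swank are available (the port and hostname here are arbitrary):

    ;; On the running app:
    (ql:quickload :swank)
    (swank:create-server :port 4005 :dont-close t)

    ;; From your machine, tunnel the port and connect from Emacs:
    ;;   ssh -L 4005:localhost:4005 my-app-server
    ;;   M-x slime-connect RET localhost RET 4005 RET
    ;; Then recompile any changed function with C-c C-c and the live
    ;; app picks it up immediately.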
I wouldn't say it's a broken idea. It's just an idea that doesn't scale. Working in a dynamic codebase - especially a Lisp one - basically means you have to be familiar with how everything you're using works. You can't rely on interfaces; you have to fully hold everything in your mental model. When this is feasible - usually meaning that you or a small cohesive team wrote all of it yourselves - the tradeoff is a huge amount of expressiveness. The barriers between the program in your mind and the program in the code get very small, and you can just make things, concisely, unhindered. This is all well and good, but expressiveness tends to be at odds with constraints, safety, and team communication. Lisp is just one extreme of that spectrum.
Personally I feel better having some of those constraints even on throwaway solo projects, though that might not be perfectly rational.
I will also say that Clojure's heavy focus on immutable data helps somewhat with this problem. You can at least count on functions not having side-effects. Usually. Hopefully.
>I wouldn't say it's a broken idea. It's just an idea that doesn't scale. Working in a dynamic codebase - especially a Lisp one - basically means you have to be familiar with how everything you're using works.
That's FUD. Just make your code modular. Lisp gives you tools: namespaces, and systems.
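For example, a package that exports only its public interface (the names here are hypothetical):

    (defpackage :billing
      (:use :cl)
      (:export #:invoice-total))
    (in-package :billing)

    ;; Callers see BILLING:INVOICE-TOTAL and nothing else; every other
    ;; symbol in the package stays internal.
    (defun invoice-total (line-items)
      (reduce #'+ line-items :key #'second))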
>You can at least count on functions not having side-effects.
The great majority of functions in Lisp don't mutate data.
Hello, late reply: Common Lisp does quite a lot of compile-time type checking, though: https://lispcookbook.github.io/cl-cookbook/type.html It's not as extensive as in type-safe languages, but clearly far better than in any interpreted one. Add the interactivity, where you compile a function (yes, just a function) with a keystroke and get the errors or warnings instantly, and it makes for a pleasant experience. We're also waiting for the Coalton library to mature; it brings Hindley-Milner type inference on top of CL…
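For instance, a minimal sketch of what SBCL checks at compile time, given a declaimed signature:

    (declaim (ftype (function (integer integer) integer) add))
    (defun add (a b) (+ a b))

    (defun caller ()
      (add 1 "two"))  ; SBCL warns at compile time: "two" is not an INTEGER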
Why would it be misused? I could just as easily misuse a function written with static types.
Think about all the ambiguities in the English language. Imagine if you just couldn't get your point across to someone because the words you chose together bring up a different image than the one you intended. And if you really want to have nightmares think about all the ways using the public highway might go wrong.
At some point you have to stop worrying about things that aren't a problem in practice. Type errors represent a tiny proportion of the bugs in dynamically typed programs. It just isn't a problem. Emacs doesn't even have namespaces or anything, yet it still works.
Lisp is very good at teaching you that you're not as good a programmer as you think you are, and that you don't understand the domain as well as you think you do.
These sound like negatives, but it's better to learn these things up front.
Most instructive would be to identify how reality turned out to be so different from the breathless ideal presented, and how to recognize when the next breathless ideal turns out to be similarly unable to be realized.
I'd say any app that will do any kind of (more than simple) data manipulation. If you plan to support more and more data formats, things will get exponentially worse.
One domain where I'd go with Lisp/Clojure/Scheme any day is banking/financial switches: software used to communicate between banks, card providers, POS devices, even ATMs. These programs usually have to support tons of weird binary protocols (among other things), designed in the '70s and '80s. To make matters worse, bigger banks add their own "juice" to those protocols, creating hybrid monstrosities. And usually you can't find documentation for those protocols unless you are deep in the banking business.
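Lisp's bit-level operators help a lot with those formats, too; a toy sketch (the field layout is made up):

    ;; Pull bitfields out of a 16-bit header word with LDB.
    (defun parse-header (word)
      (list :version (ldb (byte 4 12) word)    ; top 4 bits
            :type    (ldb (byte 4 8)  word)    ; next 4 bits
            :length  (ldb (byte 8 0)  word)))  ; low byte

    (parse-header #xA17F)  ; => (:VERSION 10 :TYPE 1 :LENGTH 127)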
I would think Elixir and Erlang suit that better. They both have beautiful binary pattern matching built in, and are perfect for wrapping up disparate pieces that talk to each other. They also share some commonality with Common Lisp that a Lisper should like.
Erm... I think it's a bit overrated, TBH. Don't get me wrong: Lisp is quite simply magical (and very ahead of the curve). However, when you look back at code, programs written in Python just seem much nicer. The AIMA/PAIP code in Common Lisp, for instance, is ugly as hell, while the Python code (for AIMA) is far, far more elegant. No doubt this stems, in part, from CL's legacy, but I haven't found things to be much better in Scheme land either.
>The AIMA/PAIP code in Common Lisp, for instance, are ugly as hell
Perhaps it just means you are familiar with Python, not Lisp.
I have 3 years of experience with both and let me tell you, there is nothing elegant in Python: there's nothing elegant in having to deal with one-line lambdas, global interpreter lock, 30x-50x longer execution times than Lisp, distinction between statements vs expressions, mutability everywhere, a flaccid OOP system, no interactive development features, and horribly written libraries.
Python isn't elegant. It's just more pragmatic and to the point, with little waste. Lisp is definitely more elegant, but no one knew (and still doesn't know) how to tame the beast. We know it's a powerful language, but we don't know what school of thought can utilize it safely. That's why Lisp has many horrible messes.
This is a matter of subjective opinion. I see Python code and I think "ew.. this has got 10-20 bugs or mis-handled edge cases I just haven't found yet."
Lisp isn't my favorite syntax, but a piece of Lisp code is much less likely to have hidden gotchas.