myco_logic's comments | Hacker News

I've written on this matter before[1]. Code being appreciated as an art form would, to my mind, greatly benefit both the world of fine art and the world of programming.

[1]: https://istigkeit.xyz/static/writing/essays/hello_artworld.p...


Personal choice is what I'd say. You can get a lot of mileage out of implementing a dynamic language with NaN boxing[1].

It really depends on the kind of language you're trying to build an interpreter for, and what purpose it could serve. For dynamic languages, I'd say looking at the core types of Erlang is a great place to start (integers, atoms, functions, etc.). For a statically typed language things get more complex, but even with just numeric types, characters, and some kind of aggregating type like a struct, you can build up more complex data structures in your language itself.
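
To make the core-types idea a little more concrete, here's a rough sketch in Haskell (purely illustrative; the constructor names and the choice of language are mine, not anything prescribed) of the kind of value type a small dynamic-language interpreter might start from:

    -- One sum type covering the handful of core values; everything
    -- else in the language gets built out of these.
    data Value
      = VInt  Integer              -- numbers
      | VSym  String               -- symbols / atoms
      | VList [Value]              -- an aggregating type
      | VFunc ([Value] -> Value)   -- functions over values

    -- e.g. the expression (add 1 2) as plain interpreter data:
    example :: Value
    example = VList [VSym "add", VInt 1, VInt 2]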

[1]: https://leonardschuetz.ch/blog/nan-boxing/


Really neat article; this is my first encounter with the concept of NaN boxing. Thanks for sharing!


Depends on how beefy that laptop is...

I've been doing some local LLM stuff at work recently, and even with the amazing advances in quantization lately, doing that kind of thing on a ThinkPad is feasible, but still strongly inferior to just renting a VPS with a couple of 4090s or H100s for a few hours.

The biggest thing with summarizing stuff is that most local models don't have very large context windows, so they have trouble with longer texts like even a short Vonnegut novel (I was just testing 'em on summarizing GitHub issues, and even with a 16k-token context window they still sometimes struggle if there are a lot of comments).

There are probably smarter people than I who could get this working on a Raspberry Pi though... ;)


I fully support the idea that writing your own SSG can be not only a great learning experience, but also a chance to make your SSG do exactly what you want it to (and nothing more).

I've written a ton of little SSGs over the years, and with every iteration I've learned which features I really need and which I don't.

When I started working on the current version of my personal website (istigkeit.xyz), I also wrote a new SSG just for it. The program is called Hyphae[1], and it's written in Ruby using the Kramdown markdown gem, and pretty much nothing else outside the stdlib. It works perfectly for me, and that's all that matters (that being said, the code is up there, and licensed with the Unlicense, so anyone who finds it useful is free to use and abuse my clumsy code to whatever extent they want).

I'm a big proponent of the idea of writing personal software: that is, programs that are made by you, for you, with no expectation that they'll be used by anyone else. I think developers these days too often get caught up in trying to make their project "the next big thing" in whatever domain it serves, but honestly sometimes it's nice to just write something for yourself :)

[1]: https://gitlab.com/henrystanley/hyphae


While reading this, I was immediately reminded of the reduce operator; glad to see my intuition wasn't far off.

The nifty thing about this operator in the array langs, compared to the usual fold function, is that they usually define identity elements for all primitive functions, which means no initial value has to be provided: https://aplwiki.com/wiki/Identity_element

The downside of this approach, though, is that using reduce with non-primitive functions can result in domain errors (at least in APL). I think BQN's version of the operator is a bit nicer, in that it allows you to specify an initial value in this situation: https://mlochbaum.github.io/BQN/doc/fold.html
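
The same trade-off shows up with ordinary folds outside the array langs; a rough GHCi-style illustration in Haskell (results in comments):

    -- with an explicit initial value the empty case is well-defined:
    foldr (+) 0 ([] :: [Int])    -- 0
    -- without one, the fold has nothing to return for empty input,
    -- much like reduce hitting a domain error on a function with no
    -- known identity:
    foldr1 (+) ([] :: [Int])     -- runtime error: nothing to fold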


> they usually define identity elements for all primitive functions

> using reduce with non-primitive functions can result in domain errors

ml-style folds in the presence of ad-hoc polymorphism solve this rather handily -- in haskell for instance monoid is the typeclass that only requires an associative operation and an identity element

typeclasses have some clunkiness in this regard; you have to wrap numeric types as "sum" or "product" etc to go "ah yes today i want to say numbers are a monoid under this operation" but at the very least it does enable formal, user-defined associations between identity elements and functions

luckily most things programmers deal with are plausibly just one kind of monoid. for instance the eleventy billion different string types haskell programmers love to use all tend to satisfy monoid under concatenation without any wrappers
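
a quick ghci-flavoured sketch of both points, i.e. the Sum/Product wrapping dance and strings already being a monoid under concatenation (results in comments):

    import Data.Monoid (Sum (..), Product (..))

    -- mconcat folds with the identity baked into the Monoid instance,
    -- so no explicit seed is needed and empty input is fine:
    getSum     (mconcat (map Sum [1, 2, 3]))        -- 6
    getProduct (mconcat (map Product []))           -- 1

    -- plain strings need no wrapper at all:
    mconcat ["no", " ", "wrapper", " ", "needed"]   -- "no wrapper needed"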


> downside of this approach though, is that using reduce with non-primitive functions can result in domain errors

Yes, that's another problem. There is precedent for associating metadata with user-defined functions (e.g. inverses); identities seem to have fallen by the wayside, but I am planning to fix that for J.


Defining a number of related functions seems to be a pattern that comes up elsewhere. For example, consider functions that compute a hash value, canonicalize, and compute some notion of equality. It would be useful to associate all of these.


Haskell calls these 'typeclasses'; CL calls them 'protocols'. APL style is not to expose sophisticated user-level abstractions, so I think that there it is not inappropriate that the scope of associable objects (say, a monad, a dyad, an inverse, and an identity; perhaps a few others) be fixed by the language.
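
As a rough sketch of that kind of user-defined association, here is how the hash/canonicalize/equality grouping from the comment above might look as a typeclass (the class and the names in it are hypothetical, purely for illustration):

    import Data.Char (toLower)

    -- A hypothetical class tying the related operations together;
    -- instances are expected to keep them consistent (values that are
    -- equalTo should share a canonical form and a hash).
    class Canonical a where
      canonicalize :: a -> a
      hashValue    :: a -> Int
      equalTo      :: a -> a -> Bool

    -- e.g. case-insensitive identifiers:
    newtype Ident = Ident String

    instance Canonical Ident where
      canonicalize (Ident s)      = Ident (map toLower s)
      hashValue    (Ident s)      = sum (map (fromEnum . toLower) s)  -- toy hash
      equalTo (Ident a) (Ident b) = map toLower a == map toLower b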


I had thought APL style was to specify initial values simply by pre-catenating the desired initial value onto the argument?


No, as you can check with some of the weirder arithmetic functions:

        </ ⍬     ⍝ Empty list
  0
        </ ,5
  5
        </ 0 5
  1
        </ 5 0
  0
It would be more consistent in some ways, though (for example, forcing </ to always have a boolean result). APL designer Bob Smith has advocated for it, particularly for ,/ to make it behave better as a joining function: http://www.sudleyplace.com/APL/Reduction%20Of%20Singletons.p...


that doesn't work when the desired initial value is not the same shape as the major cells of the argument array


I also quite liked that idea. It would probably be a nightmare to actually use, but it's definitely a creative solution to dealing with infix operator precedence.

The only implemented language I know of with this feature is the obscure array lang I (an increment of the J language's name, I assume) by Marshall Lochbaum:

https://github.com/mlochbaum/ILanguage


If you're talking about the curly braces after the backtick in the syntax diagram for operators, I think that's just a way to escape those special characters so you can use them in operator names. It would be the same as escaping a double quotation mark in a string literal with a backslash, i.e. "\""


Thanks. That's also the explanation I came up with for myself. And I guess it allows for some trickery with unquoting.


As a concatenative language lover/designer, I just thought I'd put it out there how much of an inspiration Om has been to me; I absolutely adore your language! Om is one of the most beautiful programming languages I've ever seen, and surely one of the most unique too.

Most other concatenative languages are Joy/Factor-esque stack languages, so to see an entirely different vision with your prefix notation is an absolute delight. Your panmorphic type system is also genius (the only other language I know of that has something similar is Tcl). The way you treat whitespace separators in your syntax is also very clever; I love that it basically enables one to encode strings without a dedicated literal syntax element.

Anyway, I just wanted to let you know how much I appreciate your design and implementation efforts. I hope your rewrite is going well, and I very much look forward to its eventual release...


If the WTFPL is a bit too blue for you, you might prefer the Unlicense[0]; it's what I tend to reach for when licensing projects these days. Compared to the WTFPL it's a bit more explicit about usage rights (though less explicit in its word choice), and is basically equivalent with respect to the freedoms it provides. There's also the CC0[1] license, though I tend to use that one only for actual media, not code.

I really hope more people will start to use these kinds of licenses. Releasing something you made into the public domain, without concern for attribution or copyleft nonsense, is, to my mind, one of the noblest things a creative individual can do...

[0]: https://unlicense.org/

[1]: https://creativecommons.org/share-your-work/public-domain/cc...


I also came here to mention this video. Watching the solution emerge line by line is a profound experience, and highly revealing of the kind of iterative problem-solving workflow possible in the Iversonian languages.

In the same vein, John Scholes' collection of APL code in the Dfns workspace[0] is positively wonderful to read through. I think it's probably one of the finest repositories of annotated code ever assembled.

[0]: http://dfns.dyalog.com/

