
In safety-critical systems, we distinguish between accidents (actual loss, e.g. lives, equipment, etc.) and hazardous states. The equation is

hazardous state + environmental conditions = accident

Since we can only control the system, and not its environment, we focus on preventing hazardous states, rather than accidents. If we can keep the system out of all hazardous states, we also avoid accidents. (Trying to prevent accidents while not paying attention to hazardous states amounts to relying on the environment always being on our side, and is bound to fail eventually.)

One such hazardous state we have defined in aviation is "less than N minutes of fuel remaining when landing". If an aircraft lands with less than N minutes of fuel on board, it would only have taken bad environmental conditions to make it crash, rather than land. Thus we design commercial aviation so that planes always have N minutes of fuel remaining when landing. If they don't, that's a big deal: they've entered a hazardous state, and we never want to see that. (I don't remember if N is 30 or 45 or 60 but somewhere in that region.)

For another example, one of my children loves playing around cliffs and rocks. Initially he was very keen on promising me that he wouldn't fall down. I explained the difference between accidents and hazardous states to him in children's terms, and he slowly realised that he cannot control whether or not he has an accident, so it's a bad idea to promise me that he won't have an accident. What he can control is whether or not bad environmental conditions lead to an accident, and he does that by keeping out of hazardous states. In this case, the hazardous state would be standing less than a child-height from a ledge when there is nobody below ready to catch. He can promise me to avoid that, and that satisfies me a lot more than a promise not to fall.
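
To make the distinction concrete, a toy sketch in Python (my own illustration; the reserve value is an assumption, since the comment only places N somewhere between 30 and 60):

    RESERVE_MINUTES = 45  # assumed value of N, for illustration only

    def hazardous(fuel_minutes_at_landing):
        # The hazardous state: landing with less than N minutes of fuel aboard.
        return fuel_minutes_at_landing < RESERVE_MINUTES

    def accident(in_hazardous_state, bad_environment):
        # hazardous state + environmental conditions = accident.
        # We can only enforce the first input; the environment supplies the second.
        return in_hazardous_state and bad_environment

Only the first predicate is under the system designer's control, so that is where the safety requirement lives.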


I find infinitesimals more intuitive than the formal 'limits-based' approach. I'm currently studying my old degree material but using a fairly interesting book:

"Full Frontal Calculus: An Infinitesimal Approach" by Seth Braver.

I like his readable style. His poetic intro finally gave me an intuition why infinitesimals might be useful, compared to the good old reals:

"Yet, by developing a "calculus of infinitesimals" (as it was known for two centuries), mathematicians got great insight into `real` functions, breaking through the static algebraic ice shelf to reach a flowing world of motion below, changing and evolving in time."


Most people know about MediaWiki even if they don't realize they do, because it powers Wikipedia, but I wish more people used it for documentation.

You can create highly specialized templates in Lua, and there's an RDBMS extension called Cargo that gives you some limited SQL ability too. With these tools you can build basically an entirely custom CMS on top of the base MW software, while retaining everything that's great about MW (easy page history, anyone can start editing including with a WYSIWYG editor, really fine-grained permissions control across user groups, a fantastic API for automated edits).

It doesn't have the range of plugins to external services the way something like Confluence has, but you can host it yourself and have a great platform for documentation.
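
As a taste of that API, here's a minimal Python sketch (my own, not from the MW docs; the endpoint URL is a placeholder for your wiki's api.php) that fetches a page's wikitext through the Action API:

    import requests

    API = "https://example.org/w/api.php"  # assumed endpoint

    params = {
        "action": "query",
        "prop": "revisions",
        "rvprop": "content",
        "rvslots": "main",
        "titles": "Main Page",
        "format": "json",
    }
    resp = requests.get(API, params=params).json()
    page = next(iter(resp["query"]["pages"].values()))
    print(page["revisions"][0]["slots"]["main"]["*"])  # the page's wikitext

Automated edits work the same way via action=edit plus a CSRF token, which is what bot frameworks like Pywikibot wrap for you.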


The 7 Habits of Highly Effective People transformed my life. But not before I read it 3 times. I skipped around each time. I still have never read it cover to cover.

Most crucially, the valuable parts that transformed me were not the actual seven habits or what the online cliff notes cover, but the other, non-headline concepts taught deep in some of the chapters (specifically: universal spiritual morality, the Personality Ethic and Character Ethic, the Circles of Concern and Influence, and some others).


In my younger years, particularly during my schooling, I held a deep resentment towards the educational system. It felt overtly clear to me, as a student, that schools failed to effectively foster learning and growth. However, my perspective has evolved over time. I've come to understand that the issues I observed are not unique to the school system but rather characteristic of large institutions as a whole.

The pervasive failure of these institutions to meet their stated objectives isn't an isolated phenomenon. It's symptomatic of a larger, systemic problem – the widespread presence of perverse and misaligned incentives at all levels within large organizations.

Unless we find a way to counteract this, attempts at reform will merely catalyze further expansion and complexity. The uncomfortable truth is, once an organization surpasses a certain size, it seems to take on a 'life of its own', gradually sacrificing its original mission to prioritize self-preservation and expansion. Who has ever seen an organization like this voluntarily reform itself? I certainly haven't.


My analog keyboards use analog multiplexers (74HC4067), which are scanned much like shift registers, except that instead of digital flip-flops they contain bidirectional analog switches.

I made two videos talking about how I design my analog keyboard PCBs and how I'm using analog multiplexers here: https://youtu.be/TfKz_FbZWLQ
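
For flavor, here's roughly how such a mux gets scanned (a MicroPython-style sketch of my own; the pin numbers are invented, and the videos are the authoritative source for his actual design):

    # Sketch for an RP2040; a 74HC4067 routes 1 of 16 analog inputs
    # to a single ADC pin via 4 select lines.
    from machine import Pin, ADC

    SELECT = [Pin(n, Pin.OUT) for n in (2, 3, 4, 5)]  # S0..S3 address lines
    adc = ADC(26)                                     # the mux's common pin

    def read_channel(ch):
        # Present the 4-bit channel number on the select lines...
        for bit, pin in enumerate(SELECT):
            pin.value((ch >> bit) & 1)
        # ...then sample whichever key the mux now routes through.
        return adc.read_u16()

    levels = [read_channel(ch) for ch in range(16)]   # scan all 16 keys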


Ben Vallack has an ultra-approachable video on building your own keyboard [0].

PCBs are incredibly cheap to print (<$25 shipped for me to print at https://jlcpcb.com, IIRC), and he shows you how to do some mods to the schematic using KiCad [1].

BV also has some crazier minimal keyboard designs like this 18-key split one with lots of layers: https://www.youtube.com/watch?v=yNOGEtqn85o

[0]: https://www.youtube.com/watch?v=JqpBKuEVinw [1]: https://www.kicad.org/


Is there a reasonably neutral comparison of Mathics vs Mathematica anywhere?

Based on an amazing showcase[1], Mathematica is right at the top of my list of languages to learn if it (and at least some of the surrounding tooling) ever becomes open source. I wonder how many of those examples would give useful results in Mathics, or what their equivalents would be.

[1] https://codegolf.stackexchange.com/a/44683/9570


You might be interested in Walter Segal's ideas of building houses as assemblies of materials in their available dimensions, so the parts are easy to reuse or modify. I find his ideas both compelling and a bit too idealistic, or from another era. I have an architect friend I talk with over the winter, and I hope to get his views on the pros and cons this year. https://theprepared.org/features-feed/segal-method

I created the Neat CSS framework in the same spirit. It's so minimalist that there's no publishing system; you just grab the CSS file and write HTML. I use it for all kinds of sites, including the occasional blog.

https://neat.joeldare.com


> keep in mind that complex numbers are not rotations

Complex numbers are (isomorphic to) "amplitwists": similarity transformations between plane vectors. If you want a pure rotation, you need a unit-magnitude complex number.

The complex number 3 represents scaling by 3. The complex number 4 represents scaling by 4. The complex number 1 – 2i represents a scaling by √5 combined with a rotation clockwise by arctan(2). The complex number 8i represents a scaling by 8 combined with a quarter-turn anticlockwise rotation.
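
Those claims are easy to check numerically; a quick Python sketch (mine, for illustration):

    import cmath, math

    z = 1 - 2j
    print(abs(z), math.sqrt(5))           # the scaling factor: ~2.236 both
    print(cmath.phase(z), -math.atan(2))  # the rotation: ~-1.107 both

    u = 2 + 1j          # multiplying any complex number u by z
    v = z * u           # scales it by |z| and rotates it by arg(z)
    print(abs(v) / abs(u))                   # ~ sqrt(5)
    print(cmath.phase(v) - cmath.phase(u))   # ~ -atan(2)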

> complex numbers can be thought of plane vectors

No, (physics-style displacement) vectors and complex numbers are distinct structures and should not be conflated.

Complex numbers are best thought of as ratios of planar vectors. A complex number z = v/u is a quantity which turns the vector u into the vector v, or written out, zu = (v / u)u = v(u \ u) = v. (Concatenation here represents the geometric product, a.k.a. Clifford product.)

Mixing up vectors with ratios of vectors is a recipe for confusion.

> non-obvious way to multiply them

Multiplication of complex numbers is perfectly “obvious” once you understand that complex numbers scale and rotate planar vectors and i is a unit-magnitude bivector.

> 3D rotations are often represented by quaternions, which are more "complex" than complex numbers.

Analogous to complex numbers, quaternions are the even sub-algebra of the geometric algebra of 3-dimensional Euclidean space. Used to represent rotations, they are objects R which you can sandwich-multiply with a Euclidean vector u to get another Euclidean vector v = RuR*, where * here means the geometric algebra "reverse" operation. Those of unit magnitude are elements of the spin group Spin(3).
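
A minimal numeric sketch of that sandwich product (my own illustration, with the Hamilton product written out by hand):

    import numpy as np

    def qmul(a, b):
        # Hamilton product of quaternions stored as (w, x, y, z).
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                         w1*x2 + x1*w2 + y1*z2 - z1*y2,
                         w1*y2 - x1*z2 + y1*w2 + z1*x2,
                         w1*z2 + x1*y2 - y1*x2 + z1*w2])

    def rotate(v, axis, angle):
        # R = cos(angle/2) + sin(angle/2) * (unit axis); v' = R u R*.
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        R = np.concatenate([[np.cos(angle/2)], np.sin(angle/2) * axis])
        Rrev = R * np.array([1, -1, -1, -1])  # the "reverse" of R
        return qmul(qmul(R, np.concatenate([[0.0], v])), Rrev)[1:]

    print(rotate([1.0, 0, 0], [0, 0, 1], np.pi/2))  # ~ [0, 1, 0]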

For more, see http://geocalc.clas.asu.edu/pdf/OerstedMedalLecture.pdf


I love this quote by John Cleese

“In order to know how good you are at something requires exactly the same skills as it does to be good at that thing in the first place,” Cleese elaborates, “which means — and this is terribly funny — that if you are absolutely no good at something at all, then you lack exactly the skills you need to know that you are absolutely no good at it.”


My favorite are the dreams I have before a semester starts: it is the end of the semester, and I've just discovered I've forgotten about an entire class for most of the semester. Then I wake up.

Written some 60 years ago, Information Theory and Coding by Abramson [1] is an absolute gem for those looking to get into info theory. The Cover and Thomas book [2] is the more complete resource (and somewhat of a grad-level de facto standard text).

[1] https://www.amazon.com/Information-Theory-Coding-Norman-Abra...

[2] https://www.amazon.com/Elements-Information-Theory-Telecommu...
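
For a taste of the subject (my own minimal example, not from either book), Shannon entropy in a few lines:

    import math

    def entropy(ps):
        # Shannon entropy, in bits, of a source with probabilities ps.
        return -sum(p * math.log2(p) for p in ps if p > 0)

    print(entropy([0.5, 0.5]))  # 1.0 bit: a fair coin
    print(entropy([0.9, 0.1]))  # ~0.47 bits: a biased coin tells you less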


This is an awesome project! I think writing a bootstrapping Lisp is probably one of the best uses for a Forth.

I was surprised that they said, "One of the more involved parts of this interpreter is the reader, where I had to do quite a lot of stack juggling to keep everything in line", and I think I can offer some useful pointers not only for the original author but also for anyone else who decides to go off and write stuff in Forth even though it's 02021.

As it happens, I coded up just Lisp READ and PRINT in Forth the other day, and I avoided doing any stack juggling at all: http://canonical.org/~kragen/sw/dev3/readprint.fs

My READ is 14 lines of Forth, and even accounting for the more horizontal (not to say cramped) layout of my code and its lack of symbol support, I think it's still significantly simpler and more readable than the 60 or so lines of Forth used here. Contrast:

    : lisp-skip-ws ( e a -- e a )
        lisp-read-char
        begin
            dup 0<> over lisp-is-ws and
        while
            drop lisp-read-char
        repeat
        0<> if
            lisp-unread-char
        endif ;
with (slightly reformatted)

    : wsp  begin  peek bl =  while  getc drop  repeat  ;
There are four simplifications here:

1. My definition of "whitespace" is just "equal to the space character BL". Arguably this is cheating, but it's a small difference.

2. I'm handling EOF with an exception inside PEEK, rather than an extra conditional case in every function that calls PEEK; this implies you have to discard whitespace before your tokens rather than after them, but that's what both versions are doing anyway.

3. I'm using a high-level word PEEK to represent the parser-level concept of "examine the next character without consuming it" rather than the implementation-level concept "dup 0<> over". This is facilitated by putting the state of the input stream into the VALUEs READP and READEND instead of trying to keep it on the stack, which would have given me a headache and wasted a lot of my time debugging. PEEK and GETC can always be called regardless of what's on the stack, while LISP-READ-CHAR only works at the beginning of an "expression".

4. The choice of the PEEK/GETC interface instead of GETC/UNGETC is also a very slight improvement. It would be less of a difference if LISP-UNREAD-CHAR were capable of unreading an EOF, but in general, to the extent that you can design your internal interfaces to avoid making temporary side effects you must undo later, you will have fewer bugs from forgetting to undo them.

In other parts of the code the situation is somewhat worse. Consider the mental gymnastics needed to keep track of all the stack state in this word:

    : lisp-read-token ( e a -- e a a u )
        lisp-skip-ws
        0 >r
        lisp-read-char
        begin
            dup [char] ) <> over 0<> and over lisp-is-ws 0= and
        while
            token-buffer r@ + c! r> 1+ >r lisp-read-char
        repeat
        0<> if
            lisp-unread-char
        endif
        token-buffer r> ;
I didn't have a separate tokenizer except for READ-NUM, because all my other tokens were parentheses. But contrast:

    : (read-num) 0  begin  eod? if exit then
        peek [char] - =  if  -1 to (sign) getc drop
        else  peek isdigit  if  getc digit  else  exit  then  then  again ;
    \ That took me like half an hour to debug because I was confusing char
    \ and [char].
    : read-num 1 to (sign)  (read-num)  (sign) *  int2sex ;
Mine is not beautiful code by any stretch of the imagination. But contrast PEEK ISDIGIT IF GETC DIGIT ELSE EXIT THEN — in popular infix syntax, that would be if isdigit(peek()) then digit(getc()) else return — with TOKEN-BUFFER R@ + C! R> 1+ >R LISP-READ-CHAR! Imagine all the mental effort needed to keep track of all those stack items! Avoid making things harder for yourself that way; as Kernighan and Plauger famously said, debugging is twice as hard as writing the code in the first place, so if you write the code as cleverly as you can, how will you ever debug it? You can define words to build up a token and write your token reader in terms of them:

    create token-buffer 128 allot  token-buffer value tokp
    : token-length  tokp token-buffer - ;
    : new-token  token-buffer to tokp ;  : token-char  tokp c!  tokp 1+ to tokp ;
Or, if you prefer (untested):

    create token-buffer 128 allot  variable token-length
    : new-token  0 token-length ! ;
    : token-char  token-buffer token-length @ + c!  1 token-length +! ;
Or similar variations. Either way, with this approach, you don't have to keep track of where your token buffer pointer (or length) is; it's always in tokp (or token-length), not sometimes on the top of stack and sometimes on the top of the return stack.

In this case the code doesn't get shorter (untested):

    : lisp-read-token ( e a -- e a )
        lisp-skip-ws
        new-token
        lisp-read-char
        begin
            dup [char] ) <> over 0<> and over lisp-is-ws 0= and
        while
            token-char lisp-read-char
        repeat
        0<> if
            lisp-unread-char
        endif ;
but it does get a lot simpler. You don't have to wonder what "0 >R" at the beginning of the word is for or decipher R@ + C! R> 1+ >R in the middle. You no longer have four items on the stack at the end of the word to confuse you when you're trying to understand lisp-read-token's caller. And now you can test TOKEN-CHAR interactively, which is helpful for making sure your stack effects are right so you don't have to debug stack-effect errors later on (this is an excerpt from an interactive Forth session):

    : token-type token-buffer token-length type ;  ok
    token-type  ok
    char x token-char token-type x ok
    char y token-char token-type xy ok
    bl token-char token-type xy  ok
    .s <0>  ok
    new-token token-type  ok
    char z token-char token-type z ok
This is an illustration of a general problem that afflicted me greatly in my first years in Forth: just because you can keep all your data on the stack (a two-stack machine is obviously able to emulate a Turing machine) doesn't mean you should. The operand stack is for expressions, not for variables. Use VARIABLEs. Or VALUEs, if you prefer. Divide your code into "statements" between which the stack is empty (except for whatever the caller is keeping there). Completely abjure stack operations except DROP: no SWAP, no OVER, and definitely no ROT, NIP, or TUCK. Not even DUP. Then, once your code is working, maaaybe go back and put one variable on the operand stack, with the appropriate stack ops. But only if it makes the code more readable and debuggable instead of less. And maaaybe another variable on the return stack, although keep in mind that this impedes factoring — any word you factor out of the word that does the return-stack manipulation will be denied access to that variable.

Think of things like SWAP and OVER as the data equivalents of a GO TO statement: they can shorten your code, sometimes even simplify it, but they can also tremendously impede understandability and debuggability. They easily create spaghetti dataflow.

Failure to observe this practice is responsible for most of the difficulty I had in my first several years of Forth, and also, I think, most of the difficulty schani reports, and maybe most of the difficulty most programmers have in Forth. If you can figure out how you would have written something in a pop infix language, you can write it mechanically in Forth without any stack juggling (except DROP). For example:

    v := f(x[i], y * 3);
    if (frob(v)) then (x[j], y) := warp(x[j], 37 - y);
becomes something like this, depending on the particular types of things:

    i x @  y c@ 3 *  f  v !
    v @ frob  if  j x @  37 y @ -  warp  j x !  y c!  then
Now, maybe you can do better than the mechanical translation of the infix syntax in a particular case — in this case, maybe it would be an improvement to rewrite "v ! v @" to "dup v !", or maybe not — but there's no need to do worse.

This is not to diminish schani's achievements with forthlisp, which remains wonderful! I haven't ever managed to write a Lisp in Forth myself, despite obviously feeling the temptation, just in C and Lua. Code that has already been written is far superior to code that does not exist.

But, if they choose to pursue it further, hopefully the fruits of my suffering outlined above will nourish them on their path, and anyone else who reads this.


For a Mac GUI app, try Hex Fiend: https://hexfiend.com. (App store link: https://apps.apple.com/us/app/hex-fiend/id1342896380?mt=12)

I've always found the bipartite graph conceptualization of matrix multiplication the most intuitive, and especially so if you're familiar with neural networks: https://www.math3ma.com/blog/matrices-probability-graphs
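
In a quick Python sketch (my own, to spell out the picture): each matrix is a layer of weighted edges, and each entry of the product sums over all two-edge paths through the middle layer.

    import numpy as np

    A = np.array([[1, 2],   # A[i][j]: weight of the edge i -> j
                  [3, 4]])
    B = np.array([[5, 6],   # B[j][k]: weight of the edge j -> k
                  [7, 8]])

    # (A @ B)[i][k] sums, over every path i -> j -> k through the middle
    # layer, the product of that path's edge weights.
    C = np.array([[sum(A[i][j] * B[j][k] for j in range(2))
                   for k in range(2)] for i in range(2)])
    assert (C == A @ B).all()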

Obviously everyone's needs are different, but what I want from a modern engineering calculator (which I use constantly) is quick calculation of simple things. Anything more complex/verifiable than napkin math, and I'll just use something more serious like Python/Julia, Matlab, any CAS, or whatever engineering/math software I have at my disposal.

So for me, a proper calculator, a tool you master to augment your napkin math, should focus on this:

- Keyboard. Absolutely lowest number of keypresses to enter the problem; the thing that is totally lacking in most non-classic calculators. I can't stress this enough. I should be able to enter all those sophisticated functions without typing their full name and parens. Simple ODEs/integrals should be at my fingertips. I should be able to quickly repeat binary operators for a different argument, and 1/x anything I have on the screen without breaking the flow (on-the-fly calculation often conflicts with that). And many more tricks classic calculators had, which are missing from most modern apps.

- Startup time. It should pop up in less than 100ms. Modern phones and computers are very good at that, but apps sometimes aren't.

- Correctness! It sounds silly, but you can't trust most math applications out there; calculators are surprisingly unreliable, even for simple arithmetic and trigonometry calculations. What math libraries did you use? Why should I trust your results in general?


This is a great book, but a bit dense at first. At a high level it goes through physics with an optimization viewpoint: find the trajectory that makes the action stationary (the principle of least action) to figure out how a system will evolve.

I would strongly suggest you learn Lagrangian and Hamiltonian mechanics from this book first [1], since it comes with many more illustrations and simple arguments that'll make reading SICM much easier. If you don't have time to read a whole book and want to get the main idea, I've written a blog post about Lagrangian mechanics myself [2], which has made it to the front page of Hacker News before. The great thing about SICM is that it's a physics textbook where the formulas are replaced by code [3], which means you can play around with your assumptions to gain intuition for how everything works.

IMO, in introductory physics we overemphasize formalism over intuition, and playing around with simulators is a truer way to explore physics, since most physical laws were discovered via experimentation, not derivation. Another book that really drives this point home is [4].

[1] https://www.amazon.com/Jakob-Schwichtenberg/dp/1096195380/re...

[2] https://blog.usejournal.com/how-to-turn-physics-into-an-opti...

[3] https://github.com/hnarayanan/sicm

[4] https://natureofcode.com/
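
To give a flavor of "formulas replaced by code" (a minimal sketch of mine using sympy; SICM itself uses Scheme): derive the equation of motion of a mass on a spring from its Lagrangian.

    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t = sp.symbols('t')
    m, k = sp.symbols('m k', positive=True)
    q = sp.Function('q')

    # Lagrangian: kinetic minus potential energy.
    L = m * q(t).diff(t)**2 / 2 - k * q(t)**2 / 2
    print(euler_equations(L, [q(t)], t))
    # [Eq(-k*q(t) - m*Derivative(q(t), (t, 2)), 0)], i.e. m*q'' = -k*q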


So I didn't include it in the original link, but he wrote a book under the same title (Domain Modeling Made Functional). I've been going through it now. It's a much more in-depth treatment of the same topics. He walks through a small product from start to finish.

Additionally, he does a really great job of walking through two things that I don't think are covered well at all for beginning and intermediate programmers (and that even experienced ones like me may have missed along the way).

1. How to effectively gather customer requirements, what questions to ask, what things to dig into, how to organize them. A simple walk through, better than the hand-wavy stuff most people do when requirements gathering.

2. How to model a domain / problem space before implementing. How to effectively reason about your entities, services and actions. And iterate on these with your customer before coding.

I seriously wish I had run across this when I first started coding. A really great collection of tangible, actionable advice.

I've never done it in javascript so won't try to guess, but the first two parts of the three part book are really applicable regardless of the language. Will have to finish the third to see how it is.

Domain Modeling Made Functional - Scott Wlaschin

  https://pragprog.com/book/swdddf/domain-modeling-made-functional

  https://www.amazon.com/Domain-Modeling-Made-Functional-Domain-Driven/dp/1680502549
(I have zero affiliation with the author and get nothing from any links)

His blog https://fsharpforfunandprofit.com/
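
For readers who haven't seen the style, a toy sketch of the book's central move (mine, in Python; the book uses F#): model states as distinct types, so an invalid value can't be passed where a valid one is required.

    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class UnvalidatedEmail:
        value: str

    @dataclass(frozen=True)
    class ValidatedEmail:
        value: str

    def validate(e: UnvalidatedEmail) -> Union[ValidatedEmail, str]:
        # Success and failure are both ordinary values, so callers are
        # pushed by the types to handle each case explicitly.
        return ValidatedEmail(e.value) if "@" in e.value else "missing @"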


The way I look at reading is this: if you don't read, then you will only ever be exposed to ideas that people around you choose to share, and you'll never hear or learn anything at a greater level than the smartest person in your local area.

But reading and books open the world of ideas and knowledge to anyone. Nowadays, post internet and WWW, when information is abundant, reading long-form books isn't the advantage it used to be. But it can still be beneficial to read long-form exposition by experts.


This is a nice article. For those who have not yet read it (it's short, read it!), a one-paragraph summary: the author starts with a list of random numbers. Visualizing it (plotting the numbers, with the list index on the x axis) leads the author to wonder how often numbers repeat. Plotting that leads to the question of what the maximum frequency would be, as a function of the size of the input list. This can lead to a hypothesis, which one can explore with larger runs. And then after some musings about this process, the post suddenly ends (leaving the rest to the reader) and gives the code that was used for plotting.

This article is essentially an encouragement and a reminder of our ability to do experimental mathematics (https://en.wikipedia.org/w/index.php?title=Experimental_math...): there's even a journal for it, and the Wikipedia article on it is worth reading (https://en.wikipedia.org/w/index.php?title=Experimental_Math...). See also (I guess I'm just reproducing the first page of search results here) this article (https://www.maa.org/external_archive/devlin/devlin_03_09.htm...), these two in the Notices of the AMS (https://www.ams.org/notices/200505/fea-borwein.pdf, http://www.ams.org/notices/199506/levy.pdf), this website (https://www.experimentalmath.info), this post by Wolfram (https://blog.stephenwolfram.com/2017/03/two-hours-of-experim...), and there's even a book by V. I. Arnold (besides a couple by Borwein and Bailey, and others).

Especially in number theory and probability, simple explorations with a computer can suggest deep conjectures that are yet to be proved.
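
A rough sketch of that kind of exploration (my own code, not the article's, and it assumes the "random numbers" are uniform draws from 0 to n-1):

    import random
    from collections import Counter

    # How often does the most frequent value repeat, as the list grows?
    for n in (10**3, 10**4, 10**5, 10**6):
        xs = [random.randrange(n) for _ in range(n)]
        print(n, Counter(xs).most_common(1)[0][1])

Running it a few times already suggests the maximum frequency grows very slowly with n, which is exactly the kind of observation that turns into a conjecture.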


I think what's missing is something like "Data Model Patterns: A Metadata Map" by David C. Hay

It's like C. Alexander's "Pattern Language" but for data models.

> ...I was modeling the structure — the language — of a company, not just the structure of a database. How does the organization understand itself and how can I represent that so that we can discuss the information requirements?

> Thanks to this approach, I was able to go into a company in an industry about which I had little or no previous knowledge and, very quickly, to understand the underlying nature and issues of the organization—often better than most of the people who worked there. Part of that has been thanks to the types of questions data modeling forces me to ask and answer. More than that, I quickly discovered common patterns that apply to all industries.

> It soon became clear to me that what was important in doing my work efficiently was not conventions about syntax (notation) but rather conventions about semantics (meaning). ... I had discovered that nearly all commercial and governmental organizations — in nearly all industries — shared certain semantic structures, and understanding those structures made it very easy to understand quickly the semantics that were unique to each.

> The one industry that has not been properly addressed in this regard, however, is our own — information technology. ...

https://www.goodreads.com/book/show/20479.Data_Model_Pattern...

I've pushed this at every company/startup I've worked at for years now and nobody was interested. You can basically just extract the subset of models that cover your domain and you're good to go. Or you can reinvent those wheels all over again, and probably miss stuff that is already in Hay's (meta-)models.
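
To give a flavor of what such patterns look like, a toy sketch of the classic "party" supertype in SQLite (the table and column names are my own, not Hay's exact model):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE party (
        party_id   INTEGER PRIMARY KEY,
        party_type TEXT CHECK (party_type IN ('person', 'organization'))
    );
    CREATE TABLE person (
        party_id    INTEGER PRIMARY KEY REFERENCES party(party_id),
        given_name  TEXT,
        family_name TEXT
    );
    CREATE TABLE organization (
        party_id   INTEGER PRIMARY KEY REFERENCES party(party_id),
        legal_name TEXT
    );
    -- Orders, contracts, roles, etc. can reference party_id and so work
    -- uniformly for people and organizations alike.
    """)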


Yes, it is so good, along with:

2veritasium

3Blue1Brown

Academy of Ideas

Adam Beatty

AdamantMC

Ahoy

Alexander Bus

Applied Science

Art of the Problem

Arxiv Insights

Astronomy - Topic

Backyard Brains

BadMouseProductions

Ben Eater

Biographics

Biology - Topic

blackpenredpen

Bloomberg

Bob & Brad

Bozeman Science

Brian Will

Calle Svensson

Carnegie Mellon University

Center for Brains, Minds and Machines (CBMM)

CGP Grey

Cheddar

Chemistry - Topic

Clip'wreck

CNBC

CNN Business

Cody'sLab

Coffee Break

Cognitive Science - Topic

colinfurze

Computer Science - Topic

Computerphile

Conlang Critic

ContraPoints

Crash645

CrashCourse

Cuck Philosophy

Culadasa

CuriousMarc

CurseNetwork

DarkkknuX

Death Grips

DEFCONConference

Diamond Way Buddhism

Dictionary of Obscure Sorrows

Domain of Science

DottierDig 95

Ecology - Topic

EEVblog

emacsrocks

engineerguy

Eric Dodson

EricTheCarGuy

Errant Signal

exurb1a

FOSDEM

Francesco Micheli

Fredrik Knudsen

Future of Life Institute

GamingCorridor

Google Assistant

Google Chrome Developers

Google Developers

GreatScott!

Historia Civilis

Homemade Home

How to Start a Startup

I Like To Make Stuff

illacertus

InfoQ

Isaac Arthur

James Bruton

Jason Silva: Shots of Awe

jekor

Jimmy Built

JustAdamCurtis

Kalliopi Music

KensOfficeUSA

Knowing Better

Kurzgesagt – In a Nutshell

LeafyIsHere

Learn Engineering

LectiOpi

LeiosOS

Lex Fridman

Linus Tech Tips

LiveOverflow

Magic Marks

MakingGamesWithBen

mathematicalmonk

Mathematics - Topic

Mathologer

MathTheBeautiful

MC [DDLC]

Meeting Cpp

Microsoft Research

Mike Smith

Millionaire Hoy

Miniclip

minutephysics

MIT OpenCourseWare

MN Millennial Farmer

mrpete222

Mustard

NASA

Nerdwriter1

Next Day Video

NowThis World

nptelhrd

Numberphile

Numberphile2

Objectivity

Olivia Budgen

OneTooManyShots

OverSimplified

PBS Space Time

Periodic Videos

Philosophy Overdose

Philosophy Tube

PhilosophyFile

Physics - Topic

PolyMatter

PowerfulJRE

Psychology - Topic

Quanta Magazine

Quirkology

Radiohead

Real Engineering

Redefining Strength

Rich Rebuilds

Roblox

Sam Harris

Sarah Z

Sciencephile the AI

Scott Manley

Seeker

Serious Science

SETI Institute

Shrugged Collective

singingbanana

Sixty Symbols

SmarterEveryDay

Smarthistory

Sociology - Topic

SpaceX

Stanford

Startup Division

Startupfood

stdout

stickmasterluke

Stories

Talks at Google

TED-Ed

The Crazy Framer

The Institute of Art and Ideas

The Partially Examined Life

The Royal Institution

The School of Life

The Smiths

TheTrevTutor

Think Twice

Thinkerview

This Old Tony

Thomas Schwenke

THUNK

Tim Ferriss

Tom Scott

Townsends

Trinity College Dublin

Two Minute Papers

Veritasium

Vihart

Vsauce

WARRENMUSIC

Welch Labs

Wendover Productions

wenshenpsu

WhartonLeadership

What I've Learned

WhoMadeWho

Will Schoder

Wisecrack

Wojciech Mormul

Word Porn

Yurumates

ZaidyPlays


There are alternatives to Amazon!

For general household stuff, my go to store is manufactum.com

They only offer very few things, but the things they have are very high quality. If you buy something from Manufactum, you know it'll last for the rest of your life.

The stuff there is of course outrageously expensive compared to the plastic crap from Amazon, but for me that means I really think about the purchase. Do we really need two of those metal baskets for the shower that hold your shampoo bottles? Not if it costs 100€.

For tools, there's dictum.com, which follows a similar philosophy. They don't offer a million choices; they make a very deliberate choice about which products they sell, so you don't have to make the choice. You can rely on anything you order from them being a good choice.


The successor, Umajirushi DC Chalk, is available here from JetPens (San Jose, USA), $1.80 for 6 sticks, or 30¢ per stick (also available in a 72-pack, as shown in the video).

https://www.jetpens.com/Umajirushi-DC-Chalk/ct/3774

“The two well-established chalk makers Umajirushi and Hagoromo worked together to develop this chalk.”

Thinking a six-pack might make a nice no-reason gift to friends' kids, or to my local baker or church that uses a chalkboard, I just placed an order.


See here for an example of an argument against using Venn diagrams to depict joins: https://dzone.com/articles/say-no-to-venn-diagrams-when-expl...
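
The gist, as I understand it, in a quick sketch (my own toy example): a join is a filtered cross product, not a set intersection, which is something a Venn diagram can't express.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
    CREATE TABLE a(x);  INSERT INTO a VALUES (1), (1);
    CREATE TABLE b(x);  INSERT INTO b VALUES (1);
    """)
    # A Venn diagram suggests the "intersection" {1}, but an inner join
    # happily duplicates rows:
    print(db.execute("SELECT * FROM a JOIN b ON a.x = b.x").fetchall())
    # -> [(1, 1), (1, 1)]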

I want to model my home in 3D, with measurements. What's a good free alternative to AutoCAD for that?

But using no words whatsoever is also wonderful. Here is "On Top", by Marilyn MacGregor: https://www.amazon.com/Top-marilyn-macgregor/dp/068807491X/r...

Book of Proof is hands down the best book to start with. https://www.people.vcu.edu/~rhammack/BookOfProof/

I’ve worked through the whole book twice because I loved it so much.

