Hacker News | dwwoelfel's comments

Carmack is talking about variable reassignment here, and Clojure will happily let you mutate a binding that way.

For example:

  (let [result {:a 1}
        result (assoc result :b 2)]
    ...)

He mentions that C and C++ allow const variables, but Clojure doesn't support that.

clj-kondo has a :shadowed-var rule, but it will only find cases where you shadow a top-level var (not the case in my example).


That's not mutation though.

The `assoc` on the second binding is returning a new object; you're just shadowing the previous binding name.

This is different than mutation, because if you were to introduce an intermediate binding here, or break this into two `let`s, you could be holding references to both objects {:a 1} and {:a 1 :b 2} at any time in a consistent way - including in a future/promise dereferenced later.
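An analogous sketch in Python (not the thread's Clojure, just for illustration) makes the distinction concrete: rebinding a name to a new object leaves every holder of the old object untouched, while in-place mutation is visible through every reference.

```python
# Rebinding: build a new dict and point a new name at it.
# Both versions stay reachable, like Clojure's shadowed let bindings.
result_v1 = {"a": 1}
result_v2 = {**result_v1, "b": 2}  # roughly analogous to (assoc result :b 2)

assert result_v1 == {"a": 1}           # the original object is untouched
assert result_v2 == {"a": 1, "b": 2}

# Mutation: update the one shared object in place.
result = {"a": 1}
alias = result
result["b"] = 2
assert alias == {"a": 1, "b": 2}       # every holder of the reference sees the change
```

With shadowing you could hand `result_v1` to a future or callback and it would still see `{:a 1}` later; with mutation it would not.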


Regardless of the mechanism, you still run into the exact same problem John had.


It's more nuanced, because the shadowing is block-local, so when the lexical scope exits the prior bindings are restored.

I think in practice this is the ideal middle ground: you get the convenience (appending version numbers to variable names is annoying) while retaining mostly sane semantics and reuse of prior intermediate results.


You have to use the slug from the wiki page. `Jell-O` to `Philosophy` works.


Oh, it's case sensitive! Thanks.


If you want an RSS feed of your YouTube video subscriptions, I made an app for that:

https://yt-better-subs.web.app/

I went through quite the hassle to get the app's oauth scopes approved with Google so that it can keep your subscriptions up-to-date as you add or remove YouTube channel subscriptions.


Here's how we did it at OneGraph (RIP), where we not only upgraded versions without downtime, but we also moved hosting providers from GCP to Aurora without downtime.

1. Set up logical replication to a new database server. We used https://github.com/2ndQuadrant/pglogical, but maybe you don't need that any more with newer versions of postgres?

2. Flip a feature flag that pauses all new database queries.

3. Wait for the queue of in-flight queries to drain and for replication to catch up.

4. Flip a feature flag that switches the connection from the old db to the new db.

5. Flip the flag to resume queries.

It helped that our stack was written in OCaml. We had to write our own connection pooling, which meant that we had full control over the query queue. Not sure how you would do it with e.g. Java's Hikari, where the query queue and the connection settings are complected.

We also had no long-running queries, with a default timeout of 30 seconds.

It helped to over-provision servers during the migration, because any requests that came in while the migration was ongoing would have to wait for the migration to complete.
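The pause/drain/switch steps above can be sketched as a small gate in front of the connection pool. This is a hypothetical Python sketch (the original was OCaml, and all names here are invented), just to show the shape of steps 2 through 5:

```python
import threading

class SwitchableQueryGate:
    """Hypothetical sketch: pause new queries, drain in-flight
    ones, swap the connection target, then resume."""

    def __init__(self, dsn):
        self.dsn = dsn
        self._resume = threading.Event()
        self._resume.set()                 # queries flow by default
        self._inflight = 0
        self._lock = threading.Condition()

    def run_query(self, fn):
        self._resume.wait()                # step 2: new queries block while paused
        with self._lock:
            self._inflight += 1
        try:
            return fn(self.dsn)            # query runs against the current target
        finally:
            with self._lock:
                self._inflight -= 1
                self._lock.notify_all()

    def switch(self, new_dsn, replication_caught_up):
        self._resume.clear()               # step 2: pause new queries
        with self._lock:                   # step 3: drain in-flight queries
            while self._inflight:
                self._lock.wait()
        replication_caught_up()            # step 3: wait for replication
        self.dsn = new_dsn                 # step 4: flip to the new db
        self._resume.set()                 # step 5: resume queries
```

Requests arriving during `switch` simply block in `run_query` until the flag is set again, which is why over-provisioning during the migration helps.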


That is awesome, I dream of being able to do zero downtime SQL migrations.


One of the linked pieces in the Neon blog post is from Knock, where we pulled off a practically zero downtime migration: https://knock.app/blog/zero-downtime-postgres-upgrades

In that post we walk through all the steps we took to go from Postgres 11.9 to 15.3.


How do you handle shipping price calculations? Is that also a feature in Stripe's Product Catalogue?


This feature isn’t ready yet, but it’s something we’re actively working on. To support shipping right away, it can be outsourced to a third party, and some of our partners are building such functionality.

In the long term, we aim to make it tightly integrated with Stripe, making Stripe the core infrastructure for your e-commerce needs.


There are several third parties to get this data.

I prefer Shippo: I know the founders, and I integrated it inside Weebly (now Square).


I'm impressed. I asked it "How much is a flight from San Francisco to the rapid & blitz tournament over Christmas?" and it figured out which tournament I was talking about and showed me ticket prices.

https://g.co/bard/share/7966410c42af

ChatGPT also figured it out, but Bard is much better at displaying information: https://chat.openai.com/share/ba5d5acc-7b40-46e1-ada5-74b4a6...


Ugh, I tried Bard too, but wasn't as impressed. Granted, I had a specific request with a stop-over for a couple of days, but it wasn't able to complete it, only the first leg. A follow-up question then prompted it to look up a round-trip flight for the second leg.


A few ideas to improve the schema based on looking at the examples:

1. Make `globalId` part of a "Node" interface that all of the types implement. This will work better with tooling like Relay (used for refetching and caching). It will also let you add a `node` field that can be used to fetch any node in the graph.

2. Make the sort input an enum so that you have `sort: TITLE_DESC` instead of `sort: {by: TITLE, order: DESC}`.

3. Implement the connection spec instead of returning a list of items: https://relay.dev/graphql/connections.htm. This will let you add pagination data to the field and other useful info like totalCount.

4. Spin up a postgraphile instance with the `@graphile-contrib/pg-simplify-inflector` and `postgraphile-plugin-connection-filter` plugins and copy everything they do.
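The connection spec from point 3 is essentially cursor-based pagination with opaque cursors. A rough in-memory sketch in Python (hypothetical helpers, not tied to any GraphQL library) of the shape a connection field returns:

```python
import base64

def encode_cursor(offset):
    # Opaque cursor: clients must not parse it, so base64-encode an offset.
    return base64.b64encode(f"cursor:{offset}".encode()).decode()

def connection(items, first=10, after=None):
    """Return a Relay-style connection: edges with cursors,
    pageInfo, and totalCount."""
    start = 0
    if after is not None:
        decoded = base64.b64decode(after).decode()
        start = int(decoded.split(":")[1]) + 1   # resume after the cursor
    window = items[start:start + first]
    edges = [
        {"node": item, "cursor": encode_cursor(start + i)}
        for i, item in enumerate(window)
    ]
    return {
        "edges": edges,
        "pageInfo": {
            "hasNextPage": start + first < len(items),
            "endCursor": edges[-1]["cursor"] if edges else None,
        },
        "totalCount": len(items),
    }
```

A client pages forward by passing the previous page's `endCursor` as `after`, which is what Relay's pagination tooling does under the hood.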


1. I do already have the `Node` interface, but it's called `IdentifiableObject` as I cannot follow the convention, because the interface has 2 fields:

- id: the actual WordPress ID

- globalID

And the convention says that Node can have only a single field: https://graphql.org/learn/global-object-identification/

2. It's better to keep it as an Input Object as it's more extensible by plugins, which may have their own fields to sort by. Also existing resolvers can be reused, instead of having to create a dedicated enum for each single type every time. I think it's more elegant than using enums.

3. There's already a totalCount equivalent for every field. E.g. there's `posts` and `postCount`, `users` and `userCount`, etc.

I could implement connections, and maybe I will in the future, but I need a compelling reason to do it: It was needed by Facebook for their never-ending feed, but as WordPress sites are naturally paginated, I believe it's not a real need.

And connections also bring some pain: I know that WPGraphQL has had many issues with it, maybe even ongoing, with some edge cases where it doesn't work well, and if I'm not mistaken it needs additional DB calls.

4. The plugin already attempts to provide all the filtering supported by WordPress. Check out all the `filter` inputs (in fields `posts`, `users`, `comments`, etc)

(In addition, I'll be releasing extra functionality via directives some time in the future)

And it uses the "oneof" input object to simplify fields, so you have `post(by: {id: 1})` and `post(by: {slug: "some-slug"})`


You can build curl without support for ftp.


Yes you're quite right that you can do this when building curl:

  ./configure --disable-ftp

But then you end up with a libcurl that can never support FTP clients. However FTP is still a useful protocol in some circumstances, perhaps very limited these days, but still used. I think that it's better to expose this through a module system reflected into the distribution packages. It makes things much more visible.


> It makes things much more visible.

In what way? Recompiling a few different binaries for various levels of usage (with/without whichever protocols needed) doesn't seem like an arduous task, especially for a distro. And the docs on how to do it from the curl project are very clear and "visible". I'm not sure how any module system would improve on this.

In fact, it seems like it would make things worse purely by virtue of not being idiomatic: compile flags are a familiar, straightforward, well-known approach.


> compile flags

They don't play nicely with OS package managers like apt.


Which is another reason why dependencies shouldn't be shared


Your solution is multiple copies of libcurl all over the place, all compiled in different ways, probably different versions, and that's supposed to be more secure and maintainable?


More secure: not more, but equally. More maintainable: infinitely. This is how software is being built today in secure environments anyway, since you need to own the supply chain.

Dependencies are not actually shared that often. Here's a good post about it: https://drewdevault.com/dynlib


Might be better to redirect to news.ycombinator.com instead of google.com. I'd be less likely to notice that, especially if I opened it in a background tab.

But maybe that's the kind of criticism he was trying to avoid in the first place!


According to the article, they likely are getting severance.

> Those affected will gain access to the company’s “generous severance philosophy” and “a talent hub to allow them to opt-in to receive additional support services.” (The details surrounding the severance package are unclear, but some affected workers on Blind alleged they would receive two months worth of base pay; a representative from Coinbase did not provide further comment.)


I don't want a severance philosophy, I want a package.


They are getting a severance package.

