
Been interviewing for over a decade. Tests like this do not really tell you whether someone is a good programmer; they tell you whether a person has spent a lot of time practicing problems like this. The only way to tell if someone is good at the job is to have a conversation with them and pay attention to how they answer your questions. Ask your candidate their opinions on API design or whether they favor mono-repos. A good candidate will be able to speak cogently and at length about these things. The problem is that in order to judge those responses you also have to be very knowledgeable. So instead we have stupid little tests designed to let interviewers of varying ability screen candidates.


I've been interviewing people for more than twice that long. I started with the conversational approach, and it worked poorly. There are too many people that are good at talking about technology but bad at actually doing.

My mature interview style is a pair programming session on a specific exercise (the same for everyone). It's inspired by the "RPI" (Rob's Pairing Interview) from Pivotal (well, Pivotal of old days). There are no gotchas to it, it's not hard. But it's definitely programming. Because that's what I'm hiring for. Not talking.


to correctly judge some fizzbuzz solutions (or approaches to refactoring) you would also have to be knowledgeable. Guess what I think the problem is.


As someone who has built multiple custom macro film scanner setups, owns basically every consumer film scanner of note (including the Coolscan 9000 and the Minolta Scan Multi Pro), and is intimately familiar with the workings of various film scanners and the science of digitizing film, I don't think this article provides particularly good advice.

Just for instance, the LS-2000 featured in the post has an advertised optical resolution of 2700 DPI, which means the absolute maximum megapixel resolution you can get out of that thing is a little over 10MP. Film scanners are notorious for overstating their optical resolution, which has nothing to do with the resolution of the sensor used to digitize the image data and everything to do with the lens in the scanner. You can have a 200MP sensor scanning your film, but if your lens can only resolve 1000 DPI you will have a very high resolution image of a low resolution lens projection. It's maybe a little better than a flatbed and it features dust removal, but in the year of our lord 2024 the LS-2000 is not a good choice for scanning film.
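
(For reference, a rough back-of-the-envelope check of that ceiling, assuming a standard 36x24mm 135 frame; the exact figure shifts slightly depending on how much of the frame area you count:)

    // Megapixel ceiling for a 35mm frame at a given optical resolution.
    const dpi = 2700;
    const widthIn = 36 / 25.4;   // ~1.42 in
    const heightIn = 24 / 25.4;  // ~0.94 in
    const megapixels = (dpi * widthIn) * (dpi * heightIn) / 1e6;
    console.log(megapixels.toFixed(1)); // ~9.8, i.e. roughly 10 MP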

As for his macro scanning setup, he appears to be using the digitaliza for film holding, which is a notoriously bad product with many known flaws. Negative Supply makes a lower-cost line of their very good film holders, and Valoi also offers an affordable system of components that I highly recommend. There is a ton of good information out there about macro scanning, and had the OP sought it out he could have avoided his little adventure in retro computing.


Digitizing film seems to be a perennial pain point. As far as I know there is no mostly-automated option to scan multiple film formats at high resolution besides paying someone with very expensive equipment to do it for you. The obsolete equipment like those models you mentioned involves a lot of fastidious labor per-frame and is generally pretty awful.

Modern equipment has similar warts. Flatbed scanners are bad film imagers for a number of reasons, a few of which you already mentioned. There's a huge volume of new products coming out for scanning right now (film holders, copy stands, light panels, etc.) but these setups are very inconvenient to set up or, to be charitable, demand practice and perfect technique. There are always people ready to insist they have an easy, convenient time setting up their SLR scanners and capturing 1000 rolls at 9999 DPI in 2 minutes. I don't share their experience.

During the pandemic I tried to proof-of-concept a path forward without any real success:

- The first attempt involved modifying a Plustek scanner to take medium format. This ended up taking a ton of work for each medium format frame (4 captures for each of the 4 quadrants, and each of those is already slow for a single 35mm frame). Stitching these captures is tedious and flaky for images that don't have obvious sharp features.

- The other involved rigging the objective of a Minolta Dimage Scan Elite II on a Raspberry Pi HQ camera onto an Ender printhead to raster over the film with a light table. This could have worked but it had many mechanical problems I am not cut out to solve (lens mount, camera-to-film-plane alignment)

Leaving aside designing a proper optical path there are 2 killer problems:

- the problem of mechanically manipulating the negative and keeping it in focus

- the problem of stitching together partial captures with minimal human intervention

A few people seem to be working on open source backlit line-scanners but as far as I know no central path forward has emerged. I hope someone figures it out.


I see you mentioned using a 3D printer for scanning medium format film. I did something similar, but took the opposite approach. I placed the film on a lightbox and mounted that to the printer, then had that move around in front of a camera with macro lens. I did not have much of a problem with alignment.

That being said, this was a one-off, but once I had enough overlap between captures, PTGui was able to stitch it together relatively hands-free, even with lots of sky in the frame.


I've been doing something similar. I started with a 3D printer approach, then two cheap aliexpress C-beam linear actuators and finally managed to acquire a 2-axis microscope stage for cheap. The key I have found is that any issues with alignment can actually be solved with focus-stitching.

The real problem with most scanning setups is actually getting accurate color out of color negatives. The common wisdom these days is to use high-CRI light, but I believe that approach is flawed. Film scanning is not an imaging challenge, but rather a densitometric one. You don't actually want to take a photo of the negative in a broad spectrum, because the dyes in photo negatives were never intended to be used in a broad-spectrum context. You actually need to sample the density of the dye layers at very specific wavelengths determined by a densitometric standard (Status M) that was designed specifically for color negative film. Doing this with a standard digital camera with a Bayer sensor is... non-trivial, and requires characterizing the sensor response in a variety of ways.

Basically the hardware is easy, the software is hard.
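
A minimal sketch of the density side of that math in JavaScript, assuming linear (not gamma-encoded) pixel values from three narrow-band R/G/B exposures; the numbers are illustrative, and a real Status M implementation also needs the specified spectral responses:

    // Convert per-channel transmittance to optical density.
    // sample: linear pixel value through the negative
    // reference: linear pixel value of the bare light source (flat field)
    function channelDensity(sample, reference) {
      const transmittance = sample / reference; // fraction of light passed
      return -Math.log10(transmittance);        // optical density, D = -log10(T)
    }

    // Example values for one spot on the film:
    const densities = {
      r: channelDensity(0.31, 0.98),
      g: channelDensity(0.12, 0.97),
      b: channelDensity(0.05, 0.95),
    };
    console.log(densities); // larger number = denser dye layer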


Wow, do you have a write up of your scanning setup? It sounds like it could work for scanning 4x5 and 8x10.

I’m also curious about your comments on the light source. Although you’re 100% correct about the way the wavelengths are specified in the data sheets, the reality has always been different. When I was printing color in the darkroom, our enlargers were very basic lights with subtractive color filters. Dedicated film scanners used either fluorescent or basic LED backlights. Have you run into color reproduction trouble that you’re sure relates to the illuminant or sensor response curves?


Ooo I never thought about the issue of stitching blank space together. I wonder if film grain makes it trivial for the alignment algo.


Very interesting. What camera/lens/lightbox did you use and around what DPI did you achieve?


I used an A7C2 + Sony FE 50mm f2.8 macro. The lightbox was a custom build based on the design that I found linked on HN recently: https://jackw01.github.io/scanlight/. This was then mounted vertically on the toolhead of my 3D printer with the camera on a tripod; I then used the Z and X axes to scan across the negative.

Although I had success with PTGui and it "just worked", I didn't fancy paying for it and instead used Hugin in the end. This led me to take around 63 pictures with 50% overlap.

The film was a 4x5 negative and after stitching I'd say the effective DPI was ~4500


Also, the LS-2000 is a noisy POS. I owned this thing for years (bought new) and put plenty of time into it. It just sucks. It was only mediocre for slides and black-&-white negatives; for color negatives it was nearly useless. You could never remove the base negative color and retain good image color. The dynamic range sucked.

I sold it on eBay years ago, then researched what might be better. The general opinion was that consumer-accessible scanning peaked with the Minolta Dimage Elite 5400 II. Of course these were long out of manufacture, but I managed to find one new in the box on a small auction site. To this day I haven't gotten around to scanning a single piece of film with it. Maybe this post will finally get me off my ass...


B&Ws also scan poorly on it if the negatives are even a little bit dense. Tricky negatives that could still produce good images in the darkroom had no hope on the LS-2000.


Yeah, it's another one of those products that inexplicably collected cachet and reputation but was trash in reality.

I had a VCR of similar reputation, which also suffered from a noise-filled image coincidentally (the Panasonic AG-1960).


You seem knowledgeable in this space. I've researched scanners just for scanning old prints but get mixed messages. What would be your advice on a scanner for this purpose that gets 90% good enough… the Epson 650/700?


At a sane price? The Epsons.

https://epson.com/For-Work/Scanners/Photo-and-Graphics/Epson...

It has a white panel on the inside of the lid which ruins a lot of scans. I always put black card on top of my scans.

That's probably the best scanner for photos. And I used to own a $25K Hasselblad too.

Make sure you clean the platen and the photos before scanning to save hassle later on dust removal.

I used those Epsons for scanning tens of thousands of old photos.

Start with a good scan, and there is so much good post-processing software out there now to help correct fading etc on old images.


Black card is thin cardboard, aka card stock? Can you link to something that explains the process? It's pertinent because I've got a V600.


Yep! Just card stock. For some reason having a white backing behind the image causes worse scans when the bright light can penetrate the image you're scanning and gets reflected internally off that white sheet. I always just put a thin piece of black cardstock on top of everything I scanned.


Any pointers on the post-processing software? Anything to correct red-tinted slide scans?


Honestly, the best thing out there for color correction right now (to me) is Adobe's Camera Raw feature.

https://blog.adobe.com/en/publish/2024/10/14/the-adobe-adapt...

The AI denoise is fantastic, too:

https://gregbenzphotography.com/lightroom-acr/acr-17-ai-adob...


The other nice thing about the higher end Coolscan would have been the relative triviality of hooking up a FireWire peripheral to that Mac mini M4 shown in the pictures, or even USB with the midrange LS-5000.


Have you used the FlexTight line of scanners? I've been able to get really great results from my Precision II.


Thank you very much for all that information. I kind of agree. It's a nice retrocomputing adventure, but it doesn't seem that efficient. But eh, we know process is very important for us film shooters, so if he enjoys it, it's mission complete to me. I used to painfully scan B&W negatives with an Epson flatbed scanner 15 years ago and the experience was awful. I just bought an EASY35 from Valoi to dig back into my films, with a 70mm macro from SIGMA on a Lumix S5. So far it's very impressive. Focusing straight on the grain is a dream. Is that the setup you use now?


Wow. Blast from the past. I had both the Coolscan 9000 and LS-2000.

I now use a system from Negative Supply.


I don't think the author of this article actually understands the pressures that increasingly drive all frontend development into javascript frameworks, but those pressures are actually very straightforward:

• A large portion of the cost of maintaining a code repository goes toward maintaining the build.

• Multiple builds per repo create significant costs.

• Any web application with a UI _requires_ a frontend build for CSS/JS. Anyone around from the jQuery/pre-SASS days will recall the mess that the lack of things like dependency management and the ability to control import order caused.

• If the frontend build is already baked into the process, you can save costs by _only_ using a frontend build.

• SPA patterns are the easiest to use with a frontend build and have the most examples/comprehensive documentation.


I think the author understands that just fine (I follow him on Mastodon and this is something he is very passionate about). To me his argument is that this shouldn't—and doesn't need to—be the case.

The vast majority of sites out there would be just fine, and in many cases much better, as traditional server-rendered pages with a thin layer of JS on top for enhancements and for islands of interactivity. That massively reduces the complexity and cost of creating and maintaining a build.

Most of us aren't working on anything that requires the entirety of every page and the entire navigation model to be implemented on the client.


So he's basically publishing a 20-page philosophical logorrhea to make the simple point that developers should pay more attention to the difference between a web SITE and a web APP and choose their stack accordingly, which is a totally fair point that I 100% agree with.

What I fail to see is how React is responsible for any of this, because this sort of reads like his wife left him for one of the React engineers or some shit.


Another thing is that almost every complaint I see about React (except bundle size maybe, but who cares?) exists in the APP context.

If your use case is a simple website, React is just a nice templating lib and you won't need to use any of the things people generally dislike about it. That AND your experience when you inevitably have to add some interactivity is going to be 100x better than vanilla JS.

As for the build step, there are many turnkey solutions nowadays that "just work". And isn't a small build step a plus, compared to being at the mercy of a typo breaking everything? To me that peace of mind is worth a lot, compared to whatever manual testing you'd have to do if you work with "text" files.


> is just a nice templating lib

Are these templates only used on the server-side to generate the HTML upfront? Or is it being generated on the client?

> experience when you inevitably have to add some interactivity is going to be 100x better than vanilla JS

I don't believe this can be quantified. How are you measuring DX improvements? Are you also able to continue to measure these improvements as your application/codebase scales?


It's certainly possible to generate the HTML up-front. Tooling like Next.js even sets things up so it's easier to render the HTML for the first page load on the server than to push it to the client.

I have a website. It's not great, it doesn't get much traffic, but it's mine :). If you disable JS, it works: links are links, HTML is loaded for each page view. If you enable JS, it still works: links will trigger a re-render, the address bar updates, all the nice "website" stuff.

If I write enough (and people read enough) then enabling JS also brings performance benefits: yes, you have to load ~100kB of JS. But each page load is 4kB smaller because it doesn't contain any boilerplate.

Obviously I could generate the HTML any way I choose, but doing it in React is nice and composable.


If you really want to, you can have a React app that is just static templates with no interactivity, with a simple Node server that just calls renderToString, and all of a sudden React is just a backend templating framework. If you want to get really fancy you can then re-render specific components on the client side without re-rendering the entire page. You don't need Next.js to do this either; it's very simple and straightforward and lets you use a frontend toolchain for everything.
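
A minimal sketch of that setup, assuming React 18+ and Express, with React.createElement used directly so even the server needs no build step (the component and route are just illustrative):

    // server.js — React as a plain server-side templating engine.
    const express = require("express");
    const React = require("react");
    const { renderToString } = require("react-dom/server");

    // No JSX, so no transpilation is needed anywhere.
    function Page({ title, body }) {
      return React.createElement("html", null,
        React.createElement("head", null, React.createElement("title", null, title)),
        React.createElement("body", null, React.createElement("main", null, body)));
    }

    const app = express();
    app.get("/", (req, res) => {
      const html = renderToString(React.createElement(Page, {
        title: "Hello",
        body: "Rendered on the server, shipped as static HTML.",
      }));
      res.send("<!doctype html>" + html);
    });
    app.listen(3000);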


Building a web application with a UI in a professional context without a frontend build is borderline malpractice. Even a "thin" layer of JS on top requires some degree of dependency management, and I personally have no desire to go back to the days of vanilla CSS, so you need a SASS/SCSS transpiler. Then there are a lot of handy things that frontend builds do, like normalizing SVG icon formats, automatic organization of static assets, etc. The fact is the "islands of interactivity" model still requires two builds.


> Building a web application with a UI in a professional context without a frontend build is borderline malpractice.

I sincerely disagree. I am not about to add node to a project that gets by fine with Django + HTMX.

I'm tempted to say that adding hundreds of perishable npm packages to a project is a better heuristic for 'malpractice'.


Very funny that you think a build with an entire CMS involved is somehow "simpler". You apparently have a lot of patience for Django's static asset management pipeline, but I do not.


> I personally have no desire to go back to the days of vanilla CSS, so you need a SASS/SCSS transpiler

Modern CSS is amazing. Why on earth would anyone use SCSS? It pays to look at what Vanilla can do these days.

> Even a "thin" layer of JS on top requires some degree of dependency management

Use modules and import away. If it is truly a thin layer, there's no need for further optimisation until far along in the product.


Modern CSS has _some_ of the features of SCSS/SASS. It does not have all of them. But most importantly, many of the dependencies one might want to use also make use of SCSS/SASS downstream. If you're happy to build everything from scratch and eschew any dependencies that require a build system, then have fun explaining to your product person why it took so much time to build a thing that they know very well is a pre-built component in some frontend library somewhere.


> Modern CSS has _some_ of the features of SCSS/SASS. It does not have all of them

You say that like those features are desirable.

> then have fun explaining to your product person why it took so much time to build a thing that they know very well is a pre-built component

Sure. I wasted more time getting an assortment of pre-built components to behave than I did building the basics from scratch. And then comes a breaking change. And then that component library uses styled components and doesn't run properly on the server. Why do people do this to themselves?


> Building a web application with a UI in a professional context without a frontend build is borderline malpractice.

Why do you think that? What problem is a build tool solving for you that without it you think you're being irresponsible for not doing it by hand?


Nowhere in my comment did I say abandon a build step?

I’m saying—if you do not have high interactivity requirements, which I would claim is most things on the web—you will encounter a lot less overall complexity shipping mostly server-rendered pages with isolated, self contained JS bundles where you need them.

I was using multi-entrypoint build steps outputting separate per-page or per-feature CSS and JS bundles long before I ever worked on an SPA, it’s hardly a good reason to move your entire UI and routing to the client-side.


What's rendering the pages on the server? Because if it's not JavaScript, and you still have a frontend build, you have a repository with two separate builds, and builds are expensive to maintain. If you're containerizing, you need two different containers, each with a dependency management system, a runtime, and probably a separate workflow for development and production.

There are many ways to render pages on the server using a single JS build: most template rendering engines have a Node implementation, and most JavaScript frontend frameworks have a mechanism to render components statically to a string. If we're talking about a simple, mostly-static website, the content is going to be cached, so the performance of the backend isn't a huge factor. So just use JS for the whole thing and save yourself a build.


Do you know that some of the most-used features of SASS/SCSS are now in vanilla CSS?


> Building a web application with a UI in a professional context without a frontend build is borderline malpractice.

This is outdated nonsense for most sites.


What does "most sites" even mean? I do this professionally, and I assume that most of the people replying here do as well. The article we're discussing is written by a professional for an audience of professionals. The number of sites I've had to build that were entirely static with no interactivity I can count on one hand.


You don't need a heavy frontend build for interactivity. All you really need nowadays is asset fingerprinting.


> Any web application with a UI _requires_ a frontend build for CSS/JS.

Except it really doesn't. Core web technologies have gotten so much better since the jQuery/pre-SASS days that you can absolutely get by without a build step.

- http/2 makes bundling a questionable choice

- polyfills are pretty much no longer a thing

- CSS now has most (all?) of the features that people used SASS for (variables, nesting, etc.)

- es6 modules work

This has been a big talking point in the Rails community lately — one of the big selling points of Rails 8 was the fact that you can, by default, ship a whole webapp without a build step, and that this is considered the "happy path".
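
As a small no-build sketch of what that looks like in practice (file names are illustrative): the page includes `<script type="module" src="/js/app.js">` and the browser resolves the imports itself.

    // public/js/app.js — loaded directly by the browser, no bundler or transpiler.
    import { formatPrice } from "./format.js";

    document.querySelectorAll("[data-price]").forEach((el) => {
      el.textContent = formatPrice(Number(el.dataset.price));
    });

    // public/js/format.js
    export function formatPrice(cents) {
      return new Intl.NumberFormat("en-US", {
        style: "currency",
        currency: "USD",
      }).format(cents / 100);
    }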


In web application terms, the "build" is everything that needs to happen to get your application running in production. That means a runtime and dependencies. Speaking of dependencies, does your perfect frontend simply not have any of them? Is every tool you will need perfectly packaged with vanilla CSS and ES6 modules? Browser support for import maps is around, but it's nothing I would build a production application on. And god help you if you work in a context that requires support for older browsers.

Maybe in 5 years this will be a practical approach, but there's a reason that old ways of doing things hang around: they're well-documented and reliable.


I mean, people ARE doing it, and like I said it's mature enough to be the default way to build Rails apps. There's tradeoffs, no doubt, but this is absolutely a valid, productive way to write (certain types of) web apps.


I write web apps with UI using ES6 native modules and https://stimulus.hotwired.dev. It's very simple, and life is good. No-build is the way.


He works (worked?) for Google. I think he knows what it takes to build a site on a large team and the trade-offs.


Works for MS now


I have several 2-axis microscope stages from the 80s/90s that are driven by brushed motors with position feedback, and they are all capable of higher accuracy than any stepper motor I have. The capability was there, it was just pricey.

Hell, CNC machines existed back then too.


If the microservice has dependencies on other services it is not a microservice.


According to whom? How do those microservices get anything done if they just live in their own isolated world where they can't depend on (call out to) any other microservice?


Would anyone care to explain the reasoning behind their down votes?


Does connecting to a messaging queue or database count as a dependency?

Why not break a microservice into a series of microservices? It's microservices all the way down.


Only if you cannot change one service without changing the other simultaneously. It's fine to have evolving messages on the queue but they have to be backwards compatible with any existing subscribers, because you cannot expect all subscribers to update at the same time. Unless you have a distributed monolith in a monorepo, but at least be honest about it.
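
A tiny sketch of what "backwards compatible" means here in practice; the message shape is made up:

    // v1 message that existing subscribers already understand:
    const orderCreatedV1 = {
      type: "OrderCreated",
      orderId: "o-123",
      totalCents: 4200,
    };

    // v2 only *adds* an optional field; nothing is renamed or removed,
    // so a subscriber written against v1 keeps working unchanged:
    const orderCreatedV2 = {
      type: "OrderCreated",
      orderId: "o-456",
      totalCents: 9900,
      currency: "USD", // new, optional
    };

    function v1Handler(msg) {
      console.log(`order ${msg.orderId}: ${msg.totalCents} cents`);
    }
    v1Handler(orderCreatedV2); // still fine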

Multiple services connecting to the same database has been considered a bad idea for a long time. I don't necessarily agree, but I have no experience in that department. It does mean more of your business logic lives in the database (rules, triggers, etc).


> Only if you cannot change one service without changing the other simultaneously.

Not true at all.

You're conflating the need for distributed transactions with the definition of microservices. That's not it.

> Multiple services connecting to the same database has been considered a bad idea for a long time.

Not the same thing at all. Microservices do have the database per service pattern, and even the database instance per service instance pattern, but shared database pattern is also something that exists in the real world. That's not what makes a microservice a microservice.


> If the microservice has dependencies on other services it is not a microservice.

You should read up on microservices, because that's definitely not what they are, nor anything resembling one of their traits.


what company have you ever worked for that was happy with the current rate of progress in software development?


Is the reason for development on features going slow usually the number of developers though? Nowhere I’ve worked has that really been the case, it’s usually fumbled strategic decision making and pivots.


None, of course.

And the “current rate” is competitively defined. So if AI can make software developers twice as productive, then the acceptable minimum “current rate” will become 2x faster than it is today.


I'm currently doing something similar to build a photographic film scanner. I will say that I've found that moving the optics is generally much more error and vibration prone than moving the target. I'm actually using a 2 axis microscope stage as the basis for my scanner, ironically enough, and CNC spindle z-axis for focus.


Cool, that sounds really interesting, what size film are you using that for? Is illuminating the film evenly hard too?


I'm shooting to do 35mm and medium format. 4x5 is a stretch goal. Even illumination is definitely a challenge, though I use the same technique you do, which is generally referred to as flat field calibration, where we capture the field of light without any target and use it to calculate the correction needed to even out the field. One of the trickier aspects is finding affordable lenses with appropriate magnifications that can focus evenly out to the edges of the frame. There are various lenses pulled from scanners, or made specifically for copying, that are useful if you can find them.
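
For anyone curious, the flat-field step is roughly this (a JavaScript sketch assuming linear raw values and flattened single-channel arrays; names and numbers are illustrative):

    // Divide each capture by a reference frame shot with no film in place,
    // then rescale by the reference's mean so overall brightness is preserved.
    function flatFieldCorrect(image, flatField) {
      const mean = flatField.reduce((sum, v) => sum + v, 0) / flatField.length;
      return image.map((pixel, i) => (pixel / flatField[i]) * mean);
    }

    // Usage with equal-length arrays:
    const corrected = flatFieldCorrect(
      [0.40, 0.41, 0.38, 0.35], // raw capture of the film
      [0.95, 0.97, 0.90, 0.82]  // capture of the bare light source
    );
    console.log(corrected);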


In NYC I can tell you that the metropolitan area has lost about 500,000 people since 2020 and added ~20-30k housing units per year in that same time. The vacancy rate somehow dropped dramatically despite this, and rents also rose dramatically. I've yet to see any good explanation for this, yet you'll still see people advocate for building more housing as the solution.

Simply using the rental vacancy rate as a proxy for supply and demand does not work, since there are lots of factors that can affect vacancies. One of them, as outlined in the article, is landlords keeping units off the market to drive up prices.


The explanation is that there isn't enough housing to meet demand. That's it. Until there is, prices will keep going up even when building more units.

Landlords wouldn't be buying up a ton of units and renting them out at a profit if there was a glut of inventory, because it would be a terrible investment.


Or, accounting and tax law makes it less painful to keep units vacant than reprice them at lower rents.

Which is a thing we could change via something like Vancouver's vacancy tax.

Make it more in landlords' interest to reprice units lower, if the market has excess inventory.


The market will dictate lower prices if there's excess inventory. If landlords are hoarding units and keeping them empty instead of lowering rental prices, that indicates a lack of available inventory.


That's not how repricing works.

If a landlord is unable to rent a unit at a desired price, because the rental market has moved lower, then they have two options.

They can decrease the price.

Or they can not offer the unit for rent (or continue listing it at the higher price).

The second option's cost to landlords is largely defined by accounting/tax rules, in regards to how painful the vacancy will be to them.

Thus, vacancy can be made more or less painful by changing accounting/tax rules.


> The second option's cost to landlords is largely defined by accounting/tax rules

The cost is having empty properties, which require insurance, maintenance costs, property taxes, likely mortgages of their own to pay, all of which cost money and which are by far the biggest costs to letting things sit unused.

And the fact in this case is there simply aren't all these mythical properties sitting unused; simply look at current housing and rental stats.


I'm not talking about repricing specifically? I'm talking about how differently the housing market would behave if there was enough housing to go around.

Landlords have a 3rd option: They can sell the unit, because their unit no longer commands high prices due to housing supply meeting demand, and their capital is best used elsewhere.

If they are underwater and cannot sell above break-even, their bank will eventually do it for them.


As a commercial property, both their loan and sale price are contingent on the rental price.

So again, a disincentive to ever decrease rent (and thus demonstrate the market is softer and therefore your property is worth less).

Versus claiming it still commands higher rent (and is temporarily unrented) and thus more valuable.


What you're saying is true because the market is severely distorted, and a very large part of that is due to zoning restrictions. Zoning restrictions are severely constraining supply, and enabling those with capital to hoard property as an investment vehicle, rather than use it to buy a basic necessity. This scarcity allows landlords to keep a property empty rather than sell or rent at a lower rate, which would not be possible if buyers/renters had ample choice.

Not all rental properties are bought using a commercial loan; many are simply conventional loans where the owner decided to rent out their property rather than sell it. At least in the US, properties purchased with conventional loans can be rented out once the owner has lived in them for a few years. No commercial loan required.

The tax/accounting schemes you mentioned earlier would simply distort the market further without addressing the root problem: There's not enough housing for people that want it. Relaxing zoning rules would allow more housing to be built, and if there was enough of it, it would cease to be an "investment" rather than what they were built to be in the first place: Homes.

As a concrete example, I live in the SFBAY which has an extreme housing shortage. Yet my house is built on an unnecessarily big lot (required via zoning) and any structure built on it cannot be more than 27 feet tall. These are the kinds of rules that are severely distorting the market. I can't build a fourplex on the lot if I wanted to even if I have the space for it, and the demand is there. My next best option is to rent it out for way more than I otherwise could if there was a bunch more housing (Selling isn't really an option either because my mortgage rate is lower than what I could get in a HYSA; I'm basically being paid to borrow an appreciating asset).


All wholeheartedly agreed, but why shoot one arrow when you can shoot a whole quiver?

In addition to mandating upzoning (without allowing local municipalities to overrule or delay), increasing the cost of hoarding property without use would also help, by incentivizing selling or lowering rents.

And because it would be a punitive tax (primary good created simply by existing), there's no reason you couldn't roll the proceeds into programs to facilitate densification. E.g. tax credits for rebuilding existing properties with more units


There is never enough housing to meet demand. Once people have housing they breed, that drives up the population and housing prices once more: only now the world is more crowded and shittier. Without habitat control, it will always be this way.


This meme has got to go: there is little if any evidence to suggest that markets are functioning either in the specific case of housing in high COL areas in the United States or frankly most times anyone trots out the Milton Friedman trope on HN.

Markets fail, they get captured, they get distorted by accounting treatments, they generate cartels. They get technologically disrupted by new forms of cartel pricing that blow past existing regulations(e.g. TFA).

Capitalism sounds dope, I hope I live to see it. But the idea that supply and demand in the Econ 101 formulation is anything to do with the lot of say a person renting a flat in 2024 is silly and borders on insulting.


> Capitalism sounds dope, I hope I live to see it. But the idea that supply and demand in the Econ 101 formulation is anything to do with the lot of say a person renting a flat in 2024 is silly and borders on insulting.

It's really unclear to me why you think this is the case. The median cost of a house in the Chicagoland suburb I live in is north of $470k, and that's not because of technological disruption or cartel pricing, but rather because we've outlawed anything but single-family housing on lots, something we did deliberately back in 1923 and 1947 with the express purpose of preserving and increasing home values for people who lived there at the time and keeping Black families out.

"Markets" didn't "fail" or "get captured" and no hedge fund engineered this situation; people who lived here voted for this outcome.


I didn’t make my point either clearly or well and your scrutiny is merited.

I also know nothing about living or real estate in Chicago, which is by any measure a “high COL” area. I meant “the Bay and NYC” which I know a little better.

The phenomenon you describe is real in those two places: home owners try to restrict high-density construction, presumably to artificially limit supply. This seems to be more effective in Palo Alto than in Downtown Brooklyn, where high rise condo buildings go up practically every other week despite lobbying, but the effect is conspicuous in either case.

NYC is the more striking case: pick your study, and you'll find available housing going up in a year, flat seekers trending steady or down, and prices spiking all at once.

But even in your given example: a homeowner pulling some NIMBY kick flip to fuck with black people or line their own pockets or both is by any measure a “market participant”. Manipulating the situation via side-channel to prevent actual functioning markets is what I was talking about whether one is BlackRock or the representative of a podunk HOA.

The meme that needs to die is that markets work absent referees that make rent-seeking unprofitable. This forum is hosted by an enterprise that began with the noblest of intentions and is now by far the most dangerous clique of insiders to get on the wrong side of in this line of work. It’s a selling point that on BookFace, your first several hundred SaaS customers are in the bag. Far from tearing down credentialism and old boys clubs, which is a grand vision requiring a grand strategy, it turns out that the end state was to stuff a monumental vision into a tiny, tinker-toy strategy as old as clay tablets: reshuffle the local oligarchy in my favor.

I hear all the time that “it’s not what you know, it’s who you know” in the same breath as some faux-Reaganism: “government isn’t the solution to our problems, government is the problem”.

The former sounds like how to get a decent pair of shoes in East Berlin in the 1970s, the latter sounds like someone who is on the take.

Capitalism sounds dope: I hope I live to see it.


[flagged]


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


A lot of native New Yorkers live with a lot of family members or roommates, and these are the types of people who are most likely the move out. Meanwhile, the most likely people to move into New York are well paid young professionals who can afford more space.


> In NYC I can tell you that the metropolitan area lost about 500,000 people since 2020, added ~20-30k housing units per year in that same time. The vacancy rate somehow dropped dramatically despite this and rents also rose dramatically. I've yet to see any good explanation for this

Taking these numbers as given, the obvious explanation is latent demand. A lot of people who used to live five to a 900 sqft. New York apartment are now living 2 or 3 to an apartment instead. Probably the rent dipped briefly before soaring, yeah? People took advantage, and when leases are up, many of those people will presumably consolidate back with their families.

Very similar to the concept of "induced demand" (which is also really latent demand) with regards to highways. Build new lanes, people who were unwilling to drive before use the lanes, traffic delays stay the same (but with higher throughput, and therefore still a net positive, even if the money would've been better spent on trains).


Landlords are not keeping units off the market to drive up prices. There are no landlords who have the pricing power to make that work. There is no landlord that can keep 10% of his inventory off the market to drive up rental prices 11%.


There is, however, in any market a small number of big players, all politically connected, who will conspire against newcomers building new units. A couple years ago stories about that "historic laundromat" in SF were making the rounds, that is very typical.


Yeah, because now that we've all been asking about it, that answer is in its training data. The trick with LLMs is always "is the answer in the training data".


Because I don't want to pay monthly for a bunch of content I probably won't read. I want to pay a small amount of money, with as little friction as possible, for the specific content I want to read now.



Until we have anonymous electronic money, this still does not overcome the problem of privacy (it may worsen it).

"Problem of privacy" which incidentally made me very relieved to find in your article: it is nice not to be alone

> I don’t want or need entities with strong (e.g., credit-card-payment grade) proof of my identity tracking to the paragraph what I’m reading


This is what I want too. Been wanting it for years.

Maybe once payments are bundled into the browser coupled with some W3 standard…


You're basically describing the BAT from Brave


I know, but most people want to pay with their credit card and not a volatile altcoin, and they do not want to switch browser.


That's been a dream for nearly as long as the web has been around. I'm pretty sure there are mailing list threads from the '90s about turning micropayments into a standardized web API. As far as I can tell, this never caught on because it's almost always more profitable to operate your own paywall scheme or payment network than to participate in someone else's (provided that you're powerful enough to get away with it).


The point is you should be able to operate your own paywall. The tech is mature enough in 2024 to make it work.

Make the browser store your credit/debit card info, make the browser handle the payment UI, and make the browser expose JS APIs to invoke payments and fetch receipts against pluggable payment providers.

My ideal world looks like this. New HTML button element:

`<pay amount="1.00" currency="USD" reference="my-article-123" checkoutUrl="https://...">Unlock for $1.00</pay>`

Clicking it opens the browser checkout flow. The URL is one you get from Stripe/PayPal or another whitelisted payment provider that has implemented the spec, via some flow similar to OAuth. On a successful tx, a signed receipt (something like a JWT) is returned from the provider and saved by the browser, on disk on your computer.

The webpage can then load signed receipt references from the browser API and send them to the backend, which can return the article content if the receipt JWT is valid.
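
A rough sketch of that last backend step, using Express and the jsonwebtoken package; the receipt header, its claims, and the provider key are all hypothetical, since no such standard exists today:

    // Hypothetical server-side check for the proposed receipt flow.
    const express = require("express");
    const jwt = require("jsonwebtoken");

    const PROVIDER_PUBLIC_KEY = process.env.PROVIDER_PUBLIC_KEY; // published by the payment provider
    const app = express();

    app.get("/articles/:id", (req, res) => {
      const receipt = req.get("X-Payment-Receipt"); // hypothetical header set by the page
      try {
        const claims = jwt.verify(receipt, PROVIDER_PUBLIC_KEY, { algorithms: ["RS256"] });
        if (claims.reference !== `my-article-${req.params.id}`) {
          return res.status(402).send("Receipt is for a different item");
        }
        return res.send("...full article content...");
      } catch (err) {
        return res.status(402).send("Payment required");
      }
    });
    app.listen(3000);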

It can be fixed if the right people from Chrome and Stripe got together in a room and brainstormed for a bit. Then everyone else would follow.


Cue wave of "micropayments" deja-vu from the 1990s.

