Hacker News | dryark's comments

I'm against AI. I've written two books on the topic so far. I also use AI, as I must if my software company is going to make money. There is no choice any more.

If I had a choice I wouldn't use AI for anything, ever. It's a blight on humanity. But it is a blight everyone has chosen so we are stuck with it.


One important thing the article glosses over: even if you sign your binary with task_for_pid, that does NOT mean you can attach to arbitrary processes on modern macOS, especially on Apple Silicon machines.

There are two separate layers people often confuse:

1) Having the task_for_pid entitlement

2) Being allowed to obtain a task port for a target process

AMFI and the kernel enforce the second one.

Even if your binary has the entitlement, the kernel will still refuse task_for_pid() for many targets (Apple platform binaries, hardened runtime processes, protected tasks, etc). In those cases the call simply fails.
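For reference, the signing step itself looks roughly like this. A hedged sketch: `mytool` is a placeholder binary name, and `com.apple.security.cs.debugger` is Apple's documented debugging entitlement for hardened-runtime binaries. Note that even with this entitlement, the kernel still refuses task ports for the protected targets above.

```shell
# Write an entitlements plist granting debugger capability.
cat > debugger.entitlements <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>com.apple.security.cs.debugger</key>
    <true/>
</dict>
</plist>
EOF

# Ad-hoc sign the tool with the entitlement (macOS only; guarded so
# the snippet is inert on other systems or if ./mytool is absent).
if command -v codesign >/dev/null 2>&1 && [ -f ./mytool ]; then
    codesign -f -s - --entitlements debugger.entitlements ./mytool
fi
```

Even after this, attaching to an Apple platform binary will still fail with `(os/kern) failure` or similar; the entitlement only gets you past the first layer.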

Older blog posts and guides often mention disabling AMFI with a boot argument like:

    amfi_get_out_of_my_way=1
    (also seen as amfi=0x80)
Historically that worked because AMFI behavior could be weakened via NVRAM boot arguments. The flag effectively disabled entitlement validation and allowed processes to obtain otherwise restricted capabilities.

That advice is now largely outdated on Apple Silicon.

On modern M-series Macs the boot chain is tied into Secure Boot and the Secure Enclave. The kernel image, boot policy, and security configuration are verified during boot, and the system enforces what boot arguments are allowed to affect security policy.

In practice this means:

• You cannot freely change security-sensitive boot args from a normal system.

• Boot policy is enforced by the secure boot chain.

• Root does not get to override it.

Changing these policies requires booting into Recovery and modifying the machine’s security mode (Reduced Security). Even then, many AMFI protections remain active.

So the old “just set amfi_get_out_of_my_way and reboot” trick that worked on older Intel systems does not translate cleanly to Apple Silicon machines.

As a result, signing a tool with task_for_pid does not magically give you the ability to attach to arbitrary system processes on modern macOS. Without weakening the system’s boot security policy or patching the kernel, AMFI-protected processes remain non-attachable by design.


For JIT you self-manage run protections for code segments. That isn’t free editing of arbitrary executables out of the gate, but you could develop code running in a self-harness supporting arbitrary runtime changes during development.

There would be indirection somewhere, but that could be high up the code tree, so zero impact on downstream performance sensitive code.


I find this political mess to be interesting, but I'm not sure it's an appropriate topic of conversation for the "professional community."

It looks to me like people on both sides were bothering each other over an extended period of time, and it got worse until it led to this.

Having seen similar over my career I don't understand what digging into this accomplishes besides making all sides more upset.

I do think it is wrong for the professional relationship ( continuing to sell technology ) to be ended over a personal dispute.

From where I stand it looks like Phil is attempting to defend himself by being open about what happened even though the whole situation seems unpleasant.

Besides that I think it is likely best if all sides in this just cease communicating or saying anything more and move on from the personal stuff.

My two cents.


I've been working on two projects recently. A video game titled TentFires, and letterpress book remastering.

TentFires is a variant of the puzzle game Camping / Tents-and-Trees, but with a huge overworld.

Creating the overworld led to some advancements in topology, which led to realizing those advancements can be used to accurately reconstruct a theoretical "metal die shape" that was used for a glyph to create impressions in old letterpress books.

As a result, my current first application of that is to remaster and re-release the first book of the "Hardy Boys" series from 1927, "The Tower Treasure".

To do that, I've constructed a "macrogrammetry rig": essentially a 2D X-Y panning machine built from 3D printer parts and stuff from my local hardware store, plus a camera with a macro lens. It lets me "scan" the pages of the book at the highest resolution I reasonably can, which is currently around 6000 dpi.


The Monotype pricing change is brutal, but there’s a workaround. Derive new Japanese font families directly from public-domain sources.

I’ve been working on doing exactly that. Reconstructing clean vector glyphs from old metal-type Japanese books. The quality of those prints is surprisingly high, and they include thousands of kanji in consistent style. With some new technological innovations and a reasonable amount of hard work, you can produce a completely new, fully legal font family without touching any commercial IP.

The method I've devised is proprietary, but I’ll say this: it’s absolutely possible, and the output rivals modern JP fonts.

Given the sudden jump from ~$300/year to ~$20k/year for some devs, I expect more people to go down the “rebuild from PD artifacts” route instead of staying locked to a monopoly.


Indeed. Scan a book in public domain, feed into an online font generation service, pay somebody to clean it up.

A few hours later, you have a font you can use how you like. Is it as good? Probably not, but it's much cheaper.

Edit: oh look https://news.ycombinator.com/item?id=46127400


Yes, I did see that other article. No, the process we are using is not using AI. We are not using OCR either. We are using computational geometry and forensic methodology. No flatbed scanners. No sheet-fed scanners.

This isn't like anything ever done before. It's entirely different and higher quality than any result you can get through AI or OCR.

I do agree that detailed work is required to do it correctly and produce high-quality results. I'm not offhandedly saying "just do these simple things and bam, perfection."


Wow, that sounds incredible. I'm super into fonts. I understand the proprietary nature, but if OCR isn't used and neither is flatbed scanning, does that mean a 3D model is obtained? I can't think of another method.

It's very cool, would love to see some fonts you have available whenever it's out!


The initial input is high resolution images using a DSLR and a macro lens, or at least it will be soon. Initial testing of the method has been done using 200mp images taken casually with a standard modern cell phone.

The underlying new computational geometry method can be extended to 3D, but that isn't necessary for this application unless we also extract a 3D image of the page itself. For now, at least, we are not doing that, as it would be even more complicated and finicky. Possibly, for soft enough pages, the letterpress imprint will deform the page enough that the deformation can be detected and help figure out where the original metal pressed and where the ink is due to page bleed.

Essentially what we are doing is taking high resolution photos, using computational geometry methods on those to extract the shapes, and then refining those shapes through a mixture of automation and manual labor.
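To make the "extract the shapes" step concrete: this is an illustrative, generic sketch only, NOT the proprietary Donkey Free pipeline. Given an already-thresholded ink bitmap, it finds the boundary pixels of the inked region, which a later stage could fit vector outlines to.

```python
def boundary_pixels(bitmap):
    """Return the set of ink pixels that touch a non-ink neighbor."""
    h, w = len(bitmap), len(bitmap[0])
    edge = set()
    for y in range(h):
        for x in range(w):
            if not bitmap[y][x]:
                continue
            # A pixel is on the boundary if any 4-neighbor is off-grid
            # or not inked.
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or not bitmap[ny][nx]:
                    edge.add((x, y))
                    break
    return edge

# A 5x5 filled square on a 7x7 grid: its boundary is the 16-pixel ring.
glyph = [[1 if 1 <= x <= 5 and 1 <= y <= 5 else 0 for x in range(7)]
         for y in range(7)]
print(len(boundary_pixels(glyph)))  # 16
```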

The entire thing is called "Donkey Free" and will have information online in the near future. I just bought the domain ( donkeyfree.com ) for this 2 days ago; this is all extremely new. I'd like to release the resulting fonts under a license allowing free use for many purposes but we still need to think through that to figure out how to make that sustainable.


Sounds great, best of luck! Hope you make a Show HN post when it's ready


It's fascinating how different this challenge must be between Latin vs CJK.

How do you match up the scans with unicode entities? Human supervision and/or OCR? To what extent is the breadth and quality of OCR the limiting factor?

How do you define your target entity coverage?


Great questions — and you’re absolutely right that Latin vs. CJK is effectively two different universes in terms of reconstruction.

1. Latin vs. CJK differences

Latin glyphs are structurally simple: limited stroke vocabulary, mostly predictable modulation, and relatively low topological variation. Once you can recover outlines and stroke junctions accurately, mapping to Unicode is almost trivial.

That can be done with standard OCR methods for Latin.

CJK is the opposite. Each character is effectively a miniature blueprint with dozens of micro-decisions: stroke order, brush pressure artifacts, serif style, shape proportion, and even regional typographic conventions. Treating it like Latin “but bigger” doesn’t work. So the workflow for CJK has extra normalization steps and more constraints, especially when reconstructing consistent glyph families rather than one-offs.

From a simple perspective, CJK has many characters with disconnected pieces that are still part of the same character.

2. How we match scans to Unicode entities

We don't rely on conventional OCR at all. OCR engines are optimized for reading text, not recovering the underlying design intent. Our process is closer to forensic glyph analysis: reconstructing stable structural signatures, then mapping those signatures to references.

This ends up being a hybrid:

• deterministic structural matching

• limited supervised correction when ambiguity exists

• zero reliance on any off-the-shelf OCR models

It’s not “OCR first, match later.” It’s “reconstruct the letterpress structure, then Unicode becomes a lookup.” OCR quality literally doesn’t limit us because OCR isn’t part of the critical path.
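One way to picture "signature, then lookup" is the toy sketch below. To be clear, this is my own hedged illustration, not the actual method: it computes a crude structural signature (just the count of 4-connected ink components, which handles the "disconnected pieces" property mentioned above) and uses it to narrow candidate reference glyphs; the `candidates` table is hypothetical.

```python
def components(bitmap):
    """Count 4-connected ink components via iterative flood fill."""
    h, w = len(bitmap), len(bitmap[0])
    seen, count = set(), 0
    for y in range(h):
        for x in range(w):
            if bitmap[y][x] and (x, y) not in seen:
                count += 1
                stack = [(x, y)]
                while stack:
                    cx, cy = stack.pop()
                    if (cx, cy) in seen:
                        continue
                    seen.add((cx, cy))
                    for nx, ny in ((cx-1, cy), (cx+1, cy),
                                   (cx, cy-1), (cx, cy+1)):
                        if 0 <= nx < w and 0 <= ny < h and bitmap[ny][nx]:
                            stack.append((nx, ny))
    return count

# Hypothetical reference table: signature -> candidate code points.
candidates = {1: ["一", "口"], 2: ["二"], 3: ["三", "川"]}

two_bars = [[1, 1, 1],
            [0, 0, 0],
            [1, 1, 1]]
print(candidates[components(two_bars)])  # ['二']
```

A real signature would of course need far more than a component count (stroke junctions, proportions, hole topology), but the shape of the pipeline is the same: deterministic signature first, lookup second.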

3. What determines coverage

Coverage is defined by what we can physically access and reconstruct cleanly. For Latin, coverage is straightforward. For CJK, coverage is shaped by:

• typeface completeness in the source material

• the consistency of impression depth

• survivability of fine strokes in early printings

• the practical question of how many thousand characters the original font designer actually cut

There’s no need for the entire Unicode set per book. The historical font only ever covered a finite subset. It is unfortunate that every book doesn't use every glyph, but not catastrophic because we can source many public domain books from the same era and eventually find enough characters matching the style.

In short: Latin is an engineering challenge. CJK is an archaeological one. OCR is not a bottleneck because we don’t use it. Coverage follows the historical material, not Unicode completeness.


Would love to hear or read more about this if it is public.


It isn't exactly what you are asking for but could be adapted to do that: https://github.com/dryark/curl_progress


Came here to say this. My guess is that Amazon paid them to go away. If my guess is accurate ( I could certainly be wrong ), then Amazon could have them add a robots.txt banning archive.org. If they do that access to the archive will be removed. Mirror it now if you want the content.

One nice way to do so ( handy for any site that you think may vanish off Way Back Machine ): https://github.com/hartator/wayback-machine-downloader


I don't disagree that more work can get done if you work more.

What is being discussed is not "shorter work weeks" but "shorter work days", as many people who do shorter work weeks still work 40 hours a week but do it in fewer days. It seems that Carmack may actually be in support of that.

Working shorter days is not about maximizing the quantity of work but the quality. It is done when it is undesirable to have any work done that is below a certain quality, or just to increase worker happiness. Many people do their highest quality work after having sufficient rest, and then continue to do well building up context till they are burned out for the day.

It very much depends on the person, though, as some people gain more relevant short-term memory context the longer they have been awake, and are able to create ingenious code in late hours more so than in the morning.

Basically, what it seems Carmack is arguing here is that he wants to get all the possible work out of all employees. Feel free to correct me John if that is not what you mean.

I personally don't find it necessary to extract all possible work from employees, as I respect their free time and personal life. I value happiness of my workers over how much work they get done. Happy workers create better results imo.


This comment amuses me as my company is a browserstack competitor and thus my company and our clients have large numbers of devices. When you are the cloud host, you have to worry about this and monitor devices for battery damage / swelling.

In our case though the battery temperature can be monitored and this partly lets you know when the battery is going bad before it actually gets to that point and risks catastrophic failure.


Potential customer here. Is dryark up and running yet? On my iPhone it’s a blank page.


I've been working with mobile device farming for a few years, and I have not heard of or seen a powered-down device spontaneously combust, despite having many with "puffed batteries". I am definitely concerned that such a battery, or a device containing one, can spontaneously combust, but it doesn't generally seem to happen.

That said, I would definitely get rid of any swollen batteries or devices that are swollen ASAP. You can tell with phones as a swollen battery will generally pop the phone apart from the pressure of it.

As others are saying, you should also take precautions whether it happens or not. Place the devices in metal containers, pottery, or fireproof bags. Don't place them in a cardboard box filled with a bunch of other flammable items next to a shelf of books...


> Don't place them in a cardboard box filled with a bunch of other flammable items next to a shelf of books...

Uh that’s totally not happening right now, obviously… I mean what kind of chump would, er… excuse me for a moment…


Or we could use the new vocabulary I introduced to Mom & Dad recently: "spicy pillow".

