Sounds awesome. I've been using mitmproxy's --mode local to intercept with a separate skill to read flow files dumped from it, but interactive is even better.
At least for image generation, Google (and maybe others) embeds a watermark in each image. Text would be harder: you can't even do printer-style steganography or canary traps, because every model and the checker would need some channel of communication between them.
https://deepmind.google/models/synthid/
You could have every provider fingerprint each message and host an API that can attest whether a given message came from them. I doubt the companies would want to do that, though.
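A minimal sketch of what that could look like, assuming the provider keeps a secret key, records an HMAC fingerprint of every message it emits, and the attestation endpoint just answers membership queries (all names and the normalization step here are hypothetical, not any provider's actual scheme):

```python
# Hypothetical provider-side fingerprinting + attestation sketch.
import hashlib
import hmac

SECRET_KEY = b"provider-secret"  # held only by the provider (assumption)

def fingerprint(text: str) -> str:
    # Normalize so trivial whitespace/case edits don't change the tag.
    normalized = " ".join(text.split()).lower()
    return hmac.new(SECRET_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# The provider records fingerprints of everything it generates...
issued = {fingerprint("The capital of France is Paris.")}

def attest(text: str) -> bool:
    # ...and the public API answers: "did we generate this?"
    return fingerprint(text) in issued
```

The obvious weakness is the one raised below: anything you pass through the model verbatim gets fingerprinted too, so attestation proves origin of the bytes, not authorship of the ideas.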
I'd expect humans can just pass real images through Gemini to get the watermark added, similarly pass real text through an LLM asking for no changes. Now you can say, truthfully, that the text came out of an LLM.
My hunch is VSCode, or more likely Cursor. I’ve spent some time this summer trying to get IDE-independent tooling running and have settled on Ruff + basedpyright. Also switched over to using uv. You may want to look into Astral’s ty or Facebook’s Rust-based Pyrefly if keen to alpha/beta test.
I found getting VSCode properly set up, and figuring out which extensions were needed, a real pain in the ass, and I’ve never found anything as good as PyCharm’s Git integration.
I think I need to try Cursor. I have held off but the world is changing fast and jumping into a more code assistant first approach may be a good answer. The thing that is driving me crazy in this world though is the 'tab tab tab' view that these approaches have. It is hard enough when predictive text tries to finish the word I am typing much less a whole sentence or code block. It is very hard to think freely when something is whispering in your ear what you should say next.
I am very hesitant to look at VSCode. I have strong pushback against Microsoft-related tools (related: best alternative to GitHub?). That is mostly on principle now, though. I have avoided them so long that I can't honestly comment on their quality anymore. Everything Microsoft, though, seems to long-term... degrade. It does it in a way that when you finally realize you hate the tool, you also realize you should have jumped two years ago. They are so good at finding the line where it is just good enough and just barely keeping you there, never clearly above it.
Yeah, even to just know what's up it's probably important to try.
If switching between multiple editors, you should look to include a .editorconfig file in your project so you have one place to configure things.[1]
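For illustration, a minimal .editorconfig might look like this (the specific settings are my own picks, not anything from the linked reference):

```ini
# Top-most EditorConfig file for the project
root = true

[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
indent_style = space
indent_size = 4

# YAML/JSON conventionally use 2-space indents
[*.{yml,yaml,json}]
indent_size = 2
```

Most editors (including VSCode via an extension, and PyCharm natively) pick this up automatically.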
These are the extensions I've found useful in VSCode / Cursor (they can be saved to / recommended by a project by listing them in the `.vscode/extensions.json` file).
* [2] Ruff
* [3] BasedPyright
* [4] Todo Tree
* [5] Rainbow CSV
* [6] Mermaid Chart (I’ve found Claude to be good at generating these)
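A `.vscode/extensions.json` along those lines would look roughly like this. The extension IDs below are my best recollection, so verify each one against the marketplace listing before committing the file:

```json
{
  "recommendations": [
    "charliermarsh.ruff",
    "detachhead.basedpyright",
    "gruntfuggly.todo-tree",
    "mechatroner.rainbow-csv",
    "mermaidchart.vscode-mermaid-chart"
  ]
}
```

Anyone opening the project then gets a one-click prompt to install the recommended set.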
I’ve got one of these [1] and it’s not bad. It doesn’t have the local Find My functionality you get with AirPods and AirTags, but it works well with the rest of Find My.
Now I understand. There are two ways of dimming a screen: one is dimming the backlight in the display, while the other is faking it via gamma table alteration.
A good example happened to me yesterday. I brought my MacBook and charger to my partner’s family’s place along with my USB-C SSD that has some files I thought I might need on it; however, I managed to forget the USB-C charging cable for my computer. I ended up using the USB-C cable that came with my SSD. It’s not charging at full speed, but it’s working!
I’ve been thinking more about the navigation of their little helicopter.
On Earth we're used to being able to use GPS for route planning. If you could use this process in reverse to constantly determine one's position in 3D space above the surface, using stored satellite imagery with a downward-facing camera, cross-referenced with whatever gyro/accelerometer-based positioning they're using, I wonder if there'd be any benefit. Maybe what they've got already is sufficient for anything you'd want to do in the near future.
> If you could use this process in reverse to constantly determine one's position in 3D space above the surface, using stored satellite imagery with a downward-facing camera, cross-referenced with whatever gyro/accelerometer-based positioning they're using, I wonder if there'd be any benefit
That is pretty much exactly how TRN (terrain-relative navigation) worked for EDL (entry, descent, and landing). I don't think Ingenuity has much in terms of navigation ability, probably just basic INS. But it's also not intended to fly any extended distances, so it doesn't really need much navigation ability. I'd imagine future copters would use TRN-style navigation.
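The core of the TRN idea described above can be sketched as template matching: slide the downward-camera view over the stored orthoimage and take the best match as your position fix. Real systems use landmark/feature matching fused with the INS through a filter; this toy version is a brute-force sum-of-squared-differences search over fake data:

```python
import numpy as np

# Fake stored satellite map and a "camera view" cut from a known spot.
rng = np.random.default_rng(0)
satellite_map = rng.random((100, 100))
true_row, true_col = 37, 62
patch = satellite_map[true_row:true_row + 16, true_col:true_col + 16]

def locate(map_img: np.ndarray, view: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) offset minimizing sum-of-squared-differences."""
    h, w = view.shape
    best_err, best_pos = np.inf, (0, 0)
    for r in range(map_img.shape[0] - h + 1):
        for c in range(map_img.shape[1] - w + 1):
            err = np.sum((map_img[r:r + h, c:c + w] - view) ** 2)
            if err < best_err:
                best_err, best_pos = err, (r, c)
    return best_pos

row, col = locate(satellite_map, patch)  # recovers (37, 62) on this data
```

On Mars the hard parts are the ones this skips: lighting and season change the imagery, the camera is tilted and at unknown altitude, and you need it to run in real time on a rad-hard processor.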