Hacker News

At what point are we able to deliver low resolution video and have the system make up on the fly believable high resolution version of it?


This technology (or at least variants of it) already exists. Image upscaling with convolutional neural nets is an old trick at this point, and with Nvidia integrating real-time denoising into their RTX technology, I suspect that real-time upscaling is right around the corner if someone hasn't done it already.
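To make the real-time angle concrete: fast CNN upscalers (ESPCN-style sub-pixel convolution) do most of their work at low resolution and only rearrange channels into extra pixels at the very end. A minimal numpy sketch of that "pixel shuffle" rearrangement (the surrounding conv layers are omitted; this is just the upscaling step):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into a (C, H*r, W*r) image.

    This is the sub-pixel convolution trick: the network's last conv
    layer emits r*r channels per output channel, and this reshuffle
    turns each group of r*r channel values into an r x r pixel block.
    """
    c, h, w = x.shape
    assert c % (r * r) == 0
    out_c = c // (r * r)
    x = x.reshape(out_c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (out_c, h, r, w, r)
    return x.reshape(out_c, h * r, w * r)
```

The cheap part is that no convolution ever runs at the high resolution, which is why this family of models can keep up with video.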

[0] https://developer.nvidia.com/optix
[1] https://topazlabs.com/gigapixel-ai/


ENHANCE!


Nah, the CSI "enhance" thing is "multi-frame super-resolution image recovery", a different (though related) ML technique.

Speaking of, though: you'd think that by now, security cameras that capture footage at very low framerates for the sake of storage space would have ASICs in them running those models, fusing a bunch of grainy input frames into a stream of fewer, but much cleaner, frames.
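The core of multi-frame fusion is register-then-merge: estimate each frame's motion relative to a reference, undo it, and combine. A toy numpy sketch with integer-pixel registration via FFT cross-correlation and plain averaging (real pipelines estimate sub-pixel motion and solve an inverse problem, but the structure is the same):

```python
import numpy as np

def align_and_merge(frames, ref=0):
    """Naive multi-frame merge.

    Registers each frame to frames[ref] by finding the integer shift
    that maximizes circular cross-correlation, undoes the shift, and
    averages. Averaging N registered frames cuts independent noise
    by roughly sqrt(N).
    """
    reference = frames[ref]
    merged = np.zeros_like(reference, dtype=float)
    for f in frames:
        # Peak of the circular cross-correlation gives the shift
        # that best aligns f with the reference.
        corr = np.fft.ifft2(np.fft.fft2(reference) * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        merged += np.roll(f, (dy, dx), axis=(0, 1))
    return merged / len(frames)
```

True super-resolution goes one step further: when the shifts are sub-pixel, each frame samples the scene on a slightly different grid, and solving for the image consistent with all of them recovers detail no single frame contains.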

Any hardware on the market with this capability yet?


It makes sense for entertainment, but not for security cameras: there you'd be filling the frame in with made-up information. A security camera is supposed to be a record of truth.


Imagine a world where low information sorts interpret a sampling of possible hi-res reconstructions from low-res security videos as ground truth. That to me is far scarier than the OpenAI and MIRI fear-mongering about GPT-2.


It’s not made-up information; it’s parallax / compressed sensing, in the same way that you can see through the grate on the front of a microwave oven to what’s behind it by moving your eyes around.

If it’s good enough for generating accurate fMRI images from sequentially-overlaid magnetic flux readings, it’s definitely good enough for generating visuals from slightly suckier visuals.
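The compressed-sensing claim, stripped down: if the underlying signal is sparse in some basis, it can be recovered exactly from far fewer measurements than unknowns. A toy numpy sketch using Orthogonal Matching Pursuit (a standard CS recovery algorithm; this is illustrative, not the actual fMRI reconstruction pipeline):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit.

    Recovers a k-sparse x from y = A @ x, where A has far fewer rows
    (measurements) than columns (unknowns). Greedily picks the column
    of A most correlated with the residual, then re-fits on the
    selected support by least squares.
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```

For example, a 3-sparse signal of length 60 is typically recovered exactly from 30 random Gaussian measurements. The catch for security footage is the grandparent's point: the reconstruction is only "truth" to the extent the sparsity assumption actually holds for the scene.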



