
I've been (I admit) just starting to try to learn about FPGAs and where their applications might be. Outside of the painfully obvious ones (100G networking, HFT), I can't see a case where their expense makes them a valid solution compared to a GPU. Ideas?


Many people think FPGAs have the advantage over GPUs in machine learning inference; see Microsoft's Bing FPGA work (Project Catapult). That's mostly due to lower power per operation, but also a higher level of customization, e.g. optimized 1-bit math if you want it, which can mean higher speed or larger networks. The memory architecture of FPGAs can also offer advantages, e.g. larger and more flexible on-chip memories.
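A minimal sketch of that 1-bit trick, assuming weights and activations are constrained to {-1, +1} and bit-packed (the function name bnn_dot64 and the packing are mine, just for illustration; __builtin_popcountll is a GCC/Clang builtin):

    #include <stdint.h>

    /* Binarized dot product: +1 encoded as bit 1, -1 as bit 0.
       XNOR marks the positions where the signs agree; popcount sums
       them. On an FPGA this becomes one wide XNOR plus an adder tree
       per cycle, instead of 64 full-width multiply-accumulates. */
    static int bnn_dot64(uint64_t a, uint64_t b) {
        int matches = __builtin_popcountll(~(a ^ b));
        return 2 * matches - 64;   /* matches minus mismatches */
    }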

GPUs are limited for some applications because of their SIMD execution model: if the threads in a warp aren't doing exactly the same thing, they stall, since divergent branches get serialized. An FPGA can run parallel processes that are more differentiated from one another; e.g. HMMs work better on FPGAs than GPUs.
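A toy illustration of the divergence problem (the arithmetic in each branch is arbitrary, just standing in for two different workloads):

    #include <math.h>

    /* SIMD-hostile control flow. On a GPU, the threads of a warp share
       one instruction stream, so a warp with mixed signs executes BOTH
       branches back to back with lanes masked off; every lane pays for
       both paths. On an FPGA, each branch can be synthesized as its
       own pipeline, and both run concurrently on different data. */
    float process(float x) {
        if (x > 0.0f)
            return sqrtf(x) * logf(x + 1.0f);  /* negative lanes idle */
        else
            return expf(x) - 1.0f;             /* positive lanes idle */
    }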

Anything with a low-latency feedback loop (e.g. LSTMs) probably works better on an FPGA.
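The reason is the serial dependency between time steps. A minimal sketch, with a simplified recurrent cell standing in for the LSTM (rnn_forward and the two-weight cell are made up for illustration):

    #include <math.h>

    /* Step t can't start until h[t-1] exists, so throughput is bounded
       by per-step latency, not raw FLOPs. An FPGA can pin weights and
       state in on-chip RAM and close this loop in a few cycles; a GPU
       tends to pay kernel-launch and off-chip memory latency on every
       step unless it can batch many independent sequences. */
    void rnn_forward(const float *w, const float *x, float *h, int steps) {
        float prev = 0.0f;
        for (int t = 0; t < steps; t++) {
            prev = tanhf(w[0] * x[t] + w[1] * prev); /* needs prev step */
            h[t] = prev;
        }
    }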


Bing search uses FPGAs for scoring. Also look at FPGAs for storage/NVMe acceleration.

I'm hoping for a 10x-100x increase in GPU memory size to unlock new possibilities.


If you take apart a GPU and look at the package, you can see they basically Frankenstein one or two HBM stacks together with the GPU die on an interposer. Xilinx is doing this too in its latest Virtex UltraScale+ HBM line (VU31P-VU37P: 4-8 GB in-package, 460 GB/s).

Any density upgrades for GPUs will also carry over to FPGAs, but we're physically limited in how many HBM stacks you can slap onto the interposer :(
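Rough numbers, assuming the common current configuration of up to 4 HBM2 stacks at 8 GB each (an assumption; stack counts and capacities vary by product):

    4 stacks x 8 GB/stack = 32 GB per package

So even a fully built-out interposer is only about 2x over a 16 GB card today; the hoped-for 10x-100x would need a new memory or packaging technology, not just more stacks.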




