Hacker News

What matters is support contracts and uptime.

If a dozen customers on a server cannot access their compute because of an issue, then as a startup without a whole customer support department, you're screwed.

I've been on HN long enough to have seen plenty of companies get complaints after growing too quickly and not being able to handle the issues they run into.

I'm building this business in a way that de-risks things as much as possible: buying the best equipment available today, support contracts, the best data center, and scaling only with revenue growth. This isn't a cost issue; it's a long-term viability issue.

Home lab... certainly, cut as many corners as you want. A cloud service provider building top supercomputers for rent... not so much. There is a reason few people start businesses like this: it is extremely capital intensive. That is a huge moat, and building the relationships and funding to do what I'm doing wasn't easy; it took me over five years just to get to this starting point. I'm not going to blow it all by cutting corners with used equipment.



> I'm building this business in a way to de-risk things as much as possible

Then why did you go with AMD and not Nvidia? Are you not interested in AI/ML customers?


In my eyes, it is less risky with AMD. When you're betting on the underdog, they have every incentive to help you. This isn't a battle to "win" all of AI or to have one vendor beat the other; I just need to build a nice, profitable business that solves customer needs.

If I go with Nvidia, then I'm just another one of the 500 other companies doing exactly the same thing.

I'm a firm believer that there should not be a single company that controls all of the compute for AI. It would be like having Cisco be the only company that provides routers for the internet.

Additionally, we are not just AMD. We will run any compute that our customers want us to deploy for them. We are the capex/opex for businesses that don't want to put up the millions, or figure out and deal with all the domain-specific details of deploying this level of compute. The only criterion we have is that it is the best-in-class accelerator available today. For example, I wouldn't deploy H100s because they are essentially old tech now.

> Are you not interested in AI/ML customers?

Read these blog posts and tell me why you'd ask that question...

https://chipsandcheese.com/2024/06/25/testing-amds-giant-mi3...

https://www.nscale.com/blog/nscale-benchmarks-amd-mi300x-gpu...


OK, I just looked at the first blog post: “ROCm is nowhere near where it needs to be to truly compete with CUDA.”

That’s all I need to know as an AI/ML customer.


That is fine. Nobody is pretending that the software side is perfect. What we and AMD are looking for are the early adopters willing to bet on a new class of hardware (just available in April) that is better than previous generations. Given the general state of AI today, they should be pretty easy to find.



