Hacker News | throw46365's comments

When the iPad came out, people actually mostly said it was just a big iPhone.

It is, in fact, pretty much still just a big iPhone.

It turns out that's enough for a lot of people.


True! I remember quite a funny meme where the iPad being held by Jobs during the keynote was photoshopped and replaced with four iPhones taped together: https://knowyourmeme.com/memes/ipad-spoofing

I still chuckle at that one.


It's odd how unexcited Apple people often sound when they say they couldn't be more excited.


They've been trying to follow Steve's old presentation style, but it's been so long that it just feels like a copy of a copy with nothing genuine left in it.

As someone who stupidly watched a bunch of Apple's keynotes around 2005-2015, it all feels like yet another corporation's presentation done in Apple's style. Pretty off-putting.


Very much this. I know Steve Jobs was a showman, but at least he was charming in a way.

Like, I know you're twisting me around your little finger, but I'll let you do it, because it's kind of fun.

Whereas now, it feels more like car salesmen (and saleswomen of course) trying to sell me a new car even though my "old" car is still perfectly fine. And they know it and I know it.


As someone who regularly follows WWDC, it seems like many of them are just standing there, reading a script.


Dopamine burnout. They are so excited every couple of months that they literally cannot be more excited anymore.


I am so glad presentation training for Cook at least helped some. I remember that whenever I heard him speak in the early days, I couldn't suppress the word "valium?" appearing all over my thought-spectrum. It was actually very distracting trying to listen to him speak.


That's interesting. Yesterday's keynote was actually the first time I felt like Tim is getting old.


> Palm CEO on the iPhone

This quote pre-dates the iPhone by a couple of months. So it doesn't remotely qualify as a "failed product" comment. It's just a bad prediction.

(And it's not that bad: the iPhone launched with 2G and without an app market or text-editing beyond delete-and-retype, for example)


Also, there's the more blatant fact that OF COURSE the CEO of Palm is going to be bearish on the damn iPhone; otherwise they would have made the damn iPhone at some point!

How often does a direct competitor's CEO outright say that they expect a new rival product to be anything but underwhelming?

99% of the time a CEO says anything forward-looking that isn't regulated by the SEC, they are just leaning on survivorship bias to look good retroactively.


Bill Gates about the iTunes Music Store

https://appleinsider.com/articles/21/07/09/bill-gates-said-s...

> "Steve Jobs['] ability to focus in on a few things that count, get people who get user interface right and market things as revolutionary are amazing things," Gates wrote. "This time somehow he has applied his talents in getting a better licensing deal than anyone else has gotten for music."


Honest question: Do they not each worry they are each, in their own way, going to be Linda Yaccarino?


The thing with Linda is that while she's theoretically the boss, with Musk reporting to her as CTO/CPO, everybody knows that Musk is the boss and she's there to... sell ads? Actually I don't know what she's supposed to do, but it doesn't matter; she's certainly not allowed to touch either the product or the technology, and Musk can fire her at any point as owner/chairman.

This, OTOH, is just a simple, straightforward C-suite. Sam is the boss, and he hired two reports to handle areas where they have expertise. This might be the simplest corporate thing OpenAI has ever done, lol.


I understand they have each taken steps to avoid that, notably by installing the OpenAI app on their phone's home screen.


In what way?


I think he's insinuating that, with a figurehead such as Sam Altman, Sarah and Kevin might be more showpieces than executives given autonomy to operate.


I'm not insinuating. I'm asking.

It's a pretty reasonable question for both of them. Perhaps more for the CFO, given what we know of Altman's, er, lack of candour with the board where money is concerned. But maybe given the "Her" debate it applies to both of them, in different ways.

Is this a company where the executives can really be more than rubber stamps?


I don't think anyone in an executive position cares about the Her "debate". It's Twitter-level gossip.


Yeah, they don't care so much that there were direct follow-ups to the uproar


Sure. Three weeks ago, Sam and other executives probably took an hour of their time to decide what they would write in the blog post to appease the critics (which is par for the course for any public-facing company as widely known as OpenAI †), and then maybe another hour meeting with their lawyers to prepare in case of a suit.

But in the grand scheme of things, I don't think it's in their top 15 priorities. People here talk as if the Her thing was almost an existential threat to Altman's leadership.

† See Apple's recent apology for their ad. Do you think Tim Cook or his executives lose sleep because of the backlash? https://edition.cnn.com/2024/05/09/tech/apple-apologizes-for...


I don't think losing sleep is a requirement. However, you know that the next time an ad is about to be released, the potential backlash will be on people's minds before it gets greenlit. At least, one would hope that would be the takeaway.


The lesson I learned from the dot-com era is that people who are dependent on the hype to make profit will crane their necks to believe the hype.

Salespeople, executives, engineers, it doesn’t matter.

Every day on HN reminds me a little more of 1998.


Yep. Hacker News is one of the only social media sites I've seen where you have to reject reality in order to be accepted and upvoted by the majority. After a while it becomes more funny than sad, really.


Indeed, and presumably well before 1965, which is when Greater London was created. Most of the non-disambiguated High Streets will be in the Greater London boroughs (Bromley, Bexley etc.)

When I was a kid, Bromley was already a London borough. But we sure as heck didn't consider Orpington to be proper London! ;-)


Free idea: a website called But Humans Also, where one collects bad justifications for applying LLMs.

<mid 2000s product specialist> the dot-com is available!


The "zen" of LLMs is that they do not see a real distinction between these two things, or either of these two things and success ;-)


I dunno. I tend to annoy people when taking on jobs by telling people what I am concerned about and do not understand, and then sharing with them the extent to which I have managed to allay my own concerns through research.

I turn down a lot of jobs I don't feel confident with; maybe more than I should.

An LLM never will.


> I don't think this is indicative of people who don't know what they're doing. I think this is indicative of people using "AI" tools to help with programming at all.

I think using AI tools to write production code is probably indicative of people who don't really know what they are doing.

The best way not to have subtle bugs is to think deeply about your code, not subcontract it out -- whether that is to people far away who both cannot afford to think as deeply about your code and aren't as invested in it, or to an AI that is often right and doesn't know the difference between correct and incorrect.

It's just a profound abrogation of good development principles to behave this way. And where is the benefit in doing this repeatedly? You're just going to end up with a codebase nobody really owns on a cognitive level.

At least when you look at a StackOverflow answer you see the discussion around it from other real people offering critiques!

ETA in advance: and yes, I understand all the comparison points about using third party libraries, and all the left-pad stuff (don't get me started on NPM). But the point stands: the best way not to have bugs is to own your code. To my mind, anyone who is using ChatGPT in this way -- to write whole pieces of business logic, not just to get inspiration -- is failing at their one job. If it's to be yours, it has to come from the brain of someone who is yours too. This is an embarrassing and damaging admission and there is no way around it.

ETA (2): code review, as a practice, only works when you and the people who wrote the code have a shared understanding of the context and the goal of the code and are roughly equally invested in getting code through review. Because all the niche cases are illuminated by those discussions and avoided in advance. The less time you've spent on this preamble, the less effective the code review will be. It's a matter of trust and culture as much as it's a matter of comparing requirements with finished code.


> And where is the benefit in doing this repeatedly? You're just going to end up with a codebase nobody really owns on a cognitive level.

You could say the same about the output of a compiler. No one owns that at a cognitive level. They own it at a higher level: the source code.

Same thing here. You own the output of the AI at a cognitive level, because you own the prompts that created it.


>No one owns that at a cognitive level

Notwithstanding the fact that compilers did not fall out of the sky and very much have people who own them at the cognitive level, I think this is still a different situation.

With a compiler you can expect a more or less one-to-one translation between source code and the operation of the resulting binary, with some optimizations. When some compiler optimization causes undesired behavior, that too is a very difficult problem to solve.

Intentionally 10xing this type of problem by introducing a fuzzy translation between human language and source code then 1000xing it by repeating it all over the codebase just seems like a bad decision.


Right. I mean... I sometimes think that Webpack is a malign, inscrutable intelligence! :-)

But at least it's supposed to be deterministic. And there's a chance someone else will be able to explain the inner workings in a way I can repeatably test.


> You could say the same about the output of a compiler.

Except, for starters, that you're not using the LLM to replace a compiler.

You're using it to replace a teammate.


Yes, and when compilers fail, it's a very complex problem to solve, one that usually requires many hours from an experienced dev. Luckily,

(1) Compilers are reproducible (or at least repeatable), so you can share your problem with other, and they can help.

(2) For common languages, there are multiple compilers and multiple optimization options which (and that's _very important_) produce identically-behaving programs, so you can try compiling the same program with different settings, and if the results differ, you know a compiler is bad.

(3) Compilers are very reliable, and bugs where the compiler succeeds but generates invalid code are even rarer; in many years of my career, I've only seen a handful of them.

Compare that to LLMs, which are non-reproducible, each giving a different answer (and that's by design), and which have a huge appear-to-succeed-but-produce-bad-output error rate, way above 1%. If you had a compiler that bad, you'd throw it away in disgust and write in assembly language.


    > I think using AI tools to write production code is probably indicative of people who don't really know what they are doing.
People said the same to me for using Microsoft IntelliSense 20 years ago. AI tools for programming are absolutely the future.


But not the now, quite obviously.

Colour me cynical, but I don't feel like pretending the future is here only to have to fix its blind incompetence.

