Local AI sounds nice, but most of Apple's Macs and other devices don't ship with enough RAM at a reasonable price for good model performance, and macOS itself is incredibly bloated.
That's true for current LLMs, but Apple is playing the long game.
First, they are masters of quantization optimization (their 3-4 bit models perform surprisingly well).
Second, Unified Memory is a cheat code. Even 8GB on M1/M2 allows for things impossible on a discrete GPU with 8GB VRAM due to data transfer overhead. And for serious tasks, there's the Mac Studio with 192GB RAM, which is actually the cheapest way to run Llama-400B locally.
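Rough numbers on that last point (the ~10% overhead factor for KV cache/activations is my own assumption, not a vendor figure):

```python
# Back-of-envelope: how much unified memory a quantized ~400B-parameter model needs.
# Overhead factor (~10% for KV cache and activations) is an assumption for illustration.

def model_ram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    bytes_per_weight = bits_per_weight / 8
    return params_billion * 1e9 * bytes_per_weight * overhead / 1e9

for bits in (16, 8, 4, 3):
    print(f"~400B params @ {bits:>2}-bit: ~{model_ram_gb(400, bits):,.0f} GB")

# ~400B params @ 16-bit: ~880 GB
# ~400B params @  8-bit: ~440 GB
# ~400B params @  4-bit: ~220 GB
# ~400B params @  3-bit: ~165 GB
# i.e. a ~400B model only fits in 192GB of unified memory at aggressive ~3-bit quantization,
# which is exactly where Apple's quantization work matters.
```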
Depends what you are actually doing. It's not enough to run a chatbot that can answer complex questions. But it's more than enough to index your data for easy searching, to prioritise notifications and hide spam ones, to create home automations from natural language, etc.
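To make the "index your data for easy searching" case concrete, here's a minimal sketch; the hashing `embed()` is just a stand-in for whatever small on-device embedding model would actually ship, and the class and names are mine:

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    # Stand-in for a small on-device embedding model: a bag-of-words hashing
    # embedding, just enough to demo the plumbing.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

class LocalIndex:
    def __init__(self) -> None:
        self.docs: list[str] = []
        self.vecs: list[np.ndarray] = []

    def add(self, doc: str) -> None:
        v = embed(doc)
        self.vecs.append(v / np.linalg.norm(v))  # normalize for cosine similarity
        self.docs.append(doc)

    def search(self, query: str, k: int = 5) -> list[str]:
        q = embed(query)
        q = q / np.linalg.norm(q)
        scores = np.array([v @ q for v in self.vecs])
        return [self.docs[i] for i in np.argsort(scores)[::-1][:k]]

idx = LocalIndex()
idx.add("Flight confirmation: SFO to JFK on Friday 6pm")
idx.add("Your package has shipped and will arrive Tuesday")
idx.add("50% off everything this weekend only!!!")
print(idx.search("when is my flight", k=1))
```

None of that needs a frontier model or a cloud round trip, which is the point.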
Apple has the ability and hardware to deeply integrate this stuff behind the scenes without buying into the hype of a shiny glowing button that promises to do literally everything.
That might work well for Apple: being the consumer electronics manufacturer that people use to connect to OpenAI/Anthropic/Google for their powerful creative work.
I find it more likely that the entire "second" level of software companies is in OpenAI's crosshairs, more so than Google. Salesforce, ServiceNow, Intuit, DocuSign, Adobe, Workday, Atlassian, and countless others are easier to pick off than Google.
Those don't seem like reasonable targets at all to me. OpenAI's product is information and their power is engagement. It's more like a cross between Facebook that thrives on engagement and Google that delivers information.
Google's biggest advancement in the last ~15 years has been to produce worse search results so that you spend more time engaging with Google and do more searches, so that Google can show more ads. Facebook is similar in that they feed you tons of rage-bait, engagement spam, and things you don't like, infused with nuggets of what you actually want to see about your friends / interests. Just like a slot machine, the point is that you don't always get what you want, so there's a compulsion to keep using it because MAYBE you will get lucky.
OpenAI's potential for mooning hinges on creating a fusion of information and engagement where they can sell some sort of advertisement or influence. The problem, of course, is that this information and engagement comes in pretty much the most expensive form possible.
The idea that the LLM is going to erode actual products people find useful enough to pay for is unlikely to come true. In particular, people specifically pay for software because of its deterministic behavior. The LLM is by its nature extremely nondeterministic. That puts it squarely in the realm of social media, search engines, etc. If you want a repeatable and predictable result, the LLM isn't really the go-to product.
Not every kid born in the last five years will know Google as a verb the way we do. They’ll be adults in 15 years, which is a paltry investment timeline for the type of Black Swan event we’re talking about, and AI is that type of event.
I don’t disagree with you entirely, but I’d argue the second level apps are harder to chase because they get so specialized.
The death of Google (as everyone knows Google today) is a tricky one. It seems impossible to believe at this exact moment. It can sit next to IBM in the long run, no shame at all, amazing run.
It was real over a decade ago. This is the sheer stupidity and selfishness of devs bringing their hobbies into their jobs. If it makes front-end devs feel better, the "DevOps" world isn't much better.
Imagine you want to deploy some microservices to Kubernetes. You can just create an EKS/AKS/GKE cluster from the GUI, `kubectl apply` a few resources, and create a load balancer to point your domain there. That will work. But...
You probably want to automate the infra creation (so Terraform, Pulumi, CDK...), you want to automate building (so GitHub Actions, Jenkins, Bitbucket Pipelines, GitLab CI...) and artifact storage (so Nexus, Artifactory, ECR, GHCR...), you want to automate deployment (so Argo, Flux, Helm, Kustomize...), and you want to automate monitoring (so the Prometheus stack, Datadog, many APMs, Splunk, Graylog, ELK... I could easily name a dozen more).
Each part of the stack can easily bring a dozen different tools. I work in SRE and I use at least 40-50 tools for a mid-sized project. And this is "normal" :)
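For contrast, here's roughly what the "that will work" baseline above looks like before any of those layers get involved. A minimal sketch only: image name, ports, and replica count are placeholders, and it assumes `kubectl` is on your PATH and pointed at the cluster:

```python
# One Deployment and one Service piped straight to `kubectl apply`,
# with no Terraform/CI/GitOps/monitoring layer in sight.
import subprocess

MANIFEST = """
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-api
spec:
  replicas: 2
  selector:
    matchLabels: {app: demo-api}
  template:
    metadata:
      labels: {app: demo-api}
    spec:
      containers:
        - name: demo-api
          image: ghcr.io/example/demo-api:latest  # placeholder image
          ports: [{containerPort: 8080}]
---
apiVersion: v1
kind: Service
metadata:
  name: demo-api
spec:
  type: LoadBalancer
  selector: {app: demo-api}
  ports: [{port: 80, targetPort: 8080}]
"""

if __name__ == "__main__":
    subprocess.run(["kubectl", "apply", "-f", "-"], input=MANIFEST, text=True, check=True)
```

Everything else in the list above exists to make that repeatable, observable, and not hand-run from somebody's laptop.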
Meh. For a long time people have been saying stuff like "devops is dead, long live the platform engineer" exactly because the figuring-out-what-works phase of wild experimentation is over and there are unambiguous "winners" in almost every niche. You've listed lots of commercial vendors and alternate choices for backends/front-ends here as if to illustrate a lack of standardization in tools/frameworks, but is it really that?
Whereas churn in web-dev seems self-inflicted, devops practitioners don't actually create vendor/platform fragmentation; they just deal with it after someone else wants the new trendy thing. Devops is remarkably standardized anyway even in the face of that, as evidenced by the fact that there's going to be a Terraform-y way to work with almost all of the vendors/platforms you mentioned. And mentioning 20 ways to use Kubernetes glosses over the fact that... it's all just Kubernetes! Another amazing example of standardization and a clear "winner" for the tech stack in its niche.
Well, many years ago you could use CFEngine, Puppet, Chef, Ansible, or Salt/SaltStack; then later Terraform, Otter, and Pulumi; and now we have Nix, plus OpenTofu as a Terraform fork.
Not all of those tools do identical jobs, but there's a ton of overlap among them, and they all have idiosyncrasies.
Rest assured, STUMPY was replaced with another home-grown protocol! Though I think a stream-oriented protocol is a better match for large-scale services like S3 storage than a synchronous protocol like HTTP.
I work on tiny systems now, but something I miss from "big" deployments is how smooth all of the metrics were! Any bump was a signal that really meant something.
I've always felt it's probably a wrapper around Amazon EFS, given the similar pricing and the fact that S3 One Zone has "Directory" buckets, a very file-system-y idea.
That seems to indicate the storage underneath might be similar in cost and performance, and it may well be. Not that the software on top is the same.
S3’s KeyMap Index uses SSDs. I also wouldn’t be surprised if at this point SSDs are somewhere along the read path for caching hot objects or in the new one zone product.
This blog is atrocious from an ad standpoint and the recent flood of posts feels promotional and intentionally controversial. The articles are also devoid of any interesting perspectives. Are people actually reading this?