This is a pretty interesting write-up*, though I'm not sure my employer would be happy with me putting EULA-violation instructions on our company homepage.
* - at least for me, as the bugs in the stock reader drive me nuts, and I have been waiting for this opportunity for a while
I heard that a lord two provinces to the North had seven of his serfs severely whipped when he found out that they had been talking about how to violate the EULA. These agreements have to be respected!
Well, you can always pray to only get a DMCA takedown request, because you might well get something, if not the whip. Surely the internet's snarky-comment coins will allow you to pay the rent.
While not strictly necessary, it is a great power multiplier.
It helps because it is both a gauge of the strategy's success and a lever for fine-tuning the process. For example, slowly buying stock and then strategically dumping it at the right time, correlated with other external shocks, can have a wider effect on whole industries by controlling public opinion of them.
> Oh, well, and I actually also have a dedicated Copilot button on my new Lenovo laptop powered-by-Windows-11. And, guess what, it does exactly nothing! I can elect to either assign this button to 'Search', which opens a WebView2 to bing.com (ehhm, yeah, sure, thanks!) or to 'Custom', in which case it informs me that 'nothing' meets the hardware requirements to actually enable that.
How did you manage this? Probably some company-wide group policy saves you. It keeps starting Copilot for me; drives me crazy.
I did absolutely nothing special, other than running the latest-and-greatest Windows 11 Enterprise, which is what we put on most of our laptops without any customizations other than "require 2FA and some antivirus and firewalling" via Intune.
And I just went into our Azure admin portal, looking for any AI goodies to enable, and... there just doesn't seem to be anything there? And we have an Enterprise P2 subscription, which is usually where all the good stuff is, but, yeah...
Well, I would have thought many companies configure Copilot to be disabled, either claiming to prevent corporate secrets from leaking out or refusing to install an internal closed-network Microsoft Copilot server.
I searched the marketing fluff piece but could not find the words "bought", "money", "dollar", or "stock", so it definitely does not answer the question in the title.
The term "joining" irritates me more than it should, because you're right to ask "What is the value of the transaction?". My guess is that they aren't joining anything; Cloudflare bought the company and is keeping the team.
Sure, but Replicate will probably cease to exist in the near future. So a more accurate title could be: Cloudflare buys out Replicate and transfers staff to internal teams.
That's not a given as well. An acquisition usually involves restructuring the acquired company, sometimes in a way where the original team ceases to exist.
> An incredible journey is: One company buying another and closing its services down. This is a purchase of the second company’s staff, rather than their product. An acquihire.
> This is what is galling. A company that can afford to pay millions for some new staff but not for what those staff built. The people who used the service, and invested their belief and time in uploading photos, or forming friendships, or logging data, are left to find new virtual homes while their former hosts enjoy a nice (if possibly delayed) payday.
> This repeated pattern only encourages more people to create flashy services that have no hope of being sustainable businesses in their own right, but may survive long enough, with VC funding, to attract the attention of a large company eager for new ideas and staff.
The last paragraph is what gets me -- it makes sense to me to found a startup in hopes of being acquired (and continue the work with the support of a big company), but founding one with the intention of abandoning your users? Yuck.
> We’ll be able to do things like run fast models on the edge, run model pipelines on instantly-booting Workers, stream model inputs and outputs with WebRTC, etc.
The benefit to third-party developers is reduced latency and improved robustness of the AI pipeline. Instead of going back and forth with an HTTPS request at each stage to do inference, you could do it all in one request, e.g. real-time, pipelined STT, text translation, some backend logic, TTS, and back to the user's mobile device.
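A back-of-envelope sketch of why collapsing the stages into one edge request helps. The 80 ms round trip and the function names below are my own assumptions for illustration, not anything from Cloudflare or Replicate:

```python
# Network-latency arithmetic for a 3-stage pipeline (STT -> translation -> TTS).
# ROUND_TRIP_MS is an assumed figure for illustration only.

ROUND_TRIP_MS = 80  # assumed client <-> server latency per HTTPS request

def client_orchestrated_ms(stages: int, rtt_ms: int = ROUND_TRIP_MS) -> int:
    """Client calls each stage as its own request: one round trip per stage."""
    return stages * rtt_ms

def edge_orchestrated_ms(stages: int, rtt_ms: int = ROUND_TRIP_MS) -> int:
    """Client makes one request; the stages run colocated on the edge node,
    so inter-stage hops add ~0 network latency."""
    return rtt_ms  # independent of the number of stages

print(client_orchestrated_ms(3))  # 240
print(edge_orchestrated_ms(3))    # 80
```

The win only counts the network hops, of course; model inference time is the same either way, which is part of why the gain matters less for slow pipelines.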
Depends on how much the latency matters to you and your customers. Most services realistically won't gain much at all; even the latency of normal web requests is very rarely relevant. Only the business itself can answer that question, though.
> "Even the latency of normal web requests is very rarely relevant."
Hard disagree. Performance is typically the most important feature for any website. User abandonment / bounce rate follows a predictable, steep, nonlinear curve based on latency.
I've changed the latency of actual services as well as Core Web Vitals many times and... no. Turns out the line is not that steep. For the range 200 ms-1 s, it's pretty much flat. Sure, you start seeing issues with multi-second requests, but that's terrible processing time. A change like eliminating intercontinental transfer latency was barely visible in e-commerce results.
There's this old meme of Amazon seeing a difference for every 100ms latency and I've never seen it actually reproduced in a controlled way. Even when CF tries to advertise lower latency https://www.cloudflare.com/en-au/learning/performance/more/w... their data is companies reducing it by whole seconds. "Walmart found that for every 1 second improvement in page load time, conversions increased by 2%" - that's not steep. When there's a claim about improvements per 100ms, it's still based on averaging multi-second data like in https://auditzy.com/blog/impact-of-fast-load-times-on-user-e...
In short - if you have something extremely interactive, I'm sure it matters for the experience. For a typical website loading in under 1 s, edge will barely matter. If you have data proving otherwise, I'd genuinely love to see it. For websites loading in over 1 s, it's likely much easier to improve the core experience than to split things out to the edge.
Ok, I think we're actually in agreement -- given your all-important qualification "for the range 200ms-1s". Yes, ofc, that first part of the curve above the drop is quite flat; there's hardly time for the user to get impatient and bounce.
My point about the shape of the curve stands. 100ms can matter more on the steepest part of the slope than 2s does further to the right.
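The shape argument can be made concrete with a toy curve. Everything below is an illustrative assumption (a logistic function with made-up parameters), not data from any study linked above:

```python
import math

# Toy model only -- NOT measured data. A logistic bounce-rate curve with
# arbitrarily chosen parameters (midpoint 3 s, steepness 2/s); real curves
# have to be measured per site.

def bounce_rate(latency_s: float, midpoint: float = 3.0, steepness: float = 2.0) -> float:
    """Fraction of users who bounce at a given page latency (toy model)."""
    return 1.0 / (1.0 + math.exp(-steepness * (latency_s - midpoint)))

# 100 ms saved on the flat left tail (0.4 s -> 0.5 s): tiny effect.
flat_gain = bounce_rate(0.5) - bounce_rate(0.4)

# 100 ms saved at the steep midpoint: much larger effect.
steep_gain = bounce_rate(3.05) - bounce_rate(2.95)

# A full 2 s far to the right (6 s -> 8 s): smaller than the 100 ms step
# at the midpoint, because the curve has already flattened out again.
right_tail_gain = bounce_rate(8.0) - bounce_rate(6.0)
```

With these made-up parameters, the 100 ms step at the midpoint moves the bounce rate roughly 40x more than the same step on the flat left tail, and more than the entire 2 s step on the far right -- exactly the asymmetry the comment describes.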
Also: Google One gives you 2 TB, and MS 365 Family gives 6x1 TB for roughly the same price (for a total of 6 seats). The packages differ in multiple ways, each being a different fit for different use cases.
Also note: I'm using Google Docs at work, and while Word, for one, has too many features, Google Docs is lacking even in the basics like document styling (think custom paragraph styles). It is painfully inadequate for business use; a lot of work is wasted on working around its limitations.
To be honest, for most developers, editors are much alike. While the Emacs and Vim guys debate their philosophies and config files, everyone else just opens their favourite editor/IDE and ships.
Lol, I haven't tinkered with my emacs config in nearly 10 years now. Most vim and emacs users put together a config they like over maybe a few weekends then get to work. We have deadlines to meet just like all the rest of you.
Maybe there were advanced human civilizations on the planet before the current one (there are such theories), but at some point they got so advanced that they accidentally or systematically removed all of their traces, and then declined in some way. (Though the theories do have better explanations for the lack of artifacts, apart from a few OOPArts.)
Our age unfortunately will have long-lasting traces in the forms of various plastics and forever-chemicals.
And they also took care to replenish the deposits of iron, copper, tin, lead, and rare-earth ores plus coal and oil before ultimately disappearing. Very considerate of them!
Not sure if this really counts as tinkering, but the other day I needed a custom HID device for my PC. I ordered an Arduino Micro (I think?), one that supports HID out of the box, and with under 300 lines of code my problem was solved.
The Arduino HAL and the overall comfort of the Arduino IDE are genuinely valuable. I didn’t have to learn new flashing tools or a new debugging toolchain just to light a few LEDs, read some buttons, and emulate keypresses on a PC. The learning curve was basically zero.
I’ve worked with embedded systems before, and this level of simplicity is incredibly useful for people who just want to ship simple solutions to simple problems without fighting through vendor-specific, arcane tooling.
I've gotten some RP2350s since then, running MicroPython; those might be even better for getting stuff done (absent networking or extreme low-power needs).
> AI training presumably isn't super time-sensitive, so could you just pause it while it's cloudy?
Or pause it when "organic traffic" has peak demand and resume in off-peak hours, so that the nuclear power plant can operate efficiently without too much change in its output.