MCP includes a standard for some more advanced capabilities like:
- tool discovery: for instance, the server can "push" an update to the client when its tool list changes, whereas with OpenAPI you have to wait for the client to refetch the schema
- background tasks: you can build a job endpoint in your API to submit tasks and check their status, but standardizing the way to do that opens up additional possibilities on the client side (imagine showing a standard progress bar no matter which tool is being used)
- streaming / incremental results / cancellation
- ...
All of this is HTTP-based and could be implemented in a bespoke API, but the challenge is cross-API standardization so that agents can be trained on representative data. The value of MCP is that it creates a common behavioral contract, not just a transport or schema.
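To make that concrete, here is a rough sketch (in TypeScript) of the kinds of standardized notifications involved, with method names per my reading of the MCP spec; the token values and reason strings are made up for illustration:

```typescript
// Server pushes this when its tool list changes, so the client can
// refetch the tools without polling (unlike an OpenAPI schema):
const toolListChanged = {
  jsonrpc: "2.0",
  method: "notifications/tools/list_changed",
};

// Standardized progress updates for a long-running tool call. The client
// supplies a progressToken with its request; the server reports progress
// against it, which is what enables a generic progress bar on the client:
const progressUpdate = {
  jsonrpc: "2.0",
  method: "notifications/progress",
  params: {
    progressToken: "task-42", // token chosen by the client (made up here)
    progress: 30,             // work completed so far
    total: 100,               // optional expected total
  },
};

// Cancellation is standardized too: the client can tell the server to
// abandon a previously issued request:
const cancelRequest = {
  jsonrpc: "2.0",
  method: "notifications/cancelled",
  params: { requestId: "42", reason: "user closed the tab" },
};
```

Because every server emits the same message shapes, a client can implement the progress bar or the cancel button once and have it work across all tools.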
I was a little puzzled as we got notified our apps were down, and then I tried to log in to the Azure portal with no success. But the Azure status page reported no incident, so I posted here and quickly confirmed that others were impacted! They did a pretty bad job with their status page, as the Front Door service was shown green all along.
It took a good half hour after we detected the problem to see a notification on the Azure status page. Thanks to those who responded to my question, as it validated that the issue was global, and we contacted our users right away.
"We’re investigating an issue impacting Azure Front Door services. Customers may experience intermittent request failures or latency. Updates will be provided shortly."
The business cost with PRs isn't the first review by another developer, it's the number of iterations on a pull request due to defects and change requests. The way I am trying to promote the use of LLMs with the more junior developers on my team (I am the CTO) is to use AI-assisted tools (we used Windsurf and recently switched to GitHub Copilot) for a first pass, e.g. asking the agent for a review and catching potential defects before involving the human reviewer.
This doesn't mean the human reviewer will need to spend less time reviewing, but the PR will potentially be merged faster, with fewer iterations on average and improved code quality.
I do have some senior developers on my team who are excellent, and it's very rare that I catch an issue in their PRs (maybe 1 out of 50). But I also have greener developers for whom the ratio is way higher (like 8 or 9 out of 10), and this means repeated context switching for the reviewers.
As the CTO of a small company, spending a lot of time reviewing other developers' PRs, I completely agree. I recently enabled GitHub Copilot Agent and use it a lot for dealing with smaller, lower-priority tasks in an async workflow (e.g. while I am doing other things). A lot of developers are not very good writers, and reviewing PRs from Copilot, with access to the full "thinking process" and the ability to request changes in a few comments, is sometimes more pleasant and effective.
We have standardized on Windsurf in my company. Cursor was part of the review, but Windsurf has better support for dev containers and development on WSL, which most of our developers use. Beyond the actual AI part, there are some surprisingly rough edges with Cursor (using it with WSL is super hacky).
Developers tend to treat picking a cloud provider as a technical decision, but it's actually a business one. If you aim for B2B, there is a massive incentive to use Azure, as your clients are typically Microsoft users. From there, you can benefit from co-selling programs, and the cost of your solution can look more attractive to your clients because they can include your license in their overall Azure/Microsoft spend.
Granted, the price of services is higher than on other platforms, but you would be mistaken to think that's the price you pay at scale, on multi-year deals with reservations.
If I were to start a B2B startup, I'd definitely go with Azure. For B2C or e-commerce, I'd probably look at others.