Yes, it is huge to spread the work on embedded UIs in chat interfaces out across the ecosystem. But I think the design direction is headed in exactly the same direction as Google Assistant, Amazon Alexa, or any of the other assistants, iykyk.
The MCP community is just reinventing, but yes, improving, what we've done before in the previous generation: Microsoft Bot Framework, Speaktoit aka Google Dialogflow, Siri App Shortcuts / Spotlight.
And interactive UIs in chats go back at least 20 years, maybe not with an AI agent attached...
The next thing that will be reinvented is the memory/tool combination, aka a world model.
Probably not for everyone. The current limit of 4096 types could be expanded if there’s a real need — it’s not a hard technical barrier.
I’m curious though: what’s an example scenario you’ve seen that requires so many distinct types? I haven’t personally come across a case with 4,096+ protocol messages defined.
If your binary has a small function set, probably not. But for a use case where you want to proxy/intercept cloud APIs, something like Google APIs has 34K message types:
I think this speaks more to the tradeoff of not having an IDL, where the deserializer only knows what type to expect if it was built with the IDL file version that defined it, e.g., this recent issue:
> If schema consistent mode is enabled globally when creating fory, type meta will be written as a fory unsigned varint of type_id. Schema evolution related meta will be ignored.
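As an aside on the 4096 limit mentioned above: a type_id written as an unsigned varint doesn't top out there. A generic LEB128-style writer, not Fory's actual wire code, just to show the cost:

```rust
// Generic LEB128-style unsigned varint, only to illustrate the wire cost:
// ids up to 127 take 1 byte, up to 16383 take 2 bytes, 34K-ish ids take 3.
// This is not Fory's encoder, just the general technique.
fn write_uvarint(mut value: u32, out: &mut Vec<u8>) {
    loop {
        let byte = (value & 0x7f) as u8;
        value >>= 7;
        if value == 0 {
            out.push(byte);
            return;
        }
        out.push(byte | 0x80);
    }
}

fn main() {
    for id in [4_095u32, 4_096, 34_000] {
        let mut buf = Vec::new();
        write_uvarint(id, &mut buf);
        println!("type_id {id} -> {} byte(s)", buf.len());
    }
}
```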
It seems that if the serialized object is not a "Fory" struct, it is forced to go through a to/from conversion as part of the measured serialization work:
I'd think that the to/from conversion into Fory types shouldn't be part of the tests.
Also, when used in an actual system, tonic would be providing an 8KB buffer to write into, not just a Vec::default() that may need to be resized multiple times:
I can see the source of a 10x improvement on an Intel(R) Xeon(R) Gold 6136 CPU @ 3.00GHz, but it drops to a 3x improvement when I remove the to/from conversion that clones or collects Vecs and always allocate an 8K Vec instead of a ::default() for the writable buffer.
If anything, the benches should be updated in a tower service / codec generics style where other formats like protobuf do not use any Fory-related code at all.
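Something along these lines is what I have in mind, as a sketch only (the trait and helper names here are mine, not from the Fory repo):

```rust
// Sketch of a codec-generic bench: each format implements the same neutral
// trait, so the protobuf path never touches Fory types (and vice versa),
// and every codec gets the same pre-sized 8 KiB buffer tonic would provide.
use criterion::Criterion;
use prost::Message;

trait EncodeInto {
    fn encode_into(&self, buf: &mut Vec<u8>);
}

// Blanket impl for any prost-generated message.
impl<M: Message> EncodeInto for M {
    fn encode_into(&self, buf: &mut Vec<u8>) {
        self.encode(buf).expect("encoding into a Vec cannot fail");
    }
}

fn bench_serialize<E: EncodeInto>(c: &mut Criterion, name: &str, value: &E) {
    c.bench_function(name, |b| {
        b.iter(|| {
            let mut buf = Vec::with_capacity(8 * 1024); // like tonic, not Vec::default()
            value.encode_into(&mut buf);
            buf
        })
    });
}
```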
Note also that Fory has some writer pool that is utilized during the tests:
Benchmarking ecommerce_data/fory_serialize/medium: Collecting 100 samples in estimated 5.0494 s (197k it
ecommerce_data/fory_serialize/medium
time: [25.373 µs 25.605 µs 25.916 µs]
change: [-2.0973% -0.9263% +0.2852%] (p = 0.15 > 0.05)
No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
2 (2.00%) high mild
2 (2.00%) high severe
Compared to the original bench for Protobuf/Prost:
Benchmarking ecommerce_data/protobuf_serialize/medium: Collecting 100 samples in estimated 5.0419 s (20k
ecommerce_data/protobuf_serialize/medium
time: [248.85 µs 251.04 µs 253.86 µs]
Found 18 outliers among 100 measurements (18.00%)
8 (8.00%) high mild
10 (10.00%) high severe
However, after allocating 8K instead of ::default() and removing the to/from conversion, the updated protobuf bench:
fair_ecommerce_data/protobuf_serialize/medium
time: [73.114 µs 73.885 µs 74.911 µs]
change: [-1.8410% -0.6702% +0.5190%] (p = 0.30 > 0.05)
No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
2 (2.00%) high mild
12 (12.00%) high severe
The Rust benchmarks in Fory are intended more as end‑to‑end benchmarks for typical OOP‑style application scenarios, not just raw buffer write speed.
Protobuf is very much a DOP (data‑oriented programming) approach — which is great for some systems. But in many complex applications, especially those using polymorphism, teams don’t want to couple Protobuf‑generated message structs directly into their domain models. Generated types are harder to extend, and if you embed them everywhere (fields, parameters, return types), switching to another serialization framework later becomes almost impossible without touching huge parts of the codebase.
In large systems, it’s common to define independent domain model structs used throughout the codebase, and only convert to/from the Protobuf messages at the serialization boundary. That conversion step is exactly what’s represented in our benchmarks — because it’s what happens in many real deployments.
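As a rough illustration (hypothetical type names, not our actual benchmark code), the boundary looks something like this:

```rust
// Hypothetical shapes, just to illustrate the boundary: the domain struct is
// what the rest of the codebase uses; the proto module stands in for
// prost-generated messages; conversion happens only at serialization time.
struct Order {
    id: u64,
    lines: Vec<String>,
}

mod proto {
    // Stand-in for a prost-generated message type.
    #[derive(Clone, Default)]
    pub struct Order {
        pub id: u64,
        pub lines: Vec<String>,
    }
}

impl From<&Order> for proto::Order {
    fn from(o: &Order) -> Self {
        proto::Order {
            id: o.id,
            lines: o.lines.clone(), // this copy is part of the real-world cost
        }
    }
}
```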
There’s also the type‑system gap: for example, if your Rust struct has a Box<dyn Trait> field, representing that cleanly in Protobuf is tricky. You might fall back to a oneof, but that essentially generates an enum variant, which often isn’t what users actually want for polymorphic behavior.
So yes, we include the conversion in our measurements intentionally, to reflect real-world practice in large systems.
Yes, I agree that protos usually should only be used at the serialization boundary, as well as the slightly off-topic idea that the generated code should be private to the package and/or binary.
So, to reflect real-world practice, the benchmark code should then allocate and give the protobuf serializer an 8K Vec like tonic does, and not an empty one that may require multiple re-allocations?
DOP is great for certain scenarios, but there’s always a gap between DOP and OOP. That gap is where an extra domain model and the conversion step come in — especially in systems that rely heavily on polymorphism or want to keep serialization types decoupled from core business models.
TXSE's goal is to provide greater alignment with issuers and investors and address the high cost of going and staying public.
The alignment part translates, IMO, to avoiding political / social science policy issues, such as affirmative-action listing requirements like the Nasdaq Board Diversity Rules that were just recently struck down: https://corpgov.law.harvard.edu/2025/01/12/fifth-circuit-vac....
So, as one might imagine, the formation was probably for reasons similar to why owners are moving their company registrations out of Delaware.
In a structurally-biased environment, the loss of policies that counteract that bias does not allow companies to "avoid" politics and social science; it allows them to take the side in favor of the structurally-biased status quo. Just so we're clear about what that is.
Delaware law exclusively protects the interests of the board of directors. It allows for a unique provision, the hilariously misnamed "Shareholder Rights Plan," which enables a board of directors to issue shares as they please, in order to make sure no takeover attempt that is against the interests of the directors can succeed.
The only check on the power of the board in a Delaware corporation is the Delaware Court of Chancery.
The irony is that the Levine article the parent provided argues that DE did the exact opposite of shareholder wishes!
> it is weird that Tesla’s management and board of directors and (a large majority of) shareholders all agreed that Musk should get paid $55.8 billion for creating $600 billion of shareholder value, and he did do that, and he got paid that, and a judge overruled that decision and ordered him to give back the money. I can see why Musk — and Tesla’s board, and its shareholders — would find that objectionable! They’re trying to run a company here.
The "real reason" people "freaked out" about Trump dismantling agencies is that he was, and has been, ignoring the law by fiat rather than executing the law, as is his constitutionally defined role. It would be one thing to veto a refunding of the DoEd, or to approve a dismantling of it passed by Congress; it's another to unilaterally dismantle institutions that have been enacted into law by Congress. The DoEd is no more unconstitutional than the DoD or any other cabinet-level institution.
I think it's fair to have a hard discussion about the effectiveness of or need for the DoEd, but the way to do that is in Congress, not by fiat by the president. The way the Trump administration has approached it IMHO is grossly unconstitutional and a violation of the separation of powers. The only semi-reasonable rationale I can think of is that Congress is implicitly approving of or voting on the president's actions by not impeaching him, but that seems like an unreasonably high bar, equates lack of action with active approval, and it also infringes on the power of the Congress that enacted the law.
As someone else here on HN noted recently: what is the point of anything pertaining to congressional vote procedures, veto authority, overrides, and so forth if the president ignores, and is allowed to ignore, the laws that are passed anyway?
> mental shackles of subordination to psychological abusers and manipulators that are constantly pushing the idea that state's rights are subsumed to federal rights
Wow. The inter-state commerce clause is a real thing and it does give the federal government broad latitude to regulate "commerce" across state lines. Commerce seems to entail the flow of both goods and services. We are in this situation because people at the state level decided, democratically, that some decisions should be made federally so as to avoid a huge patchwork of differing laws. To put it bluntly, I don't want to have to carefully review and compare Oregon state law with, say, Texas state law before I undertake any travel, lest I accidentally commit a felony in Texas by doing something that isn't against the law in Oregon, and that's a really good reason to try to limit the differences between the two. If you don't, you'll necessarily chill travel and commerce across state lines, because those differences will present a huge barrier to entry and create a big suck on people's time and attention.
> These United States, and after the Civil War the de fact illegitimate federal government called itself The United States
This is getting into Sovereign Citizen–type reasoning.
Regarding the concept, it's cool to see you using LLMs to quickly generate protocol implementations.
But asking the community to review an AI-generated implementation of a week-old announced protocol is more or less putting the recently coined term, AI "workslop," upon others. It doesn't really matter whether it happens to be a good implementation or not.
There are two main issues I can think of right now:
1) Work going into the protocol is only useful for your implementation of it. The capnweb-core crate depends on the tokio runtime, and parts of the protocol/definitions are in the client crate:
What if someone wants to reuse the core parts of the protocol with a different runtime, or in no_std? (A rough sketch of the split I mean follows below, after point 2.)
2) The project has namespace squatted/swiped the best name for the official implementation of the project. I understand Rust/Crates-IO allows for this free-for-all, but isn't it entirely possible that Cloudflare already has Rust crates for this that they might open source? Or if someone else wants to make a competing implementation? Maybe it's just me, but I like to put my organization prefix on all my crates in the event I ever open source any of them.
Would you offer to transfer the crate names to Cloudflare if they were going to offer an implementation -- just like what happened with protobuf/Google?
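And to make point 1 above concrete, here is roughly the split I'd hope for, as a sketch with made-up names (nothing here is from capnweb-rs): protocol types and framing in a no_std-friendly core, with the async runtime hidden behind a small transport trait implemented by separate tokio/async-std crates.

```rust
// Sketch of a runtime-agnostic core crate (names made up): framing logic
// lives here with no tokio dependency, and runtimes plug in behind a trait.
#![no_std]
extern crate alloc;
use alloc::vec::Vec;

/// Implemented by tokio-, async-std-, or embedded-specific companion crates.
pub trait Transport {
    type Error;
    fn send(&mut self, frame: &[u8]) -> Result<(), Self::Error>;
}

/// Pure framing logic: usable from any runtime, or none at all.
pub fn encode_call(method_id: u32, payload: &[u8]) -> Vec<u8> {
    let mut frame = Vec::with_capacity(4 + payload.len());
    frame.extend_from_slice(&method_id.to_le_bytes());
    frame.extend_from_slice(payload);
    frame
}
```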
That it was boring is what appealed to me. That it was also thought through with rigor was the other part. I wanted something to use in some pet projects, in Rust, so having an implementation of the "wrangling" code I could reuse was valuable to me. Learning how to put guardrails on using an LLM productively was another.
How long ago did you try SQLx? Not necessarily promoting SQLx, but `query_as`, which lets one make queries without the live-database macro, has been around for 5 years [1].
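For example, the non-macro path looks like this (table and column names made up; no database needed at compile time):

```rust
// The non-macro sqlx path: no live database at compile time, the row shape
// is checked at runtime via FromRow. Table/column names are made up.
use sqlx::FromRow;

#[derive(Debug, FromRow)]
struct User {
    id: i64,
    name: String,
}

async fn find_user(pool: &sqlx::SqlitePool, id: i64) -> Result<User, sqlx::Error> {
    sqlx::query_as::<_, User>("SELECT id, name FROM users WHERE id = ?")
        .bind(id)
        .fetch_one(pool)
        .await
}
```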
For lower-level libraries, there is also the more-downloaded SQLite library, rusqlite [2], whose maintainer is also the maintainer of libsqlite3-sys, which is what the sqlite library wraps.
The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.
The most Rust-like, in an idealist sense, is Diesel, but its well-defined path is to use a live database to generate Rust code, using macros to define the schema types that are then used for type/member checking of the row structs. If the auto-detection does not work, you have to use its patch_file system, which can't be maintained automatically just through Cargo [4] (I wrote a Makefile scheme for myself). You will most likely have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for Postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. It also may not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.
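For concreteness, this is roughly what the patched schema.rs and row struct end up looking like for the Timestamptz case (column names made up, assuming diesel 2.x with the postgres and chrono features):

```rust
use diesel::prelude::*; // assumes diesel 2.x with the "postgres" and "chrono" features

diesel::table! {
    events (id) {
        id -> Int8,
        created_at -> Timestamptz, // patched from Timestamp via patch_file
    }
}

#[derive(Queryable)]
struct Event {
    id: i64,
    created_at: chrono::DateTime<chrono::Utc>,
}
```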
If you are just starting out with Rust, I'd recommend checking out SeaQL. Then, if benchmarks show you need faster performance, swap in one of the lower-level libraries for the affected methods/services.
I'm assuming that the PaaS/IaaS providers already have solutions for secrets. So a new centralized system may help with just dev and DIY bare metal?
But with the centralized method, as in secretspec, not everyone will accept reading secrets from environment variables, as is also done with the 1Password CLI run command [1]. They may also need to be injected as files, or as less secure command-line parameters. In the Kubernetes world, one solution is the External Secrets Operator [2]. Secrets may also be pulled from the cloud host's API as well. I won't comment on how that works in k8s.
Of note, the reason for reading from file handles is so that the app can watch for changes and reload, e.g., picking up key/token rotations without restarting the server.
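For example (path and interval made up, plain std polling rather than inotify), the shape of that reload loop:

```rust
// Minimal sketch of why file-mounted secrets help with rotation: poll (or
// inotify-watch) the mounted file and swap the credential without a restart.
use std::{fs, path::Path, thread, time::Duration};

fn watch_token(path: &Path) {
    let mut current = fs::read_to_string(path).unwrap_or_default();
    loop {
        thread::sleep(Duration::from_secs(30));
        if let Ok(next) = fs::read_to_string(path) {
            if next != current {
                current = next;
                // Hand the new value to whatever HTTP/DB clients hold the old one.
                eprintln!("token rotated, reloaded from {}", path.display());
            }
        }
    }
}
```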
But what could be useful to some developers is a secretspec inject subcommand (the universal version of the op inject command). I use op inject / dotenvy with Rust apps -- pretty easy to manage and share credentials. Previously I had something similar written in Rust that also handled things like base64 / percent-encoding transforms.
If you aren't tied to Rust, probably could just fork external-secrets and get all the provider code for free.
The system won't be able to remember why the user was created unless the content of the POST includes data saying it was a signup. That's important for any type of reporting, like telemetry and billing.
So then one gets to bike-shed whether "signup" goes in the request path, query parameters, or the body. Or, since the user resource doesn't exist yet, perhaps one can't call a method on it, so it really should be /users:signup (on the users collection, like /users:add).
Provided one isn't opposed to adopting what was bike-shedded elsewhere, there is a fairly well-specified way of doing something RESTful; here is a link to its custom methods page: https://google.aip.dev/136. Its approach would be to add the information about signup to the body of the POST to /users: https://google.aip.dev/133. More or less, it describes a way to be RESTful with HTTP/1.1+JSON or gRPC.
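As a made-up sketch of that shape in axum (field names are mine, not from the AIPs): keep POST /users and carry the signup origin in the request body.

```rust
// Keep the collection POST and record "how did this user get created" in the
// body instead of the path. Assumes axum with the default "json" feature and
// serde with "derive".
use axum::{routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct CreateUser {
    display_name: String,
    // e.g. "signup", "admin_import" -- recorded for telemetry/billing.
    origin: String,
}

#[derive(Serialize)]
struct User {
    id: u64,
    display_name: String,
}

async fn create_user(Json(req): Json<CreateUser>) -> Json<User> {
    // Persist `req.origin` alongside the user so reporting can later answer
    // why the user was created.
    Json(User { id: 1, display_name: req.display_name })
}

fn router() -> Router {
    Router::new().route("/users", post(create_user))
}
```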
That's correct, the example you are giving represents bike-shedding among request path variations.
I assumed most readers of my comment would get the idea that /users/signup is ambiguous as to whether it is supposed to be another resource, while /users:signup is less so.
I wouldn't recommend Cargo as something to copy for a real project, even though I'm a fan of Rust and have been using it exclusively lately. It suffers from not being able to handle global features without manually/conditionally propagating features to dependencies, as well as not being able to propagate metadata to dependencies without abusing the links functionality.
Why is that important? Well, it's useful if you want something like json/serde support (or not) in all transitive dependencies for a particular artifact you are generating, like a library or a binary. The same applies to other configurability that C/C++ developers bake into their libraries.
Is this an educational project as part of Hackclub, which is a linked organization on your GitHub profile? Either way, trying to build this will be a good learning experience.
Think beyond just C/C++ and maybe Rust...
For an entire set of ideas of things to implement, just look at the feature sets of Bazel and Buck2 (which happens to also be written in Rust). Those offer functionality to build complete products in any language, locally or distributed across a build farm of servers, and to glue them all together in any format. For example, you can't build a multi-arch OCI/Docker container image for a Rust binary server in a single command with Cargo.
Except for the initial learning curve, using them could be as simple as including their "build" files in your published git repo. No central repository needed.