Hacker News | rsc's comments

FWIW, the versions are not semver but they do follow a defined and regular version schema: https://ai.google.dev/gemini-api/docs/models#model-versions.


I am seeing a lot of demand for something like a semver for AI models.

Could there theoretically be something like a semver that could be autogenerated from that defined and regular version scheme you shared?

Like, honestly, my idea is that I could use something like OpenRouter and just bump the semver, without having to worry about soooo many details of the schema you shared, y'know?

A website / tool that could convert this defined scheme to a semver and vice versa would be really cool actually :>
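
Just to make the idea concrete, a rough sketch of the mapping in Go (the name pattern and the major.minor.patch interpretation here are my own guesses, not anything from the docs):

  // Rough sketch, not a real tool: assumes stable Gemini-style names of the
  // form "<family>-<gen>.<subgen>-<variant>-<NNN>" (e.g. "gemini-1.5-pro-002")
  // and maps them to "<gen>.<subgen>.<NNN>". Previews and -latest aliases are
  // rejected rather than guessed at.
  package main

  import (
      "fmt"
      "regexp"
      "strconv"
  )

  var stable = regexp.MustCompile(`^[a-z]+-(\d+)\.(\d+)-[a-z]+-(\d{3})$`)

  // toSemver converts a stable model name into a semver-like string.
  func toSemver(model string) (string, error) {
      m := stable.FindStringSubmatch(model)
      if m == nil {
          return "", fmt.Errorf("not a stable versioned model name: %q", model)
      }
      patch, _ := strconv.Atoi(m[3]) // "002" -> 2
      return fmt.Sprintf("%s.%s.%d", m[1], m[2], patch), nil
  }

  func main() {
      for _, name := range []string{"gemini-1.5-pro-002", "gemini-1.5-flash-latest"} {
          v, err := toSemver(name)
          fmt.Println(name, "->", v, err)
      }
  }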


I'm not sure if this is a joke or not, but in case it isn't: semver was mostly created so users of libraries could judge, just by looking at the version, whether a new release would break the API or not. So unless the first number changed, you were good to go (in theory; in practice this obviously didn't always work as expected).

With that in mind, what exactly would semver (or similar) represent for AI models? Set up the proper way, your pipelines should continue working regardless of the model; only the accuracy or some other metric might change slightly. But there should never be any "breakages" of the kind semver is supposed to help flag.


Models do have changes worthy of semver-style major version bumps: tokenizer, tool support, tool format, JSON modes, etc. Pipelines absolutely must change when these change.

This thread is more about the minor number: not incrementing it when making changes to the internals is painful for dependency tracking. These changes will also break apps (prompts are often tuned to the model).


[Also posted to Lobsters: https://lobste.rs/s/ms94ja/what_is_go_proxy_even_doing#c_vz2...]

I apologize for the traffic. We clearly need to look at the “thundering herd” you are observing. That shouldn’t be happening at all.

Separately, the bug we fixed last time was about repeatedly cloning a repo in sequence even if it was unchanged, not about redundant parallel fetches. We fixed that only for Git, because Git makes it very easy to look at a branch or tag and get the tree hash, without downloading the full repo. This is exposed as the go command’s -reuse flag. The relevant Git code is at https://go.dev/src/cmd/go/internal/modfetch/codehost/git.go#... (CheckReuse).

What we need from any VCS to implement the reuse check is a cheap way to download a list of every tag and branch along with a cryptographic file tree checksum for each one, without doing a full repo clone. At the time, I convinced myself Mercurial did not support this. Perhaps I was wrong or perhaps it does now. If anyone can help us understand how that works, we could implement the -reuse flag for Mercurial too. (Any other VCSs would be great too, but Git is #1 by a very wide margin and I believe Mercurial is #2 also by a wide margin.)
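
To make that requirement concrete, here is a rough sketch (not the actual CheckReuse code) of the cheap primitive Git gives us: listing every ref and its commit hash without cloning, so we can tell whether anything changed since the last fetch.

  // Hedged sketch: list a remote's refs without cloning, which is the cheap
  // primitive a reuse check can build on. Not the real codehost/git.go code.
  package main

  import (
      "bufio"
      "bytes"
      "fmt"
      "os/exec"
      "strings"
  )

  // remoteRefs maps ref name (tag or branch) to commit hash using
  // "git ls-remote", which does not download any repository content.
  func remoteRefs(repoURL string) (map[string]string, error) {
      out, err := exec.Command("git", "ls-remote", repoURL).Output()
      if err != nil {
          return nil, err
      }
      refs := make(map[string]string)
      sc := bufio.NewScanner(bytes.NewReader(out))
      for sc.Scan() {
          fields := strings.Fields(sc.Text()) // "<hash>\t<refname>"
          if len(fields) == 2 {
              refs[fields[1]] = fields[0]
          }
      }
      return refs, sc.Err()
  }

  func main() {
      refs, err := remoteRefs("https://go.googlesource.com/mod")
      if err != nil {
          fmt.Println("ls-remote failed:", err)
          return
      }
      // If a previously recorded hash for a ref still matches, the module
      // info computed from it last time can be reused without refetching.
      fmt.Println(refs["refs/heads/master"])
  }

The part that was missing for Mercurial, as far as I could tell at the time, is the second half: going from each listed tag or branch to a cryptographic file tree checksum without a full clone.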

Again, apologies for all the traffic, and thanks to Ted for the excellent analysis. We will look into both.


This is a strange and dangerous thing to try to do from assembly. In particular, all these details about write barriers being hand-coded in the assembly are subject to change from release to release.

Better to structure your code so that you do the pointer manipulation (and allocation) in Go code instead, and leave assembly only for what is absolutely necessary for performance (usually things like bulk operations, special instructions, and so on).
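
For example (just a sketch, with made-up names): keep the allocation and the pointer stores in Go, where the compiler inserts any needed write barriers, and hand the assembly only flat, pointer-free data.

  // Sketch only. sumBytes stands in for a routine you would actually write in
  // assembly (declared in Go, implemented in a .s file); it is plain Go here
  // so the example compiles on its own.
  package main

  import "fmt"

  // sumBytes is the "bulk operation" part: no allocation, no pointer stores,
  // so replacing it with hand-written assembly is safe.
  func sumBytes(b []byte) uint64 {
      var s uint64
      for _, c := range b {
          s += uint64(c)
      }
      return s
  }

  type node struct {
      sum  uint64
      next *node // pointer field: written only from Go code below
  }

  // process keeps the allocation and the pointer store in Go, so the compiler,
  // not hand-written assembly, is responsible for any write barriers.
  func process(b []byte, prev *node) *node {
      n := &node{sum: sumBytes(b)}
      n.next = prev
      return n
  }

  func main() {
      fmt.Println(process([]byte("hello"), nil).sum)
  }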


> Better to structure your code so that you do the pointer manipulation (and allocation) in Go code instead, and leave assembly only for what is absolutely necessary for performance (usually things like bulk operations, special instructions, and so on).

While I generally agree with this, one way to mitigate the maintenance issue is to offer a macro assembler instruction that performs a write barrier and is kept up to date with what the Go compiler emits. If the compiler itself uses that macro assembler instruction, it's already pretty easy.

After many years of racking my brain on how to make each of the things "blessed and bulletproof", I've realized that systems languages will inevitably be used by people building high performance systems. People doing this will go to crazy lengths and low-level hackery that you can't anticipate, e.g. generating their own custom machine code[1], doing funky memory mapping tricks, etc. The boundaries between that machine code and your language are always going to run into representational issues like write barriers and object layouts.

[1] Pretty much all I do is language VMs, and there is no way around this. It will have a JIT compiler.


In this case, "atomic 128-bit store" is the special instruction, with the twist that half of those 128 bits contain a pointer.


Officially, I have left the Go team too; I started on a new team at Google a few weeks ago. I still use Go quite a bit, I still talk to people on the Go team regularly, and you will still see the occasional code change, code review, or blog post from me. Most importantly, I have high confidence that the team we built will do an excellent job continuing the work.


Very curious what you’re working on now, if you’re willing to share


Gotcha. In your golang-dev announcement you wrote that you are "not leaving the Go project", but maybe there was another announcement?


The parent comment is the first time I've said anything publicly (it has only been a few weeks). I didn't feel like it needed an announcement, but since it came up, it seemed worth correcting. It's not a state secret. :-)

In my August announcement, I was careful to say I wasn't leaving the Go project. I'm still involved with the Go project and expect to keep being involved. I'm just not officially on the Go team at Google anymore. Stepping back from the actual team to give others room to lead always seemed to me both likely and appropriate.


That's interesting! I hope you're excited about the new team!


Those are semantically different (one is nil and one is not) but neither allocates.
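
For example, with a nil slice versus an empty slice literal (a minimal sketch, and an assumption about which two forms are being compared):

  package main

  import (
      "fmt"
      "testing"
  )

  func main() {
      var a []int  // nil slice
      b := []int{} // empty but non-nil slice

      fmt.Println(a == nil, b == nil) // true false: semantically different
      fmt.Println(len(a), len(b))     // 0 0: otherwise they behave the same

      // Neither form heap-allocates: the empty literal ends up pointing at a
      // shared zero-size location rather than freshly allocated memory.
      allocs := testing.AllocsPerRun(100, func() {
          s := []int{}
          _ = s
      })
      fmt.Println(allocs) // 0
  }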


Not sure what the betrayal is? He contributed a quote for yesterday's post. https://tailscale.com/blog/tailscale-enterprise-plan-9-suppo...


from the above post:

  > April 1, 1999
  >
  > FOR IMMEDIATE RELEASE

Forward to the past?


This was explained in the post: 1999 was when Intel released the Pentium 3 with SSE instructions, which caused the first major issue that had to be overcome.


We had to do some Plan 9 work, which makes sense when doing something new, but the actual Tailscale implementation is far _less_ work than for other Unixes.



In the link in the parent comment. :-)


While you can create and build a local package with U+FE0E in its file name, you cannot create or download a module using that character in a file name. So you could run this attack in someone's top-level repo but not in any of their dependencies. That's something at least.

https://go.googlesource.com/mod/+/refs/heads/master/module/m... https://go.googlesource.com/mod/+/refs/heads/master/module/m...
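
If you want to check this yourself, a quick sketch against golang.org/x/mod (the specific file name below is just an example):

  // Sketch: ask the module package whether a file name containing U+FE0E
  // (VARIATION SELECTOR-15) would be accepted in a module zip. Per the
  // comment above, it should be rejected.
  package main

  import (
      "fmt"

      "golang.org/x/mod/module"
  )

  func main() {
      // "main\ufe0e.go" renders like "main.go" in many fonts, since U+FE0E
      // is an invisible variation selector.
      err := module.CheckFilePath("pkg/main\ufe0e.go")
      fmt.Println(err) // expect a non-nil error rejecting the path
  }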


Huh, that gives me a little pause.

People who clone a project and compile it manually get different output than people who `go install` it?

Is that inconsistency something that … should be fixed? Seems like it should be.


People who go install it get an error that it's not a valid source tree at all.

