ltratt's comments | Hacker News

I'm assuming you're referring to the Python finaliser example? If so, there's no syntax sugar hiding function calls to finalisers: you can verify that by running the code on PyPy, where the point at which the finaliser is called is different. Indeed, for this short-running program, the most likely outcome is that PyPy won't call the finaliser before the program completes!


We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe code) and back again (in unsafe code), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.
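
The pattern in question looks like this (a minimal sketch, not code from Alloy itself):

    fn main() {
        let x = Box::new(42u32);
        // Safe Rust: a pointer can be cast to a usize...
        let addr = &*x as *const u32 as usize;
        // ...and cast back again (only the dereference requires unsafe).
        let p = addr as *const u32;
        assert_eq!(unsafe { *p }, 42);
        // If `addr` were the only live reference to a GC'd object, a
        // precise collector couldn't see it -- hence conservative scanning.
    }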

I don't think Rust's design in this regard is ideal, but then again, what language is perfect? I've designed languages for a long while and made far more, and much more egregious, mistakes! FWIW, I have written up my general thoughts on static integer types, because it's a surprisingly twisty subject for new languages: https://tratt.net/laurie/blog/2021/static_integer_types.html


> We don't exactly want Alloy to have to be conservative, but Rust's semantics allow pointers to be converted to usizes (in safe code) and back again (in unsafe code), and this is something code really does. So if we wanted to provide an Rc-like API -- and we found reasonable code really does need it -- there wasn't much choice.

You can define a set of objects for which this transformation is illegal -- use something like pin projection to enforce it.


The only way to forbid it would be to forbid creating pointers from `Gc<T>`. That would, for example, preclude a slew of tricks that high performance language VMs need. That's an acceptable trade-off for some, of course, but not all.


Not necessarily. It would just require that deriving these pointers be done via an explicit lease that would temporarily defer GC, or lock an object in place during one. You'd still be able to escape the tyranny of conservatively scanning everything.
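
Something along these lines (a hypothetical sketch: the `Gc`, `GcLease`, and `lease` names are invented for illustration, not Alloy's actual API):

    use std::marker::PhantomData;

    // Hypothetical sketch: you can't get a raw pointer from a `Gc<T>`
    // directly, only via a lease that pins the object (and/or defers
    // collection) for the lease's lifetime.
    struct Gc<T>(PhantomData<T>);
    struct GcLease<'a, T>(&'a Gc<T>);

    impl<T> Gc<T> {
        fn lease(&self) -> GcLease<'_, T> {
            // Register with the collector: this object may now have
            // untracked pointers, so don't move or free it.
            GcLease(self)
        }
    }

    impl<'a, T> GcLease<'a, T> {
        fn as_ptr(&self) -> *const T {
            // Safe to hand out while the lease is alive.
            todo!()
        }
    }

    impl<'a, T> Drop for GcLease<'a, T> {
        fn drop(&mut self) {
            // Lease over: the collector may move/reclaim the object again.
        }
    }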


If you've used Chrome or Safari to read this post, you've used a program that uses (at least in parts) conservative GC. [I don't know if Firefox uses conservative GC; it wouldn't surprise me if it does.] This partly reflects shortcomings in our current compilers and in current programming language design: even Rust has some decisions (e.g. pointers can be put in `usize`s) that make it hard to do what would seem at first glance to be the right thing.


Also most mobile games written in C# use a conservative GC (Boehm).


Not just mobile games - all games made with Unity.


As Koffiepoeder suggests, since the vast majority of content on my site is static, I only have to compress a file once when I build the site, no matter how many people later download it. [The small amount of dynamic content on my site isn't compressed, for the reason you suggest.]


That’s a good point, didn’t know it was cached on top.


As an example, I like to point people at https://doc.rust-lang.org/std/cell/struct.UnsafeCell.html which for many years now has contained this line:

> The precise Rust aliasing rules are somewhat in flux, but the main points are not contentious

I've sometimes found myself in situations where the only way I've been able to deal with this is to check the compiler's output and trawl forums for hints by Rust's developers about what they think/hope the semantics are/will be.
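
For a flavour of what's in flux, consider a program like this (a minimal sketch; under the Stacked Borrows model the write through `p` is undefined behaviour, and Miri will flag it, but other proposed models draw the lines differently):

    fn main() {
        let mut x = 0u8;
        let p = &mut x as *mut u8;
        let q = &mut x as *mut u8;
        unsafe {
            // Is this write through `p` defined, given that `q` was
            // created from a fresh `&mut x` in between? The answer
            // depends on which aliasing model you pick.
            *p = 1;
            *q = 2;
        }
        assert_eq!(x, 2);
    }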

Historically speaking, this situation isn't uncommon: working out exactly what a language's semantics should be is hard, particularly when it has many novel aspects. Most major languages go through this sort of sequence, some sooner than others -- and some end up addressing it more thoroughly than others. Eventually I expect Rust to develop something similar to the modern C spec, but we're not there yet.


Excellent - thank you for the example and the clarification. This is exactly what I was looking for.


Because Morello is an experimental platform, only a small number were manufactured. They are/were allocated mostly to people involved in the early stages of CHERI R&D and, AFAIK, none were made available to the general public. [That said, I don't know whether there are still some unallocated machines!] One can fully emulate Morello with qemu. While the emulator is, unsurprisingly, rather slow, I generally use qemu for quick Morello experiments, even though I have access to physical Morello boards.


You're quite right, I over-simplified -- mea culpa! That should have said "often unify these phases". FWIW, I've written recursive descent parsers with and without separate lexers, though my sense is that the majority opinion is that "recursive descent" implies "no separate lexer".


For what it's worth, in my little corner of the world, all of the recursive descent parsers I've seen and worked with have separate lexers. I can't recall seeing a single recursive descent parser in industry that didn't separate lexing.

However, I do often see the two fudged together a little for funny corners of the language. Often that just means handling ">>" as a right-shift token in some contexts and as two closing angle brackets of nested generics in others.
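
Roughly like this (a toy sketch; the `Token` and `Parser` types are invented for illustration):

    // The lexer always lexes ">>" as one right-shift token; the parser
    // splits it when it needs a single ">" to close a generic list.
    #[derive(Clone, Copy)]
    enum Token { Shr, Gt }

    struct Parser { toks: Vec<Token>, pos: usize }

    impl Parser {
        fn eat_gt(&mut self) -> bool {
            match self.toks.get(self.pos).copied() {
                Some(Token::Gt) => { self.pos += 1; true }
                // Split ">>": consume one ">" and leave the other behind.
                Some(Token::Shr) => { self.toks[self.pos] = Token::Gt; true }
                None => false,
            }
        }
    }

    fn main() {
        // The tail of `Vec<Vec<u8>>` arrives as a single Shr token...
        let mut p = Parser { toks: vec![Token::Shr], pos: 0 };
        assert!(p.eat_gt()); // ...but closes the inner generic list...
        assert!(p.eat_gt()); // ...and then the outer one.
        assert_eq!(p.pos, 1);
    }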


That's not my impression of the majority opinion, fwiw. (I wrote my first recursive-descent parser in the 80s and I learned from pretty standard sources like one of Wirth's textbooks.)


As another data point in addition to the sibling comments, all IntelliJ language parsers use recursive descent with a separate lexer.


Hello Filip -- I hope life is treating you well! I'm happy to clarify a couple of things that might be useful.

First, VM authors I've discussed this with over the years seem roughly split down the middle on microbenchmarks. Some very much agree with your perspective that small benchmarks are misleading. Some, though, were very surprised at the quantity and nature of what we found. Indeed, I discovered a small number had not only noticed similar problems in the past but spent huge amounts of time trying to fix them. There are many people whom I, and I suspect you, admire in both camps: this seems like something upon which reasonable people can differ. Perhaps future research will provide more clarity in this regard.

Second, for BBKMT we used the first benchmarks we tried, so there was absolutely no cherry picking going on. Indeed, we arguably biased the whole experiment in favour of VMs (our paper details why and how we did so). Since TCPT uses 600 (well, 586...) benchmarks it seems unlikely to me that they cherry picked either. "Cherry picking" is, to my mind, a serious accusation, since it would suggest we did not do our research in good faith. I hope I can put your mind at rest on that matter.


I don’t buy it.

- Academics don’t publish results that aren’t sexy. How many people like you ran the same experiment with a different set of benchmarks but didn’t publish the results because they confirmed the obvious and so were too boring? How many times did you or your coauthors have false starts in your research that weren’t published? You’re cherry picking just by participating in the perverse reward system.

- The complexity of the data analysis sure makes it look like you’re doing something smart, but in reality, it’s just an opportunity to cherry pick.

- These results are not consistent with what I’ve seen, and I’ve spent countless hours benchmarking VMs I wrote and VMs I compete with. I’ll believe my own eyes before I believe published research. This leads me to believe there is something fishy going on.

Anyway, my serious accusation stands and it’s a fact that for large real-ish workloads, VMs do “warm up” - they start slow and then run faster, as designed.


I not only welcome reasonable scepticism, but I do my best to facilitate it. I have accrued sufficient evidence over time of my own fallibility, and idiocy, that I now try to give people the opportunity to spot mistakes so that I might correct them. As a happy bonus, this also gives people a way of verifying whether the work was done in the right spirit or not.

To that end we work in the open, so all the evidence you need to back up your assertions, or assuage your doubts, has been available since the first day we started:

* Here's the experiment, with its 1025 commits going back to 2015: https://github.com/softdevteam/warmup_experiment/ -- note that the benchmarks are slurped in before we'd even got many of the VMs compiling.

* You can also see from the first commit that we simply slurped in the CLBG benchmarks wholesale from a previous paper that was done some time before I had any inkling that there might be warmup problems: https://github.com/ltratt/vms_experiment/

* Here's the repo for the paper itself, where you can see us getting to grips with what we were seeing over several years: https://github.com/softdevteam/warmup_paper

* The snapshots of the paper we released are at https://arxiv.org/abs/1602.00602v1 -- the first version ("V1") clearly shows problems but we had no statistical analysis (note that the first version has a different author list than the final version; the author added later was a stats expert).

* The raw data for the releases of the experiment are at https://archive.org/download/softdev_warmup_experiment_artef... so you can run your own statistical analysis on them.

To be clear, our paper is (or, at least, I hope is) careful to scope its assertions. It doesn't say "VMs never warm up" or even "VMs only warm up X% of the time". It says "in this cross-language, cross-VM benchmark suite of small benchmarks we observed warmup X% of the time, and that might suggest there are broader problems, but we can't say for sure". There are various possible hypotheses which could explain what we saw, including "only microbenchmarks, or this set of microbenchmarks, show this problem". Personally, that doesn't feel like the most likely explanation, but I have been wrong about bigger things before!


For those of us with Unix-y mail setups the move to OAuth2 can be a bit tricky, but there are now several different programs to help (spurred, I suspect in no small part, by Microsoft/Exchange's stance). The ones I know about are:

* Email OAuth 2.0 Proxy <https://github.com/simonrob/email-oauth2-proxy>

* mailctl <https://github.com/pdobsan/mailctl>

* mutt_oauth2.py <https://gitlab.com/muttmua/mutt/-/blob/master/contrib/mutt_o...> (some suggestion that it might not always work these days?)

* pizauth <https://github.com/ltratt/pizauth>

* oauth-helper-office-365 <https://github.com/ahrex/oauth-helper-office-365>

Disclaimer: I wrote pizauth and it's just about to move into the alpha stage.


Not only it’s tricky and user-hostile, but it also severely decreases security by forcing people to use fundamentally insecure mechanism to obtain the authentication token.


Could you expand on that? How is OAuth 2.0 fundamentally insecure in this setting?


It makes it necessary to use a browser to obtain the token. That browser is a huge attack surface. With the web, that doesn't matter, since you need to be using a browser anyway, but for mail it's just additional cruft.


That's just for certain flows, like the common authorization code flow. The client credentials flow does not require a browser, for example.

Not sure about Google, but Microsoft supports client credentials for IMAP/POP3[1], but not for SMTP yet. IIRC it was supposed to be rolled out this January but is still missing. Hopefully they can get that deployed ASAP.

[1]: https://learn.microsoft.com/en-us/exchange/client-developer/...
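
For what it's worth, the client credentials flow is just a direct POST to the token endpoint. A rough sketch (this assumes the reqwest crate with its "blocking" feature; the tenant and client values are placeholders):

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        // No browser needed: exchange the client's credentials for a
        // token directly against the token endpoint.
        let resp = reqwest::blocking::Client::new()
            .post("https://login.microsoftonline.com/TENANT/oauth2/v2.0/token")
            .form(&[
                ("grant_type", "client_credentials"),
                ("client_id", "CLIENT_ID"),
                ("client_secret", "CLIENT_SECRET"),
                ("scope", "https://outlook.office365.com/.default"),
            ])
            .send()?
            .text()?;
        // The JSON response contains the access_token to present over
        // IMAP/POP3 via SASL XOAUTH2.
        println!("{resp}");
        Ok(())
    }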


God forbid you actually have functioning token rotation and revocation alongside 2FA. So insecure. /s


What does OAuth have to do with authentication (let alone 2FA)?



More like authorization. Authentication is completely opaque for most people using Gmail (except for those very few using service accounts and signing their own authorization tokens).

Or maybe you can enlighten me as to how you can get the token for XOAUTH2 from just your Gmail address and password without involving any opaque Google service.

Authentication happens completely outside of OAuth, inside some Google black box. 2FA has nothing to do with OAuth at all. It's just another feature of Google's black box, which decides whether to give you the access/refresh tokens or not.


You can use 2FA with static password authentication. Remember, the “password” here only means “character string”; it can easily carry an OTP.


Right, just as with XOAUTH2, the "password" sent to the server is actually the (encoded) OAuth token.
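
Concretely, the "password" sent over IMAP/SMTP is the access token wrapped in a SASL XOAUTH2 string and base64-encoded. A minimal sketch (this assumes the base64 crate):

    use base64::{engine::general_purpose::STANDARD, Engine as _};

    // Build the SASL XOAUTH2 initial response that goes where a
    // password normally would.
    fn xoauth2(user: &str, access_token: &str) -> String {
        // Format: "user=<user>\x01auth=Bearer <token>\x01\x01", base64'd.
        STANDARD.encode(format!("user={user}\x01auth=Bearer {access_token}\x01\x01"))
    }

    fn main() {
        println!("{}", xoauth2("someone@example.com", "ya29.EXAMPLE"));
    }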


So what does it improve, then, that would justify the incompatibility and the added technical debt (dependencies)?


Well, the question I responded to was "What does OAuth have to do with authentication".

I fully agree with the move away from plain passwords in this case, given that it's no longer "just" the password to a mail account, but to much, much more.

Now, while I think OAuth adds some features that can be useful in certain settings, I'm inclined to agree that requiring OAuth isn't the best move.

However, the alternatives would probably require a lot of extra work on Microsoft's part, like being able to set up device-specific passwords or similar.

So, given the need to move away from plain account passwords, I can understand why they didn't want to do that and instead just used what they already had.


Honestly, I haven't tried Xvfb in years. That said, I did have another motive in keeping things simple: even though xwininfo is very X11-specific, I hope it's easy for people to work out an alternative for other platforms and adapt the recipes from my post to their situation.

