Hacker News | EvanWard97's comments

Good points. Somehow typing latency might actually be better, lol.

V8 might just invent like 3 more execution engines though, 1 of which uses an external TPU (open source though!) to run code JITed to HVM (Higher Order Virtual Machine) that everyone is eventually compelled to adopt; one can't be too sure JS will lose. /s


20x more compute isn't much in terms of cryptographic security concerns, no? Ah, but triple-DES was recently deprecated.

Definitely sounds right that we'd get an earlier, heavier emphasis on parallelism and hardware acceleration. I'm guessing the slower speed of causality also applies to propagation delay and memory latencies, so there wouldn't be new motivation for particular architectural decisions beyond "God please make this fast enough for our real-time control systems or human interaction needs".

If we got deep learning years or decades earlier, that also seems scary for AI existential risk, as we are just barely starting to figure out how the big inscrutable matrices work, and that's with the benefit of more time people have had to sound the alarm bells and attract talent and funding for AI interpretability research.


Sounds right to me. Without being able to rely so much on flashy visuals and low latency, games would've had to be somewhat more strategic and intellectual to sell (although I imagine graphics would eventually catch up due to their fitness for parallel processing). Even if brain-rotting visual spectacles were just pushed 7 years down the line, they'd still probably have a more sophisticated flavor that might be cemented with time (e.g. this counterfactual TikTok might have given users much more direct control over their feed algorithm).


We had DOOM and Quake and Fallout 2 back when CPUs were far more than 20x slower than today's.


Regardless of where anyone thinks the maxima are in software trade-off space, developers are going to experiment with shipping software at new points anyway, as markets at known points become saturated and exploration again becomes worthwhile in expectation.

The aspect of this collective optimization process that seems particularly helpful and tractable to focus on is ensuring that users know the risks and benefits of their various options.

I'm more interested in seeing web platforms point users to excellent, impartial 3rd party analyses of their options and their associated risks/benefits, rather than stomp out innovation that some people clearly think is worth trying.


I didn't realize this was Nature, my bad.

And of course I am the one who decides for myself if anything is worth thinking about more. I was simply trying to elicit the sort of information that would be relevant to this decision, such as, "Here's a bit of evidence you might not be aware of that suggests this is more intractable than you may think. Also, considering the existentially pressing X and Y risks on the horizon, which better programmer productivity presumably wouldn't help, you may want to consider that your comparative advantage may be A or B."


Sounds like you already know what questions you want answers to, but didn't actually put them in your prompt, and made it much more open-ended than necessary.

Personally, I think language wars are silly. I enjoy learning new languages, and the more of them there are, the better, so I will not be making any pledges that involve standardizing on a specific language or set of languages.


I am not really interested in having people switch who don't want to and are perfectly happy with their language. I am proposing R&D on a way to coordinate switching among people who would like to switch languages, but only under certain conditions, such as whether there would be jobs in it and sufficient expected growth in attention to the language's development.


Many programming languages would fit a project/team better if they had more community support. It's often worth putting up with worse-designed languages because of their existing libraries and documentation.

Also, the whole 'subjective/objective' distinction is tiresome and doesn't really help here. If we ran an RCT where half of the people started learning and developing in Julia and the other half in Python, both with only the standard library and documentation, and we measured productivity, code performance, dev satisfaction, etc., and perhaps even had the developers eventually switch, we'd likely see data supporting the idea that one of the languages is overall favored by developers more than the other.


My work situation requires me to use both Python and Julia. I'm happy to say that there is great interoperability between these runtimes, so you can easily call a Python function from Julia and vice versa.
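For a taste, here's a minimal sketch of the Julia side using the PyCall.jl package (assuming Julia with PyCall installed; the other direction works via packages like PyJulia or juliacall):

```julia
using PyCall

# Call into Python's standard library from Julia.
pymath = pyimport("math")
println(pymath.sqrt(2.0))   # 1.4142135623730951

# Pass a Julia closure to a Python builtin, e.g. as a sort key.
py_sorted = pybuiltin("sorted")
println(py_sorted([3, 1, 2]; key = x -> -x))
```

Values are converted between the runtimes automatically for common types, which is most of what makes the interop feel easy.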

Clearly, if you find Julia more productive and think that's the future, then that's great! There is no reason to hold back given that you can call out to legacy Python code as needed.

The downside, obviously, is that you have two languages to work with, which is not ideal. Depending on the size of your project and how much appetite you have for migrating code on your longer-term roadmap, you can make a good judgment about how to proceed.

I would have to say the world needs to move forward no matter what. COBOL used to be the best language for business applications, and it clearly went out of favor. The recent events with COVID-19 brought up a clear technical-debt issue. What I'm trying to say is, sooner or later, the code will need to be rewritten.

Google tends to rewrite their software every few years to keep it fresh. That's not a bad model to have for any technology-centric company.


Congrats! There's been a lot of solid object-level advice here... I'll just repeat some basic & meta advice in case anything resonates:

- you can't lose it if you don't spend it (aside from the small negative real yield from inflation outpacing banks' savings interest rates).

- this is enough money to really open up a lot of investment opportunities, so be extremely picky about the first 'good' opportunities that come your way. There's a very good chance that you'll find better opportunities just by waiting a bit longer (where the rate of return will make up for the opportunity cost of not investing).

- this is enough money that it's probably worth reading at least a few books about investing and wealth management. You spend thousands of hours a year to earn $150k; it probably makes sense to spend at least 1/100th of that time becoming informed about how to manage an additional $150k.

- the majority of non-profits and charities suck in terms of their impact-per-dollar efficiency. If you are actually trying to maximize your impact rather than donate to feel good (which is okay too! Just recognize when you are doing so), I'd hold off on donating to charities until you've done at least a few hours of research per, say, $1,000 that you donate. Considering you make around $100 an hour, a few hours of research per $1,000 donated probably isn't unreasonable. Additionally, the best charities aren't just 5 or 10x more impactful than the average ones, but probably hundreds or even thousands of times more impactful.

- don't ignore the peace of mind that a solid runway of a variety of uncorrelated, fairly liquid assets may provide. Regardless of what happens--a solar flare knocks out our electric grid for months, the US defaults, banks can't let you withdraw cash for whatever reason--you want to know that you'll be able to incentivize others' labor and buy goods from other people. It might be worth storing $5k or even more in cash, gold, BTC/ETH/Monero/Zcash/etc. in cold-storage wallets, and perhaps even other 'currencies' like common-caliber bullets or cigarettes, in a safe deposit box and/or a safe at home.

- consider using getguesstimate.com, www.causal.app, or at least Excel/Sheets to try to quantify the different risks and returns of all the options you are considering. The first two apps allow you to easily include uncertainty in your estimates of values, as well as do sensitivity analysis, which can help you decide which model inputs are probably most worth reducing your uncertainty about by researching them further.

- when in doubt about a spending decision, especially if you haven't exhaustively researched and thought about it, just wait a day and sleep on it. And if you don't feel good about it the next day, just wait again. For most people, it's too easy to spend money and too hard to save it. Don't be like most people.
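To make the quantified-uncertainty suggestion above concrete, here's a minimal Monte Carlo sketch using only the Python standard library; the distributions and numbers are made-up placeholders, not market data or advice:

```python
import random

random.seed(0)

def simulate_final_balance():
    """One Monte Carlo draw of a 10-year outcome for a $150k lump sum.
    The return and inflation distributions are illustrative assumptions."""
    balance = 150_000
    for _ in range(10):
        annual_return = random.gauss(0.06, 0.15)  # uncertain market return
        inflation = random.gauss(0.025, 0.01)     # uncertain inflation drag
        balance *= 1 + annual_return - inflation
    return balance

draws = sorted(simulate_final_balance() for _ in range(10_000))
print(f"median:   {draws[len(draws) // 2]:,.0f}")
print(f"5th pct:  {draws[len(draws) // 20]:,.0f}")
print(f"95th pct: {draws[-len(draws) // 20]:,.0f}")
```

Swapping a point estimate for a distribution like this is exactly what tools such as Guesstimate automate, and looking at which input widens the output spread the most is a crude form of sensitivity analysis.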


Thanks for the advice!


`But those animals that do live at depth will clearly need some special adaptations, says Dr Jamieson.

"They'd have to do something clever inside their cells. If you imagine a cell is like a balloon - it's going to want to collapse under pressure. So, it will need some smart biochemistry to make sure it retains that sphere," the scientist explained.`

I don't understand how octopi would need special cellular adaptations for living at those depths. So long as their cells do not require air cavities (I'm fairly certain they don't), I can't see what the issue could be. Differential pressure can cause problems, but there's no delta-p when your cells are equally incompressible solids and liquids.

I hope that I'm wrong though and that this scientist isn't as mistaken as they sound.


High hydrostatic pressure seems to affect cell morphology, possibly due to changes in protein shapes under extreme pressure. I only did some cursory googling and found several research papers about pressure affecting cell morphology, for example this one on epithelial cells: https://pubmed.ncbi.nlm.nih.gov/3052872/

> At atmospheric pressure, cells were flat and well attached.

> Exposure of cells to pressures of 290 atm or greater caused cell rounding and retraction from the substrate.


- Far UVC lights (200 to ~222nm) such as Ushio's Care222 tech. This light destroys pathogens quickly while not seeming to damage human skin or eyes.

- FPGAs. I'm no computer engineer, but it seems like this tech is soon going to drastically increase our compute.

- Augur, among other prediction platforms. Beliefs will pay rent.

- Web Assembly, as noted elsewhere. One use case I haven't read here yet is distributed computing. BOINC via WASM could facilitate dozens more users joining the network.

- Decision-making software, particularly that which leverages random variable inputs and uses Monte Carlo methods, and helps elicit the most accurate predictions and preferences of the user.


I'm an FPGA engineer and I doubt they will go mainstream. They work great for prototyping, low-volume production, or products that need flexibility in features, but they are hard to use (unlikely to get better in my opinion) and it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.
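That transfer-overhead point can be made concrete with back-of-envelope arithmetic; the link speed and timings below are illustrative assumptions, not measurements of any real system:

```python
def offload_worthwhile(data_bytes, link_gbps, cpu_time_s, fpga_time_s):
    """Rough check: does offloading beat staying on the CPU once you pay
    for the transfer out and back? Deliberately ignores pipelining, which
    real designs use to overlap transfer and compute."""
    transfer_s = 2 * data_bytes * 8 / (link_gbps * 1e9)  # out + back
    return transfer_s + fpga_time_s < cpu_time_s

# Hypothetical workload: 1 GB over a ~64 Gbps link (0.25 s round trip).
# A 10x FPGA speedup on a 1 s CPU job still wins...
print(offload_worthwhile(1e9, 64, 1.0, 0.1))    # True
# ...but on a 0.2 s CPU job the transfer overhead eats the gain.
print(offload_worthwhile(1e9, 64, 0.2, 0.02))   # False
```

In other words, the accelerator has to save more time than the round trip costs, which is why FPGAs tend to land where the data already flows through them (line-rate networking, sensor front ends) rather than mid-pipeline.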

That said, they are very cool! And learning to create FPGA designs teaches you a lot about how processors and other low level stuff works.


>it's hard to see where they would fit into a compute pipeline given that you need to transfer the data to the FPGA, perform your computation/processing, and then transfer the data back.

I see them going mainstream when brain-computer interfaces go mainstream (probably a long way away). In my experience working in a couple of labs and on some related hardware, a lot of BCI work depends on processing huge volumes of sensor data (most of which is thrown away due to sheer volume), transferring the results back, and being able to easily update filtration matrices tailored to the sampled data.


FPGAs are too expensive, power-hungry, and large. We use them for many tasks at my workplace, and we are spinning up an ASIC team because using FPGAs just doesn't meet our power and size requirements. Also, building ASICs can be cheaper in the long run if the future of what needs to be done is relatively stable.


> Also, building asics can be cheaper in the long run if the future of what needs to be done is relatively stable.

I don't doubt it, yet I find it hard to describe the human brain over time, especially across people, as "relatively stable"; at least from the perspective of DSP and beamforming of impedance measurements from the scalp to gauge the relative power output at various regions of the brain.


> Far UVC lights (200 to ~222nm)

OK, these are not safe wavelengths, and whatever you're reading is not right. This is absolutely ionizing radiation. The rate of formation of thymine dimers in this regime is similar to that around 260 nm. That is, it causes DNA damage. Please see Figure 8 below:

https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1751-1097....

The logic of the claim that you can destroy a pathogen with UV but not cause damage to human tissues is incongruous. If it kills the pathogen, it also causes radiation damage to human tissues as well. One cannot dissociate these because they are caused by the same photoionization mechanism.


https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5552051/

> We have previously shown that 207-nm ultraviolet (UV) light has similar antimicrobial properties as typical germicidal UV light (254 nm), but without inducing mammalian skin damage. The biophysical rationale is based on the limited penetration distance of 207-nm light in biological samples (e.g. stratum corneum) compared with that of 254-nm light. Here we extended our previous studies to 222-nm light and tested the hypothesis that there exists a narrow wavelength window in the far-UVC region, from around 200–222 nm, which is significantly harmful to bacteria, but without damaging cells in tissues.

> As predicted by biophysical considerations and in agreement with our previous findings, far-UVC light in the range of 200–222 nm kills bacteria efficiently regardless of their drug-resistant proficiency, but without the skin damaging effects associated with conventional germicidal UV exposure.


So if I'm reading correctly, the 207-nm ultraviolet light simply doesn't make it past the outer (dead) layer of skin.


That's not relevant, and the paper itself doesn't really measure anything pertinent either. Ionizing radiation does not cause molecular ionization that stays in one place. It generates free radicals that propagate in reaction chains. Reducing the penetration depth only increases the volumetric dose.
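A toy calculation of that volumetric-dose point (the fluence and penetration depths below are illustrative assumptions, not measured values):

```python
def volumetric_dose(fluence_j_per_cm2, penetration_depth_cm):
    """Energy absorbed per unit volume (J/cm^3), assuming all of the
    incident fluence is deposited within the penetration depth."""
    return fluence_j_per_cm2 / penetration_depth_cm

# Hypothetical figures: identical surface fluence, but far-UVC light
# penetrating ~10x less deeply than 254 nm light.
dose_254 = volumetric_dose(0.003, 10e-4)  # ~10 um penetration
dose_222 = volumetric_dose(0.003, 1e-4)   # ~1 um penetration
print(round(dose_222 / dose_254))         # 10
```

Same total energy, one-tenth the absorbing volume, so roughly ten times the local dose in whatever tissue does absorb it.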


Correct, but I’d still like to see their data as to what the impact is to eye tissue.


FPGAs have been around for quite a while. Is something changing?


Non-stupid open toolchains are slowly happening. Vendor toolchains are the biggest thing holding back FPGAs. Everyone hates them, they're slow, huge, and annoying to use.


One thing that is changing quickly: deep learning, particularly inference on the edge. FPGAs are more versatile than ASICs.


Everyone making ML ASICs would disagree.


This just provides a cost advantage though right? I mean that’s great, love me some margin, but it’s not really a new frontier. Unless I’m wrong?


Dozens!

