The content in the article is a bit dated. Jupyter + Numba + Dask is the direction scientific computing (in Python) is taking. Ipyparallel is not really scalable in my experience.


This is an extremely limited view of "scientific computing" that seems to only focus on analytics, which is a tiiiny part of sci comp.

Your "stack" does nothing for solving/including ODEs, PDEs, DAEs, Fourier analysis, numerical integration, automatic differentiation, linear equation system solvers, preconditioners, nonlinear equation system solvers, the entire field of optimization, inverse problems, statistical methods, Monte Carlo simulations, molecular dynamics, PIC methods, geometric integration, lattice quantum field theory, molecular dynamics, ab initio methods, density functional theory, finite difference/volume/element methods, lattice Boltzmann methods, boundary integral methods, mesh generation methods, error estimation, uncertainty quantification...

Those are just off the top of my head, the list goes on and on.


I completely agree that there are many scientific libraries in Python that scale up. I was addressing the article, which showed a more advanced way to use Python with the goal of making it applicable to large datasets. If you implement a method from scratch, or scale one up to a larger dataset, you'll end up using Numba, NumPy, and Dask. This is from a lower-level programming perspective, where you implement and integrate methods yourself rather than pipelining methods from higher-level scientific libraries.
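To make that concrete, here's a minimal sketch of what I mean (the kernel and array sizes are made up for illustration): a Numba-compiled inner loop mapped over the chunks of a Dask array, so the hot loop runs at compiled speed while Dask handles scheduling the blocks.

    import numpy as np
    import dask.array as da
    from numba import njit

    # Hypothetical inner loop you might otherwise drop into C/Fortran for;
    # Numba compiles it to machine code the first time it's called.
    @njit
    def damped_sine(block):
        out = np.empty_like(block)
        for i in range(block.shape[0]):
            out[i] = 0.5 * block[i] + 0.25 * np.sin(block[i])
        return out

    # A chunked array; Dask schedules the chunks across cores (or a
    # cluster), calling the compiled kernel on each block.
    x = da.random.random(200_000_000, chunks=20_000_000)
    y = x.map_blocks(damped_sine, dtype=x.dtype)

    print(y.mean().compute())

Point Dask at a distributed scheduler and the same code scales out to a cluster without touching the kernel.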

Just for some context: https://www.scipy.org/about.html https://www.scipy.org/topical-software.html


I have yet to see a situation where Numba makes real sense, as compared to just dropping down into C(++) or Fortran when you need to do the heavy lifting. Can you give me a good example?


I thought so, too - enough that I checked the date, but the date on the article is yesterday. Maybe it was written a while ago and just now uploaded. I thought it was strange to refer to IPython as a standalone interpreter outside of Jupyter... I mean, you can, but I don't think anybody does anymore.


> I don't think anybody does anymore

Counter-example: I use IPython in the terminal, outside of Jupyter.


So do I, and many others. Nowadays you can type "jupyter console", which gets you almost the same thing; and JupyterLab has console components.


Me too! I actually hook up an IPython terminal to a Jupyter notebook in order to do REPL-type work in the terminal. Much better to work in a REPL terminal while also saving my work in the notebook.


Synthego's custom CRISPR kit seems like a pretty useless service. Guide RNA design for a knockout is trivial. You just need a couple of 20 bp + NGG sequences that are unique to the gene. Who would pay for that?
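To illustrate what I mean by trivial, a forward-strand-only sketch (made-up sequence; a real design would also scan the reverse complement and check uniqueness genome-wide):

    import re

    # Made-up example sequence; a real run would use the gene's actual
    # coding sequence and also scan the reverse complement.
    gene_seq = "ATGGCTGACGTTACCGATCGATTGGCCTAGGCATCGGAATTCGGTACGCAGG"

    def candidate_guides(seq):
        # 20 bp protospacer immediately followed by an NGG PAM;
        # the zero-width lookahead lets overlapping sites all be reported.
        return [m.group(1) for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", seq)]

    print(candidate_guides(gene_seq))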


That doesn't interfere with any other part of the genome... That's a hard compute problem.
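For a sense of scale, here's a rough sketch of the naive version of that check (hypothetical helper, assuming the genome is loaded as one string). For a human genome that's roughly 3 billion positions per candidate guide, which is why real pipelines use indexed aligners such as Bowtie or BWA, or precomputed scores, instead of this loop:

    # Naive near-match counter for one candidate guide across a genome
    # string; this sliding scan is exactly the part that gets expensive.
    def count_near_matches(guide, genome, max_mismatches=3):
        k = len(guide)
        hits = 0
        for i in range(len(genome) - k + 1):
            mismatches = 0
            for a, b in zip(guide, genome[i:i + k]):
                if a != b:
                    mismatches += 1
                    if mismatches > max_mismatches:
                        break
            else:
                hits += 1
        return hits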


Do you mean the effect of the actual gene knockout, CRISPR specificity, or off-target mutations?


CRISPR specificity. Designing good gRNAs isn't quite trivial, although it's not as hard as the article makes it seem. I'm not sure about the claim of reducing the time of experiments by "months".


http://crispr.mit.edu/about

Designing a good gRNA is quite straightforward compared to most other bioinformatics tasks. Even designing a degenerate primer is more complex than this.

