> This works exactly as intended! We were able to edit our program while it was running, and then re-run only the part that needed fixing. In some sense, this is an obvious result—a REPL is designed to do exactly this: allow you to create new code while inside a long-running programming environment. But the difference between Jupyter and a REPL is that Jupyter is persistent. Code which I write in a REPL disappears along with my data when the REPL exits, but Jupyter notebooks hang around. Jupyter’s structure of delimited code cells enables a programming style where each cell can be treated as an atomic unit: if it completes, its effects are persisted in memory for other code cells to process.
> More generally, we can view this as a form of programming in the debugger. Rather than code creation and code execution being separate phases of the programming cycle, they become intertwined. Jupyter performs many of the functions of a debugger—inspecting the values of variables, setting breakpoints (the ends of code cells), providing rich visualization of program intermediates (e.g. graphs)—except that the programmer can react to the program’s execution by changing the code while it runs.
I just don't understand what's so amazing about this. This is totally standard debugging. In Python you can do it with the built-in debugger module pdb:
import pdb; pdb.set_trace()
Or you can run your script with
python -m pdb script.py
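For concreteness, here is a minimal sketch of the set_trace() approach; the function and the data are made up purely for illustration:

    import pdb

    def compute_total(prices):          # hypothetical example function
        total = 0
        for p in prices:
            pdb.set_trace()             # execution pauses here: inspect p and total, step with n/s, continue with c
            total += p
        return total

    print(compute_total([1.5, 2.25, 3.0]))

(Since Python 3.7 the built-in breakpoint() call does the same thing by default, without the explicit import.)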
Also, the REPL doesn't just exit; it only exits if you let it. E.g. if you run the script as
python script.py
it will exit, but you can also call it as e.g.
python -i script.py
or do any number of things and it will not exit.
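As a sketch (the script contents here are just for illustration), if script.py contains

    # script.py -- throwaway example
    data = [x * x for x in range(10)]
    total = sum(data)
    print("total:", total)

then `python -i script.py` prints the total and drops you into a >>> prompt where data and total are still defined, so you can keep inspecting them. It does the same if the script dies with an exception, which is handy for post-mortem poking around.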
I mean, it's great that more people are debugging their code, but calling this a feature of Jupyter is a little ridiculous. The exact same feature exists in the Python REPL, and that's what's reflected in Jupyter.
I think the article's point is that you don't restart script.py again and again: you start it once, modify it while it's running, and once you're happy, the script.py saved on your hard disk is that final debugged version.
You can sort of do something like this with REPLs, by copying code back and forth between an editor window and a REPL window until you are satisfied that it's what you want. But you have to keep the REPL and the editor in sync manually. For example, if by experimenting in the REPL or debugger you find a bug that requires changes to three functions, you must either change them in the editor and copy all of the changes to the REPL to test the new program state, or redefine them in the REPL's less-than-ideal editor and then make sure to copy the updated versions back to the real editor to save them in the source file.
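One common, partial way to keep the two in sync is to edit the functions only in the source file and reload the module in the running REPL; this is just a sketch of that workaround, not what the article describes, and the module and function names are hypothetical:

    >>> import importlib
    >>> import mymodule                 # hypothetical module under development
    >>> mymodule.process()              # observe the bug
    >>> # fix process() and its helpers in the editor, save the file, then:
    >>> importlib.reload(mymodule)
    >>> mymodule.process()              # re-run with the edited definitions; other REPL state survives

Even then, objects created before the reload keep using the old definitions, so it's a workaround rather than a real fix.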
I've worked with a few systems that were not REPL-based but more notebook-based, and it's very cool if the system keeps track of stuff for you. In particular, Coq and Isabelle have such environments.
(I've never worked with Jupyter, but I think I should give it a try.)
After programming my whole life, I have to say this industry is surprisingly math-averse, regressive, and led by cargo cults.
What wheel will we reinvent next week!? Stay tuned...