Everything is a network of computers. Some networks are very fast (QPI between sockets, PCIe), others only somewhat fast (SATA/SAS, USB, Ethernet). At both ends of the network there are discrete computers (hard disk controller, network card, keyboard, monitor, ...) talking to each other.
Essentially, the difference between reading data from local memory and reading it from, say, Amazon S3 is just the speed/latency of the various networks, and how many computers had to store, transform, and shuffle the data onto another network.
It only gets more amazing when you think about all the analog stuff that happens at the ends of those networks: the button presses of people on their keyboards, light hitting a sensor in a camera, spinning particles of magnetized rust, stored electrons in a flash chip, light travelling through fibers of glass, crystals moving in the pixels of your monitor, and so on. In the end those analog actions are what make it "real" and what the whole network of networks is built for.
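A rough way to see that local-memory-vs-S3 gap is to time both reads side by side. This is only a sketch: the bucket name and object key below are made up, and it assumes boto3 is installed with working AWS credentials. Both operations are logically the same "give me those bytes"; the orders-of-magnitude difference comes from how many networks and computers sit in between.

    import time
    import boto3

    data = bytes(1024 * 1024)          # 1 MiB that is already sitting in local memory

    t0 = time.perf_counter()
    local_copy = bytes(data)           # "read" it: copy the buffer out of RAM
    t1 = time.perf_counter()

    s3 = boto3.client("s3")            # assumes AWS credentials are configured
    t2 = time.perf_counter()
    remote = s3.get_object(Bucket="example-bucket",        # hypothetical bucket
                           Key="example-1mib.bin")["Body"].read()
    t3 = time.perf_counter()

    print(f"local memory copy: {(t1 - t0) * 1e6:10.1f} us")
    print(f"S3 GET over HTTP:  {(t3 - t2) * 1e6:10.1f} us")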
It seems to me like a big difference between multi-core CPUs and conventional networked systems is the expectation of reliability. For example, when a CPU core randomly fails, you don't expect the CPU as a whole to continue operating, do you?