> Even if it were a daemon, it would still require spool-up time to interpret the bytecode at first, before the built-in JVM optimizing compiler could kick in and compile it down to native machine code.
The first time you loaded that bytecode, sure. Every time after, it should hash the bytecode it's being told to load and use that hash as a key into a disk cache of already-profiled-and-JITed object code. In fact, the application could ship with those cache files (like a pNaCl binary shipping with its native counterpart), or the developers could dump (signed copies of) those cache files into a DHT that the runtime would consult as an extended cache.
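For concreteness, here's a rough sketch (in Java, since that's the runtime under discussion) of what that hash-keyed lookup could look like. Everything here is hypothetical: NativeCodeCache, the `.jitcache` file layout, and the DhtClient interface are made-up names for illustration, not anything an existing JVM exposes.

```java
import java.nio.file.*;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Optional;

// Hypothetical cache of already-profiled-and-JITed object code, keyed by a
// hash of the bytecode being loaded. Not how any current JVM actually behaves.
final class NativeCodeCache {
    private final Path localDir;    // filled in by previous runs on this machine
    private final Path shippedDir;  // cache files bundled with the application
    private final DhtClient dht;    // developers' signed caches, as an extended tier

    interface DhtClient {
        // Expected to verify the developers' signature before returning bytes.
        Optional<byte[]> fetchVerified(String key) throws Exception;
    }

    NativeCodeCache(Path localDir, Path shippedDir, DhtClient dht) {
        this.localDir = localDir;
        this.shippedDir = shippedDir;
        this.dht = dht;
    }

    /** The bytecode is only used as a (really long) key: hash it, then probe
     *  each cache tier for the corresponding native object code. */
    Optional<byte[]> lookup(byte[] bytecode) throws Exception {
        String key = keyFor(bytecode);
        for (Path dir : new Path[] { localDir, shippedDir }) {
            Path candidate = dir.resolve(key + ".jitcache");
            if (Files.exists(candidate)) {
                return Optional.of(Files.readAllBytes(candidate));
            }
        }
        return dht.fetchVerified(key);
    }

    /** Populate the local tier so the next load of this bytecode is a pure lookup. */
    void store(byte[] bytecode, byte[] objectCode) throws Exception {
        Files.createDirectories(localDir);
        Files.write(localDir.resolve(keyFor(bytecode) + ".jitcache"), objectCode);
    }

    static String keyFor(byte[] bytecode) throws Exception {
        return HexFormat.of()
                .formatHex(MessageDigest.getInstance("SHA-256").digest(bytecode));
    }
}
```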
Either way, the API is still "load this bytecode", but the bytecode is effectively just a really long key that looks up the real code that should be loaded. It's only in the absolutely-cold path that the JIT (acting as the load-time compiler) kicks in and actually parses and compiles the bytecode into the required native code.
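Tying it together, the "load this bytecode" entry point might look roughly like the sketch below, reusing the hypothetical NativeCodeCache above; jitCompile() is a stand-in for handing the bytecode to the real optimizing compiler, not an actual JVM hook.

```java
// The public API stays "load this bytecode". The cache tiers make the warm
// path a pure lookup; only the absolutely-cold path parses and compiles.
final class CachingLoader {
    private final NativeCodeCache cache;  // from the sketch above

    CachingLoader(NativeCodeCache cache) { this.cache = cache; }

    byte[] load(byte[] bytecode) throws Exception {
        var cached = cache.lookup(bytecode);
        if (cached.isPresent()) {
            return cached.get();                  // warm path: no compilation at all
        }
        byte[] objectCode = jitCompile(bytecode); // cold path: JIT as load-time compiler
        cache.store(bytecode, objectCode);        // every later load is a hash lookup
        return objectCode;
    }

    // Hypothetical stand-in; a real runtime would hand this to its JIT.
    private static byte[] jitCompile(byte[] bytecode) {
        throw new UnsupportedOperationException("stand-in for the optimizing compiler");
    }
}
```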