Or videos, Udacity courses, etc. - hard stuff, not novels. Two birds, one stone: the blood-pumping and endorphins help maintain focus. I get a quality hour of education every day, where before it was a constant TODO.
I was prescribed Adderall for ADHD growing up, from 14 to 25. Towards the end, doctors were loath to continue my prescription, until finally one refused and I haven't been able to get it since. Their story has always been: it's an amphetamine, and it comes with all those risks and health concerns, particularly around heart health.
When I took it, it felt like that movie "Limitless". Superpower concentration. I took their word on the health bit though - no free lunch - so I stayed away. I developed A-fib (atrial fibrillation) at age 30, which is very rare that young. Could be any number of things, most notably genetics (though I'd be my family's first); but doctors to whom I mention Adderall all have this "ahhhhh" reaction. "Could be something else, but if I were a betting man..."
Frankly, I've always figured the way Adderall abusers use it - here and there, for finals or work deadlines - couldn't be that dangerous, unless it becomes a habit. I (like many others) was prescribed 1x/day for ~10 years. Seems to have caught up with me, but that's relatively heavy usage. I certainly don't condone it, just brain-dumping my experience.
I'm with the others on this. Never mind the cringe - he's all show, so much so that I think he's bluffing (doesn't know ML). He amps up the "character" so much that you're excited for the knowledge drop - and when it comes, it's so fast and technical there's nothing to gain from it. The adage "if you can't explain something simply, you don't understand it" applies. I was hoping he understood ML well enough to boil things down; instead he spews equations and jargon so fast that (1) you don't catch it, and (2) I suspect he's just reading from a source. He doesn't go for essence, he goes for speed - and that's not helpful.
Again, the cringe isn't the problem directly; the problem is that it covers for his bluff. The result is a resource that isn't newbie-friendly.
I just checked out the "About" section of his YouTube channel.
> I've been called Bill Nye of Computer Science, Kanye of Code, Beyonce of Neural Networks, Osain Bolt of Learning, Chuck Norris of Python, Jesus Christ of Machine Learning... but it's the other way. They are the Siraj Raval of X
* Book: Hands-On Machine Learning w/ Scikit-Learn & TensorFlow (http://amzn.to/2vPG3Ur). Theory & code, starting from "shallow" learning (eg Linear Regression) on scikit-learn, pandas, and numpy; then moving to deep learning with TF.
I wouldn't compare AI to Mars colonization. AI is coming in strong and we've made tremendous progress - Mars colonization is still in its infancy / theoretical stage. AI is a constantly moving target of a definition; by all accounts, we've "achieved" AI already if you ask someone from 50 years ago. Art, music, conversation, research, ... If he wants to say "what if the Singularity never happens", that's fine and good - but it just seems weird to me to say "what if AI never happens." It's like saying "what if self-driving cars never happen" just because he's not driving one yet.
False starts: in this regard, AI is like VR. VR had its own winter too, after the Virtual Boy and the like. We're in VR's second act now, same as AI - and this time both are making a very strong case, and a lot of money. I'd put my money on both horses now.
So much value. Cross-platform compatibility (browser/JS, server/Node, mobile/React-Native, robotics/Johnny-Five, etc). Built-in asynchronous execution of nodes (a boon in ANN architectures).
Then there's dev mindshare. So many people know JS that empowering them would add bodies to meet rising ML demand. I learned Python specifically for TensorFlow. Python is easy to learn, but like any language it takes a long time to master. I had mastered JS, so Python was a frustrating little reset.
All that said, this cazala/synaptic project doesn't look promising to me except as a showcase. Better to focus on exposing JS APIs on top of existing GPU-runnable computation-graph frameworks, eg node-tensorflow (https://github.com/node-tensorflow/node-tensorflow).
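For a sense of what that showcase looks like, here's a minimal sketch based on synaptic's documented XOR example (written as TypeScript; assumes `npm install synaptic`, and that you add a type shim like `declare module 'synaptic';` if no declarations are available):

```typescript
// Minimal sketch of cazala/synaptic usage, following its README's XOR example.
import { Architect, Trainer } from 'synaptic';

// A small multilayer perceptron: 2 inputs, 3 hidden neurons, 1 output.
const net = new Architect.Perceptron(2, 3, 1);
const trainer = new Trainer(net);

// Train on the XOR truth table.
trainer.train(
  [
    { input: [0, 0], output: [0] },
    { input: [0, 1], output: [1] },
    { input: [1, 0], output: [1] },
    { input: [1, 1], output: [0] },
  ],
  { rate: 0.2, iterations: 20000, error: 0.005 }
);

console.log(net.activate([0, 1])); // should print something close to [0.98]
```

Nice API for learning, but it all runs on the CPU in plain JS, which is exactly why I'd rather see JS bindings onto a real computation-graph runtime.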
Or possibly 'extrapolation'. We've seen science explain magic time and again: eg, what was previously an evil spiritual infestation is now a bacterial infection. Think on the analogy: a metaphysical phenomenon became physical (and observable/manipulable). Unless you're a dualist, you buy that the brain (a science-accessible object) equals the mind in some fundamental way (from MRIs, brain damage, etc). Maybe the brain is connected to a separate physical phenomenon not yet observed, or maybe it creates the mind by emergence in a way that's accessible not inductively but only deductively (through information theory and the like, per this article) - either way, it falls within science's reach. So yes, 'faith' in science to do what it does best - but 'extrapolation' from prior scientific achievements in explaining magic.