This is a valid point and a real challenge for brain-computer interface (BCI) technology. BCI work is largely aimed at helping people with locked-in syndrome, who have no reliable motor control at all, including eye movement. If you do have the ability to reliably execute any kind of motor control, such as eye movement or a muscle twitch, that can be exploited for interface control that is more effective and durable than the current state of the art in BCI.
As stated elsewhere, this can actually be a very frustrating process. I lost a good chunk of my long weekend trying to build TF from source for CUDA 8.0 / cuDNN 5.1. Generally speaking, the culprit is that the CUDA installers for Linux are highly dependent on your kernel and gcc versions. This is a huge headache for people who want to stay up to date on their distro packages. CentOS has no problem because hardly anything changes, but you're essentially handcuffed to whatever versions of Ubuntu or Fedora were out when NVIDIA decided to start packaging up the next release. Bumping gcc to 5.4 in Ubuntu 16.04.1 broke the 16.04 installer, which relied on gcc 5.3.
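A quick pre-flight check before running the installer can save a weekend. A minimal sketch in plain Python; the "expected" gcc prefix below is just an assumption based on my 16.04 experience, so check NVIDIA's release notes for the toolkit version you are actually installing:

    # Rough pre-flight check before running a CUDA installer on Linux.
    # The expected gcc prefix is an assumption from my CUDA 8.0 / 16.04 setup;
    # check the release notes for the toolkit version you are installing.
    import platform
    import re
    import subprocess

    EXPECTED_GCC_PREFIX = "5.3"   # gcc the installer was built against (assumed)

    def gcc_version():
        out = subprocess.check_output(["gcc", "--version"]).decode()
        return re.search(r"\d+\.\d+\.\d+", out).group(0)

    print("kernel:", platform.release())
    print("gcc:   ", gcc_version())
    if not gcc_version().startswith(EXPECTED_GCC_PREFIX):
        print("warning: gcc does not match the version the installer expects")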
Reinforcement learning is, in my opinion, one of the most exciting areas of research in machine learning and AI right now. It is going to play a major role in creating AI that can make decisions in dynamic environments.
A great introduction to the topic is the book Reinforcement Learning: An Introduction by Sutton & Barto. You can find the official HTML version of the 1st edition and a PDF of a recent draft of the 2nd ed. here: https://webdocs.cs.ualberta.ca/~sutton/book/the-book.html
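If you want a feel for the core algorithm the book builds up to, here is a minimal tabular Q-learning sketch on a toy chain environment. Everything in it (the environment, the hyperparameters, the names) is made up for illustration, not taken from the book's code:

    # Minimal tabular Q-learning on a toy 5-state chain (illustrative only).
    # The agent starts at state 0 and gets reward 1 for reaching state 4.
    import random

    N_STATES, ACTIONS = 5, (0, 1)          # action 0 = move left, 1 = move right
    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # step size, discount, exploration rate
    Q = [[0.0, 0.0] for _ in range(N_STATES)]

    def step(state, action):
        nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

    for episode in range(500):
        s, done = 0, False
        while not done:
            if random.random() < EPSILON or Q[s][0] == Q[s][1]:
                a = random.choice(ACTIONS)       # explore, or break ties randomly
            else:
                a = Q[s].index(max(Q[s]))        # exploit the greedy action
            s2, r, done = step(s, a)
            # Q-learning update: bootstrap off the greedy value of the next state.
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2

    print(Q)  # "move right" should end up with the higher value in each non-terminal state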
He wrote the Learning OpenCV book, but I'm not sure it's accurate to say he created OpenCV... It was created internally at Intel (by a ton of folks like Vadim Pisarevsky) and later open sourced.
Thanks! I didn't know the history. These notes are from 2008 (long after OpenCV was open sourced), but it's clear from his LinkedIn page that he was part of the founding team at Intel.
IBM's TrueNorth chip takes a much more neuromorphic design approach, trying to approximate networks of biological neurons. They are investigating a new form of computer architecture that moves away from the classic von Neumann model.
TPUs are custom ASICs that speed up math on tensors, i.e. multidimensional arrays that generalize matrices. Tensors feature prominently in artificial neural networks, especially deep learning architectures. While GPUs help accelerate these operations, they are optimized first and foremost for video rendering/gaming; compute-specific features are mostly tacked on. TPUs are optimized solely for ML-related computation.
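To make "math on tensors" concrete: the bread-and-butter workload is large batched matrix multiplies, like a single dense layer applied to a batch of inputs. A toy NumPy illustration (shapes and sizes are arbitrary; this just shows the kind of operation the hardware is built around):

    # The kind of operation accelerators are built around: a dense layer applied
    # to a batch of inputs is one big matrix multiply plus a bias add.
    # Shapes and numbers are arbitrary, chosen only for illustration.
    import numpy as np

    batch, features_in, features_out = 128, 1024, 512
    x = np.random.randn(batch, features_in).astype(np.float32)          # activations
    W = np.random.randn(features_in, features_out).astype(np.float32)   # weights
    b = np.zeros(features_out, dtype=np.float32)                        # bias

    y = x @ W + b      # (128, 1024) @ (1024, 512) -> (128, 512)
    print(y.shape)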
You absolutely do not need a PhD for industry unless you want an R&D job in a handful of domains. A PhD is not simply learning more facts about a particular topic. It's an apprenticeship in conducting independently directed academic research. There are many topics you cannot just learn from reading online sources, e.g. most experimental work. Most of the time in a typical PhD program is spent trying to solve problems that have no easy answers and no ready guidelines to follow. While you can definitely gain similar knowledge and experience in an industry setting, you almost never have the freedom to take the 3+ years often necessary to explore a narrow topic, struggle and fail repeatedly, face and overcome crushing doubt and frustration, and do so in a generally supportive community.
> You absolutely do not need a PhD for industry unless you want an R&D job in a handful of domains.
I know PhDs at {MSFT, Google, Uber, DeepMind, GE, P&G} research divisions. I don't know anyone who works at those divisions who does not have a doctorate. So I would say this applies to more than 'a handful of domains': if you want to do research, you should plan on having a doctorate, even in industry. The majority of data scientists I know have doctorates, even if that strikes me as (generally) overkill.
Otherwise, I agree with the rest of your statements.
What monopoly? You absolutely have a choice; it's just that NVIDIA made a large bet on GPGPU and it is paying off for them. You don't see AMD heavily pushing their cards for compute purposes or building developer relations around compute.
NVIDIA does not have a monopoly in the traditional sense. But yes, they have a de facto one because there is no viable competition.
It's like saying MATLAB has a monopoly in academic research because so much of the code is written in it. That is slowly changing, with more work moving over to Python, which is great. Maybe OpenCL will get there someday, but I don't see it happening any time soon.
This is wrong. No mainstream deep learning library uses OpenCL, and the non-mainstream ones that do are much, much slower. I remember reading up to 10x slower, but I can't seem to find the reference right now.
You are correct. My initial response was a pedantic point about the semantic use of monopoly in this context, which isn't helpful.
I would love it if AMD cared more about GPGPU, but they don't, and NVIDIA has little incentive to make their OpenCL drivers as good as their CUDA ones.
Ideally, yes, we want to pre-train in a virtual environment using a model as close to the real robot as possible. I worked on such a problem as part of my PhD research on mobile robots, using the Webots simulator (https://www.cyberbotics.com/overview) as my virtual environment.
In my case, I was working on biologically-inspired models for picking up distant objects. It's impractical to tune hyperparameters in hardware, so you need to be able to create a virtual version that gets you close enough. Once you can demonstrate success there, you then have to move to the physical robot, which introduces several additional challenges: 1) imperfections in your actual hardware behavior vs. the idealized simulated one, 2) real-world sensor noise and constraints, 3) dealing with real-world timing and inputs instead of a clean, lock-step simulated environment, 4) having a different API for polling sensors and actuating servos between the virtual and hardware robots (a thin abstraction layer helps here; see the sketch below), and 5) ensuring that your trained model can be transferred effectively between the virtual and hardware robot control systems.
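For point 4 in particular, what helped me conceptually was hiding the two robot APIs behind one small interface so the controller never knows which backend it is driving. A rough sketch of that idea (the class and method names here are invented for illustration; this is not Webots' actual API or my original code):

    # Sketch of a thin hardware-abstraction layer so the same controller can
    # drive either the simulated or the physical robot. All names here are
    # invented for illustration; this is not the Webots API or a real driver.
    from abc import ABC, abstractmethod

    class RobotBackend(ABC):
        @abstractmethod
        def read_sensors(self) -> dict:
            """Return sensor name -> value, already converted to common units."""

        @abstractmethod
        def set_joint_targets(self, targets: dict) -> None:
            """Command joint angles in radians; the backend does the conversion."""

    class SimulatedRobot(RobotBackend):
        def read_sensors(self):
            return {}      # poll the simulator's sensor nodes here

        def set_joint_targets(self, targets):
            pass           # call the simulator's motor API here

    class PhysicalRobot(RobotBackend):
        def read_sensors(self):
            return {}      # read real sensors here, filtering noise as needed

        def set_joint_targets(self, targets):
            pass           # send commands to the real servo controller here

    def control_step(robot: RobotBackend, policy):
        # The learned controller only ever sees this interface, so swapping
        # backends does not require touching the control code.
        obs = robot.read_sensors()
        robot.set_joint_targets(policy(obs))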
I was able to solve these issues for my particular constrained research use case, and was pretty happy with the results. You can see a demo reel of the robot here: https://www.youtube.com/watch?v=EoIXFKVGaXw
This was totally unexpected. When I saw the Kickstarter update email in my inbox, I just assumed we were getting to jump the pre-order line. This was a great goodwill move on their part.