Watson is not a simulation any more than your brain is a simulation. Your brain is (likely) mostly just algorithms and data structures, albeit on an enormous scale. This could be wrong, but I'm betting not. What else could the brain possibly be? It's not magic.
> We will know when Watson is dangerous when it feels fear, angst, want. Not just an algorithm to sort facts and simulate speech.
Nah. Look at good science fiction to see why this is false. The most dangerous AI ever conceived might be something like Skynet or HAL 9000: machines that are cold, calculating, and have no emotion whatsoever. But they do hold weapons, they do speak English, and they are intelligent. I'm not suggesting we'll ever build anything like what we see in science fiction, but good writers often are able to predict - or even influence (see Clarke, Asimov) - the future.
Algorithms to sort facts are indeed the beginnings of machine intelligence.
I deny that the human brain IS algorithms. It is neurons connected by dendrites/synapses. Right?
Small bits of it can be algorithmically simulated. Large processes can be algorithmically simulated. But to call the algorithm "intelligence" is sympathetic magic.
Algorithms work in a different way; they break in a different way; they are hard-coded, so they don't change. They are a simulation.
No disrespect intended, but I think you lack a fundamental understanding of machine learning, genetic algorithms, neural networks, and AI in general.
I was introduced to Watson when I took on a sub-project from IBM related to it. I had no idea what it was until taking the contract, at which point I was introduced to Watson, and it is an impressive feat; it is without a doubt the state of the art in NLP AI. Suffice it to say, AI algorithms differ from the static, rigid structures that most developers write for business, web, or mobile apps (even the good developers who decouple things); the scale of the difference is orders of magnitude.
In many cases machine learning has the ability to generate new code based on what it has learned, and to build new connections and new algorithms to deal with learned problems. Applications literally generate new and novel applications and then connect themselves to these new nodes.
This is not the kind of thing usually broached when you just need to credit a line in accounts payable or get the sales volume for last month. There is a huge gap between the code used to write business applications and the structures used in AI.
This is not meant as a critique of you or anyone in particular, and it is not intended to belittle business, mobile, or web developers, but rather to inform people who have not worked in the field of AI that their understanding of software development has little relevance when applied to it. It is literally a different world, where the concepts are totally foreign to a non-AI developer.
I've only spent a relatively short amount of time studying the subjects you mentioned, so correct me if I'm wrong, but the meta-programming aspects you're mentioning here are either highly exaggerated or significantly less impressive than you make them sound.
Machine learning is really computational statistics: it applies fairly standard, well-understood techniques to fit a function to a noisy data set. Genetic algorithms and neural networks are really fancy words for optimization algorithms; they're merely a set of tools (not unlike hill climbing) for searching a large space. The de facto books on AI are PAIP and PPAI. I've read both, and the example programs there, while very interesting, are not much more than a combination of reasonably clever techniques.
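To make that concrete, here is the kind of thing I mean: a toy curve fit by gradient descent, in Python. Everything here (the synthetic data, the constants) is invented for illustration; it's a sketch of the technique, not any particular system.

```python
# A minimal sketch of "machine learning as function fitting": gradient
# descent on squared error recovers a line from noisy data. The data set
# and all constants are made up for illustration.
import random

random.seed(0)
# Synthetic noisy samples of y = 2x + 1.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1)) for x in [i / 10 for i in range(50)]]

a, b = 0.0, 0.0  # model parameters, arbitrary starting point
lr = 0.01        # learning rate

for _ in range(5000):
    # Gradient of mean squared error with respect to a and b.
    grad_a = sum(2 * (a * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (a * x + b - y) for x, y in data) / len(data)
    a -= lr * grad_a
    b -= lr * grad_b

print(a, b)  # ends up near the true values (2, 1)
```

Swap the line for a network and the loop for a fancier optimizer and you have most of the field; the shape of the problem doesn't change.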
"Generating new code" is the same thing as generating a data structure and running a predefined interpreter over it. These systems do that, but in a much more restricted way than you imply. They certainly don't design new algorithms in an intelligent fashion, merely use a set of predefined inference rules, not unlike any other rewriting system.
I don't know anything about Watson, but it is a well-understood fact that every AI system to date is nothing more than a clever marionette (and it's very unlikely that this will change for a very long time). You can't just throw terms around: show an example. In every case so far, a result that initially appears impressive is, once understood, immediately disappointing. They're all clever, but they're a far cry from "self-learning systems" for any reasonable definition of the word "learning".
> but the meta-programming aspects you're mentioning here are either highly exaggerated or significantly less impressive than you make them sound.
I don't think I have exaggerated. The fact that current AI systems are built by developers with a predetermined set of rules is not in dispute; those rules constrain what an AI system will generate. This is analogous to the function of serotonin in the brain, whose level directly affects a facet of our state (happiness, empathy). Machine learning employs the same pattern: here is the serotonin (data), here is the serotonin regulation mechanism (algorithm), and yet the machine learns (like a drug abuser) to defeat the regulation mechanism.

Nonetheless it has to work within the constraints of the system, just as our minds have to work within the constraints of their biological functions; it works within the constraints of the system to find ways to adapt the system. Whether this is impressive or not is subjective to the observer (I personally think it is). Contrasted with biology it is pretty primitive; reward logic is pretty low-level. Nonetheless I think it is impressive.
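Here is a toy version of what I'm describing, in Python. The environment and the reward numbers are entirely invented; the point is that the preference the agent ends up with is learned within the fixed rules, not programmed.

```python
# A sketch of "fixed regulation mechanism, learned behaviour": the reward
# function below is hard-coded (the regulation mechanism), but which action
# the agent favours is discovered by trial and error, not written in.
import random

random.seed(1)

def reward(action: int) -> float:
    # Invented payoffs: the designer might intend action 0, but action 2
    # happens to pay better, and the agent will find that out on its own.
    return {0: 0.5, 1: 0.2, 2: 0.9}[action] + random.gauss(0, 0.05)

estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for _ in range(1000):
    # Epsilon-greedy: mostly exploit the best estimate, sometimes explore.
    if random.random() < 0.1:
        a = random.randrange(3)
    else:
        a = max(range(3), key=lambda i: estimates[i])
    r = reward(a)
    counts[a] += 1
    estimates[a] += (r - estimates[a]) / counts[a]  # running average

print(estimates)  # the agent settles on action 2, within the fixed rules
```

Primitive, as I said, but the behaviour at the end was not specified by anyone; it was found.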
I think when talking about AI a lot of people confuse consciousness with intelligence. While evidence suggests that intelligence is a prerequisite for consciousness, the converse cannot be said. I think we have made great strides simulating the constructs of intelligence on a mechanical level. As for consciousness, we will have to master the former before we know how to tackle the latter. And when most people think about AI they think about the latter, which sets a pretty high bar when measuring the state of the art.
I agree. Consciousness is not required for intelligence. Before I read your post, I was going to mention as well that people's definition of intelligence is not only extremely wide-ranging - running the gamut from memorizing digits of pi or spitting out trivia to self-awareness and introspection - but also inconsistent, changing scope and form based on whether the entity in question is autistic or a machine.
I muse that there could be a race of AI-type intelligences where genius is measured only by the ability to create moving works of art, and any fool can perform advanced mathematics and exceedingly complex computations involving a vast number of variables.
You make a large mistake here. I feel that the only way your argument holds is by assuming its conclusion, which would be: what is going on in [some/most of] our brain is something more than, or very different from, 'just' statistical inference and optimization. Right now we don't know whether this is true or not, but more and more people are beginning to think not.
For example, consider that based on the time of day and the knock, you can guess who is most likely at your door. How can you know this? Have you learned or figured out anything there?
Or say I tell you that a person is of gender X, race Y, and lives in city Z. You will automatically generate an idea of what this person is like. It will be different from what I would generate, and these data points would likely mean nothing of significance to a 3-year-old. Why? Because we have learned a model from our past experience/data. Machine learning also uses generative models to infer situations. In fact, we humans perform a very weak form of machine learning; it goes by the name of stereotyping or profiling.
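Here is roughly what that looks like as code: a one-feature Bayesian "stereotype". The categories, priors, and likelihoods are completely made up; the point is that past "experience" (the tables) drives the guess.

```python
# A sketch of stereotyping as weak machine learning: Bayesian inference
# of a hidden attribute from one observed attribute. All numbers invented.
priors = {"likes_jazz": 0.3, "likes_rock": 0.7}
# P(observation | hidden class), learned from past "experience":
likelihood = {
    ("likes_jazz", "city"): 0.8, ("likes_jazz", "suburb"): 0.2,
    ("likes_rock", "city"): 0.4, ("likes_rock", "suburb"): 0.6,
}

def posterior(observation: str) -> dict:
    # Bayes' rule: prior belief times likelihood, renormalized.
    scores = {c: priors[c] * likelihood[(c, observation)] for c in priors}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

print(posterior("city"))  # the data points mean nothing without the model
```

A 3-year-old lacks the tables, so the observation tells them nothing; that is the whole difference.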
When you are trying to figure something out, the process is not some clean, logical, step-by-step deduction. It is more like a search with dead ends (local optima), backtracking, and restarts: trying and throwing away different ideas. Or take learning a new sport, dance, or flip: you do not work out the physics of the situation to compute the correct amount of impulse to apply. You try again and again, statistically generating a satisfactory approximate local optimum of the correct physics model for the situation at hand.
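In code, that kind of search looks something like this. The landscape is invented; what matters is the dead ends (local optima) and the restarts.

```python
# A sketch of "figuring things out" as search: hill climbing with random
# restarts on a made-up bumpy function with a trap and a true optimum.
import random

random.seed(2)

def score(x: float) -> float:
    # Invented landscape: global optimum at x = 3, plus a small
    # bonus region that creates a local optimum just below x = 2.
    return -(x - 3) ** 2 + (0.5 if 1 < x < 2 else 0.0)

def hill_climb(start: float) -> float:
    x = start
    while True:
        best = max((x - 0.01, x, x + 0.01), key=score)
        if best == x:  # dead end: no neighbour is better
            return x
        x = best

# Restarts are the "throwing away ideas and trying again" part.
best = max((hill_climb(random.uniform(-10, 10)) for _ in range(20)), key=score)
print(best)  # some climbs get stuck near x = 2; a restart finds x = 3
```

No physics model in sight; just repeated tries and keeping whatever scored best.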
As for systems which generate code: we can look at this most literally in terms of those which evolve rules in some way, or more loosely by considering that all machine learning really does is use lots of data to save the programmer from hand-coding a giant restricted system. Regardless of your stance, these systems differ from mere rewriting in that they are not deterministic; they interact with and respond to different situations in varying ways. The more sophisticated methods can develop new algorithms - sets of rules that were not programmed and make no sense to the developer - to cope with their situations. It is true that we provide a base, but that does not mean some limited form of learning is not occurring. What machine learning cannot do that we can is introspect, abstract, and generalize across domains.
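As a toy illustration of rules nobody wrote by hand, here is a minimal genetic algorithm. The representation and fitness function are invented; the winning genome is discovered, not programmed.

```python
# A sketch of evolving solutions: a genetic algorithm breeds bit strings
# toward a target condition via selection, crossover, and mutation.
import random

random.seed(3)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # invented goal for illustration

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect genome has evolved
    survivors = population[:10]
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(len(TARGET))  # one-point crossover
        child = a[:cut] + b[cut:]
        child[random.randrange(len(TARGET))] ^= 1  # point mutation
        children.append(child)
    population = survivors + children

print(population[0], "found in generation", generation)
```

We provided the base (the fitness function, the operators), but the particular genome and the path to it were never specified, and the run is not deterministic unless you pin the seed.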
I am the reverse of you. Before I picked up machine learning I thought the brain was something special. Now I can't help but feel that we are just clever marionettes, and that what's going on is simply mundane mathematics, nature's clever co-opting of physics. I find this amazingly beautiful.
I think anyone who has worked in the field of AI would admit there are a lot of philosophical questions around the definition of intelligence and its relationship to algorithmic problem solving.
One of my favourite thought experiments in the area is Searle's Chinese Room.
Sure. Intelligence, much like religion or faith, is a very personal as well as a philosophical matter for each observer. To me, if a machine can make a logical decision that it arrives at via its own conclusions, independent of human involvement (after the initial construction), it can be said to be intelligent. I personally think we have already achieved intelligent machines: granted, not intelligent in the traditional biological sense, but intelligent in their own right.

I also believe that we are closer to hyper-intelligent machines than many think, and that they will be the next big step. They will fundamentally and irrevocably change humanity. It is a Pandora's box whose ramifications we will not know until we pass that event horizon; much like Schrödinger's cat, it is full of possibilities, but until we experience the event we will not know how it affects humanity.
If you read further on Searle and the Chinese Room, you'll find that most of his argument has been debunked, and what remains is an appeal to common sense: "The man in the room can't really understand Chinese - right, guys?"
How is this anything but pure semantics? I still don't know the conceptual difference between Searle's strong AI and weak AI, other than a single-word change in the definition.
Searle claims through this experiment that strong AI does not exist. The man in the room doesn't know how to speak Chinese; he just matches up symbols using an elaborate dictionary.
Turing defined intelligence by the appearance of intelligence. If the Chinese room can make you think it houses someone who speaks Chinese, then that means the person in the room is intelligent.
To Searle everything is weak AI: just calculation, without knowing what is really being calculated. Intelligence is more than appearance; a hologram of someone is not that person.
Pragmatists and functionalists work around Searle's conclusion: for them it is about the behavior, not the system itself. In Searle's view, something like the China Brain (every person in China using a radio to act as a neuron) is ridiculous; in the functionalist view, the China Brain is intelligent and self-aware.
Arguing that consciousness and the brain are different from deterministic algorithms or neural networks is perfectly valid. It doesn't show a lack of understanding of AI, just that the debater is not an adherent of strong-AI functionalism.
Using Gödel's incompleteness theorems, one can argue that no set of algorithms is capable of perfectly modeling human consciousness. A logically correct algorithm cannot give faulty output yet internally conclude that output to be correct. We do not take the same mental steps as a set of algorithms: I can't say this post is correct with 97.77% accuracy. In fact, I wonder whether you will respect me more or less after this reply, and whether you'll believe I lack understanding of machine learning - not whether I passed the Turing test for intelligence. Calculation != intelligence.
Like the man in the Chinese room, still a puppet that doesn't really understand Chinese. Attaching a radar to a flying drone doesn't make it feel or act like a bat.
How are Gödel's incompleteness theorems an issue here? AI does not have to model the human brain by simulating it. It need not be rigorously/axiomatically defined as a decidable formal system, nor does it need the ability to prove its own consistency. It doesn't even need to be consistent, which makes the incompleteness theorems inapplicable. Heck, the AI could use paraconsistent logical reasoning, or couple Bayesian inference with a suitable multi-valued logic as its base.
I was directly responding to this portion of the parent's post:
> Small bits of it can be algorithmically simulated. Large processes can be algorithmically simulated. But to call the algorithm "intelligence" is sympathetic magic.

> Algorithms work in a different way; they break in a different way; they are hard-coded, so they don't change. They are a simulation.
By the structure of his description, it is apparent that he is reasoning from an application (or "computer", if you will) developer's perspective. My point was that it is flawed to look at AI software as rigid structures; applying traditional development patterns does not reflect the realities of AI development. Put simply, the description of AI as hard-coded paths and developer-written (implied) algorithms is in no way factual.
I agree with you on that. AI has advanced beyond rigid structures and scripts. Using a system that operates with fuzzy logic, or building a neural network and teaching it until you, the programmer, can't make heads or tails of its computations and derivations, is a wonderful thing indeed.
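For a taste of the fuzzy-logic side, here is a tiny controller. The membership functions and rule outputs are made up; the point is that overlapping, graded rules blend instead of firing hard.

```python
# A sketch of fuzzy logic: graded membership functions blend two
# hand-written rules into a smooth output. All shapes/values invented.
def warm(t: float) -> float:
    return max(0.0, min(1.0, (t - 18) / 8))   # ramps up from 18 to 26

def hot(t: float) -> float:
    return max(0.0, min(1.0, (t - 24) / 8))   # ramps up from 24 to 32

def fan_speed(t: float) -> float:
    # Rules: "if warm then medium (0.5)", "if hot then fast (1.0)",
    # weighted by how true each premise is.
    w, h = warm(t), hot(t)
    if w + h == 0:
        return 0.0  # neither rule fires: fan off
    return (w * 0.5 + h * 1.0) / (w + h)

print(fan_speed(26))  # partly warm, partly hot -> a blended 0.6
```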
As an aside: just as Penrose chose quantum physics (a mysterious thing) to explain consciousness (another mysterious thing), and therefore did not succeed in convincing others, so we should guard against using a fuzzy, complex, black-box, dynamic system (a mysterious thing) to explain (or fully model) consciousness and human intelligence.
We just replaced the wonder with another wonder :)
"I deny that the human brain IS algorithms. It is neurons connected by dendrons/synapses. Right?"
"I deny that Watson IS algorithms. It is transistors assembled into logic gates. Right?"
My statement is obviously a silly thing to say. Watson is both algorithms and transistors, depending on how you care to think about it. If you believe that your statement is not equally silly, please explain.
Well, HAL 9000, while cold and calculating, seems scared when he's being turned off, so I'm not sure I agree there are no emotions. Who knows; maybe things like emotion just appear as side effects of the simulation once it becomes complex enough.