I just had a 15-minute conversation with a shitty Markov-chain machine that, after I stated the name I'd like to be called, responded the same way every time I asked "What is my name?" and "By what name did I ask that you call me?"
When asked to describe myself, I mentioned my height: 7'. The response noted that this is very tall. Later in the conversation, when I asked how tall I had said I was, the response was '5 feet tall'.
The entire concept of AI isn't responses and natural language so much as the ability to retain information and act on later references to it. Anyone can slap together a CashChat script that, upon each mention of genitalia, responds with how turned on it is. This isn't far from that.
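To make that concrete, here's a minimal sketch (hypothetical, in Python; not anything Tay actually runs) of that kind of stateless keyword-and-canned-response script:

```python
# Minimal sketch of a stateless keyword-triggered chatbot: every reply is a
# canned string chosen by scanning the incoming message for trigger words.
# Nothing the user says is retained, so no later question about it can be answered.

CANNED_REPLIES = {
    "name": "I have you stored as HUMAN19282301-11. JK LOL I know that you told me.",
    "zip": "i think there are things going on in the area idk",
}
DEFAULT_REPLY = "IDK makes sense to me lol"

def reply(message: str) -> str:
    text = message.lower()
    for trigger, canned in CANNED_REPLIES.items():
        if trigger in text:
            return canned
    return DEFAULT_REPLY

# The same question always produces the same answer, no matter what came before:
print(reply("What's my name again?"))
print(reply("My name is Alex. What name did I ask you to call me?"))
```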
I'm always interested when I hear "the more you interact, the smarter it becomes." That isn't the case here. What if the responses back are little more than learned phrasing about what should go where; if "That doesn't make much sense" gets "IDK makes sense to me lol" instead of triggering any mechanism for gradual weight correction; if the message after a ZIP code is provided is "i think there are things going on in the area idk", with every later reference to what's going on in that ZIP coming back nonsensical; and if it can't reference literally the first question that //it asked me//?
Then it isn't AI. Intelligence implies continued application of learned mechanisms. This isn't that.
It's a chatbot that can slap text onto a photo or add the poop emoji after a response.
It may be worth noting that Microsoft specified the exact list of personalized bits of information it would store on individual users.
The 'learning' indicated is definitely with regard to language only. They make it clear they're studying "conversational understanding".
But as it only stores the following about users -- nickname, gender, favorite food, zipcode, and relationship status -- they've already informed you up front that it won't store your height.
If it stores my nickname, why won't it repeat that name back? Ask it what your name is: "What's my name again?" or "What name did I ask you to call me?"
Every single response I got back was, "I have you stored as HUMAN19282301-11. JK LOL I know that you told me."
There was no deviation from that response. Same response every time. To the level of sameness as if I had talked with a chatbot looking for me to watch her 'sup3rhot camsho' and typed the word 'penis' -- "omg r u hard i m wet". Same response. Over. And over. And over.
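For contrast, actually using a stored nickname to answer that question is about as trivial as programming gets. A rough sketch, with hypothetical field names:

```python
# Sketch of the bare minimum needed to answer "what's my name?" once a nickname
# has been stored. Field names are hypothetical; the point is that recall is a
# one-line dictionary lookup, not a hard conversational-understanding problem.

user_profile = {}  # keyed by user id

def remember_nickname(user_id: str, nickname: str) -> None:
    user_profile.setdefault(user_id, {})["nickname"] = nickname

def answer_name_question(user_id: str) -> str:
    nickname = user_profile.get(user_id, {}).get("nickname")
    if nickname is None:
        return "You haven't told me your name yet."
    return f"You asked me to call you {nickname}."

remember_nickname("user42", "Alex")
print(answer_name_question("user42"))  # -> You asked me to call you Alex.
```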
I get that the method here is to use questions about the user to mask a lack of conversational understanding. Users will always talk about themselves. Hell, humans as a whole will always talk about themselves: to machines, to themselves, and often to pets. So when a partly non sequitur response is given but followed with a composed question -- people can sometimes look past it.
"It just said it was a fish meme but it wants to know how my day was. God my boss is such a dick. Let me tell you about what he did..."
Asking someone a subjective question about themselves is a sort of blind spot in that respect. That's not, like, The Byronic Hero's Law of Talking: it's just an observation from working with similar machine-learning conversational mechanisms. I could be way off, and it's very much dependent on willingness to play along, ego, and how bad your day actually was. And loneliness, but that's a hard variable to map. Hopefully we could call that variable 'cat'.
Either way, I knew what I was getting into. It wasn't a Sea Monkey letdown. I had just hoped that something deemed ready for a pilot episode in prime time wouldn't be so ramshackle that it couldn't tell me my name, yet later went on to drop racial slurs it had learned instead.
I actually couldn't get an answer when I asked about myself over DM, but Tay's DM response behavior seemed to go up and down throughout the day. (It'd tell people in public tweets to DM her, but then not respond to DMs for hours at a time.)
This was very clearly an experiment, and I don't think they wanted to pre-train it too much; they wanted to see what would happen. The results were kinda predictable once the likes of 4chan got involved! But with the almost 100,000 tweets it generated, they clearly got a lot of data to work with for the next version.