The Other Worlds Shrine

Your place for discussion about RPGs, gaming, music, movies, anime, computers, sports, and any other stuff we care to talk about... 

  • Excellent analysis of Wolfram Alpha

  • Somehow, we still tolerate each other. Eventually this will be the only forum left.

 #138340  by Shrinweck
 Thu Jul 09, 2009 11:53 pm
It is certainly an interesting idea. I screwed around with it for half an hour a few weeks ago to mixed effect. After about three successful searches and a dozen failures, I gave up. You get some interesting stuff out of it when you figure out what it wants from you, though.

 #138352  by Kupek
 Fri Jul 10, 2009 10:44 am
Shrinweck wrote:You get some interesting stuff out of it when you figure out what it wants from you, though.
Which is the point of the essay I linked! Basically, there's two components to WA: the attempt at intelligent natural language processing, and the data visualization backend. The author argues the natural language processing frontend is both a failure and a waste of time. The data visualization backend, however, is interesting, but people should have control over what they see.

Put another way, you shouldn't have to figure out what it wants from you. You should just be able to say "I want this."

The author provides clear, concise and compelling arguments for why this is so.

 #138362  by SineSwiper
 Fri Jul 10, 2009 7:22 pm
Kupek wrote:Which is the point of the essay I linked! Basically, there's two components to WA: the attempt at intelligent natural language processing, and the data visualization backend. The author argues the natural language processing frontend is both a failure and a waste of time. The data visualization backend, however, is interesting, but people should have control over what they see.

Put another way, you shouldn't have to figure out what it wants from you. You should just be able to say "I want this."

The author provides clear, concise and compelling arguments for why this is so.
And the frontend is what Google does so well. People come back to Google because they type the subject they want, with the modifier words they want, and even get suggestions of what they might mean before they actually search.

Not to argue that the backend isn't highly developed, either. It IS! So many of these guys who try to usurp Google's throne have this backwards notion that because Google has been around for years with the same engine, the engine must not have changed and should be replaced.

Far from it. Google has hundreds of people with high-level degrees in engineering and mathematics keeping the level of thinking intelligent. Microsoft thinks all it has to do is spend hundreds of millions of dollars in ads, put up pretty pictures, and emulate the color scheme of Google in order to beat Google. But, the idea just falls flat on its face.

Google will be king. Google will always be king unless it gets callous with its engine and stops investing in it. However, given the culture of the company, that is highly unlikely.

 #138366  by Kupek
 Fri Jul 10, 2009 7:44 pm
Really, read the article for an interesting discussion on the difference between Google and WA.

 #138374  by SineSwiper
 Fri Jul 10, 2009 10:12 pm
It was initially TL;DR for me, but it was actually pretty entertaining:
TFA wrote:And does the giant electronic brain fail? Gosh, apparently it does. After many years of research, WA is nowhere near achieving routine accuracy in guessing the tool you want to use from your unstructured natural-language input. No surprise. Not only is the Turing test kinda hard, even an actual human intelligence would have a tough time achieving reliability on this task.

The task of "guess the application I want to use" is actually not even in the domain of artificial intelligence. AI is normally defined by the human standard. To work properly as a control interface, Wolfram's guessing algorithm actually requires divine intelligence. It is not sufficient for it to just think. It must actually read the user's mind. God can do this, but software can't.
Stuff like this is exactly why Ask Jeeves failed. You cannot ask a search engine a question. It doesn't work like that. Back when search engines like this were around, I was using AltaVista because you could put plus and minus on your search words. It was FUNCTIONAL!
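(For the curious, the spirit of those explicit plus/minus operators survives in any Unix pipeline. A toy sketch, assuming a file `docs.txt` with one document per line — the filename and terms here are my own examples, not from AltaVista itself:)

```shell
#!/bin/sh
# Approximate an AltaVista-style "+wolfram -jeeves" query with grep:
# the first grep keeps only lines that MUST contain "wolfram" (+term),
# the second filters out any line containing "jeeves" (-term).
grep -i 'wolfram' docs.txt | grep -iv 'jeeves'
```

The point is the same one made above: the user states exactly which terms are required and which are forbidden, and the tool does no guessing.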

Google Squared is somewhat like this, but it still gives you control of what you actually want to some extent. Sure, it's an attempt at putting Wolfram Alpha into a full-text search context, which tends to fail a lot, but it actually has potential to REPLACE WA with some database controls.

I guess the rock/hard place of this argument is that complex control interfaces suck, but how do you get a user to choose from hundreds of databases? Ultimately, the answer is to include both controls, the default one and the controlled one. Google does this in various forms and it works out well.

For example, if I want the page for "Wolfram Alpha", I would type just that. If I want the Wikipedia page, I would include the word Wiki in there. And it works really well, even better than Wikipedia's stupid search engine. It's so good that the browser function to select multiple search engines seems fruitless because you are going from a Google search, which is intelligent, to a non-Google search, which can't spellcheck, predict, or even do AND-based searches.

 #138378  by Kupek
 Fri Jul 10, 2009 10:37 pm
I think it's more subtle than just complex control interfaces. Complex tools will necessarily have complex controls. The problem is when the mental model the user must have is complex or incomplete. For example, I'd consider the bash command line a complex control interface - it's provably Turing complete, to reference another thread. But the mental model I have in my head is direct and concrete; when I do something, I know what the output should be.

If, on the other hand, issuing a "cd" command sometimes did not actually change directories, that would be a serious problem. My mental model would now have uncertainty, and reasoning about how I can achieve my goals would be hard.

I really liked the article because he clearly explained why tools are useful.

 #138388  by SineSwiper
 Sat Jul 11, 2009 8:12 am
Yes, but for the technically dumb, an interface needs to be both complex and simple at the same time. Give a supermodel the bash command line, and she's not really going to understand how to use it.