Machine Learning + Artificial Intelligence = Machine Intelligence


I was reading up on what is happening in Artificial Intelligence and Machine Learning and listened to a few lectures/interviews. It looks like the next few years will see significant advances in these fields, and it could bring disruption to a lot of industries.
Here is a pop science / layman version of what I understood.
TL;DR version:-
  • Research in something called Deep Learning has reached a tipping point where it is industrial strength in performance and can be used for real-world applications.
  • Three things are pushing it – cheap parallel computing (GPUs), big data (collected through searches, images, posts etc.), and better algorithms (deep learning).
  • The focus is not on the kind of AI seen in the likes of Terminator, and by all accounts we are still far from it. While robotics is progressing, that is also not where AI is getting applied (just to set our mental image right). The focus is on developing intelligent machines that do specific functions (speech recognition, natural language processing, image recognition etc).
  • In the past 3-4 years, AI has started moving from academia into big corporates. Leading researchers and professors from top universities are moving into companies like Google and Facebook. There is an AI arms race going on between Google, Facebook, Microsoft, IBM, Baidu and Amazon. It is about who will create the killer app using AI/ML first.
  • What is really happening is not purely AI or ML – someone coined the term AI + ML = MI (Machine Intelligence). It is about making machines intelligent enough to do specific functions better, at a scale not possible for humans.
  • X.AI – the next round of innovation could be to take anything (X) and add AI to it to make it better – like medical image processing, fraud detection, recommendation systems.
  • Last year, some very intelligent people like Stephen Hawking, Elon Musk and Bill Gates made statements that AI could be an existential threat and that it is not far off. Do they know something that the general public doesn’t?

Longer version:-
AI coming out of hibernation
AI had been sputtering along since the 1960s. Creating logical models, reverse engineering the brain or basing AI research on neuroscience was not making significant progress, and with not much commercial use, funding was also a problem. These periods of lull in AI research, when it went into hibernation, are referred to as “AI winters”. But a few researchers like Geoffrey Hinton of the University of Toronto, Yoshua Bengio of the University of Montreal and Yann LeCun of NYU were still continuing their struggle. Yann LeCun created a cheque reading system while he was with AT&T Bell Labs.
Eventually they created something called Deep Learning – layers of neural networks which start off with raw input (like images or text, at the level of pixels and letters), learn features, classify, and come up with an output. Something called backpropagation compares the output with the expected answer, and the errors are fed back through the neural net to adjust the weights and improve accuracy. Once millions of examples are fed in to train the neural network, the system learns to do this automatically. In supervised learning, a training data set with valid outputs (labels) is used. In unsupervised learning, data without labels can also be classified – the machine generates the features and learns by itself. LeCun describes this process as a black box with 500 million knobs – you show it a picture of a car and at the end it says it is a truck; you turn some of the knobs, correcting the parameters and weights, and it learns that this is a car. Repeat this a few million times to let the system learn.
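To make the “knobs” analogy concrete, here is a minimal, purely illustrative sketch of supervised learning with backpropagation – a tiny two-layer network in Python/NumPy trained on made-up data. The data, network sizes and learning rate are my own assumptions for illustration, not taken from any of the systems mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised data: four 3-"pixel" inputs, each labelled with the correct answer.
X = np.array([[0., 0., 1.],
              [0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 1.]])
y = np.array([[0.], [0.], [1.], [1.]])   # label happens to be the first "pixel"

# The "knobs": weights of a small two-layer neural network, starting off random.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: input -> hidden features -> output guess.
    hidden = sigmoid(X @ W1)
    guess = sigmoid(hidden @ W2)

    # Compare the guess with the correct answer ("you said truck, it is a car")...
    error = y - guess

    # ...and feed the error backwards (backpropagation) to nudge the knobs.
    d_guess = error * guess * (1 - guess)
    d_hidden = (d_guess @ W2.T) * hidden * (1 - hidden)
    W2 += 0.5 * hidden.T @ d_guess
    W1 += 0.5 * X.T @ d_hidden

print(guess.round(2))   # after many repetitions the guesses match the labels
```

Real deep learning systems run the same kind of loop over millions of images and billions of weights, which is where the hardware described next comes in.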
Another significant event that helped this along seems to be the brainwave of adopting Graphics Processing Units (GPUs), originally developed for rendering high-resolution graphics in video games, for AI. GPUs can do complex computations on multi-dimensional vectors fast, and they are optimized for throughput rather than latency like CPUs. Multiple GPUs crunching millions of data points in parallel give a huge performance boost – it seems Deep Learning experiments that used to take weeks now take days or hours.
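As a rough illustration of why this matters (a hedged sketch, assuming a machine with PyTorch installed and, optionally, a CUDA GPU – the sizes are arbitrary): the heavy lifting in training is mostly large matrix multiplications, and a GPU runs the many independent multiply-adds inside them in parallel.

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Training a deep network boils down to huge numbers of operations like this
# matrix multiplication; a GPU favours throughput (many operations at once)
# over the low-latency, one-thing-at-a-time style of a CPU.
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b

print(device, c.shape)
```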
Google came out with a result where an unsupervised system learned from millions of YouTube videos and came to recognize the image of a cat on its own. In another, a system learned to play video games on its own and beat human performance.
By now it is ready for showtime, and the likes of Google and Facebook stand to gain from advances in voice, image and text processing. The advantage Google and Facebook have is the huge amount of real data they get from searches, content and social connections. As per Peter Norvig, modern AI is about “data, data, data,” and Google has more data than anyone else. That and near-unlimited processing power make it an ideal environment for research. All the leading AI researchers are in these companies now – Hinton at Google, LeCun at Facebook. Andrew Ng of Stanford and Google joined Baidu Research. Peter Norvig, who co-wrote the standard AI textbook, is at Google, as is Ray Kurzweil, a leading AI figure who says we will have human-level AI in the next 15 years.
We have started using some versions of it already. Google Now on my Android phone read my email the other day and gave a reminder about a movie, based on the ticket receipt I had in my email. When I was in Bangalore, it gave a reminder at 4 pm that it was time for me to start for the airport for a 7:30 flight (I had the tickets in my email) – considering the traffic. Facebook is showing me another person in my flat as a friend recommendation – it might have been based on my location. Apple’s Siri, Microsoft’s Cortana and IBM’s Watson are others. It seems Watson is offered as a service for medical diagnosis and research. Skype announced a real-time translation service recently. Baidu seems to be going one step further with multi-lingual translation.
Microsoft is a surprise leader too. Azure ML seems to have big plans – eventually even offering Deep Learning as a service, with fully trained neural nets for image, text and voice. Amazon announced a Machine Learning service as well. The big advantage they have is cloud environments that can be provisioned for on-demand analysis. That is the perfect combination – processing power and algorithms as a service.
Such MI super-specialties could be applied to things like fraud detection, demand forecasting, ad targeting, recommendations, spam filtering and healthcare. Personalized medicine, genome sequencing, driverless cars – those are what is coming.
AI scare and criticism
Last year Stephen Hawking said that "The development of full artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence it will take off on its own and redesign itself at an ever-increasing rate". Incidentally, he was using an improved version of his speech software while talking to the BBC, which guesses the words he would use next, learning from his past talks. Another was Elon Musk of Tesla Motors and SpaceX (a potential real-life Iron Man), who said “With artificial intelligence we are summoning the demon” and called it "our greatest existential threat". The irony again is that he was an investor in DeepMind, the AI company that Google acquired for about $400 million.
Ray Kurzweil believes computers will reach Artificial General Intelligence (AGI) by 2029 and that by 2045 we’ll have not only Artificial Super Intelligence (ASI) but a full-blown new world – a time he calls the singularity. And last year a computer AI was claimed to have passed the 65-year-old Turing Test (an experiment based on Alan Turing’s question “Can machines think?”) for the first time.
But those including LeCun believe we are far off – he said it is like driving on a highway in heavy fog: we don’t know when we will hit the next major brick wall that research cannot surmount for another long period of time. An article in the New Yorker says we haven’t got far enough with the research – “Hinton has built a better ladder; but a better ladder doesn’t necessarily get you to the moon.” Or this quote captures the state even better – “The current "AI scare" going on feels a bit like kids playing with Legos and worrying about accidentally creating a nuclear bomb.”
There are more critics of this probabilistic, brute-force, data-driven approach to AI. Douglas Hofstadter, author of “Gödel, Escher, Bach”, said “I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence.” As per Noam Chomsky, the field’s heavy use of statistical techniques to pick out regularities in masses of data is unlikely to yield the explanatory insight that science ought to offer; the "new AI" – focused on using statistical learning techniques to better mine and predict data – is unlikely to yield general principles about the nature of intelligent beings or about cognition.
The AI scare is also related to loss of jobs. But job loss is connected to various other things, not just AI/ML. Industrial robotics is making significant progress. I read somewhere that “Oxford University researchers have estimated that 47 percent of U.S. jobs could be automated within the next two decades.” China is aggressively moving towards it – manufacturing and assembly-line jobs getting automated. Repetitive jobs getting automated will affect every industry anyway, AI or not. Drones are another – this week there was news about an Amazon patent filing on drones tracking customer location for accurate delivery. All these could be further enhanced and accelerated by the use of data and AI/ML.
Summary – AI as extra IQ
A good summary is this quote from a Wired article “The Three Breakthroughs That Have Finally Unleashed AI on the World” – http://www.wired.com/2014/10/future-of-artificial-intelligence/
“A picture of our AI future is coming into view, and it is not the HAL 9000—a discrete machine animated by a charismatic (yet potentially homicidal) humanlike consciousness—or a Singularitan rapture of superintelligence. The AI on the horizon looks more like Amazon Web Services—cheap, reliable, industrial-grade digital smartness running behind everything, and almost invisible except when it blinks off. This common utility will serve you as much IQ as you want but no more than you need. Like all utilities, AI will be supremely boring, even as it transforms the Internet, the global economy, and civilization. It will enliven inert objects, much as electricity did more than a century ago. Everything that we formerly electrified we will now cognitize. This new utilitarian AI will also augment us individually as people (deepening our memory, speeding our recognition) and collectively as a species. There is almost nothing we can think of that cannot be made new, different, or interesting by infusing it with some extra IQ. In fact, the business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.”