By Sharan Mujoo
It is 9 AM, time to leave for work. You get into your car and turn the ignition, but the engine does not start. After several failed attempts, frustration builds. Another day late to work? What is interesting here is not the engine failure itself, but the brain arriving at a fork where a decision must be made. You can diagnose the issue, fix it, and probably reach the office late, or leave the car at home, take a cab, and arrive on time. The choice may seem simple; the underlying mechanisms that produce it are not.
The most basic functional unit of the brain is the neuron. It consists of a cell body with dendrites that receive inputs from other neurons, and an axon ending in synapses that pass information onward. Extend this mechanism to a large number of neurons and what we have is a neural network with a vast number of connections. When this network is exposed to similar stimuli over and over again, a pattern gets encoded. So the next time we press the clutch, slide the key in and twist it, the brain receives these tactile and visual inputs and expects the car to start, but it does not. The pattern is violated and an error is signalled, forcing us to make a choice. This is precisely the kind of model that artificial neural networks replicate in order to recognise patterns, compare actual and desired outputs, and learn.
Artificial neural networks
An artificial neural network broadly consists of three layers: an input layer, a hidden layer, and an output layer. The hidden layer is where a function acts on the inputs and produces values that feed the output layer. The layers are connected to one another by weights, which represent the strength of each connection. When the network generates an output, it is compared with the desired output to check for any differences. These differences are known as errors, and they are fed back through the network to adjust the weights and bring the output closer to the target. This method of learning is known as back-propagation.
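To make the mechanics concrete, here is a minimal sketch of such a network in Python. It assumes a toy XOR dataset, sigmoid activations, and a single hidden layer of four units; these are illustrative choices, not a reference implementation. The loop runs a forward pass, compares the actual and desired outputs, and feeds the error back to adjust the weights.

```python
# A minimal sketch of the three-layer network described above, assuming a
# toy XOR dataset, sigmoid activations, and one hidden layer of four units.
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy inputs and desired outputs (the XOR function).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights and biases connecting input -> hidden and hidden -> output.
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 0.5

for _ in range(10000):
    # Forward pass: each layer transforms the previous layer's output.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Error: the difference between desired and actual output.
    error = y - output

    # Back-propagation: send the error backwards and adjust the weights.
    grad_out = error * output * (1 - output)
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    W2 += lr * hidden.T @ grad_out
    b2 += lr * grad_out.sum(axis=0, keepdims=True)
    W1 += lr * X.T @ grad_hid
    b1 += lr * grad_hid.sum(axis=0, keepdims=True)

print(output.round(2))  # predictions converge towards the desired outputs
```

After training, the printed predictions should sit close to 0, 1, 1, 0, mirroring the compare-the-outputs, feed-back-the-error, adjust-the-weights loop described above.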
Simple neural networks consist of only a few layers. Stack several hidden layers on top of one another and the result is a deep neural network, the machinery behind deep learning. A deep neural network consists of a hierarchy of layers, each of which transforms the data from the previous layer into a more abstract representation. The output layer then combines these features to make predictions. The applications of these models are vast. The next time you point your selfie camera at your face and a number signifying your age pops up, you can be sure that a neural network is behind the prediction. However, the application of these technologies does not stop there.
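For a sense of what stacking layers looks like in practice, below is a hypothetical sketch of a deep network for the selfie age-prediction example, written with the Keras API. The image size, layer widths, and loss are assumptions made for illustration, and the training data is not shown.

```python
# Hypothetical deep network for estimating age from a face photo.
# The 128x128 input and layer sizes are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),          # raw pixels go in
    layers.Conv2D(16, 3, activation="relu"),    # early layers pick up edges
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),    # deeper layers learn more
    layers.MaxPooling2D(),                      # abstract facial features
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                            # a single number: predicted age
])

model.compile(optimizer="adam", loss="mae")     # train on labelled face images
model.summary()
```

Each convolutional stage turns the previous layer's output into a more abstract description of the image, and the final dense layer combines those features into the prediction, just as described above.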
Deep learning and augmented reality
Neural networks have tremendous applications and implications for the future. One of them is augmented reality (AR): the rendering of virtual data on top of the real world through an intermediary layer, usually a device with a camera. Snapchat filters are a familiar example. With the help of deep neural networks, this rendering of virtual data can bridge the gap between reality and the virtual world. Some years down the line, it might be possible for doctors to point a camera at the body and identify where a disease is localised. AR and deep learning already make it possible to study human anatomy in ways never before imagined. Deep learning can also identify and classify objects, and mark where they are. In fact, every task or process that requires learning could potentially be handed over to an artificial intelligence (AI) with learning capabilities.
Automotive giants such as Tesla rely heavily on deep learning networks to improve the performance of their driver-assistance systems. Differentiating between a lamp post and a human may seem a trivial task for a person, but an AI needs large amounts of data and many iterations to establish an accurate pattern. With the help of markers and simulated objects, AR can enhance this learning and accelerate the development of neural networks. Together, the two technologies also stand to transform the education industry through visual aids: a large part of our brain is devoted to processing visual data, and with AR and deep learning networks that data can be manipulated in ways that were never possible before. However, with so much reliance on external aids for visualisation and imagination, will our own abilities atrophy? That is a question only the future can answer. Research already suggests that too much dependence on, and time spent with, such devices hampers our emotional intelligence and social skills. As bright as the future may look, the path to it is equally slippery.
Privacy
Along this slippery path, privacy is a huge stumbling block. Deep learning requires tremendous amounts of data, which in turn means collecting as many data points as possible. The next time an application asks for access to your camera, photos, contacts, or phone calls, there is probably a deep learning network behind it, looking to learn more. For some people, the benefits of these technologies far outweigh the costs to their privacy. Those who are concerned, however, have good reason to be. In order to sell better and sell more, giants such as Google, Amazon, and Facebook require a deep understanding of their customers, and to achieve this, no data point is irrelevant. Everything is in scope, from our search patterns, photos, likes, and retweets to Instagram shares and recently viewed items.
AI and surveillance
It is not surprising if the word surveillance comes to mind when thinking of all this. As important as it is for these giant engines of capitalism to know more, it is equally important for governments and their security agencies. To safeguard their interests effectively, governments are investing massively in AI and deep learning; some have been doing so for years, with the PRISM programme serving as one example. To predict external threats, agencies use predictive analytics, in which statistical models are used to train the network. As the neural network learns, it gets better at identifying the patterns and trends that inform future decision-making.
There is no doubt that an invisible fight is taking place behind the thick curtains of advertising and marketing. The European Union has implemented a number of regulations to protect the privacy of its citizens. As recently as October 2017, WhatsApp came under the scrutiny of privacy regulators in Europe for sharing phone numbers with its parent company, Facebook.
The future
Machine learning still has a long way to go. An AI may be good at solving one problem, but humans are expert generalists, and an AI that exhibits all the generalist capabilities humans have is still far away. Even as advances are made to better mirror the learning capabilities of humans, is there any conscious focus on mirroring human values in AI? Privacy is just one facet; values such as honesty, trust, and loyalty, among others, stand to be affected. Can deep learning become as intuitive as the human subconscious? The further we try to peek into the future, the more questions emerge.
The field of computational neuroscience has been evolving since the 1940s, and it and machine learning have helped each other advance. Research is revealing remarkable facts about the human brain: it is a plastic organ, capable of adapting to almost any context. Can machine learning be made similarly adaptive? Whatever the future of machine learning and AI looks like, it will depend to a great extent on advances in the cognitive sciences. Neuroscientists and software engineers are already joining multidisciplinary teams in large corporations to design products and services. As AI pulls the future nearer, it is imperative to tread the path with increasing caution, for the trade-offs ahead may not be in humanity's favour.
Featured Image Source: GLAS-8 on Visual Hunt / CC BY-NC-ND (Attribution-NonCommercial-NoDerivs)