By Lauren Gilmore
Technology moves at breakneck speed, and we now carry more computing power in our pockets than we had in our homes in the 1990s. Augmented reality (AR) has been a fascinating concept in science fiction for decades, but many researchers think we’re finally getting close to making AR a reality thanks to advances in computer vision.
By definition, computer vision is a field that includes methods for acquiring, processing, analyzing, and understanding images – and, more generally, high-dimensional data from the real world – in order to produce numerical or symbolic information, such as decisions. In layman’s terms, computer vision allows machines to recognize and understand what they see, just as humans can.
This means that with AR, you can process image and video sources to extract meaningful information and act on it.
Human beings use sight to process, understand, and navigate the world around them. While much of this technology is still fairly rudimentary, AR and computer vision may one day significantly impact our everyday lives.
According to Danny Lopez, COO of Blippar – a leading technology company specializing in augmented reality, artificial intelligence, and computer vision – the challenge now is to find ways, in the short, medium, and long term, to apply and align AR with social good.
Here are four ways Lopez feels AR might affect us in the future.
Self-driving cars
We’re already seeing the beginnings of self-driving cars, though the vehicles are currently required to have a driver at the wheel for safety. Despite these exciting developments, the technology isn’t perfect yet, and it will take a while for public acceptance to bring automated cars into widespread use.
What matters most, however, won’t be that the car drives itself, but its ability to be fully autonomous and protect all passengers. Currently, cars can only detect that there are people down the street. In the future, advanced cognition will help the car understand whether those people pose a danger and whether you are in harm’s way.
Healthcare
Medical imaging has attracted increasing attention in recent years because of its vital role in healthcare. Advances in computer vision – such as multimodal image fusion, medical image segmentation, image registration, computer-aided diagnosis, image annotation, and image-guided therapy – have opened up many new possibilities for revolutionizing healthcare.
With millions of medical images indexed, AR and computer vision can match patterns in these images with similar ones from around the world to help doctors bring the best care to their patients.
Education
Initiatives in this field have focused on improving the student experience through computer vision, and integrating AR helps students with varying learning abilities.
Additionally, computer vision applications may play a significant role in improving the effectiveness of traditional classroom tools – such as books and study materials – and aim to improve knowledge in specific areas.
Training and manufacturing
Product quality is a major concern for any manufacturing process, and in every facility the quality control division plays a big role. While these inspections were traditionally performed by humans, it’s now possible for computer vision to make quality control decisions.
Cameras and lighting capture images, which are then algorithmically compared to a predefined reference image or quality standard, eliminating human error.
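The comparison described above can be sketched in a few lines. This is a minimal illustration, not Blippar’s method: it assumes captured and reference images arrive as same-sized grayscale NumPy arrays, and the `tolerance` threshold is a hypothetical parameter a real inspection line would calibrate.

```python
import numpy as np

def passes_quality_check(captured, reference, tolerance=0.02):
    """Compare a captured image against a reference image.

    Both images are grayscale arrays of identical shape with pixel
    values in [0, 255]. The part fails inspection if the normalized
    mean absolute pixel difference exceeds the tolerance fraction.
    """
    captured = np.asarray(captured, dtype=np.float64)
    reference = np.asarray(reference, dtype=np.float64)
    if captured.shape != reference.shape:
        raise ValueError("images must have the same dimensions")
    # Normalized mean absolute difference, scaled into [0, 1].
    deviation = np.mean(np.abs(captured - reference)) / 255.0
    return deviation <= tolerance

# A capture that differs only by slight lighting variation passes.
reference = np.full((4, 4), 128.0)
captured = reference + 1.0
print(passes_quality_check(captured, reference))  # True
```

In practice, production systems replace the raw pixel difference with more robust measures (alignment, edge or feature comparison) so that harmless variations in lighting or position don’t trigger false rejections.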
AR is also present in tasks that are too dangerous for humans alone, including mining, firefighting, mine disposal, and handling radioactive materials.
It’s fascinating how far technology has advanced in a relatively short time, and according to Lopez, we’re “on the verge of it becoming mainstream.” Then again, we’ve been at this tipping point for the last four years.
As Lopez explains, “you can’t strongly scale AR if you don’t understand the reality in front of you. For this to happen, computer vision is absolutely necessary for AR immersive technology to really come to life.” But it’s coming.
Over the last 25 years, computing has learned to mimic increasingly complex human behavior – and now it can mimic the whole range of human behaviors.
Lauren Gilmore works with TNW.
Featured Image Source: Visual Hunt.