Adoption of artificial intelligence is growing rapidly across many fields. AI-based systems are going well beyond the usual expectations of machines, rivaling or even exceeding human capabilities in certain areas. AI can now outperform humans in various comprehension and image-recognition tasks. Beyond a robot's ability to survive deadly environments such as deep space, deep learning has been widely used to teach AI-based systems fine motor skills for tasks such as removing a nail or placing caps on bottles.
AI is also helping machines develop reasoning skills that can approach those of a PhD-level scholar. Biologists at Tufts University built a system that combined genetic algorithms with simulations of genetic pathways. The system enabled an AI to devise a scientific theory of how planaria (flatworms) regenerate body parts.
Transforming images into art
The Google Brain team has also advanced AI's capabilities in art. The Google Deep Dream program uses a machine learning algorithm to produce its own artwork. The resulting images resemble paintings from the surrealist movement, mixed-media works or colorful renditions of abstract art.
But how does the program render such artistic impressions? It began by scanning millions of photos to learn to distinguish various shades and colors. It then proceeded to differentiate objects from one another. Eventually the program built a catalog of objects from the scanned images and recreated various combinations of these items. A prompt lets the AI place these object composites into a landscape, producing a work of art that appears to be made by a human being.
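At its core, Deep Dream-style generation works by nudging an image's pixels to amplify whatever features a network's filters respond to. The following is a minimal toy sketch of that gradient-ascent idea in plain NumPy, with a single hand-picked edge filter standing in for a trained network layer; the filter, step size and loop count are illustrative assumptions, not Google's actual implementation:

```python
import numpy as np

def conv2d(img, k):
    """Valid 2D cross-correlation of img with kernel k."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

def dream_step(img, k, lr=0.1):
    """One gradient-ascent step on loss = 0.5 * sum(conv(img, k)**2)."""
    y = conv2d(img, k)            # how strongly the filter fires everywhere
    grad = np.zeros_like(img)     # d(loss)/d(pixels)
    kh, kw = k.shape
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            grad[i:i+kh, j:j+kw] += y[i, j] * k
    return img + lr * grad        # ascend: change pixels to amplify the filter

# A vertical-edge filter stands in for one learned feature detector.
edge = np.array([[1., -1.], [1., -1.]])
rng = np.random.default_rng(0)
img = rng.normal(size=(8, 8)) * 0.1   # start from faint noise

losses = []
for _ in range(20):
    losses.append(0.5 * np.sum(conv2d(img, edge) ** 2))
    img = dream_step(img, edge)
# The filter's activation grows each step as the image is "dreamed" toward
# patterns the filter likes — the same loop, run on a deep network's layers,
# is what hallucinates the surreal textures in Deep Dream outputs.
```

The real program runs this loop against the activations of a trained deep network rather than one hand-made filter, which is why its outputs contain eyes, dogs and buildings instead of simple edges.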
Deep learning technologies: Getting better than humans
Deep learning is the AI field responsible for these leaps in image interpretation. These technologies employ convolutional neural networks (CNNs) to recognize specific image features almost instantly. This capability has led to CNNs finding application in facial identification programs, self-driving cars, quantitative agricultural predictions such as crop yield, and machine diagnosis of disease. CNNs aren't typical AI programs. The deep learning approach combines improved algorithms, greater processing power and increased data availability. The internet supplies the necessary high volume of data, particularly through the tagging and labeling functions of Facebook and Google. These companies use the massive collective uploads of users all over the world to provide the data needed to improve their deep learning networks.
CNNs don't rely on hand-coded rules — instead they are trained to recognize the distinctions and nuances among images. Say you want a CNN to spot dog breeds. You would begin by providing the system with thousands of animal images along with labeled examples of specific breeds. The CNN learns to distinguish the breeds through its layer-based organization. When training itself to recognize dog breeds, the CNN begins by learning the distinctions among basic shapes. It then gradually moves on to features particular to individual breeds, such as fur textures, tails and ears. The network then combines the recognized characteristics to conclude which breed it is looking at.
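That progression from basic shapes to breed-specific features starts with convolution filters. A tiny NumPy sketch can hint at the idea: a single hand-crafted filter, standing in for one learned first-layer feature, fires strongly wherever a basic shape — here a vertical edge — appears in an image. The image, kernel and layout below are illustrative assumptions, not a real trained network:

```python
import numpy as np

# Toy 6x6 grayscale image: dark left half, bright right half (a vertical edge).
img = np.zeros((6, 6))
img[:, 3:] = 1.0

# A first-layer-style filter that fires on dark-to-light vertical transitions.
kernel = np.array([[-1., 1.],
                   [-1., 1.]])

def conv2d(img, k):
    """Valid 2D cross-correlation, the core operation inside a CNN layer."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * k)
    return out

fmap = np.maximum(conv2d(img, kernel), 0)  # ReLU, as in a real CNN layer
# The feature map is strongest exactly along the edge (column 2) and zero
# over the flat regions — the filter has "found" the shape. Deeper layers
# stack many such maps to build up fur textures, ears and whole breeds.
```

A trained CNN learns thousands of these filters from the labeled images rather than taking them from a programmer, and later layers combine their outputs into progressively more breed-specific detectors.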
CNNs' complex processing capabilities enable deep learning algorithms in IoT technologies that identify not just images, but also speech, behaviors and patterns. Better pedestrian recognition through deep learning is improving self-driving cars. The insurance industry uses deep learning to assess car damage more accurately. Crowd control can be improved through behavioral recognition in security cameras.
Bringing deep learning to everyday living
The industrial internet of things is witnessing a myriad of deep learning applications. Companies such as Facebook even plan to build systems "better than people in perception," showcasing image-recognition technology that can describe a photo for the blind. Other IIoT applications are enriching gaming, bioinformatics and natural language processing. The computer vision field is also improving vastly through deep learning technologies, helped along by user-friendly programming tools and reasonably priced computing.
One of the most exciting areas witnessing a lot of action is medicine. AI-based vision systems can rival doctors in reading scans faster or examining pathology slides in greater detail, enabling better diagnosis and screening. The U.S. Food and Drug Administration is already evaluating a deep learning approach to help diagnose heart disease. At Stanford University, researchers are working on an AI system that can recognize skin cancer as accurately as dermatologists. Such a program installed on a smartphone could provide universal, low-cost diagnostic care to individuals anywhere in the world. Other systems are addressing the assessment of problematic conditions such as bone fractures, strokes and even Alzheimer's disease.
A progressive partner for humanity’s future
All these deep learning technologies hinge their value on purposeful applications. Today's vision technologies outperform human beings in some respects, but general reasoning remains a human function. These developing IIoT applications are meant to do discrete tasks — in this case, visual recognition and categorization — better than a person, but no single AI system has yet been able to handle multiple such functions at once. A deep learning system might identify individuals in photos, but it has yet to recognize emotions such as sadness.
With time, AI systems will develop such capabilities, but for now we should appreciate the numerous advantages they already provide. They're not meant to replace human skills but to remove the burden of low-level tasks, freeing us to focus on more important, reasoning-based work that requires human attention. Martin Smith, a professor of robotics at Middlesex University, uses spreadsheets as an example: the software has hastened computations, but the analysis still comes from human experts.
The possibilities are just beginning to emerge with AI and deep learning. It is ultimately up to researchers, innovators and practitioners to transform these technological advances into something that contributes to humanity's progressive goals.
All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.