Post-Disaster Structural Damage Can Now Be Rapidly Analyzed by a Deep Learning System

What used to take engineers hours can now be done by this system in a matter of minutes.


After natural disasters like earthquakes and typhoons strike, engineers have to make sure that the buildings and structures under their supervision are safe. They document the damage through visual assessments and photographs, a process that usually takes a long time before a report can be submitted.

Shirley Dyke, a Purdue University professor of mechanical and civil engineering, agrees that the process is slow. Engineers take a large number of photos, perhaps 10,000 images per day, to learn how a disaster affected structures, she says. “Every image has to be analyzed by people, and it takes a tremendous amount of time for them to go through each image and put a description on it so that others can use it,” she adds.

Not only does this process consume a lot of time, it also creates the potential for human error, says doctoral student Chul Min Yeum. A key drawback of engineers doing this job alone is that they can easily tire after just an hour of checking and organizing thousands of images.

To speed up the process of analyzing disaster damage to structures, the Purdue researchers have developed an automated method based on deep learning and advanced computer vision algorithms. It is capable of turning several hours of work into several minutes.

Source: The Borneo Post

Dyke says that this is the first-ever implementation of deep learning for these types of images. She added, “We are dealing with real-world images of buildings that are damaged in some major way by tornados, hurricanes, floods and earthquakes. Design codes for buildings are often based on or started by lessons that can be derived from these data. So if we could organize more data more quickly, the images could be used to inform design codes.”


The challenge lies in training the algorithms to recognize scenes and locate objects in the images. The system relies on graphics processing units (GPUs), which are already used in industry for high-performance machine vision applications.
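
The article does not say which software framework the team used, so the following is only a minimal sketch of this kind of GPU-based image classification, assuming PyTorch and torchvision; the folder layout, class names, and hyperparameters are illustrative, not the Purdue implementation.

```python
# Hedged sketch: fine-tune a pretrained CNN on a GPU to label reconnaissance
# photos, e.g. "collapse" vs. "no_collapse". Paths and settings are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Standard ImageNet-style preprocessing for the photographs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes the images are sorted into folders named after their labels,
# e.g. disaster_photos/collapse/ and disaster_photos/no_collapse/.
dataset = datasets.ImageFolder("disaster_photos", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Swap the final layer of a pretrained network for a two-class head.
model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One pass over the labeled photos; a real run would train for many epochs.
model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```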

Testing the automated method required a large data set: about 8,000 images, complete with labels identifying whether building components had collapsed or not. The photographs also include areas affected by spalling, a typical structural phenomenon wherein chips of concrete fall off due to large tensile deformations.
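
As a rough illustration of how such per-image labels might be recorded, one could imagine an annotation entry like the one below; the field names, filename, and coordinates are hypothetical, not the actual Purdue dataset format.

```python
# Hypothetical annotation record for one reconnaissance photo; the field names,
# filename, and box coordinates are assumptions for illustration only.
annotation = {
    "image": "building_042.jpg",                 # hypothetical filename
    "component_state": "collapsed",              # "collapsed" or "not_collapsed"
    "spalling_present": True,
    # Pixel coordinates of spalled regions as (x_min, y_min, x_max, y_max).
    "spalling_boxes": [(412, 230, 598, 371)],
}
```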

The researchers were able to automatically classify images based on whether spalling was present or not, and also to pinpoint where within each image it was located. In the pictures, the damage is highlighted with green boxes for easy reference.
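
The article does not detail the localization model, but the green-box overlay it describes could be produced with a short visualization step like this sketch, assuming OpenCV and a hypothetical list of detected spalling regions.

```python
# Hedged sketch of the visualization step only: draw green boxes around
# detected spalling regions. The detector itself is omitted; `spalling_boxes`
# is assumed to come from a trained localization model.
import cv2

image = cv2.imread("building_042.jpg")        # hypothetical input photo
spalling_boxes = [(412, 230, 598, 371)]       # (x_min, y_min, x_max, y_max)

for x_min, y_min, x_max, y_max in spalling_boxes:
    # OpenCV stores color as BGR, so (0, 255, 0) is green.
    cv2.rectangle(image, (x_min, y_min), (x_max, y_max), (0, 255, 0), thickness=3)

cv2.imwrite("building_042_annotated.jpg", image)
```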

Source: Purdue University

Dyke adds, “The nation has been investing for years in the acquisition of these valuable and perishable data, and is now investing in large databases of experimental data and also field mission data to preserve it and make it easier to study and distill the important lessons to improve the resilience of our communities.”

The method can also be applied to existing photos of recent disasters. The researchers gathered about 90,000 digital images from recent earthquakes in Nepal, Chile, Taiwan and Turkey for the algorithm to analyze.

Work on the technology started two years ago, and it recently earned a three-year, $299,000 grant from the National Science Foundation (NSF).

Source: Phys.org
