PEER PHI-Net Framework and Data Sets

February 11, 2020

PHI-Net Framework

The PEER Hub ImageNet (Φ-Net or PHI-Net) is a general automated framework (see figure) for vision-based structural health monitoring (SHM) and rapid damage assessment after disasters. Drawing on past reconnaissance efforts and related studies, Φ-Net organizes vision-based detection tasks into a tree-like hierarchical framework. Based on this framework, eight benchmark classification tasks are proposed: (1) scene level, (2) damage state, (3) spalling condition, (4) material type, (5) collapse mode, (6) component type, (7) damage level, and (8) damage type.
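
For readers who prefer code to a diagram, the eight benchmark tasks can be captured in a simple lookup structure. The sketch below is only an illustrative Python fragment; the integer task identifiers and the helper function are assumptions made for readability and are not part of the official Φ-Net specification, and the hierarchical relationships among the tasks are defined in the figure and in PEER Report 2019/07 rather than here.

    # Illustrative sketch only: the eight Phi-Net benchmark classification tasks.
    # Task numbering follows the list above; the dict and helper are hypothetical.
    PHI_NET_TASKS = {
        1: "scene level",
        2: "damage state",
        3: "spalling condition",
        4: "material type",
        5: "collapse mode",
        6: "component type",
        7: "damage level",
        8: "damage type",
    }

    def task_name(task_id: int) -> str:
        """Return the benchmark task name for a 1-based task id."""
        return PHI_NET_TASKS[task_id]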

According to the Φ-Net framework, a large number of structural images were collected, preprocessed, labeled, and split to form a large-scale dataset, named the Φ-Net dataset, containing 36,413 pairs of images as of October 2019. The Φ-Net dataset is the first large-scale, high-variety, multi-attribute labeled structural image dataset, and it may help introduce artificial intelligence (AI), machine learning (ML), and deep learning (DL) technologies to the structural engineering community. Furthermore, eight labeled sub-datasets corresponding to the eight detection tasks are now open-sourced at https://apps.peer.berkeley.edu/phi-net/.
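
As a concrete example of how one of the downloaded sub-datasets might be consumed, the following sketch loads images with PyTorch/torchvision. The directory layout, folder names, image size, and batch size are illustrative assumptions; the actual packaging of the Φ-Net sub-datasets may differ from what is shown here.

    # Minimal sketch, assuming a downloaded sub-dataset has been unpacked into
    # class-named folders, e.g. phi_net/scene_level/<class name>/... .
    # Folder names, image size, and batch size are illustrative assumptions.
    import torch
    from torchvision import datasets, transforms

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),   # a common CNN input size
        transforms.ToTensor(),
    ])

    dataset = datasets.ImageFolder("phi_net/scene_level", transform=preprocess)
    loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

    images, labels = next(iter(loader))
    print(images.shape, dataset.classes)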

Benefits to the PEER Community

The concept of the automated Φ-Net framework is developed in PEER Report 2019/07, and the eight Φ-Net sub-datasets are open-sourced at https://apps.peer.berkeley.edu/phi-net/. Benefits to the PEER community include:

  1. The Φ-Net framework is general and generic, and its branches can be further expanded based on users' needs. The concept of such an automated hierarchical representation provides a reference for future studies.
  2. The eight defined vision tasks and eight open-sourced sub-datasets provide benchmarks for related studies. The relatively large scale and high variety of the Φ-Net dataset bring it closer to real-world conditions, making it a useful benchmarking reference. For example, users developing their own algorithms or AI models for similar detection tasks can test them not only on their own collected data but also on the Φ-Net datasets for validation (see the sketch after this list).
  3. The open-sourced paired images and labels can be used directly for ML or DL training in similar studies. These images can also complement a user's own dataset, and users can relabel them for their own needs, e.g., different classification tasks, or even extend them to localization and segmentation tasks.
  4. The best DL models trained on Φ-Net, named Structural ImageNet Models (SIM), achieve promising results in several detection cases, e.g., scene level identification, damage detection, and spalling condition recognition. These models are expected to be deployed online soon, and the PEER community will be able to use them for real-time automated recognition simply by uploading structural images.
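
Expanding on item 2 above, a user's own classifier could be validated on a Φ-Net sub-dataset along the following lines. This is a hedged sketch only: the model checkpoint, directory layout, and accuracy computation are generic placeholders, not the evaluation protocol of PEER Report 2019/07.

    # Hedged sketch of validating a user-trained model on a Phi-Net sub-dataset.
    # The checkpoint path and folder layout are hypothetical placeholders.
    import torch
    from torchvision import datasets, transforms

    preprocess = transforms.Compose([transforms.Resize((224, 224)),
                                     transforms.ToTensor()])
    val_set = datasets.ImageFolder("phi_net/damage_state", transform=preprocess)
    val_loader = torch.utils.data.DataLoader(val_set, batch_size=64)

    model = torch.load("my_classifier.pt")   # user's own trained model (placeholder)
    model.eval()

    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)

    print(f"Validation accuracy on the Phi-Net sub-dataset: {correct / total:.3f}")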

Contributions from the PEER Community

As mentioned above, Φ-Net is a generic framework that currently contains only eight well-defined detection tasks, so the PEER community can contribute to and extend it in several ways:

  1. Add more vision tasks to expand branches, adjust the inter-task relationships, etc.
  2. Upload new structural images to the Φ-Net database and label the existing images. All such contributions will be acknowledged. The previous version of the database and labeling tool is available at https://apps.peer.berkeley.edu/spo, and a new version is under development and will be released soon at https://apps.peer.berkeley.edu/phi-net/.

Moreover, the PEER community is encouraged to use the Φ-Net dataset as an additional validation dataset, beyond their own data, for assessing model generalization and comparing against the benchmark results presented in PEER Report 2019/07.

How to Access

The Φ-Net datasets are open-sourced at https://apps.peer.berkeley.edu/phi-net/, where users provide the requested information; download links are then sent directly to the provided email address.

For more information, refer to PEER Report 2019/07.