
Research Blog

  • chenjj10


I am excited to share that my PhD thesis was recently awarded "excellent doctoral dissertation in the Hydraulic category" nationwide. The award is extremely competitive: of the many hydraulics-related doctorates granted across the country each year, only 33 dissertations qualified for the evaluation round, and only 10 were ultimately selected. Winning such an award is a great honor and a huge recognition of my work. My sincere gratitude goes to Prof. Donghai Liu of Tianjin University, my PhD supervisor, who played an important supportive role in pushing me to complete the work.

I passed my PhD defense and earned my doctoral degree in the summer of 2020. The title of my dissertation is "耦合BIM的长距离输水渠道无人机巡检与险情智能图像识别研究" (English title: BIM-assisted UAV Safety Inspection and Automated Visual Recognition of Hazards for Water Diversion Channel). The research systematically explores the theory of, and specific approaches to, integrating unmanned aerial vehicles (UAVs), building information modeling (BIM), and visual computing into the safety inspection and hazard recognition of water channels.

Overarching framework of the research, which covers the entire process of “Aerial image collection – BIM integration – Image preprocessing – Image recognition”.

Augmenting aerial inspection with digital contents retrieved from BIM.
Registration of aerial photographs with BIM to extract regions of interest for hazard detection.
Automated detection of slope damages from the collected aerial photographs.
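As a rough illustration of the registration step above, a georeferenced BIM vertex can be projected into an aerial photo with the standard pinhole model once the camera's intrinsic and extrinsic parameters are known; the projected geometry then delimits the regions of interest for hazard detection. The following sketch uses hypothetical matrices and coordinates, not values from the dissertation:

```python
import numpy as np

def project_point(K, R, t, world_point):
    """Project a georeferenced 3D point (e.g., a BIM vertex) into an
    image using the pinhole model: x = K [R | t] X."""
    p_cam = R @ world_point + t   # world -> camera coordinates
    p_img = K @ p_cam             # camera coordinates -> image plane
    return p_img[:2] / p_img[2]   # perspective division -> pixel (u, v)

# Hypothetical camera: 1000 px focal length, principal point (640, 360),
# looking straight down the world Z axis from 10 m above the origin.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 10.0])

uv = project_point(K, R, t, np.array([1.0, 2.0, 0.0]))
# uv -> pixel coordinates (740.0, 560.0)
```

In practice the extrinsic pose (R, t) would come from the UAV's georeferencing data, as in paper [4] below.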

Relevant Content

Interested readers can access the dissertation here:

Contents in the dissertation have also been published separately as journal papers:

[1] Chen, J., & Liu, D. (2021). Detecting, extracting and classifying foreign objects in inter-basin channels to ensure water supply safety. Journal of Hydroinformatics, jh20211118. [Linkage]

[2] Chen, J., Liu, D., & Li, X. (2021). Extracting Water Channels from Aerial Videos based on Image-to-BIM Registration and Spatio-Temporal Continuity. Automation in Construction, 132, 103970. [Linkage]

[3] Chen, J., & Liu, D. (2020). Bottom-up image detection of water channel slope damages based on superpixel segmentation and support vector machine. Advanced Engineering Informatics, 47, 101205. [Linkage]

[4] Chen, J., Liu, D., Li, S., & Hu, D. (2019). Registering Georeferenced Photos to a Building Information Model to Extract Structures of Interest. Advanced Engineering Informatics, 42, 100937. [Linkage]

[5] Liu, D., Chen, J., Hu, D., & Zhang, Z. (2019). Dynamic BIM-Augmented UAV Safety Inspection for Water Diversion Project. Computers in Industry, 108, 163-177. [Linkage]

[6] Liu, D., Chen, J., Li, S., & Cui, W. (2018). An integrated visualization framework to support whole-process management of water pipeline safety. Automation in Construction, 89, 24-37. [Linkage]

[7] Liu, D., Xia, X., Chen, J., & Li, S. (2021). Integrating Building Information Model and Augmented Reality for Drone-Based Building Inspection. Journal of Computing in Civil Engineering, 35(2), 04020073. [Linkage]



The monocular vision algorithms proposed by iLab provide a new solution for quantifying construction waste efficiently and reliably.

Construction is an emission-intensive industry, and construction waste management (CWM) can help alleviate its adverse impact on the environment. Effective CWM requires quantitative information on the composition and amount of the waste, which, however, is difficult to obtain because of the bulky, cluttered, and mixed nature of the materials.

Typical construction waste dumps, which are bulky and cluttered

To tackle the challenge of construction waste quantification, my colleagues at iLab and I developed a computer vision algorithm that requires only a single camera and other common sensors (e.g., a weighbridge and range finders) to estimate the composition and volumes of the different materials in a waste dump.
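The core geometric idea can be sketched in a few lines: once the scene is reconstructed into a top-down depth grid (with the range finders providing the reference depth of the empty container floor), the waste volume is the per-cell height above the floor summed over the grid. This is a simplified illustration with hypothetical numbers, not the full pipeline from our papers:

```python
import numpy as np

def estimate_waste_volume(depth_map, empty_depth, cell_area):
    """Estimate waste volume (m^3) from a top-down depth grid.

    depth_map:   measured distance from sensor to waste surface (m)
    empty_depth: distance from sensor to the empty container floor (m)
    cell_area:   ground area covered by one grid cell (m^2)
    """
    # Waste height per cell; clip negatives from sensor noise to zero.
    heights = np.clip(empty_depth - depth_map, 0.0, None)
    return float(heights.sum() * cell_area)

# Hypothetical 2x2 grid: floor 3.0 m below the sensor,
# each cell covering 0.25 m^2 of ground.
depth = np.array([[2.0, 2.5],
                  [3.0, 1.5]])
vol = estimate_waste_volume(depth, empty_depth=3.0, cell_area=0.25)
# heights are [[1.0, 0.5], [0.0, 1.5]], so vol = 3.0 * 0.25 = 0.75 m^3
```

The actual approach recovers the 3D surface from a single calibrated camera rather than a dense depth sensor; see paper [2] below for the details.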

Our proposed monocular vision approach for construction waste quantification

(a) Photos used to calibrate the camera; (b) Reference points used to calculate the camera's extrinsic parameters; (c) The installed range finders for waste depth measurement.

Examples of the waste quantification results. First column: the input photo; second column: the reconstructed 3D geometry of the waste materials; third column: the 3D semantic segmentation of the waste materials with estimated volumes.

Relevant Content

For more details, please refer to our papers published in Resources, Conservation and Recycling (RCR):

[1] Lu, W., Chen, J., & Xue, F. (2022). Using computer vision to recognize composition of construction waste mixtures: A semantic segmentation approach. Resources, Conservation and Recycling, 178, 106022.

[2] Chen, J., Lu, W., Yuan, L., Wu, Y., & Xue, F. (2022). Estimating construction waste truck payload volume using monocular vision. Resources, Conservation and Recycling, 177, 106013.
