We are grateful for research sponsorship from NSFC, CCF, and multiple industrial partners, including Huawei, Alibaba Group, and Ant Group. We are currently focused on the following research themes.
For AI models (e.g., large language models and deep learning models in general) and AI-based systems (e.g., autonomous cars), we are working towards a systematic testing–verification–repair loop that comprehensively and automatically evaluates, identifies, and fixes potential risks across multiple dimensions, e.g., robustness, fairness, safety, and copyright. This line of research is crucial for understanding, managing, and mitigating the risks posed by the rapid emergence of diverse AI models and AI-based systems.
Selected related publications: [RA-L 24, TOSEM 24, ICSE 24, ISSTA 24, TDSC 24, ICSE Demo 23, ISSTA 23, S&P 22, ICSE 22, TOSEM 22, ASE 22, ICSE 21, TSE 21, TACAS 21, ISSTA 21, IJCIP, ICSE 20, ASE 20, ICECCS 20, ICSE 19]
Software is the core driving force behind the digital operation of industrial safety-critical systems (e.g., industrial control systems and autonomous systems). It is thus crucial to formally verify the correctness of their software foundations (e.g., OS kernels, compilers, security protocols, and control programs). In this line of research, we are developing new logic foundations and specifications to better model, test, and verify the desired safety and security properties across different system software layers (especially those commonly used in safety-critical industries).
Related publications: [TSE 24, AsiaCCS/CPSS 24, TSE 23, CCS 23, CONFEST/FMICS 23, FITEE 22, IoT 22, TSE 21, ICSE 18, DSN 18, STTT 18, FM 18, FASE 17, FM 16]