IS2 Lab: Certifiable Security for Intelligent Systems

About

The Intelligent System Security (IS2) Lab is a research group led by Prof. Jingyi Wang, affiliated with the College of Control Science and Engineering, Zhejiang University, Hangzhou, China. Our research focuses on developing novel software engineering (SE) methodologies for building more trustworthy AI models, systems, and applications. Our main research topics include:

  • Software engineering for trustworthy AI, e.g., testing, verification and repair of AI models or AI-based systems/applications;
  • AI safety, security and fairness;
  • Formal aspects of system security.

Collaborations

Our lab aims to conduct research with practical relevance and impact by working closely with industrial partners such as Huawei, Alibaba, Ant Group, and UWinTech. We also actively collaborate with leading universities, including ETH, UC Berkeley, UIUC, the University of Manchester, the National University of Singapore, and Singapore Management University.

Vacancies

Our lab is always looking for self-motivated postdocs, PhD and Master's students, research assistants, and research interns to work with us at ZJU, with competitive packages. If you are interested, feel free to contact Prof. Wang with your CV (and transcript for PhD/Master's applications). We welcome candidates from diverse backgrounds.


News

Apr 2025

One paper on optimal symbolic execution has been accepted by TSE 2025. Congrats to Shunkai!

Mar 2025

S-Eval has been accepted by ISSTA 2025. Congrats to Xiaohan!

Mar 2025

Prof. Wang will serve as a PC member of ASE 2025 and ISSRE 2025. Consider submitting your work!

Jan 2025

One paper has been accepted by WWW 2025. Congrats to Xinyao!

Jan 2025

Two papers have been accepted by ICSE 2025. Congrats to Ziyu and Zhiming!

Nov 2024

ICFEM 2025 will be held in Hangzhou, and Prof. Wang will serve as PC Co-chair. Consider submitting your work!

Oct 2024

Prof. Wang will serve as a PC member of ACM CCS 2025. Consider submitting your work!

Oct 2024

One paper on TVM fuzzing has been accepted by TOSEM. Congrats to Xiangxiang!

May 2024

We released our safety evaluation benchmark for LLMs (the largest to date), powered by automatic and adaptive test generation. See the paper, the GitHub repository, and the HuggingFace leaderboard for details.
