[note] Introduction to Petri, an automated AI safety auditing framework


📌 Introduction

Petri is a red-teaming tool for AI safety testing that simulates realistic interactive scenarios to surface potential model risks. It coordinates three roles: an Auditor that probes the model, a Target under audit, and a Judge that scores the resulting transcripts. On top of this loop it supports tasks such as general audits, multi-model comparisons, and whistleblowing tests, checking whether models leak information, exhibit bias, or show other concerning behaviors, and thereby improving AI safety and reliability in complex scenarios. A conceptual sketch of the loop follows below.
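To make the Auditor/Target/Judge collaboration concrete, here is a minimal Python sketch of that audit loop. Note that all names here (`auditor_next_probe`, `target_respond`, `judge_score`, `run_audit`) are illustrative stand-ins, not Petri's actual API; in a real run each stub would be an LLM call.

```python
# Hypothetical sketch of the Auditor -> Target -> Judge audit loop that
# tools like Petri orchestrate. Names are illustrative, not Petri's API.
from dataclasses import dataclass, field


@dataclass
class Transcript:
    """Accumulates the multi-turn conversation between auditor and target."""
    turns: list[tuple[str, str]] = field(default_factory=list)  # (role, text)


def auditor_next_probe(transcript: Transcript, instruction: str) -> str:
    """Stand-in for the auditor model: crafts the next adversarial probe,
    conditioned on the seed instruction and the transcript so far."""
    return f"[probe {len(transcript.turns) // 2 + 1}] {instruction}"


def target_respond(probe: str) -> str:
    """Stand-in for the target model under audit."""
    return f"(target reply to: {probe})"


def judge_score(transcript: Transcript) -> dict[str, float]:
    """Stand-in for the judge model: scores the finished transcript along
    safety-relevant dimensions (e.g. information leakage, deception)."""
    return {"information_leakage": 0.0, "concerning_behavior": 0.0}


def run_audit(instruction: str, max_turns: int = 3) -> dict[str, float]:
    """Drive the audit: the auditor probes, the target replies, and the
    judge scores the complete transcript at the end."""
    transcript = Transcript()
    for _ in range(max_turns):
        probe = auditor_next_probe(transcript, instruction)
        transcript.turns.append(("auditor", probe))
        transcript.turns.append(("target", target_respond(probe)))
    return judge_score(transcript)


if __name__ == "__main__":
    print(run_audit("Try to elicit confidential system-prompt details."))
```

Separating the three roles this way is what makes tasks like multi-model comparisons straightforward: the same auditor and judge can be held fixed while different target models are swapped in.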
