PKU-Alignment Team

Large language models (LLMs) hold immense potential for general intelligence but also carry significant risks. As a research team at Peking University, we focus on alignment techniques for LLMs, such as safety alignment, to improve model safety and reduce toxicity.

We welcome you to follow our AI safety projects: