Organizations

@PKU-Alignment


Pinned

  1. PKU-Alignment/safe-rlhf (Public)

    Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback

    Python · 1.4k stars · 120 forks

  2. PKU-Alignment/omnisafe (Public)

    JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.

    Python · 946 stars · 132 forks

  3. PKU-Alignment/beavertails (Public)

    BeaverTails is a collection of datasets designed to facilitate research on safety alignment in large language models (LLMs).

    Makefile · 112 stars · 5 forks

  4. PKU-Alignment/safe-sora (Public)

    SafeSora is a human preference dataset designed to support safety alignment research in text-to-video generation, aiming to enhance the helpfulness and harmlessness of Large Vision Models…

    Python · 26 stars · 5 forks