🐢 Open-Source Evaluation & Testing for AI & LLM systems
-
Updated
Dec 13, 2024 - Python
🐢 Open-Source Evaluation & Testing for AI & LLM systems
Corruption and Perturbation Robustness (ICLR 2019)
A Harder ImageNet Test Set (CVPR 2021)
Deep Anomaly Detection with Outlier Exposure (ICLR 2019)
Deliver safe & effective language models
PromptInject is a framework that assembles prompts in a modular fashion to provide a quantitative analysis of the robustness of LLMs to adversarial prompt attacks. 🏆 Best Paper Awards @ NeurIPS ML Safety Workshop 2022
Self-Supervised Learning for OOD Detection (NeurIPS 2019)
Aligning AI With Shared Human Values (ICLR 2021)
ImageNet-R(endition) and DeepAugment (ICCV 2021)
Repo for "Benchmarking Robustness of 3D Point Cloud Recognition against Common Corruptions" https://arxiv.org/abs/2201.12296
📚 A curated list of papers & technical articles on AI Quality & Safety
The Combined Anomalous Object Segmentation (CAOS) Benchmark
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
[ICML 2019] ME-Net: Towards Effective Adversarial Robustness with Matrix Estimation
Predicting Out-of-Distribution Error with the Projection Norm
Evaluation & testing framework for computer vision models
Code for the attack multiplicative filter attack MUFIA, from the paper "Frequency-based vulnerability analysis of deep learning models against image corruptions".
This repository contains the project for the Advanced AI course @CentraleSupélec
Add a description, image, and links to the ml-safety topic page so that developers can more easily learn about it.
To associate your repository with the ml-safety topic, visit your repo's landing page and select "manage topics."