Sungmin Cha's Post

I'm excited to share that I've updated the code for our AAAI 2024 paper, "Learning to Unlearn (L2UL)," in response to many requests. The updates include experimental code for ViT, the UTKFace dataset, and more. Our paper introduces an instance-wise unlearning method for pretrained classifiers that uses only the forgetting data, adversarial examples, and weight importance. This approach can serve as a baseline for various unlearning scenarios in classification tasks and LLMs, and it is already being applied across different domains! I hope these updates will be helpful for your future research. GitHub: https://lnkd.in/ev2aTEjp Paper: https://lnkd.in/eTYMd8VZ #MachineUnlearning #Unlearning
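The post describes the method only at a high level. As a rough illustration of that general recipe, and emphatically not the paper's actual implementation (see the GitHub repo for that), one unlearning step might combine gradient ascent on the forgetting data with a weight-importance penalty. Every name below (forget_batch, importance, theta_star) is hypothetical, and the paper's adversarial-example component is omitted:

```python
import torch
import torch.nn.functional as F

def unlearning_step(model, optimizer, forget_batch, importance, theta_star, lam=1.0):
    """One hypothetical instance-wise unlearning step: push the model away
    from its predictions on the forgetting data (gradient ascent) while a
    weight-importance penalty keeps parameters that matter for the remaining
    data close to their pretrained values theta_star.

    NOTE: illustrative sketch only; L2UL additionally uses adversarial
    examples, which this toy version leaves out.
    """
    x, y = forget_batch
    optimizer.zero_grad()
    # Negative cross-entropy on the forget set = gradient ascent on its loss.
    loss = -F.cross_entropy(model(x), y)
    # Quadratic penalty weighted by per-parameter importance
    # (e.g., a Fisher-information estimate), anchoring to theta_star.
    for (name, p), p0 in zip(model.named_parameters(), theta_star):
        loss = loss + lam * (importance[name] * (p - p0) ** 2).sum()
    loss.backward()
    optimizer.step()
    return loss.item()
```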
More Relevant Posts
-
I am excited to share my first blog post, "No Straight Lines Here: The Wacky World of Non-Linear Manifold Learning." Have you ever encountered complex data and struggled to make sense of it? In this post, I delve into the fascinating world of manifold learning, which helps us visualize hidden structure in data that traditional linear methods often miss. Discover how algorithms like Sammon Mapping, Isomap, Laplacian Eigenmaps, t-SNE, and UMAP tackle the challenge of unraveling the hidden structures within your data. Read the full post here: https://lnkd.in/gt-JXCnd P.S. This post was written as part of the OMSCS ML blog post series. Hope you enjoy the read. #machinelearning #OMSCS #datascience
No Straight Lines Here: The Wacky World of Non-Linear Manifold Learning
https://sites.gatech.edu/omscs7641
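For a quick hands-on taste of the ideas in the post, here is a small self-contained sketch (my own example using scikit-learn; the blog itself may use different tools) comparing linear PCA with Isomap and t-SNE on the classic Swiss-roll dataset:

```python
# Unroll the Swiss roll: linear PCA vs. two non-linear manifold methods.
import matplotlib.pyplot as plt
from sklearn.datasets import make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, TSNE

X, color = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

embeddings = {
    "PCA (linear)": PCA(n_components=2).fit_transform(X),
    "Isomap": Isomap(n_neighbors=12, n_components=2).fit_transform(X),
    "t-SNE": TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X),
}

fig, axes = plt.subplots(1, 3, figsize=(15, 4))
for ax, (name, Y) in zip(axes, embeddings.items()):
    # Color encodes position along the roll; the non-linear methods
    # recover the unrolled 2D manifold, while PCA just projects it flat.
    ax.scatter(Y[:, 0], Y[:, 1], c=color, s=5, cmap="viridis")
    ax.set_title(name)
plt.tight_layout()
plt.show()
```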
-
#machinelearning Classification in ML is a method for teaching a program to recognize, understand, and group data into predefined categories, also known as "sub-populations." It starts with a training dataset in which each piece of data is already categorized. Read more about 👇
📌 Types of Classification Algorithms
📌 Different Types of Classification Tasks in Machine Learning, With Real-Life Examples
📌 Learners in Classification Problems
📌 Classification vs. Regression
in our latest #blogpost 👇 https://lnkd.in/gG9rjC3Z #machinelearning #classificationmachinelearning #ieeeblendedlearningprogram #ieee #blendedlearningprogram #learnmachinelearning
What is Classification in Machine Learning: Types & Models
https://blp.ieee.org
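To make the training-then-predicting workflow concrete, here is a minimal sketch (my own scikit-learn example, not code from the linked article):

```python
# Minimal classification workflow: train on labeled data, then assign
# new samples to one of the predefined categories.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)  # 3 predefined classes of iris flowers
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)  # discrete class labels, unlike regression's continuous outputs
print(f"Test accuracy: {accuracy_score(y_test, pred):.2f}")
```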
-
Transitioning from ML to LLMs is easier when you grasp these 5 key patterns:
1. Labels → Texts. In ML, we use datasets with input signals and output labels. In LLMs, the dataset consists of prompts (as inputs) and example texts (as outputs).
2. Model Training → Fine-Tuning. In ML, models are trained from scratch. In LLMs, we start with pre-trained models like GPT, Claude, or Llama and fine-tune them to fit specific datasets.
3. Feature Engineering → Prompt Engineering. In ML, we improve model accuracy using techniques like one-hot encoding. In LLMs, we focus on crafting better prompts, for example using K-shot learning to improve output quality.
4. Evaluation → Evals. In ML, evaluation is done through methods like cross-validation or metrics such as AUC/MSE. In LLMs, custom evals such as semantic similarity or human-in-the-loop judgments are key.
5. MLflow → LangSmith. In ML, we use tools like MLflow for operationalization. In LLMs, LangSmith (or Langfuse) is the tool for operationalizing and monitoring performance.
Adopt these strategies, and the transition becomes seamless!
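To ground patterns 3 and 4, here is a small self-contained sketch. The function names are my own, and the TF-IDF cosine score is a deliberately cheap stand-in for a real embedding-based semantic-similarity eval:

```python
# Pattern 3: prompt engineering via K-shot examples.
# Pattern 4: a toy "semantic similarity" eval.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def k_shot_prompt(examples, query):
    """Assemble a K-shot prompt from (input, output) example pairs."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\n\nInput: {query}\nOutput:"

def semantic_similarity(reference, candidate):
    """Score a model output against a reference answer in [0, 1].
    TF-IDF cosine here; a production eval would use a real embedding model."""
    tfidf = TfidfVectorizer().fit_transform([reference, candidate])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

examples = [("2 + 2", "4"), ("10 - 3", "7")]
print(k_shot_prompt(examples, "5 + 8"))
print(semantic_similarity("The capital of France is Paris.",
                          "Paris is France's capital."))
```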
-
Completed another course in the Machine Learning Specialization, covering #supervisedml, #advancedlearningalgorithms, #unsupervisedlearning, #recommenders, and #reinforcementlearning. Excited to delve deeper into #reinforcementlearning
Completion Certificate for Machine Learning
coursera.org
-
Output from the K-means clustering code in machine learning. #prodigy #machinelearning
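Since the original code isn't shown in the post, here is a minimal K-means sketch (my own, using scikit-learn) that produces the kind of output such a post typically displays:

```python
# Minimal K-means run: fit 3 clusters on synthetic blobs and print the
# typical outputs, cluster assignments, centroids, and inertia.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print("First 10 cluster labels:", km.labels_[:10])
print("Cluster centers:\n", km.cluster_centers_)
print("Inertia (within-cluster SSE):", round(km.inertia_, 2))
```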
-
The increasing importance of Large Language Models (LLMs) has transformed the AI landscape, as they give machines the ability to understand and produce human language.
-
M.Tech. | CSPML (Communication Signal Processing and Machine Learning) | IIT Dharwad
2mo: Very helpful!