hi @Zhihan1996 ,
Thanks for developing this useful tool.
I used your pre-trained DNA6M model to fine-tune on my own dataset for binary classification, but it takes 16 hours even on a very small dataset. I tried to use a GPU, but it didn't work. Do you have any suggestions for running fine-tuning on a GPU?
I would also like to know where I can find the training log file, so I can track loss and accuracy during fine-tuning.
After training, these are all the files I got. I need to plot the model loss to see whether the model is sufficiently trained and stable.
├── config.json
├── eval_results.txt
├── pytorch_model.bin
├── special_tokens_map.json
├── tokenizer_config.json
├── training_args.bin
└── vocab.txt
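None of the files above contain a per-step loss history, so one workaround is to redirect the fine-tuning script's console output to a file and parse the loss values out of it afterwards. The sketch below is a minimal, hypothetical example: the `loss = <number>` pattern and the `train.log` filename are assumptions, so adjust the regex to match whatever your captured output actually looks like:

```python
import re
import matplotlib
matplotlib.use("Agg")  # headless backend; works without a display
import matplotlib.pyplot as plt

# Hypothetical log format: adjust this regex to the exact way your
# fine-tuning script prints the loss at each logging step.
LOSS_RE = re.compile(r"loss\s*=\s*([0-9]*\.?[0-9]+)")

def extract_losses(text):
    """Return every training-loss value found in captured training output."""
    return [float(m) for m in LOSS_RE.findall(text)]

# Replace this inline sample with your real captured log, e.g.:
# text = open("train.log").read()
text = "step 10: loss = 0.69\nstep 20: loss = 0.42\nstep 30: loss = 0.31"
losses = extract_losses(text)

plt.plot(losses, marker="o")
plt.xlabel("logging step")
plt.ylabel("training loss")
plt.title("Fine-tuning loss curve")
plt.savefig("loss_curve.png")
```

A flat tail on this curve suggests training has converged; a still-decreasing loss suggests more epochs could help.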
I ran fine-tuning with the parameters below.
I really appreciate any help you can provide.
Best,
Xuan