Load sentencepiece model in marianMT (huggingface transformers)
I’ve used sentencepiece to train a tokenizer, which gave me a .vocab file and a .model file. I want to load them into my MarianMT model, but I can’t figure out how to do it.
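Not an official recipe, but one way this is often approached: MarianTokenizer is built around sentencepiece files plus a JSON vocab, so you can point it at the trained .model file and a vocab converted from the TSV .vocab file. A sketch, assuming a recent transformers version and hypothetical file names spm.model / spm.vocab:

```python
import json
import sentencepiece as spm
from transformers import MarianTokenizer

# Hypothetical file names from the sentencepiece training run.
SPM_MODEL = "spm.model"
SPM_VOCAB = "spm.vocab"

# Sanity-check the trained model with sentencepiece itself.
sp = spm.SentencePieceProcessor(model_file=SPM_MODEL)
print(sp.encode("hello world", out_type=str))

# MarianTokenizer expects a JSON vocab (token -> id), not the TSV
# .vocab file sentencepiece emits, so convert it first.
with open(SPM_VOCAB, encoding="utf-8") as f:
    vocab = {line.split("\t")[0]: i for i, line in enumerate(f)}
# Marian expects a <pad> entry; sentencepiece does not emit one by default.
vocab.setdefault("<pad>", len(vocab))
with open("vocab.json", "w", encoding="utf-8") as f:
    json.dump(vocab, f, ensure_ascii=False)

# Marian uses separate source/target sentencepiece models; if you trained
# a single shared model, pass it for both.
tokenizer = MarianTokenizer(
    source_spm=SPM_MODEL,
    target_spm=SPM_MODEL,
    vocab="vocab.json",
)
print(tokenizer("hello world"))
```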
Trainer huggingface – RuntimeError: cannot pin ‘torch.cuda.FloatTensor’ only dense CPU tensors can be pinned
I recently got the following error:
RuntimeError: cannot pin 'torch.cuda.FloatTensor' only dense CPU tensors can be pinned
while doing LoRA fine-tuning on a small LLM.
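For context, this error typically means the dataset tensors already live on the GPU, and the Trainer’s DataLoader then tries to pin them; pinning only works on dense CPU tensors. A minimal sketch of the usual workaround (output_dir is a placeholder):

```python
from transformers import TrainingArguments

# Either keep the dataset on the CPU (the Trainer moves batches to the
# GPU itself) or disable pinning on the DataLoader:
args = TrainingArguments(
    output_dir="out",
    dataloader_pin_memory=False,  # skip pin_memory so CUDA tensors pass through
)
```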
Transformers error: RuntimeError: Failed to import transformers.training_args
I am trying to use transformers to build a chatbot.
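A hedged debugging step rather than a fix: this message is usually a lazy-import wrapper hiding the real ImportError (often a torch or accelerate version mismatch), and importing the submodule directly surfaces the root cause:

```python
import traceback

try:
    import transformers.training_args  # noqa: F401
except Exception:
    # The traceback names the actual missing or incompatible package.
    traceback.print_exc()
```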
Trainer Error: ImportError: Using the `Trainer` with `PyTorch`
I am trying to fine-tune a pretrained BERT model with the Transformers Trainer, using TrainingArguments to set some hyperparameters.
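If this is the common accelerate variant of the error, the usual resolution is installing or upgrading that package; the exact minimum version depends on the installed transformers release. A quick verification sketch:

```python
# Recent transformers releases require the `accelerate` package for the
# Trainer. Install or upgrade it, then restart the Python process
# (important in notebooks):
#   pip install --upgrade accelerate
import accelerate
import transformers

print(accelerate.__version__, transformers.__version__)
```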
Hugging Face Transformers embedding layer position and token_type embeddings
I am trying to evaluate different Integrated Gradients methods on my RoBERTa-based model, and I came across a paper introducing “Sequential Integrated Gradients” with this GitHub repo:
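Independent of that paper, the three embedding tables the question title refers to can be inspected directly on the model. A sketch assuming the roberta-base checkpoint (the shapes in the comments apply to that checkpoint):

```python
from transformers import RobertaModel

model = RobertaModel.from_pretrained("roberta-base")
emb = model.embeddings

# The input embedding is the sum of three lookups (followed by
# LayerNorm and dropout): word, position, and token_type embeddings.
print(emb.word_embeddings)        # Embedding(50265, 768, padding_idx=1)
print(emb.position_embeddings)    # Embedding(514, 768, padding_idx=1)
print(emb.token_type_embeddings)  # Embedding(1, 768): RoBERTa uses one type
```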
How to decode a sequence of tokens into multiple words in Hugging Face Transformers?
I’m working on an automatic natural language processing (NLP) project using Hugging Face transformers. I’m a real beginner, so maybe I’m misunderstanding something. I’ve created a custom model from GPTNeoForCausalLM that implements a customized forward method which always returns the same encoded sentence (“wrong token”).
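For the decoding part of the question, a small sketch of the relevant tokenizer calls, assuming the EleutherAI/gpt-neo-125M checkpoint as a stand-in for the custom model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")

ids = tokenizer("wrong token")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))  # raw BPE pieces ('Ġ' marks a space)
print(tokenizer.decode(ids))                 # full string: "wrong token"

# Decode each id separately to get one string per token:
print([tokenizer.decode([i]) for i in ids])
```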
How do I specify that different Huggingface Trainers in the same process use different GPUs?
I have instantiated multiple Hugging Face Trainers in the same process, and each of them trains a different model. I need to measure the GPU cost of each Trainer, so each Trainer should have exclusive use of one GPU during training to get accurate statistics.
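Since the Trainer picks its device globally, isolating Trainers inside one process is fragile; a more reliable pattern is one subprocess per model, each seeing exactly one GPU. A sketch where train_one.py is a hypothetical script that builds and runs a single Trainer:

```python
import os
import subprocess

procs = []
for gpu, model_name in enumerate(["model_a", "model_b"]):
    # Each child process sees only its own GPU, so its Trainer (and any
    # memory accounting) is confined to that device.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
    procs.append(subprocess.Popen(
        ["python", "train_one.py", "--model", model_name], env=env
    ))
for p in procs:
    p.wait()
```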
Fine-tuning CodeBert for classification with more than 512 tokens
I have a dataset of smart contract source code written in Solidity, labeled with the one or more vulnerabilities each contract contains.
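One common way around the 512-token limit is sliding-window chunking at tokenization time, then pooling the per-chunk predictions back to one label set per contract (e.g. a max over per-label scores). A sketch assuming the microsoft/codebert-base checkpoint and its fast tokenizer:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")

source_code = "contract C { function f() public {} }"  # placeholder Solidity

# Split long inputs into overlapping 512-token windows; each window is
# classified separately and the chunk predictions are pooled afterwards.
enc = tokenizer(
    source_code,
    max_length=512,
    truncation=True,
    stride=128,                      # overlap between consecutive windows
    return_overflowing_tokens=True,
    padding="max_length",
    return_tensors="pt",
)
print(enc["input_ids"].shape)  # (num_windows, 512)
```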