How to Optimize Prompt Engineering for Financial Advice from the FinguAI Language Model?
I’m working on a personal finance application that utilizes the FinguAI language model (https://huggingface.co/FINGU-AI/FinguAI-Chat-v1) to generate personalized insights and financial advice based on user-provided expense data. However, I’m facing challenges in getting the model to produce accurate and relevant responses.
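For context, here is a minimal sketch of the kind of prompt template I'm experimenting with. The field names, schema, and instruction wording are my own assumptions for illustration, not anything from FinguAI's documentation:

```python
# Illustrative prompt template for expense data. The record schema and
# the instruction text are assumptions, not part of FinguAI's docs.
def build_prompt(expenses, question):
    """Format user expense records into a structured prompt."""
    lines = [f"- {e['category']}: ${e['amount']:.2f}" for e in expenses]
    return (
        "You are a personal finance assistant.\n"
        "Monthly expenses:\n" + "\n".join(lines) + "\n\n"
        f"Question: {question}\n"
        "Answer with specific, actionable advice."
    )

expenses = [
    {"category": "Rent", "amount": 1200.0},
    {"category": "Groceries", "amount": 350.50},
]
prompt = build_prompt(expenses, "Where can I cut spending?")
print(prompt)
```

The idea is to keep the expense data in a fixed, machine-readable layout so the model sees the same structure on every call; whether this actually improves FinguAI's answers is exactly what I'm unsure about.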
Phi-3-vision model: image token count
I am looking at using Phi-3-vision models to describe an image. However, I couldn't help but notice that a single image consumes a surprisingly large number of tokens (~2,000). Is this correct, or a potential bug? I have included a code snippet so that you can check my assumptions:
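As a sanity check on the order of magnitude, here is a back-of-the-envelope calculation of why counts around ~2,000 are plausible for a ViT-style image encoder. The crop size, patch size, and number of crops below are illustrative assumptions, not verified Phi-3-vision internals:

```python
def vit_tokens(image_side: int, patch_size: int) -> int:
    """Number of patch tokens a square crop yields in a ViT encoder."""
    return (image_side // patch_size) ** 2

# A single 336x336 crop with 14x14 patches:
per_crop = vit_tokens(336, 14)  # 576 tokens

# If the processor tiles a larger image into several crops plus a
# global view (an assumption about the HD strategy, not a verified
# detail), the total grows quickly:
num_crops = 3  # hypothetical tiling for a moderately sized image
total = (num_crops + 1) * per_crop
print(per_crop, total)  # 576 2304
```

So a few thousand tokens per image is not obviously a bug; it may simply be the cost of the multi-crop encoding.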
Is this the correct method to get the probability of a specific token in a generated sequence using HuggingFace?
I want to compute the probability of a specific token t given a prompt p. Is this way of calculating it correct? I'm extracting the logit for t from the logit vector and then applying softmax. Am I missing anything?