Hello, developers of ollama-python! Thank you for your hard work and dedication to building such an awesome library. Keep it up! 🚀
I have a few questions about using multimodal models.
1️⃣ Where can I find up-to-date information about what prompt techniques the model supports?
For example, the llama3.2-vision description page says nothing about whether the model supports techniques such as a system prompt, few-shot examples, or structured output. Am I right in thinking that you have to check each model's official GitHub page?
For example, it was only from this issue that I learned the model does not support few-shot prompting.
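One programmatic option is a sketch like the following. It assumes recent Ollama server versions, which advertise a `capabilities` list (e.g. `["completion", "vision"]`) for each model; the sample response shape and the helper names here are illustrative, not part of the documented API, so verify against your server version.

```python
# Sketch: checking a model's advertised features programmatically.
# ASSUMPTION: recent Ollama servers report a "capabilities" list in the
# show/api-show response; verify against your server version.

def summarize_capabilities(info: dict) -> list[str]:
    """Return the sorted capability names from a show() response dict."""
    return sorted(info.get("capabilities", []))

def check_model(name: str) -> list[str]:
    """Query a running Ollama server for a model's capabilities (not invoked here)."""
    import ollama  # requires the ollama-python client and a running server
    return summarize_capabilities(dict(ollama.show(name)))

# Illustrative response shape only -- not real server output:
sample = {"capabilities": ["vision", "completion"]}
print(summarize_capabilities(sample))  # prints ['completion', 'vision']
```

Whether a capability entry also implies support for a given prompting technique (few-shot, structured output) still appears to be model-specific, which is the gap Question 1 is about.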
===
2️⃣ Should the same byte representation of an image be used for both the few-shot examples and the actual request?
Wouldn't this also cause an assertion error on `assert cv.imread('dummy.jpg').tobytes() == img.tobytes()`? `imwrite` saves in RGB format while OpenCV internally uses BGR, doesn't it?
Never mind. The problem is that you're saving as JPEG, which compresses the image lossily, so the decompressed pixels are not exactly the original. Try saving as PNG instead.
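Following the advice above, one way to sidestep the round-trip mismatch is a sketch like this: load each image once as raw bytes from a losslessly encoded file (e.g. PNG) and reuse that exact representation for both the few-shot example messages and the real query. The helper names, model name, and file paths are placeholders; ollama-python's `chat()` accepts an `images` list of bytes or paths per message.

```python
# Sketch (not the library's documented pattern): reuse one byte
# representation of each image for few-shot examples and the actual
# request, so the two are byte-identical.

def image_message(role: str, text: str, image_bytes: bytes) -> dict:
    """Build one chat message carrying an image, in the shape chat() accepts."""
    return {"role": role, "content": text, "images": [image_bytes]}

def build_few_shot(examples, query_image: bytes, question: str) -> list[dict]:
    """Interleave (image_bytes, answer) few-shot pairs, then append the real query."""
    messages = []
    for img, answer in examples:
        messages.append(image_message("user", question, img))
        messages.append({"role": "assistant", "content": answer})
    messages.append(image_message("user", question, query_image))
    return messages

def ask(model: str, messages: list[dict]) -> str:
    """Send the conversation to a running Ollama server (not invoked here)."""
    import ollama  # requires the ollama-python client and a running server
    return ollama.chat(model=model, messages=messages)["message"]["content"]
```

Whether a given model actually honors few-shot image examples is model-dependent, as the linked issue in Question 1 notes.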