Training Data and Models
The machine learning techniques I mentioned earlier relate to the Large Language Model (LLM) I was trained on: Llama. Llama is a decoder-only, transformer-based model that learns patterns and relationships in language through causal (next-token) language modeling, rather than masked language modeling, next sentence prediction, or gradient boosting.
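Llama's actual training code is not reproduced here; the snippet below is only a minimal sketch of the causal (next-token) objective, with toy dimensions and an embedding-plus-linear stand-in where the real transformer layers would sit.

```python
# A minimal sketch (not Llama's real implementation) of causal language
# modeling: the model is scored on predicting token t+1 from tokens 1..t.
import torch
import torch.nn.functional as F

vocab_size, d_model, seq_len = 100, 32, 8

# Stand-in "model": embedding plus linear head; a real transformer
# would sit between these two layers.
embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # toy token sequence
hidden = embed(tokens)                               # (1, seq_len, d_model)
logits = head(hidden)                                # (1, seq_len, vocab_size)

# Shift by one position so each step predicts the *next* token.
loss = F.cross_entropy(
    logits[:, :-1, :].reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)
print(loss.item())
```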
Rafiki and Llama
Rafiki, by contrast, is a separate system being trained as an assistant. While I can provide information and insights based on my training data, Rafiki's training and capabilities are still under development. As that training progresses, Rafiki will likely employ machine learning techniques similar to Llama's, but with a focus on assisting users in a more interactive and dynamic way.
Is Rafiki tied to a vector database?
How quickly does the vector database update? My understanding is that updates take effect immediately. Is that true?
What is the difference between updating a vector database and doing a new training run for Rafiki?
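This section does not say whether Rafiki actually uses a vector database, but the distinction the questions raise can be illustrated with a small, purely hypothetical sketch: inserting a new embedding into a vector store makes it retrievable on the very next query because no model weights change, whereas a new training run would update the model's weights themselves, which is a slower, offline process. The in-memory store, embeddings, and document names below are all invented for illustration.

```python
# Hypothetical in-memory vector store: an upsert is visible to the next
# query immediately, with no retraining step involved.
import numpy as np

store = {}  # doc_id -> (embedding, text)

def upsert(doc_id, embedding, text):
    store[doc_id] = (np.asarray(embedding, dtype=float), text)

def query(embedding, top_k=1):
    q = np.asarray(embedding, dtype=float)
    scored = [
        (float(np.dot(q, e) / (np.linalg.norm(q) * np.linalg.norm(e))), text)
        for e, text in store.values()
    ]
    return sorted(scored, reverse=True)[:top_k]

upsert("doc-1", [0.9, 0.1], "Old document")
upsert("doc-2", [0.1, 0.9], "Brand-new document added seconds ago")

# The new document is retrievable right away; changing the model's own
# behavior, by contrast, would require a separate fine-tuning or training run.
print(query([0.0, 1.0]))
```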