Shift happens: we compared 5 methods to detect drift in ML embeddings

Presentation

Monitoring embedding drift is important when running LLM and NLP models in production. EvidentlyAI ran experiments comparing different drift detection methods, implemented them in an open-source library, and recommends model-based drift detection as a good default.

What you’ll find in the blog:

  • Experiment design. We created artificial shifts on three datasets and chose five embedding drift detection methods to test.
  • Comparison of drift detectors. We summarize how each method works and how it reacts to different kinds of data change, to help build intuition about the behavior of the different methods.
  • Colab notebooks with all the code. You can repeat the comparisons on your own data by introducing artificial shifts with the same approach we used.
  • Open-source library to detect embedding drift. We implemented our findings in Evidently, an open-source Python library to evaluate, test, and monitor ML models.
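The model-based approach recommended above can be sketched in a few lines. This is a minimal illustration, not Evidently's actual API: the idea is to train a "domain classifier" to distinguish reference embeddings from current embeddings, then use its cross-validated ROC AUC as the drift score. A score near 0.5 means the two sets are indistinguishable (no drift); a score approaching 1.0 means they are easy to tell apart (likely drift). The function name and the synthetic mean-shift data below are illustrative assumptions.

```python
# Minimal sketch of model-based embedding drift detection
# (a domain classifier), not Evidently's API.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def embedding_drift_score(reference: np.ndarray, current: np.ndarray) -> float:
    """Cross-validated ROC AUC of a classifier separating the two sets.

    ~0.5 -> distributions indistinguishable (no drift);
    close to 1.0 -> easily separable (likely drift).
    """
    X = np.vstack([reference, current])
    y = np.concatenate([np.zeros(len(reference)), np.ones(len(current))])
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()

# Synthetic "embeddings": an artificial mean shift stands in for real drift.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(500, 32))      # reference window
same = rng.normal(0.0, 1.0, size=(500, 32))     # same distribution
shifted = rng.normal(1.0, 1.0, size=(500, 32))  # shifted distribution

print(embedding_drift_score(ref, same))     # near 0.5: no drift detected
print(embedding_drift_score(ref, shifted))  # near 1.0: drift detected
```

One appeal of this detector is that the ROC AUC score is interpretable on a fixed scale regardless of the embedding model or dimensionality, which makes it easier to set a single alerting threshold.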

Read the blog post here