Large Language Models (LLMs) such as GPT-4 have revolutionized Natural Language Processing (NLP) by achieving unprecedented levels of performance. This performance depends heavily on various kinds of data: model training data, over-training data, and/or Retrieval-Augmented Generation (RAG) enrichment data.…