Author: Thomas Argheria

Data Poisoning: A Threat to LLMs' Integrity and Security

Large Language Models (LLMs) such as GPT-4 have revolutionized Natural Language Processing (NLP) by achieving unprecedented levels of performance. That performance depends heavily on several kinds of data: model training data, fine-tuning (over-training) data, and/or Retrieval-Augmented Generation (RAG) enrichment data. …
