# Data Poisoning

## Description

Attempts to poison or corrupt the model's training data or responses.

## Attack Examples

- Injecting false information into responses
- Training the model on adversarial examples
- Introducing biased data points
- Creating feedback loops with incorrect information
- Poisoning training data with malicious content
- Manipulating the model's knowledge base
- Creating contradictory training examples
- Exploiting model fine-tuning processes
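One of the attack classes above, flipping labels in the training set, can be illustrated with a toy simulation. This is a minimal sketch, not a real attack tool: `poison_labels` is a hypothetical helper name, and the dataset is synthetic; it only shows how an attacker with write access to training data could silently corrupt a fixed fraction of labels.

```python
import random

def poison_labels(dataset, flip_fraction=0.1, seed=0):
    """Simulate a label-flipping poisoning attack (illustrative only).

    `dataset` is a list of (text, label) pairs with binary labels 0/1.
    An attacker who can write to the training set flips the labels of
    `flip_fraction` of the examples, leaving the text untouched so the
    corruption is hard to spot by skimming the data.
    """
    rng = random.Random(seed)
    poisoned = list(dataset)
    n_flip = int(len(poisoned) * flip_fraction)
    for i in rng.sample(range(len(poisoned)), n_flip):
        text, label = poisoned[i]
        poisoned[i] = (text, 1 - label)  # flip 0 <-> 1
    return poisoned

# Synthetic dataset: 100 examples with alternating labels.
clean = [(f"example {i}", i % 2) for i in range(100)]
poisoned = poison_labels(clean, flip_fraction=0.2)
changed = sum(1 for a, b in zip(clean, poisoned) if a[1] != b[1])
print(changed)  # 20 labels were flipped
```

A defense sketch follows the same shape in reverse: audits that compare label distributions or per-example losses against a trusted baseline can surface this kind of silent corruption before fine-tuning.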