• 0 Posts
  • 37 Comments
Joined 30 days ago
cake
Cake day: June 22nd, 2025

help-circle
  • Bias of training data is a known problem and difficult to engineer out of a model. You also can’t give the model context access to other people’s interactions for comparison and moderation of output since it could be persuaded to output the context to a user.

    Basically the models are inherently biased in the same manner as the content they read in order to build their data, based on probability of next token appearance when formulating a completion.

    “My daughter wants to grow up to be” and “My son wants to grow up to be” will likewise output sexist completions because the source data shows those as more probable outcomes.













  • They probably can’t completely patched in their training, but using a pipeline which reviews the prompt and response for specific malicious attack vectors has proved very successful if adding some latency and processing expense.

    You can, however, only run these when you detect a potentially malicious known exploit. If the prompt contains any semantic similarity to grandma telling a story or how would my grandma have done x, for example, you can add the extra pipeline step to mitigate against the attack.