This paper presents a method for scaling up Large Language Models (LLMs) for content moderation in Google Ads. To address the high inference cost and latency of LLMs, the authors propose a multi-step approach that reduces the number of reviews by more than three orders of magnitude while achieving twice the recall of a baseline non-LLM model. The method involves four steps, sketched in code after the list:
1. **Funneling**: Using heuristics to select candidates, filter duplicates, and remove inactive images.
2. **LLM Labeling**: Running inference using a prompt-engineered and tuned LLM.
3. **Label Propagation**: Propagating the LLM's decisions to similar images within the same cluster (see the propagation sketch after this list).
4. **Feedback Loop**: Integrating the feedback from LLMs into the funneling step to improve future selections.
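As a rough illustration of how steps 1, 2, and 4 fit together, the following Python sketch uses a quantized-embedding bucket as a stand-in for the paper's image clustering. Every name here (`cluster_key`, `funnel`, `llm_label`, `review_pass`, the dict fields) is a hypothetical illustration, not the paper's actual implementation.

```python
def cluster_key(embedding, grid=0.5):
    """Stand-in for production similarity clustering: quantize an image
    embedding so near-duplicates land in the same bucket."""
    return tuple(round(x / grid) for x in embedding)

def funnel(images, flagged_keys):
    """Step 1 (Funneling): drop inactive images and keep one
    representative per cluster."""
    seen, reps = set(), []
    for img in images:
        if not img["active"]:
            continue
        key = cluster_key(img["embedding"])
        if key in seen:
            continue  # only one representative per cluster reaches the LLM
        seen.add(key)
        reps.append(img)
    # Feedback loop (step 4): clusters the LLM flagged before go first.
    reps.sort(key=lambda i: cluster_key(i["embedding"]) not in flagged_keys)
    return reps

def llm_label(image):
    """Step 2 (LLM Labeling): placeholder for the prompt-engineered,
    tuned LLM call; a toy rule stands in for the model here."""
    return "NFS" if image.get("suspect") else "OK"

def review_pass(images, flagged_keys):
    """One pipeline pass: funnel, label, then feed decisions back into
    the next funneling round (step 4)."""
    labeled = [(img, llm_label(img)) for img in funnel(images, flagged_keys)]
    for img, label in labeled:
        img["label"] = label
    flagged_keys |= {cluster_key(img["embedding"])
                     for img, label in labeled if label == "NFS"}
    return labeled
```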
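Step 3 is described only at a high level, so the sketch below fills in one plausible reading: propagate a representative's LLM decision to cluster members whose embeddings are sufficiently similar under cosine similarity. The `threshold` value, the `cluster_members` mapping, and the field names are assumptions, not the paper's actual mechanism.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def propagate_labels(representatives, cluster_members, threshold=0.9):
    """Step 3 (Label Propagation): copy each representative's LLM decision
    to cluster members that are close enough in embedding space; anything
    below the threshold stays unlabeled rather than risk a wrong label."""
    for rep in representatives:
        for member in cluster_members[rep["cluster_id"]]:
            if cosine(rep["embedding"], member["embedding"]) >= threshold:
                member["label"] = rep["label"]
```

A conservative threshold trades away some of the recall gain to avoid propagating wrong labels, which is consistent with the paper's reported emphasis on keeping precision high while scaling coverage.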
The approach is evaluated on the "Non-Family Safe" ad content policy, which restricts sexually suggestive and other inappropriate content. The results show that the pipeline labeled approximately twice as many images as a multi-modal non-LLM model while maintaining higher precision. The authors also plan to extend the technique to other ad policies and modalities, such as videos, text, and landing pages, and improve the quality of all pipeline stages.