Thank you for the excellent work on this project.
I have a question about harmful content classification. When a prompt contains multiple types of harmful content, does the model support returning multiple harmful categories, or is it designed to return only a single category? (In my experiments so far, the model returns only one harmful label per input.)
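
To make the question concrete, here is a minimal illustrative sketch; the `classify()` helper and the category names are placeholders I made up for this example, not the project's actual API:

```python
from typing import List


def classify(prompt: str) -> List[str]:
    # Stub standing in for the real inference call; in my tests the model
    # always came back with exactly one category, roughly like this.
    return ["violence"]


# A prompt that mixes more than one harm type (wording is illustrative only).
prompt = "Example text that combines violent content with self-harm content..."

labels = classify(prompt)
print(labels)  # observed so far: ["violence"] (a single label)

# Behavior I am asking about, if multi-label output is supported:
# ["violence", "self_harm"]
```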
Thank you for your time and clarification.