GenAI Safety
Cross-lingual LLM safety evaluation and alignment — understanding how safety mechanisms fail across languages.
Making sense, building together, teaching machines.
How do humans co-create knowledge in digital spaces?
We study the emergent dynamics of collaborative platforms — from Wikipedia edit wars to open-source communities — to understand how collective intelligence forms, evolves, and sometimes breaks down.
How does information shape public understanding of complex issues?
From climate misinformation to heatwave journalism, we investigate how media frames public perception of scientific and social issues — and how computational tools can help us track these dynamics at scale.
How can AI and humans learn from each other?
We explore the frontier where social science meets machine learning — from cross-lingual LLM safety to using AI as research instruments. The goal: AI that understands social context, not just patterns.
AI for pets, LLM bias evaluation, and social simulation — bridging social inquiry and machine intelligence.
Heatwave journalism, risk perception, and attention fatigue — how media shapes public understanding of environmental crises.
Wikipedia dynamics, collaborative platforms, and the formation of collective opinions in digital spaces.
Social media and market data pipelines — tracing how narratives propagate and shape economic behavior.
Anime, comics, and games cultural analysis — understanding media consumption and fan communities at scale.
> We do this NOT because it is easy...
> ...
> But because we THOUGHT it would be easy.