Join the Language Science Center and Baha'i Chair for World Peace for a hybrid panel on language and conflict, featuring panelists Philip Resnik (LING), Julia Mendelsohn (INFO), and Erik Nesse (ARLIS). This panel will be moderated by Hoda Mahmoudi, Director of The Baha'i Chair for World Peace.
This event is both in-person and online.
Philip Resnik (LING: political framing):
Prediction-based perception creates systematically divergent interpretations of shared information, making certain kinds of conflict structurally inevitable.
People don't just take in information. Every act of perception is also an act of prediction: we view incoming information through the lens of our existing knowledge and beliefs. People who study framing call this framing-in-thought. On the flip side, people don't just provide information either. What we hear and read is often designed to promote a particular interpretation. People who study framing call this framing-in-communication.

One perennial source of conflict is the fact that different people can be looking at exactly the same information and seeing entirely different things. A recent student of mine explored a fascinating puzzle related to this in studying misinformation: people with a history of misinformation-dissemination were sharing news articles online from publications they were ideologically opposed to. Why? It turns out that even a true news story can, through its phrasing or especially its headline, support a misinformation narrative. That is, even an accurate story can be misleading, even if the story itself isn't misinformation.

I think the general principle, that we don't just perceive, we create perceptions partly out of our expectations, is pervasive at all levels of cognition: there are scientists who describe perception as "controlled hallucination", and that feels like an equally accurate description of what we see at higher levels like the interpretation of political events and political discourse. Except, of course, sometimes it feels like the hallucination isn't even particularly well controlled.
Julia Mendelsohn (iSchool: internet discourse):
Anti-immigration sentiment and how it shapes conversations on social media.
I'm interested in the role of common ground when thinking about language in conflict contexts, maybe specifically discrimination.
- Dogwhistles rely on a lack of common ground across the full audience (including platform moderators)
- Metaphorical dehumanization relies on the common ground we share about source domains like animals, water, etc.
I'm going to spend a bit of time talking about multimodal antisemitic dogwhistles (appropriated images, video, music), and how these have really proliferated online in just the past year or so.
Erik Nesse (ARLIS: multilingualism, machine translation):
Computational linguistics and natural language processing for the Intelligence and Security Community.
Premise: understanding each other across linguistic/cultural/etc. barriers is good (reduces chances for conflict), and also really hard to do well.
CompLing has generated a bunch of very useful tools. On the one hand, the tools have obvious value in reducing the potential for conflict (in general, and there's a little bit of an intelligence community-adjacent perspective there too). They work pretty well! But on the other hand, the tools introduce a kind of "paradox of quality" with respect to language barriers and conflict.
The fact that tools like machine translation and LLMs work pretty darn well has engendered somewhat misguided views about how "solved" problems like MT actually are. When we see really good translations coming out of the tools, fluid translations, things that sound like (but are not necessarily) what a human would produce…
…we start providing those translations to lots and lots of people, who use them and make decisions based on them, and…
…we deprioritize the types of training and thinking that equip us to see the limits of interpretation (of "what the other person said": if the models do that well, do we really need translators?), i.e., the process of learning second languages, learning about the associated culture(s), living there, history, context, habits of mind, and so on, in part because of a fundamental limitation (compared to humans) in what MT (or any other NLP tool) is and does.
So: when you have great-looking, fluid translations, it's pretty easy to be seduced into thinking we know what's really being said, that we've solved it. But "knowing what's really being said" is a much, much more complicated process than making excellent and statistically valid text patterns appear in response to source material.
In that, we risk letting ourselves be seduced by a misunderstanding of what translation really is, leading, somewhat paradoxically, to an increased risk of conflict: we stop looking at things quite so critically (and stop training people to do so) when we convince ourselves the machine is doing it perfectly well, thank you very much.
The same kind of dynamic applies to other tools, too, like large language models; the marketing surrounding them is a good example.
All of this is happening at a time when we are providing relatively less of the type of training that opens people's eyes to how hard it really is to communicate, to truly communicate. That kind of training (not exclusively, just as a trend) tends to be concentrated in fields like languages and literatures rather than in more technical fields.
This Event is Co-Sponsored by: