POST-DOC
PHILIP PÄRNAMETS - NYU Social Psychology Postdoctoral Fellow
Philip completed his PhD in cognitive science at Lund University and spent two years as a postdoc with the Emotion Lab at Karolinska Institutet. His research is grounded in a broad interest in the cognitive and computational mechanisms underlying preference change, decision making, and learning, especially in the moral domain. In the Social Identity and Morality Lab, his work focuses on dynamic models of social learning about moral agents and of moral choices more generally. Philip spends his spare time creating, listening to, or dancing to electronic music.
Misinformation and Belief in the Digital Age
In an increasingly polarized information landscape, there is growing division over what is true, what is false, and how to interpret the world around us. The spread of misinformation is widely regarded as one of the greatest threats to societies, undermining democratic processes, public health, and efforts to combat climate change. Social context and identity play a crucial role in shaping how people form beliefs, interpret information, and develop opinions about critical issues such as climate change (Spampatti et al., 2024), new technologies (Globig et al., in progress), and public health (Van Bavel et al., 2020). Our Identity-Based Model of Belief (Van Bavel et al., 2024) lays out the psychological and neural pathways underlying people’s motivations to accept (or reject) information based on its alignment with their social identities—whether those identities are partisan (Pereira et al., 2021), national (Sternisko et al., 2023), or religious. In our research, we find that when affirming a salient identity provides a sense of belonging or social status, individuals are more likely to believe and share (mis)information that supports that identity, regardless of its factual accuracy (Rathje et al., 2023; Sternisko et al., 2020). Similarly, people tend to adopt beliefs that align with those of their social networks and communities, where shared values and perspectives reinforce collective worldviews (Pretus et al., 2023; Van Bavel et al., 2020). Our research suggests that these processes are further amplified by the unique dynamics of social media, where economic incentives, biased algorithms, and curated information diets compound people’s exposure to—and motivation for engaging with—misinformation (Rathje et al., in progress; Rathje et al., 2024; Rathje et al., 2021; Robertson et al., 2024b).
The good news, however, is that even relatively minor changes to social media platform design—such as allowing users to flag or label misleading content—can significantly reduce the spread of false and harmful content (Pretus et al., 2024; Robertson et al., 2024a).