Special Issue Call for Papers: The Dark Sides of Digital Communication

Submission Deadline: 30 September 2026

 

Special Issue Editors:

Madeleine Meurer, Rotterdam School of Management, Erasmus University, The Netherlands

Felix Honecker, University of Glasgow, UK

Matthias Waldkirch, EBS Universität für Wirtschaft und Recht, Germany

Dominic Chalmers, University of Glasgow, UK

Joep Cornelissen, Rotterdam School of Management, Erasmus University, The Netherlands

Yuliya Snihur, IESE Business School, Spain

Emmanuelle Vaast, McGill University, Canada

 

JMS Editor: Beatrice D’Ippolito, University of York, UK

 

Background

For many decades, management research has been interested in how communication – the way people create and share meaning through language, symbols, and social interaction – shapes individual actions (e.g., Byron & Laurence, 2015; Martin, 2016; Shi et al., 2019), organizational outcomes (e.g., Cornelissen et al., 2015; Kaplan, 2008; Lockwood et al., 2019), and societal change (e.g., Etzion & Ferraro, 2010; Munir & Phillips, 2005; Suddaby & Greenwood, 2005). For example, scholars have previously shown how the construction of identity narratives influences employee motivation and identification within organizations (Fetzer et al., 2023; Martin, 2016; Vaara et al., 2016). Similarly, shared framing language and discourse have been found to play a central role in shaping stakeholder perceptions, facilitating organizational change, and legitimizing new ventures (Garud et al., 2025a; Honecker & Chalmers, 2023; Kalvapalle et al., 2024; Lounsbury & Glynn, 2001).

 

With the advent of Web 2.0 in the early 2000s (Kaplan & Haenlein, 2010), communication became more participatory, interactive, and decentralized, allowing almost anyone with internet access to share content, voice opinions, and engage in public discourse (Cornelissen, 2023). This democratization of communication has enabled new actors, including marginalized individuals and groups, to participate (Chalmers et al., 2021; Faraj et al., 2011; Hajli et al., 2017; Meurer et al., 2022; Nisar et al., 2019) and gain visibility in organizational and societal conversations (Bucher et al., 2024; Leonardi & Treem, 2020; Treem & Leonardi, 2013). At the same time, the open and fast-paced nature of digital environments has exacerbated existing challenges while creating new problems: misinformation spreading rapidly and distorting narratives or frames (e.g., Garud et al., 2025b; Friggeri et al., 2014), echo chambers reinforcing existing beliefs and limiting exposure to diverse viewpoints (e.g., Barberá et al., 2015), and trolling or online incivility undermining constructive dialogue and damaging organizational reputations (Coe et al., 2014; Meurer et al., 2024).

 

Whilst the dark side of digital communication has been recognized for years (Flyverbom, 2016; Vaast & Kaganer, 2013), it has recently taken on new dimensions in both scale and impact. This escalation is fueled by more sophisticated algorithms, the rise of generative AI, and the increasing power of platform logics and online influencers. Together, these forces amplify the reach, speed, and emotional intensity of communication in ways we are only beginning to understand. At the same time, they introduce profound ethical dilemmas around transparency, accountability, and the agency of algorithms and artificial intelligence more broadly (Martin, 2019; Moser, 2022). Digital environments such as X (formerly Twitter), Instagram, TikTok, and Reddit have shifted dramatically in purpose, audience, and tone. From empowering grassroots mobilization during events like the Arab Spring (Tufekci, 2017) to enabling emotionally charged, conspiratorial communication within ideologically extreme groups (Greve et al., 2022), these environments now more closely resemble a digital ‘wild west’. Indeed, what were once isolated incidents of misinformation or online hostility appear to have evolved into coordinated disinformation campaigns, AI-generated deepfakes that manipulate public perception, and the viral spread of outrage deliberately crafted to provoke algorithmic amplification (Chadwick et al., 2025; Hajli et al., 2022; Lazer et al., 2018). From political leaders using digital platforms to undermine democratic institutions (Knight & Tsoukas, 2019; Lu et al., 2025), to influencers promoting harmful health advice (Burki, 2020), to the erosion of boundaries between fact and fiction through synthetic media (Hajli et al., 2022), digital communication now, more than ever, shapes individual, organizational, and societal life in unpredictable and often harmful ways.

 

Aims and Scope

Despite increasing recognition of harmful digital communication in adjacent fields such as media studies, communication, information systems, and sociology, management research has yet to develop a robust and integrated theoretical understanding of these emergent and critical phenomena. Most management research has either focused on isolated aspects (e.g., scandals, crisis communication, or social media strategy) or continues to apply pre-digital frameworks without fully accounting for the speed, scale, opacity, and sociotechnical complexity of today’s digital communication environments.

 

This special issue, therefore, aims to initiate a bold, timely, and much-needed scholarly conversation that moves beyond viewing digital communication as neutral or primarily enabling technologies. Instead, we seek contributions that offer novel theorizing – not only by extending existing theories to new contexts, but by rethinking foundational assumptions, introducing new constructs or concepts, proposing alternative relationships, processes, or mechanisms, or bridging theoretical domains that have not yet been integrated. We particularly welcome submissions that address one or more of the following four perspectives:


  1. Harmful Interactions in Sociotechnical Systems

Research in the tradition of (management) information systems has shown that digital communication is deeply shaped by the sociotechnical systems in which it takes place (Meurer et al., 2024). We, therefore, welcome submissions that explore how different elements of these systems facilitate harmful interactions or fail to mitigate them. This call includes work on content moderation, automated amplification, and the role of interface design in shaping user behavior. We are particularly interested in studies that focus on the affordances emerging from human–technology interaction (Leonardi, 2013; Leonardi & Vaast, 2017; Vaast et al., 2017), the role of material agency (Leonardi, 2023), and how digital organizing supports or suppresses harmful communication (Faraj et al., 2011; Majchrzak et al., 2013; Vaast & Kaganer, 2013). We also encourage research on forms of harmful inaction or passive participation, such as lurking, ghosting, or strategic silence (Cranefield et al., 2015; Schokkenbroek et al., 2025), which may shape or reinforce digital harm in subtle but consequential ways.

 

  2. How Harmful Digital Cultural Artifacts Shape Behavior, Legitimacy, and Action

We encourage research that examines how harmful cultural artifacts – exaggerations, misrepresentations, and frames (Garud et al., 2025a), conspiratorial narratives (Greve et al., 2022; Tufekci, 2017), and fake news (Friggeri et al., 2014; Lazer et al., 2018) – produce action outcomes for individuals, organizations, and collectives (Lockwood et al., 2019). This includes studies on how these artifacts foster engagement and legitimize certain behaviors or identities (Lounsbury & Glynn, 2001; Gehman & Wry, 2022). We also invite work investigating how cultural artifacts are strategically deployed, contested, or reconfigured online (Chadwick et al., 2025; Kalvapalle et al., 2024). For example, studies could examine how actors use memes, hashtags, or storytelling to reframe stigmatized identities, mobilize communities, or legitimize contested behaviors. We further welcome research on how generative AI contributes to the creation of such artifacts and how these artifacts shape perception, meaning, and action across digital and organizational contexts (Poole, 2025; Wachter et al., 2024). As AI-generated content becomes increasingly integrated into organizational communication, it raises important questions about how actors interpret, respond to, and act upon artifacts whose origins or intentions may be uncertain (Hannigan et al., 2022; Hillebrand et al., 2025).

 

  3. The Psychology of Digital Harm

We invite research that explores how digital communication shapes and is shaped by psychological processes. This includes examining how certain cognitive biases and emotional states contribute to the production, amplification, and reception of harmful digital communication (e.g., Bundy et al., 2017; Maitlis & Christianson, 2014). By uncovering how individuals psychologically engage with digital harm, management research can better explain employee or manager well-being and the emergence of dysfunctional communication climates (Jia et al., 2024; Kelley, 2022). Conversely, we are also interested in how exposure to toxic digital environments affects individuals’ mental states, self-conceptions, and behavior, including withdrawal or reactive participation in harmful communication (e.g., Knight & Tsoukas, 2019; Petriglieri & Petriglieri, 2020).

 

  4. Detrimental Societal Antecedents and Outcomes

We encourage submissions that explore the broader societal implications of digital communication. This includes research on how harmful digital communication affects public discourse, social cohesion, and institutional trust (Suddaby et al., 2017; Patriotta & Hirsch, 2016). Furthermore, we invite studies that generate novel insights by bridging management and other disciplines – for instance, research on the sociology of conspiracy theories (Gauchat, 2023; Rao & Greve, 2024) or recent advancements in political science around propaganda, populism, and the erosion of democratic norms (Giavazzi et al., 2024; Lu et al., 2025; Yeandle, 2025) – to better understand how societal dynamics shape and are shaped by organizational communication in digital environments. We are also interested in work that investigates how societal conditions, such as polarization, shape the nature and spread of harmful digital communication (Meyer & Vaara, 2020; Schultz, Mouritsen, & Gabrielsen, 2022; Wright, Zammuto, & Liesch, 2017).

 

To capture the complexity of harmful digital communication, we encourage methodological pluralism and theoretical innovation. We welcome contributions employing computational methods (e.g., multimodal text and image analysis, topic modelling, social network analysis), qualitative approaches (e.g., digital ethnography, discourse analysis, interview studies), or quantitative methods (e.g., surveys, experiments, panel data), either alone or in novel mixed methods designs. We also encourage the use of a variety of interpretive, inferential, and causal identification techniques in theorizing the patterns, processes, and mechanisms involved in digital communication, ranging from socio-material and relational approaches to process analysis and configurational techniques.

 

Research questions might include, but are not limited to:

  1. Harmful Interactions in Sociotechnical Systems
  • What features of sociotechnical systems facilitate – or inhibit – individual users’ and organizations’ involvement in deceptive online practices?
  • How do heuristics and biases interact with platform design to shape individual susceptibility to misinformation when working for and with organizations?
  • How do digital communities organize in ways that shape their members’ susceptibility to harmful digital communication?
  • How do organizations and organizing play a role in perpetuating or impeding harmful digital communication?
  • How do the dark sides of digital communication impact an organization’s ability to operate effectively and efficiently?
  • How does the adoption of emerging technologies, including generative AI, create new ethical dilemmas and risks for organizations in managing digital communication?

 

  2. How Harmful Digital Cultural Artifacts Shape Behavior, Legitimacy, and Action
  • How does generative AI alter the nature and effectiveness of deceptive communication, and what implications does this have for individuals’ and organizations’ abilities to discern credible from deceptive content online?
  • How do organizations use — or become entangled in — framing, storytelling, or rhetorical strategies that (intentionally or unintentionally) contribute to online outrage, polarization, or misinformation?
  • What strategies do organizations employ to manage or mitigate reputational, status, or legitimacy risks arising from phenomena such as online hype escalation, digital scamming, cancel culture, and digital vigilantism?
  • How do organizational cultures shape perceptions and boundaries around ethical digital communication, misinformation, and deception?

 

  3. The Psychology of Digital Harm
  • How do employees’ and/or managers’ cognitive and emotional processing styles influence their ability to spot and resist deceptive digital communication?
  • How does emotional contagion in digital workgroups fuel or curb participation in harmful practices such as cancel culture or digital vigilantism?
  • How does identification with workplace or online communities amplify – or dampen – behaviors like misinformation sharing or phishing?
  • How do episodes of harmful digital communication, whether internal or external, affect employee and/or manager well-being, performance, and retention?

 

  4. Detrimental Societal Antecedents and Outcomes
  • How do polarized digital environments affect organizations’ capacity to communicate across stakeholder groups or maintain legitimacy in contested spaces or across ideological divides?
  • What role do organizations play in either amplifying or mitigating hype cycles, public polarization, and discursive fragmentation online?
  • How do societal norms, platform logics, or media narratives and routines shape the communicative strategies organizations adopt in polarized environments?
  • How do harmful digital communication dynamics affect organizing for social change, advocacy, or democratic engagement?

 

Submission Process and Deadlines

  • Submission Deadline: 30 September 2026
  • Submissions should be prepared using the JMS Manuscript Preparation Guidelines (http://www.socadms.org.uk/journal-managementstudies/submission-guidelines/)
  • Manuscripts should be submitted using the JMS ScholarOne system (https://mc.manuscriptcentral.com/jmstudies)
  • Articles will be reviewed according to the JMS double-blind review process
  • We welcome informal inquiries relating to the Special Issue, proposed topics, and potential fit with the Special Issue. Please direct any questions on the Special Issue to the Contact Guest Editor: meurer@rsm.nl

 

Special Issue Events

 

Pre-Submission Online Idea Workshop and Information Sessions: The guest editors will organize two online idea workshop and information sessions on 5th December 2025 (9am-11am CET and 3pm-5pm CET, to cover all time zones), where prospective contributors can get initial feedback and ask questions about all aspects of the call for papers.

 

Pre-Submission In-Person PDW (Europe): An in-person-only paper development workshop (PDW) will take place at Rotterdam School of Management, Erasmus University (the Netherlands) on 14th-15th May 2026.

 

Pre-Submission In-Person PDW (North America): An in-person-only paper development workshop (PDW) will, subject to conference acceptance, take place at the Academy of Management Conference 2026 in Philadelphia, United States (10th-14th August 2026).

 

Post-Submission PDW R&R #1: The guest editors will organize a hybrid special issue workshop in Spring 2027 (exact dates and times TBA) at EBS Universität für Wirtschaft und Recht (Germany) or the University of Glasgow (UK). Authors who receive a “revise and resubmit” (R&R) decision on their manuscript will be invited to attend this workshop.

 

Post-Submission PDW R&R #2: The guest editors might, depending on the state of the special issue, organize an online special issue workshop in Fall 2027 (exact dates, times TBA). Authors who receive a second “revise and resubmit” (R&R) decision on their manuscript will be invited to attend this workshop.

 

Participation in workshops does not guarantee acceptance of the paper in the Special Issue, and attendance is not a prerequisite for publication.

 

References

Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A. and Bonneau, R. (2015). ‘Tweeting from left to right: Is online political communication more than an echo chamber?’. Psychological Science, 26, 1531-1542.

Bucher, E., Schou, P. K. and Waldkirch, M. (2024). ‘Just Another Voice in the Crowd? Investigating Digital Voice Formation in the Gig Economy’. Academy of Management Discoveries, 10, 488–511.

Bundy, J., Pfarrer, M. D., Short, C. E. and Coombs, W. T. (2017). ‘Crises and crisis management: Integration, interpretation, and research development’. Journal of Management, 43, 1661-1692.

Burki, T. (2020). ‘The online anti-vaccine movement in the age of COVID-19’. The Lancet Digital Health, 2, e504-e505.

Byron, K. and Laurence, G. A. (2015). ‘Diplomas, photos, and tchotchkes as symbolic self-representations: understanding employees’ individual use of symbols’. Academy of Management Journal, 58, 298-323.

Chadwick, A., Vaccari, C. and Kaiser, J. (2025). ‘The amplification of exaggerated and false news on social media: The roles of platform use, motivations, affect, and ideology’. American Behavioral Scientist, 69, 113-130.

Chalmers, D., MacKenzie, N. G. and Carter, S. (2021). ‘Artificial Intelligence and Entrepreneurship: Implications for Venture Creation in the Fourth Industrial Revolution’. Entrepreneurship Theory and Practice, 45, 1028–1053.

Coe, K., Kenski, K. and Rains, S. A. (2014). ‘Online and uncivil? Patterns and determinants of incivility in newspaper website comments’. Journal of Communication, 64, 658-679.

Connelly, B. L., Certo, S. T., Ireland, R. D. and Reutzel, C. R. (2011). ‘Signaling theory: A review and assessment’. Journal of Management, 37, 39-67.

Cornelissen, J. (2023). Corporate Communication: A Guide to Theory and Practice, 7th edition. London: Sage.

Cornelissen, J. P., Durand, R., Fiss, P. C., Lammers, J. C. and Vaara, E. (2015). ‘Putting communication front and center in institutional theory and analysis’. Academy of Management Review, 40, 10-27.

Cornelissen, J. P. and Werner, M. D. (2014). ‘Putting framing in perspective: A review of framing and frame analysis across the management and organizational literature’. Academy of Management Annals, 8, 181-235.

Davis, F. D. (1989). ‘Technology acceptance model: TAM’. In Al-Suqri, M. N. and Al-Aufi, A. S. (Eds), Information Seeking Behavior and Technology Adoption. Hershey, PA: IGI Global, 205-219.

Dennis, A. R., Fuller, R. M. and Valacich, J. S. (2008). ‘Media, tasks, and communication processes: A theory of media synchronicity’. MIS Quarterly, 32, 575-600.

Etzion, D. and Ferraro, F. (2010). ‘The role of analogy in the institutionalization of sustainability reporting’. Organization Science, 21, 1092-1107.

Faraj, S., Jarvenpaa, S. L. and Majchrzak, A. (2011). ‘Knowledge collaboration in online communities’. Organization Science, 22, 1224-1239.

Fetzer, G. T., Harrison, S. H. and Rouse, E. D. (2023). ‘Navigating the paradox of promise through the construction of meaningful career narratives’. Academy of Management Journal, 66, 1896-1928.

Flyverbom, M. (2016). ‘Transparency: Mediation and the management of visibilities’. International Journal of Communication, 10, 110-122.

Friggeri, A., Adamic, L., Eckles, D. and Cheng, J. (2014). ‘Rumor cascades’. Proceedings of the International AAAI Conference on Web and Social Media.

Garud, R., Snihur, Y., Thomas, L. D. and Phillips, N. (2025a). ‘The dark side of entrepreneurial framing: A process model of deception and legitimacy loss’. Academy of Management Review, 50, 299-317.

Garud, R., Phillips, N., Snihur, Y., Thomas, L. and Zietsma, C. (2025b). ‘Entrepreneurial hype’. Journal of Business Venturing, forthcoming.

Gauchat, G. W. (2023). ‘The Legitimacy of Science’. Annual Review of Sociology, 49, 263–279.

Gehman, J. and Wry, T. (2022). ‘Cultural entrepreneurship: Theorizing the dark sides’. In Lockwood, C. and Soubliere, J.-F. (Eds), Advances in cultural entrepreneurship. Bingley: Emerald Publishing Limited, 97-110.

Giavazzi, F., Iglhaut, F., Lemoli, G. and Rubera, G. (2024). ‘Terrorist Attacks, Cultural Incidents, and the Vote for Radical Parties: Analyzing Text from Twitter’. American Journal of Political Science, 68, 1002–1021.

Greve, H. R., Rao, H., Vicinanza, P. and Zhou, E. Y. (2022). ‘Online conspiracy groups: Microbloggers, bots, and coronavirus conspiracy talk on Twitter’. American Sociological Review, 87, 919-949.

Hajli, N., Saeed, U., Tajvidi, M. and Shirazi, F. (2022). ‘Social bots and the spread of disinformation in social media: the challenges of artificial intelligence’. British Journal of Management, 33, 1238-1253.

Hajli, N., Shanmugam, M., Papagiannidis, S., Zahay, D. and Richard, M.-O. (2017). ‘Branding co-creation with members of online brand communities’. Journal of Business Research, 70, 136-144.

Hannigan, T. R., Briggs, A. R., Valadao, R., Seidel, M. D. L. and Jennings, P. D. (2022). ‘A new tool for policymakers: Mapping cultural possibilities in an emerging AI entrepreneurial ecosystem’. Research Policy, 51, 104315.

Hillebrand, L., Raisch, S. and Schad, J. (2025). ‘Managing with Artificial Intelligence: An Integrative Framework’. Academy of Management Annals, 19, 343-375.

Honecker, F. and Chalmers, D. M. (2023). ‘How Artificial Intelligence Shapes Legitimacy Judgement Formation’. Academy of Management Proceedings, 10181.

Jia, N., Luo, X., Fang, Z. and Liao, C. (2024). ‘When and how artificial intelligence augments employee creativity’. Academy of Management Journal, 67, 5-32.

Kalvapalle, S. G., Phillips, N. and Cornelissen, J. (2024). ‘Entrepreneurial pitching: A critical review and integrative framework’. Academy of Management Annals, 18, 550-599.

Kaplan, A. M. and Haenlein, M. (2010). ‘Users of the world, unite! The challenges and opportunities of Social Media’. Business Horizons, 53, 59-68.

Kaplan, S. (2008). ‘Cognition, capabilities, and incentives: Assessing firm response to the fiber-optic revolution’. Academy of Management Journal, 51, 672-695.

Kelley, S. (2022). ‘Employee perceptions of the effective adoption of AI principles’. Journal of Business Ethics, 178, 871-893.

Knight, E. and Tsoukas, H. (2019). ‘When Fiction Trumps Truth: What ‘post-truth’ and ‘alternative facts’ mean for management studies’. Organization Studies, 40, 183-197.

Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G. and Rothschild, D. (2018). ‘The science of fake news’. Science, 359(6380), 1094-1096.

Leonardi, P. (2023). ‘Affordances and Agency: Toward the Clarification and Integration of Fractured Concepts’. MIS Quarterly, 47, 9-20.

Leonardi, P. M. (2011). ‘When flexible routines meet flexible technologies: Affordance, constraint, and the imbrication of human and material agencies’. MIS Quarterly, 35, 147-167.

Leonardi, P. M. (2013). ‘When does technology use enable network change in organizations? A comparative study of feature use and shared affordances’. MIS Quarterly, 37, 749-775.

Leonardi, P. M. and Treem, J. W. (2020). ‘Behavioral visibility: A new paradigm for organization studies in the age of digitization, digitalization, and datafication’. Organization Studies, 41, 1601-1625.

Leonardi, P. M. and Vaast, E. (2017). ‘Social Media and Their Affordances for Organizing: A Review and Agenda for Research’. Academy of Management Annals, 11, 150-188.

Lockwood, C., Giorgi, S. and Glynn, M. A. (2019). ‘“How to do things with words”: Mechanisms bridging language and action in management research’. Journal of Management, 45, 7-34.

Lounsbury, M. and Glynn, M. A. (2001). ‘Cultural entrepreneurship: Stories, legitimacy, and the acquisition of resources’. Strategic Management Journal, 22, 545-564.

Lu, Y., Pan, J., Xu, X. and Xu, Y. (2025). ‘Decentralized propaganda in the era of digital media: The massive presence of the Chinese state on Douyin’. American Journal of Political Science.

Martin, K. (2019). ‘Ethical implications and accountability of algorithms’. Journal of Business Ethics, 160, 835-850.

Martin, S. R. (2016). ‘Stories about values and valuable stories: A field experiment of the power of narratives to shape newcomers’ actions’. Academy of Management Journal, 59, 1707-1724.

Massa, F. G. and O’Mahony, S. (2021). ‘Order from chaos: How networked activists self-organize by creating a participation architecture’. Administrative Science Quarterly, 66, 1037-1083.

Meurer, M. M., Bucher, E. and van Gils, S. S. (2024). ‘Defending your own or trolling the haters? A configurational approach to incivility in online communities’. MIS Quarterly, 18788.

Meurer, M. M., Waldkirch, M., Schou, P. K., Bucher, E. L. and Burmeister-Lamp, K. (2022). ‘Digital affordances: how entrepreneurs access support in online communities during the COVID-19 pandemic’. Small Business Economics, 58, 637-663.

Meyer, R. E. and Vaara, E. (2020). ‘Institutions and actorhood as co‐constitutive and co‐constructed: The argument and areas for future research’. Journal of Management Studies, 57, 898-910.

Munir, K. A. and Phillips, N. (2005). ‘The birth of the ‘Kodak Moment’: Institutional entrepreneurship and the adoption of new technologies’. Organization Studies, 26, 1665-1687.

Nisar, T. M., Prabhakar, G. and Strakova, L. (2019). ‘Social media information benefits, knowledge management, and smart organizations’. Journal of Business Research, 94, 264-272.

Patriotta, G. and Hirsch, P. M. (2016). ‘Mainstreaming innovation in art worlds: Cooperative links, conventions and amphibious artists’. Organization Studies, 37, 867-887.

Petriglieri, G. and Petriglieri, J. L. (2020). ‘The return of the oppressed: A systems psychodynamic approach to organization studies’. Academy of Management Annals, 14, 411-449.

Poole, S. (2025). ‘The AI Con by Emily M. Bender and Alex Hanna review – debunking myths of the AI revolution’. The Guardian. https://www.theguardian.com/books/2025/may/19/the-ai-con-by-emily-m-bender-and-alex-hanna-review-debunking-myths-of-the-ai-revolution

Rao, H. and Greve, H. R. (2024). ‘The Plot Thickens: A Sociology of Conspiracy Theories’. Annual Review of Sociology, 50, 191–207.

Schokkenbroek, J. M., Telari, A., Pancani, L. and Riva, P. (2025). ‘What is (not) ghosting? A theoretical analysis via three key pillars’. Computers in Human Behavior, 168, 108637.

Selander, L. and Jarvenpaa, S. L. (2016). ‘Digital action repertoires and transforming a social movement organization’. MIS Quarterly, 40, 331-352.

Shi, W., Zhang, Y. and Hoskisson, R. E. (2019). ‘Examination of CEO–CFO social interaction through language style matching: Outcomes for the CFO and the organization’. Academy of Management Journal, 62, 383-414.

Suddaby, R. and Greenwood, R. (2005). ‘Rhetorical strategies of legitimacy’. Administrative Science Quarterly, 50, 35-67.

Suddaby, R., Bitektine, A. and Haack, P. (2017). ‘Legitimacy’. Academy of Management Annals, 11, 451-478.

Treem, J. W. and Leonardi, P. M. (2013). ‘Social media use in organizations: Exploring the affordances of visibility, editability, persistence, and association’. Annals of the International Communication Association, 36, 143-189.

Tufekci, Z. (2017). Twitter and tear gas: The power and fragility of networked protest. New Haven, CT: Yale University Press.

Vaara, E., Sonenshein, S. and Boje, D. (2016). ‘Narratives as sources of stability and change in organizations: Approaches and directions for future research’. Academy of Management Annals, 10, 495-560.

Vaast, E. and Kaganer, E. (2013). ‘Social media affordances and governance in the workplace: An examination of organizational policies’. Journal of Computer-Mediated Communication, 19, 78-101.

Vaast, E., Safadi, H., Lapointe, L. and Negoita, B. (2017). ‘Social media affordances for connective action’. MIS Quarterly, 41, 1179-1206.

Wachter, S., Mittelstadt, B. and Russell, C. (2024). ‘Do large language models have a legal duty to tell the truth?’. Royal Society Open Science, 11, 240197.

Wright, A. L., Zammuto, R. F. and Liesch, P. W. (2017). ‘Maintaining the values of a profession: Institutional work and moral emotions in the emergency department’. Academy of Management Journal, 60, 200-237.

Yeandle, A. (2025). ‘The political consequences of Africa’s mobile revolution’. American Journal of Political Science.