Meet the Author: Ian McCarthy

Ian McCarthy is a Professor of Innovation and Operations Management at Simon Fraser University and Luiss. His research and teaching focus on operations management, change and innovation management, and social media. He has published many well-cited articles and has been asked to speak at industry and academic events across the world. He spoke with SSRN about the functional building blocks of social media, how lying and bullshitting differ, and the role changing technology plays in all of this.

Q: You’ve done research on many interesting subjects, like workplace bullshit, gamification, crowdsourcing, deep fakes, understanding the fundamentals of social media, and more. What would you say are some of the underlying themes that tie together the different topics you’ve taught and written about?

A: Technology and control. I’m originally a professional industrial engineer and professor of industrial engineering. When we think about industrial engineering and its business equivalent, operations management, it’s about how we control resources to produce outputs. About 25 years ago, I switched more to innovation and technology management but remained interested in how we design technology processes to control the movement of raw material, the behaviors of people, and the flow of ideas to produce outcomes which are valuable to the firm and to society. That’s what brings it together: it’s innovation management that spans marketing, operations, and information systems.

Q: Is there a common motivator that influences your decision to study something new?

A: I look at how interesting an issue or problem will be in terms of impact, i.e. how interested the audience of researchers and practitioners will be in the knowledge and how beneficial it will be to them. I’m less interested in crafting incremental tweaks to theories. The research opportunity must be an enduring and important practical problem or an emerging phenomenon or problem. Some of my most successful papers have helped society to understand the rise of social media, the mechanics of deep fakes, and the power of bullshit: understanding these during times of COVID and political elections, and in a world of misinformation and disinformation.

Q: In your paper “Confronting Indifference Toward Truth: Dealing With Workplace Bullshit,” you and your co-authors make a distinction between lying and bullshitting, saying that “the liar knows the truth and willfully distorts it, while the bullshitter simply doesn’t care about the truth.” Talk a little bit about your framework for dealing with workplace bullshit and in what circumstances it can be most effective.

A: That paper is a practitioner paper. It explains that often when we think our colleagues are lying to us, they’re not really lying to us, they’re just making stuff up. This distinction is important for how we work together.

First of all, lying involves subverting the truth and is much shadier. If I know nothing about you, I can make stuff up about you. For example, if I say that you support the Boston Red Sox, I have no idea whether that’s right or wrong. I just made it up. But for me to lie about what baseball team you like, I need to know what team you like. I need to have done research on you to lie about you. I must know you, and that’s harder. Lying requires work, bullshitting does not.

In the workplace, when we have meetings, communicate, and do work, it’s probably underestimated how often we just make stuff up and present it as if it’s truth. Why does it happen? There are a number of reasons. One, we want to be inclusive. We want everyone to attend every meeting and have a voice, and we often think that those voices are always truth-based, rather than hunch and opinion-based. Then we have workplaces where it’s uncomfortable for people to say, “I don’t know,” and “I shouldn’t really be at this meeting,” because it makes you feel incompetent or you’re perceived to be incompetent. So, psychological safety is a big issue. To what extent can we call out a colleague or a boss on their logic and their evidence for something that they’ve offered as fact?

To help confront indifference to truth in the workplace, we developed the C.R.A.P. framework, a playful tribute to the phenomenon of bullshit. The C is comprehend, R is recognize, A is act, and then P is what we can do to prevent it. It starts with comprehending the difference between bullshit, lying, and other forms of misinformation and disinformation we call misrepresentation. Once we comprehend that there’s a difference, then how do we recognize the presence of bullshit? Well, usually it’s appealing. It wants to catch our attention, please us and persuade us.

Then, we talk about how people react to it in the workplace. If the bullshit is appealing to you and rewarding to you because it’s supporting your department, supporting your particular job… you’re more likely to believe it, circulate it, add to it, and be supportive of it. If you work in an organization where you have psychological safety, then you might call it out, saying, “I don’t think this is correct,” or even saying, “I don’t understand this. Can you explain this?”

And then another [response] is to just disengage, keep your head down and check out. For some people who don’t like the levels of bullshit, if they have opportunities they will exit, they leave the organization. That’s how people react. Those [are] consequences of bullshit in the workplace.

Then we talk about how to prevent it, … which is, be careful about who you invite to meetings. Create a culture of psychological safety, limit jargon, restrict acronyms, create a culture where people value expertise and data. […] It’s appropriate to be able to offer imperfect opinions, hypotheses and hunches, as long as they are offered as that, and you’re not making decisions based on things that people are asserting as truths, when they’re just making stuff up.

Q: In terms of the prevention aspect, does the effectiveness of that depend at all on the size of an organization? Does it become harder as you add more people?

A: It may not depend on the size but instead the culture in terms of leadership and industry and professional expectations for truth. Why do we bullshit? We bullshit to persuade, and we bullshit to impress. We bullshit to avoid getting caught out.

Let’s take different professions, within and across organizations. In accounting, there are more absolutes – they rely on numbers – but still, these data are not always perfect absolutes. Similarly, you’d hope in science-based and analytical-based professions that there would be more focus on saying “I don’t know,” and questioning the veracity and integrity of the information they rely on. Whereas [it’s different] in marketing where they want to convince consumers to buy, [and] they need to persuade, and often ‘puff’ up claims. And consider politics, and even entrepreneurship, where they’re pitching policies and ventures, …they’re making very future-forward statements, where the truth is always evolving.

Also, consider how bullshit propensity varies from North American to South American cultures, to Asian cultures to North European to South European cultures. One very interesting study that came out of Switzerland presented teenagers in the English-speaking world with a mathematical problem that couldn’t be solved. The non-bullshit answer is, “this can’t be solved” or “I don’t know how to do it” and the bullshit answer is, “here is the solution” and claiming that it is correct, [usually] with some persistence. Among the many tens of thousands of teenagers in English-speaking countries, boys were found to be more likely to bullshit than girls, and teenagers from privileged backgrounds and private schools were more likely to bullshit than those who were not.

Which country has the highest proportion of teenage bullshitters, and which country has the lowest? […] Well, the highest, in this one study, is Canada and number two is the U.S. and… the lowest by far is Scotland. That study is not testing causal mechanisms and saying “why.” But hunches and hypotheses around why Canada is so high are that they don’t like to upset people, they don’t like to tell it as it is, and they would much rather be nice than share uncomfortable truths or unpleasant opinions, even though they might be true. Scotland is the opposite. They don’t mind actually upsetting people. They don’t mind telling you how it is.

Q: One of your most recent papers on SSRN, “The Risks of Botshit,” discusses the dangers of made-up, inaccurate, and untruthful chatbot content that humans use for tasks and how it can negatively impact businesses: reputation, safety, legality, economic factors, decision-making, etc. Do you see this as the technological equivalent of the bullshit we were just talking about?

A: People might think that large language models and chatbots are bullshitting machines, and to some extent, they do help with that process, but… while human bullshitters can bullshit knowingly and unknowingly, large language models don’t have the capability of knowing. They are forecasting models, they are prediction machines, they are not knowing machines.

Large language models… are trained on human data, a lot of which comes from social media platforms. Think about the quality of that social media data and other online data produced by humans, and the extent to which it has, for some time now, been infiltrated by bots producing misinformation and everything else. So, the short answer is, they’re very bullshit-like but they’re not technically bullshitters. What we argue in the botshit paper is that when humans use AI outputs for work that are contaminated with flaws, errors, and misinformation, they are spreading botshit because they didn’t generate or make up the flaws themselves. They just uncritically use and spread the flaws.

Q: The paper “Social Media? Get Serious! Understanding the Functional Building Blocks of Social Media” introduces a framework of defining social media using seven functional building blocks: identity, presence, relationships, reputation, groups, conversations, and sharing. A few years later, your paper “Social Media? It’s Serious!: Understanding the Dark Side of Social Media” looks at the darker impacts of social media through that same honeycomb framework. Social media has changed a lot in the past decade and a half. Which specific parts of this framework do you think are most important to consider now, in 2026, given the way the landscape of these platforms has changed?

A: The honeycomb framework was a simple functional framework which helped people to understand how, back in 2010, social media platforms varied and evolved over time to do different things. Facebook was originally just a photo-sharing and ranking application, and now we can use it for messaging, dating, promotion, selling, forming groups, and all sorts of things. It relies on user-generated content, where users are sharing content and opinions to appeal to other humans, but also causing addiction, misinformation, privacy issues, and cyberbullying.

At the moment, we engage with these platforms using keyboards, cameras, microphones and GPS. There will come a point when we are engaging with them in augmented and virtual reality ways, resulting in different ‘metaverse realms’ for distinct, immersive user interaction and value creation. We will be wearing or even embedding technology in our bodies to allow us to immerse ourselves in a metaverse realm. The realm will track our body and facial movements and record our heart rates as we explore and engage in the realm. It will be listening to the tone and inflection of our voice and learning when we are happy, when we are annoyed.

Q: My questions have really only scratched the surface of your work, so there’s a lot we haven’t touched on. What papers or other research that we haven’t discussed do you want to highlight as particularly interesting or timely?

A: I’m doing an interesting project where… we study chief information officers (CIOs) and their approach to cybersecurity. We interviewed them, and asked them to complete a survey, and found the presence of ‘illusory superiority,’ which is a decision-making bias in which people overestimate their own abilities, skills, or qualities relative to those of their peers, regardless of their actual competence level.

We then presented the same interview and survey questions to different large language models and asked them to be CIOs, and found that they provided very similar answers to human CIOs and also exhibited illusory superiority. This means that large language models can mimic nuanced human behaviors, including cognitive biases like illusory superiority, suggesting they could sometimes replace humans in interview and survey-based research. This could make studies faster, cheaper, and more accessible globally, but raises concerns about response homogenization, probabilistic and unpredictable outputs, and diminished human roles in research.

Q: What do you think SSRN contributes to the world of modern research and scholarship?

A: It makes it more open, more accessible, and more real-time. I think that the original vision of SSRN, with accessibility and openness in providing that information, has a valuable mission, which I’ve been happy to participate in. I read papers on SSRN, which are released early and not yet available via the publishers’ paywalls, and I share as much of my work as possible so that it’s accessible.


More About Ian McCarthy

Ian McCarthy is the W.J. VanDusen Professor of Innovation and Operations Management at Simon Fraser University (SFU) and a Professor at the Center in Leadership, Innovation and Organisation (CLIO) at Luiss University. He came to SFU from the University of Warwick, England, where he was a Reader and Head of the Organizational Systems Strategy Unit. He worked for several years as a manufacturing engineer before earning his Ph.D. in operations strategy from the University of Sheffield. He was also a Fulbright Scholar at the Georgia Institute of Technology, studying the impacts of university innovation on local and national economies. He studies and teaches operations management, innovation management, change management, social media, creative consumers, and the world of management education, and has published well-cited articles about these subjects.
