8 Questions with Nicholas Thompson ‘93 (PAA), CEO of The Atlantic and Former Editor in Chief of Wired Magazine

By: Zadie Robinson ‘26

Nicholas Thompson, an esteemed American technology journalist and media executive, has left a lasting mark on digital journalism and technology discourse. With a career spanning pivotal roles at Wired, The New Yorker, and The Atlantic, Thompson’s journey reflects the evolving nature of journalism in the digital age. As editor-in-chief of Wired and, before that, an editor at The New Yorker, he oversaw the implementation of digital paywalls that drove significant increases in digital subscriptions, a testament to his strategic insight in navigating the shifting tides of media monetization. Throughout his career, he has remained deeply engaged in discussions about the intersection of technology, media, and society, grappling with complex ethical considerations and the transformative potential of emerging technologies such as artificial intelligence (AI) and blockchain.

In this interview, Thompson provides insights into the evolution of technology journalism, the ethical imperatives facing journalists amid AI advancements, and the challenges of combating misinformation in an increasingly digitized world. Drawing on his experience founding SpeakEasy AI, Thompson offers candid reflections on the complexities of fostering constructive online discourse and the realities of navigating the competitive social media landscape. As society grapples with the ethical implications of technological progress, Thompson’s expertise offers invaluable guidance for aspiring journalists navigating the ever-changing media landscape.

So, on transitioning between journalism and technology: I saw you’ve held a lot of editorial roles at Wired and The Atlantic. How do you see the role of technology journalism evolving, especially in today’s rapidly changing digital landscape?

Technology journalism for me was an amazing, almost serendipitous career trip… what happened is I went into political journalism and worked in the Washington market, but I covered a little bit of tech. Then I fortuitously got hired at Wired, and that put me fully into the tech track. It wasn’t that I didn’t know anything about tech; I’d gone to Stanford during the tech boom and worked at a computer company afterward, but I wasn’t all about tech. I didn’t have a computer science degree, but I knew enough about tech… to be hired. So then I was a technology journalist, and the beautiful thing about being one is that you have a niche that is growing. You’re writing about something that becomes more important every year, and that changes every year. If you are, let’s say, a drama critic, that’s great, but it’s the same size every year. As a technology journalist, suddenly there’s more to cover every year… [and] not only that, as a technology journalist, you start to learn about the tech industry. You learn how these platforms work, what product managers do, what engineers do. You learn all of this stuff about the infrastructure that underpins the media industry. And so what happened to me is that, by nature of being a technology journalist, I then started getting extra responsibilities. I went to The New Yorker and they needed someone to manage their iPad app. Well, how about the guy who knows a little bit more about iPads than everybody else? So gradually, my career started adding responsibilities that were related to things I had learned by actually being a reporter. It’s kind of a roundabout way to end up with a pretty interesting career with a bunch of things I work on.

As AI continues to advance, what ethical considerations do you believe journalists and media professionals should prioritize to maintain accountability, transparency, and overall trust within their audience? 

Yeah, that’s a great question. So it depends, but if you’re writing and you’re talking about how [AI] is being used, you have to disclose it. If AI writes a sentence, you have to disclose it… [or] you’d be misleading your readers. We have a rule that if we do anything that anybody could interpret as having come from a human, we declare it. [For example], we read our stories aloud with AI-generated voices. So you can listen to The Atlantic, but we always say this story was generated with AI made by whatever our AI partner is. We’re always very clear about what is AI and what is human. You may think you can cut a corner on that, but I don’t think one ever should. So that’s one level of it, and that’s our use case. I also think that journalists, as a class of people, have a responsibility to try to understand and explain the effect that AI is having in the world… I view one of my most important responsibilities as trying to understand [AI] and then trying to explain it. I do these daily videos, mostly about AI, [and] I go to lots of conferences, and part of what I’m trying to do is to understand this huge force, what the possible outcomes are, and therefore how you can best shape it for the maximum amount of good. And I think journalists as a whole have a real responsibility to understand AI, to hold the AI industry to account, and to help shape public conversations that will lead us to the best outcomes in AI.

You are the founder of SpeakEasy AI, where you are reimagining how to conduct online conversations to enable healthy and positive discussions, including AI moderation and facilitation. How do you envision this company transforming online discourse, particularly in fostering more constructive and civil conversation, in a media era characterized by so much polarization and toxicity?

I’m gonna put this one in the past tense because we just sold it; we announced it last week. The goal was to create a new, gigantic social platform where people have productive conversations, get along, and learn from each other, where we’re not pushed into filter bubbles and empathy [is] increased worldwide. It turns out that is a very hard problem. We built some cool software, and we sold that software, but we, for better or worse, did not succeed in supplanting Twitter.

I saw that you have very extensive experience in journalism and technology, and I was wondering what advice you would give to aspiring writers and journalists, especially those who are interested in covering science and technology.

Yeah, well, the most important trait you can cultivate is curiosity. So if you are a curious person, covering technology and science as a journalist is a great place to be. You know, part of what drew me into this field is just the chance to constantly be learning. You get to tackle a new subject, learn a new thing, meet new people, and go to new places constantly. So if you are an extremely curious person, it’s a wonderful profession for you. If you go into [tech journalism], I think the most important advice is to take that curiosity and just always be trying to learn different things, trying to understand different skills, and being able to evolve. What makes somebody a good journalist now is quite different from five years ago, from ten years ago. There are forms of journalism that didn’t exist when I was growing up—nobody was a videographer, nobody did audio journalism. Nobody worried about their social media profiles. Nobody could build a journalism career by being particularly compelling on TikTok. All of those things are possible now. That doesn’t mean you have to do all of those things, but try everything you can to understand what you’re good at, what you’re passionate about. [T]he other bit of advice would be to lean in and try to understand AI as much as possible. I’m using it as much as I can… both because it can help me be more efficient at what I do, but also because by using it I understand it better, and this will be one of the most powerful forces reshaping the Internet, reshaping journalism, reshaping the way we work. I don’t know what it’s going to do, but I do know that knowing more about it is a really good position. So if I were a young technology journalist, I would be experimenting constantly in all, you know, ethical, appropriate ways that your teachers fully approve of, in understanding how these tools can be used.

You’ve been deeply involved in instituting digital paywalls at both The New Yorker and Wired, which I saw significantly increased digital subscriptions. I was wondering if you could speak to the challenges and opportunities of implementing such strategies in the media industry.

So the opportunity is, it’s a great way to get people to pay for your content. Advertising is fickle: it goes up, it goes down. It’s highly dependent on an audience; it’s dependent on the economy. You go into a recession, advertising budgets can get cut. It’s very complicated. You don’t want to be wholly dependent on it. The beautiful thing about subscription revenue is that it’s recurring and it’s based on direct relationships with individuals. And if you write better stories, they’ll pay you for it. So the advantage of a paywall is it shifts your business model to be focused on recurring reader revenue as opposed to episodic advertising. The trade-off is that if you have a paywall, you’re making it harder for people to read your site. So you are perhaps decreasing your influence, you are perhaps harming democracy, you are perhaps antagonizing your audience… The goal is to build a paywall that convinces people to subscribe without making your content feel completely inaccessible. And those are really hard trade-offs… there’s a lot of math behind it. A lot of my career has been spent trying to understand that math and optimize for all the various things you’re optimizing for as a media person.

In my time on the Internet, I’ve seen a rise in misinformation and deepfakes. I was wondering how journalism and media organizations can combat the spread of false information while upholding principles of free speech and press freedom.

I don’t know if we can… [but] we certainly should try. See, we have to try a couple of things. We have to make sure that we are not ourselves ever tricked. There are going to be people who try to trick me, try to trick other Atlantic journalists, and create deep fakes and say, “Hey, write a story about this.” So that’s one step. We also need to be doing the best we can to identify things that are false and debunk them. If there’s something important that is out there in the world and we can prove it’s false, that is a real service. And then the most important thing is just as a publication, always standing for truth, integrity, and accuracy, and just trying to hold that as much as possible, even as the world gets incredibly confusing and full of falsehoods that were impossible to create even a year ago.

In what ways do you think blockchain technology could revolutionize the media industry, particularly in terms of content distribution, copyright protection, and decentralized journalism? 

That’s a good question. The company we sold SpeakEasy to, Amplica Labs, is actually building a platform based on the blockchain. So some of the technology we built to try to improve conversations will eventually run on a blockchain identity layer. I think the blockchain could play a really important role in verification… [If] you could figure out a way of putting verified content onto the blockchain and then having that verified content easily accessible by, say, the social platforms, you may solve a problem. So if Twitter could link up to a blockchain and verify that something was published in The Atlantic, you could then make sure that if somebody says something was published… it actually was. So I think there’s a real verification use. As for creating new publications, people have tried. I think it’s been pretty hard to pull off. There are some logistical issues with blockchain latency questions… [but] I do think that we’re entering a good era of Web3 and blockchain. We’re going to kind of get out of the grifting phase and go into a new building phase.

As someone deeply involved in discussions about the future of technology, what trends do you believe will have the most significant impact on society in the next decade? How can we prepare for the potential societal and ethical implications of that? 

I definitely think AI is going to be the most important. I know there are some skeptics. I was just out for a walk with a friend of mine from Andover, actually, who works at one of the big tech platforms and is arguing that it’s a big bubble and going to go away. But he’s wrong. I think that the way that AI changes the nature of work, the way that AI changes the way we relate to each other, the way that AI changes income inequality, and the way even AI changes power dynamics between nations, I think will be massive… The second [important trend] will be bioengineering humans and the ability to improve our genes, the ability to use CRISPR to create new forms of life or change our own biology. I think that will be massive as well and raises all kinds of fun ethical questions that a good tech and science journalist like yourself will help society sort out.
