Rob Lalka on How Big Tech Turned Profits Into Power


Rob Lalka’s new book, The Venture Alchemists: How Big Tech Turned Profits Into Power, reexamines the familiar stories of America’s most famous entrepreneurs, uncovering the profound societal impacts of their decisions—and the unseen trade-offs that shape our world. Today, the influential leaders of the “Tech Right” have formed what the New York Times called “one of the most surprising and disruptive alliances in American political history.” In this Q&A, Lalka discusses his approach to research, what trends he spotted, what he didn’t anticipate, and what venture capitalists might do next with the power they will wield in the Trump administration.

Q: Your book presents familiar stories with a fresh perspective. How did you approach your research and writing?

Rob Lalka: Before writing this book, I spent many years as an entrepreneur myself. I’ve also worked with entrepreneurs throughout my career, at various stages of business growth and at various points in their journeys. For me, entrepreneurship has always been about discovery and self-discovery, problem-solving and personal struggles, and the ways the world is impacted by the choices real people make.

My grandmother’s teachings influenced this book, too. She used to tell amazing stories, tales about inner conflicts and growth, each with moral lessons. They were as complex and involved as the people they were about. That inspired me to take great care with the storytelling.

As wealthy and powerful as the people in The Venture Alchemists have become, they’ve each made trade-offs. Their choices often improved lives but also caused serious harm. I cared deeply about rigorous research and collected materials from the Stanford Archives, leaked documents, digital archives of college newspapers, deleted tweets and blogs, and much more. There are 1,902 endnotes, and the book was professionally fact-checked and went through Columbia’s peer review process.

As a result, the book tells a more human story than the glossy corporate PR or the pop culture understanding of these technologists. Some of their decisions we might agree with, others maybe not, but I believed lionizing or demonizing these men was less useful than understanding their multifaceted motivations. That perspective feels especially important now, as they gain extraordinary—maybe unprecedented—power in the second Trump administration.

Q: Your book addresses “cancel culture” in a thoughtful way, pointing out how it leads to dangerous oversimplifications that can mask deeper issues of corruption and abuses of power. What does this mean for culture and society moving forward?

Rob Lalka: In the first chapter, there’s a quote from the Facebook files that Frances Haugen leaked to the Wall Street Journal in 2021. An American boy described feeling constant pressure to post to Instagram, but he worried any mistake would ruin his reputation: “I just feel on the edge a lot of the time. It’s like you can be called out for anything you do. One wrong move. One wrong step.”

That constant pressure—combined with the unforgetting nature of the internet—really troubles me. It’s not healthy for students to have to self-censor because they’re worried that the worst thing they say or do as a 19-year-old will destroy their career decades later. College is a place where there should be the grace to make mistakes, to struggle, and to grow. That doesn’t make every mistake okay. But we’re all flawed, and we will all make them. What matters is whether we learn from them, whether we try to be better than we’ve been, and whether we grow, with help from a supportive community.

The New Orleans artist Tracey Mose critiques “cancel culture” in a powerful way. One of her works is captioned: “In trying to make everyone the same, we are no longer recognizing each individual’s talents and dreams.” I want my students to embrace their creativity, test their assumptions, challenge the status quo, and become more open to other viewpoints. Education should inspire deeper critical thinking, expand knowledge, and build empathy—not increase pressure to get the “perfect” answer.

Unfortunately, what’s happening in the world today often does the opposite—it makes us shallower, narrows us, and divides us into unforgiving ideological factions. Let’s be honest with ourselves: “cancel culture” certainly hasn’t fixed our problems. And it makes sense that, ultimately, people in power would get away with more than people without it. Those with the most wealth or prestige are far more likely to have their reputation and career survive something damaging.

To grow out of this strange dynamic—with so much injustice and exploitation, yet such inconsistent moralizing, and so much anxiety, grievance, and hostility—we need to reject oversimplifications, to “pause, reflect, question, and discern,” a mantra throughout the book. I hope my research makes a small, but meaningful, contribution to that. I’ve also worked with Columbia University Press to create discussion guides and teaching materials to accompany the book. They’ll soon be available online for free, and I’m hoping they encourage healthier, good-faith conversations about these important issues.

Q: As AI becomes more integrated into our daily lives, there is growing concern that biases in AI systems could contribute to or worsen existing divisions and inequalities. Your book shows how companies like Facebook and Google were shaped by the perspectives and biases of their creators, which unintentionally (or intentionally) influenced the algorithms that now govern these platforms. What are you keeping an eye on during this AI explosion?

Rob Lalka: These worries have intensified with the prevalence of AI tools like ChatGPT and Claude, but we should have been paying more attention to how the algorithms of social media platforms were being designed years ago. Ezra Callahan, Facebook’s first product manager, put it starkly: “‘How much was the direction of the internet influenced by the perspective of nineteen-, twenty-, twenty-one-year-old well-off white boys?’ That’s a real question that sociologists will be studying forever.”

We cannot develop completely unbiased AI systems, because the data being fed in must come from somewhere—and we all make mistakes, as we just discussed. AI learns from us, flaws and all. Rather than putting blind trust in these algorithms’ outputs, we must understand their limitations. I’m convinced that, in the era of AI, we need to study the humanities more, alongside fields like business and science, not less. In a sense, we’ve been asking powerful computers to figure out the human condition using math. People are way more interesting and complex than some formula.

Take, for instance, the debate between Elon Musk and Sam Altman about OpenAI changing from a nonprofit to a “capped-profit” model. In the book, I questioned whether they’d stick to that. They didn’t. In December, OpenAI announced plans to become a public benefit corporation, just like Claude’s parent company, Anthropic—with no caps on profits. The Musk-Altman feud keeps escalating. Musk made a bid to buy OpenAI earlier this week, exchanged taunts with Altman, and then stated he would withdraw the offer—but only if OpenAI commits to remaining a nonprofit.

As increasingly powerful AI impacts business and society, we must remind ourselves that nothing is inevitable. These decisions are made by real people. Interdisciplinary collaboration—bringing together technologists, ethicists, business leaders, sociologists, and policymakers—offers the best chance for these tools to serve everyone, responsibly, not just the powerful few.

Q: Many of the investors that your book covered—including JD Vance, David Sacks, and Jacob Helberg—are coming into leading roles in D.C., with Vance becoming the vice president, Sacks named as the White House AI and crypto czar, and Helberg serving as undersecretary of state. Elon Musk leads the Department of Government Efficiency (DOGE). How did you know that these people would gain such incredible influence?

Rob Lalka: I didn’t know. I had no way of knowing. But I thought deeply about the power these men were pursuing, and I connected some important dots, so I spotted the trend.

The people you mentioned have been building ventures together and investing together—first in companies, and then in the political candidates they’ve supported—for decades. Their businesses have defied or changed regulations they disliked, and I expect those attitudes to impact their approach to governing. This is not the story of Silicon Valley elites who are assuming positions of power in Washington just because they donated money this political cycle. They have been building toward this, and gaining more and more influence, for years.

Q: What can we expect from the Trump administration?

Rob Lalka: The questions on my mind this week involve whether Silicon Valley ideas—the “move-fast-and-break-things” approach—might be embraced by political appointees as they try to make the government faster, cheaper, and better. An outside observer might say that approach has worked well in business. That’s the popular perception of how Silicon Valley challenged other industries—think of Uber versus taxis or Airbnb versus hotels.

That’s true, in one sense. All net new jobs come from startups. Half of all Fortune 500 companies turn over every twenty-five years; they are replaced by new ventures, mostly backed by venture capital. But for every household brand we know, there are graveyards full of failed startups. Is that the kind of disruptiveness we want for public services that people depend on? Does that level of risk-taking work in governing? This Silicon Valley ethos, promoted by the “Tech Right” now coming into power, is different from prior small-government conservatism. It emerges from a particular business approach that worked for a very small number of firms—and, for the vast majority, failed.

Venture capitalists take extreme risks. They reject tradition. They only need a few winners to succeed massively. At least historically, governing federal departments wisely and effectively doesn’t work that way. The downside risk is too great, because failures cause real harm to people’s lives. Missteps won’t just hurt investors; they can destabilize communities, harm economies, and undermine peace and security.

With Musk beginning by shutting down USAID and firing all but 294 of its 10,000 employees—“feeding USAID through the wood chipper,” to use his words—we are already seeing the “move-fast-and-break-things” approach in action. And let’s be clear: USAID’s budget is $40 billion, while Musk has said that he wants to cut $1 trillion, even $2 trillion, from the federal budget. What’s next? As many have noted, foreign aid costs far less than military intervention, so the long-term risks and national security costs must be considered.

While tech can create efficiencies, disruptions often cause chaos. Government is usually more measured, prioritizing stability and reliability. So while cutting programs and personnel could seem like short-term political “wins,” lasting effectiveness will be much harder to achieve in governing, especially if the Trump administration attempts to make multitrillion-dollar cuts like they have been promising. Who gets hurt? Who benefits? Those are key questions.

Here’s another: What happens to Big Tech? Many of these venture capitalists have profited massively from people’s attention and data—the most valuable assets of the twenty-first century—while people have gained little of that economic value. Then Trump and Vance won an election on a populist, anti-elite mandate against a system voters saw as unfair and rigged. Will the incoming administration now break up tech companies? Regulate them so that people share in that value, or at least so that new ventures—which offer a fairer deal—can compete? As I predicted in my book, something has got to give.