Your Mileage May Vary is an advice column that offers you a unique framework to reflect on your moral dilemmas. It is based on value pluralism, the idea that each of us has multiple values that are equally valid but often conflict with each other. To submit a question, please complete this anonymous form. Here’s a reader’s question from this week, condensed and edited for clarity.
I’m an AI engineer working at a mid-sized advertising agency, primarily on non-generative machine learning models (think ad performance prediction, not ad creation). Lately, it seems that people, specifically mid- and senior-level managers who do not have an engineering background, are driving the adoption and development of various AI tools. Honestly, it feels like a thoughtless free-for-all.
I consider myself a conscientious objector to the use of AI, especially generative AI; I am not totally opposed to it, but I constantly ask who really benefits from the application of AI and what its financial, human, and environmental costs are beyond what is right under our noses. However, as a rank-and-file employee, I don’t see any real avenue to convey those concerns to the people who have real power to decide. Worse yet, I feel like even expressing such concerns, especially against the near-blind optimism that I assume pervades most marketing companies, is making me a pariah in my own workplace.
So my question is this: Considering the difficulty of finding good jobs in AI, is it worth trying to encourage critical use of AI in my company, or should I keep quiet, if only to keep paying the bills?
Dear conscientious objector,
You’re definitely not the only one who hates the uncritical deployment of generative AI. Many people hate it, from artists to programmers to students. I bet there are people in your own company who hate it too.
But they don’t speak up, and of course there’s a reason for that: they’re afraid of losing their jobs.
Honestly, it’s a fair concern. And that is why I am not going to advise you to take risks and fight this crusade alone. If you, as an individual, object to your company’s use of AI, you become legible to the company as a “problematic” employee. There could be consequences for that, and I don’t want you to lose your paycheck.
But I also don’t want you to lose your moral integrity. You are absolutely right to constantly ask who really benefits from the thoughtless application of AI and whether the benefits outweigh the costs.
So, I think you should fight for what you believe in, but do it as part of a collective. The real question here is not: “Should you voice your concerns about AI or stay silent?” It’s: “How can you build solidarity with others who want to be part of a resistance movement with you?” Working as a team is safer for you as an employee and more likely to have an impact.
“The most important thing an individual can do is to be somewhat less individual,” environmentalist Bill McKibben once said. “Join others in movements big enough to have some chance of changing those political and economic rules that keep us stuck on this current path.”
Now you know what word I’m going to say next, right? Unionize. If your workplace can organize, that will be a key strategy for pushing back against AI policies you don’t agree with.
If you need a little inspiration, look at what some unions have already accomplished: from the Writers Guild of America, which won important AI protections for Hollywood writers, to the Service Employees International Union, which negotiated with the governor of Pennsylvania to create a labor board to oversee the implementation of generative AI in government services. Meanwhile, this year thousands of nurses marched in the streets as National Nurses United pushed for the right to determine how AI is and is not used in patient interactions.
“There are a whole range of different examples where unions have been able to really be at the forefront of setting the terms for how AI is used and whether it is used or not,” Sarah Myers West, co-executive director of the AI Now Institute, told me recently.
If it’s too difficult to form a union in your workplace, there are many organizations you can join forces with. Check out the Algorithmic Justice League or Fight for the Future, both pushing for equitable and responsible technology. There are also grassroots groups like Stop Gen AI, which aim to organize a resistance movement and a mutual aid program to help those who have lost their jobs to the rollout of AI.
You can also consider hyperlocal efforts, which have the benefit of building community. One of the most visible right now is the fight against the massive build-out of energy-hungry data centers intended to fuel the rise of AI.
“It’s where we’ve seen a lot of people fight in their communities and win,” Myers West told me. “They’re fighting on behalf of their own communities and working collectively and strategically to say, ‘We’re getting a really raw deal here. If [the companies] are going to reap all the benefits of this technology, they must be accountable to the people who use it.’”
Local activists have already blocked or delayed $64 billion worth of data center projects across the United States, according to a study by Data Center Watch, a project led by artificial intelligence research firm 10a Labs.
Yes, some of those data centers may eventually be built anyway. Yes, fighting against uncritical adoption of AI can sometimes feel like facing an invincible giant. But it helps prevent discouragement if you take a step back and think about what it really looks like when social change occurs.
In a new book, Somebody Should Do Something, three philosophers (Michael Brownstein, Alex Madva, and Daniel Kelly) show how anyone can help create social change. The key, they argue, is to realize that when we join forces with others, our actions can generate butterfly effects:
Minor actions can trigger cascades that lead, in a surprisingly short time, to important structural results. This reflects a general characteristic of complex systems. Causal effects in such systems do not always add up smoothly or continuously. They sometimes build nonlinearly, allowing seemingly small events to produce disproportionately large changes.
The authors explain that because society is a complex system, your actions are not a meaningless “drop in the bucket.” Adding water to a bucket is linear; each drop has the same impact. Complex systems behave more like heating water: not every degree has the same effect, and the change from 99°C to 100°C crosses a tipping point that triggers a phase change.
We all know the boiling point of water, but we do not know the tipping points of the social world. That means you’ll have a hard time knowing, at any given moment, how close you are to triggering a cascade of change. But that doesn’t mean change isn’t happening.
According to research by Harvard political scientist Erica Chenoweth, if you want to achieve systemic social change, you need to mobilize 3.5 percent of the population around your cause. While we haven’t seen AI-related protests on that scale yet, we do have data that indicates the potential for broad-based protests. According to a recent Pew Research Center survey, 50 percent of Americans are more worried than excited about the rise of AI in daily life. And 73 percent support strict regulation of AI, according to the Future of Life Institute.
So even if you feel alone in your workplace, there are people who share your concerns. Find your teammates. Come up with a positive vision for the future of technology. Then fight for the future you want.
Bonus: what I’m reading
- I was struck by Microsoft’s announcement that it wants to build a “humanistic superintelligence.” Whether you think this is an oxymoron or not, I take it as a sign that at least some of the power players are listening when we say we want AI that solves real, concrete problems for real, flesh-and-blood people, not some fanciful AI god.
- The Economist article “Meet the real screen addicts: the elderly” is very accurate. When it comes to digital media, everyone is always concerned about youth, but I think not enough research has been devoted to older people, who are often positively glued to their devices.
- Hallelujah, some AI researchers are finally taking a pragmatic approach to the whole “Can AI be conscious?” debate! I’ve long suspected that “conscious” is a pragmatic tool we use as a way of saying, “This should be in our moral circle,” so whether AI is conscious is not something we will discover, but something we will decide.

