The tech industry isn’t just providing solutions in search of problems. It’s reshaping our understanding of what a problem is—and what it means to solve one—in ways that fit the tools it can profitably offer.
We often stop at the surface: the belief that every human challenge has a technological fix. But that belief is only the entry point.
First, there’s the quiet assumption that Silicon Valley not only has the means to solve problems, but the right to define them for the rest of us. Whose definition of “problem” are we working with?
Then, deeper still, there’s the logic of the market: the need to create new problems in order to justify new tools. Innovation, under this view, is less about discovery and more about manufacturing demand.
And at the heart of it all: the reshaping of human experience itself. A world where our ways of thinking, working, and relating must adjust to the logic of the tools—rather than the other way around.
This isn’t new.
Echoes from the Past
Fifty years ago, well-meaning American volunteers traveled to rural Mexico to “help.”
They brought ideas, energy, and middle-class assumptions. They believed they were modernizing communities, solving problems. But they imposed values that didn’t fit, created dependencies they didn’t see, and failed to listen to the people they came to serve.
The parallels now:
- Tech workers building AI systems they believe will help humanity.
- Imposing Silicon Valley values—efficiency, scale, optimization—on complex human problems.
- Creating new dependencies in the name of progress.
- Operating at a distance from the people most affected by their tools.
The logic hasn’t changed. Just the scale, the speed, and the rhetoric.
No Tool Is Neutral
You hear it often: “But they’re just tools.”
A casual shrug. As if that settles the matter.
But tools are never just tools.
Every tool carries assumptions—about the world, about what matters, about what needs fixing. A hammer assumes something needs hitting. A spreadsheet assumes life can be modeled in rows and columns. An AI system assumes something should be predicted, optimized, or automated.
These aren’t neutral starting points. They’re embedded ways of seeing.
Tools reflect choices—often invisible—about what counts as intelligence, which outcomes are desirable, whose data is worth collecting, whose voice gets heard.
And once introduced, tools don’t just sit there waiting to be used. They reshape the environment they enter.
Workflows bend to fit the tool. Expectations shift. Entire job roles get redefined. Soon, the way things could be is forgotten—because the tool has made a particular way of working feel inevitable.
Think of the smartphone. It transformed daily life not because the phone itself was some flawless leap forward, but because the world reorganized itself around its presence.
The pattern:
A tool arrives.
We adjust.
The adjustment creates new expectations.
Those expectations drive the need for more tools.
The room for real choice shrinks.
McLuhan, Revisited
Marshall McLuhan's famous idea, as John Culkin phrased it: "We shape our tools, and thereafter our tools shape us."
But it’s not just a one-time shaping. It’s recursive:
- We build tools that reflect our worldview.
- These tools reshape how we behave, work, and relate.
- Our new behaviors lead us to build more tools.
- Which shape us further.
Each loop tightens the fit. Each cycle reduces friction—until the tool feels natural and the world it creates feels inevitable.
What changes everything now is speed. McLuhan observed cultural shifts over generations. Today, our behaviors are reshaped in months. Ecosystems, industries, even our attention—redesigned in real time.
The Pattern
When power wears the face of help, when solutions are offered without asking the right questions, when tools redefine what it means to be human—that’s when we need to pause.
Not to reject the tool outright.
But to ask: What values does this tool assume? What kind of person does it reward? What ways of being does it make harder?
Tools shape us. But we can still notice. That responsibility remains ours.
Even—especially—when the tool says it’s here to help.
AI as the Latest Iteration
If this pattern has played out before, AI may be its most potent form yet. Not because it’s evil, but because it’s persuasive. And fast. And everywhere.
AI isn’t one thing. Large language models train us to think in particular linguistic patterns. Recommendation algorithms shape what we see and therefore what we think about. Computer vision systems define what counts as recognizable. Predictive systems encode assumptions about risk and value into consequential decisions.
Each operates differently; each shapes us differently. But they share something crucial: they all arrive with embedded assumptions about what matters, how intelligence works, and what constitutes progress.
AI doesn’t just offer answers. It frames the questions. It encodes definitions of intelligence, appropriateness, value, truth. And then it trains us—subtly, constantly—to match those definitions.
It’s easy to mistake AI for a neutral force. But AI systems are trained on data that reflect specific histories, specific cultures, specific blind spots. They’re designed to optimize, predict, and automate—as if those are self-evidently desirable things.
They aren’t.
And like the missionaries of progress before them, AI tools arrive not just with solutions, but with assumptions about what needs solving, how it should be solved, and who gets to decide.
The risk isn’t just bad code. It’s that we begin to see ourselves—our choices, our relationships, even our thinking—through the lens of what the system can recognize. And in doing so, we shrink ourselves to fit.
photo by Doug Vos