The race to deploy artificial intelligence is outpacing our ability to understand its consequences. That needs to change — not by slowing down innovation, but by steering it toward something worth having.
We are living through one of the most consequential technological shifts in human history. Artificial intelligence is no longer a distant promise — it is reshaping how we work, communicate, think, and relate to one another. Billions of people interact with AI-powered systems every single day, often without fully understanding the degree to which those systems are shaping their choices, their attention, and their livelihoods.
And yet, the regulatory frameworks governing these platforms remain dangerously thin.
I want to be clear: I am not arguing for a clampdown on AI development. The technology holds enormous potential — in medicine, education, climate science, accessibility, and beyond. But potential and impact are not the same thing. Right now, the most powerful AI platforms are being deployed at scale with minimal accountability and almost no structural obligation to consider the societal costs they impose. That is not innovation. That is outsourcing risk onto the public.
We need better guardrails. Not to cage the technology — but to make it worthy of the trust we are already placing in it.
The Jobs Question Nobody Wants to Answer Honestly
Every wave of technological disruption has displaced workers. The Industrial Revolution, mechanization, the internet — each created enormous new economic opportunities, but only after significant periods of upheaval that devastated real communities. We have a chance, this time, to be more deliberate.
AI is not simply automating repetitive tasks. It is beginning to encroach on creative, analytical, and interpersonal work that was previously considered safe from automation. Copywriting, legal research, customer support, financial analysis, medical triage — the list of affected sectors grows longer every quarter.
The question is not whether AI will change the employment landscape. It will. The question is whether the companies building and deploying these systems will bear any responsibility for the transition costs — or whether, once again, those costs will be absorbed entirely by workers and governments.
A sensible regulatory framework would require large AI platform companies to contribute to transition funds for workers displaced by their systems. It would mandate transparency about where AI is being substituted for human labor, and support retraining programs designed in partnership with affected industries. Innovation should create prosperity broadly — not concentrate it narrowly while externalizing the pain.
The Mental Health Crisis in Plain Sight
The research on social media and mental health, particularly among young people, has been mounting for years. Platforms optimized for engagement tend to exploit psychological vulnerabilities — anxiety, social comparison, the craving for validation — because those vulnerabilities keep users scrolling. AI makes these mechanisms dramatically more powerful.
Recommendation algorithms powered by machine learning can now predict, with unnerving precision, exactly what content will provoke a strong emotional response and deliver it on an endless loop. AI companions and chatbots are being designed to maximize attachment, not wellbeing. The more time a user spends on a platform, the more revenue it generates — and so the incentive is always to deepen the hook, not loosen it.
This is not a hypothetical harm. We are watching it happen in real time, in the rising rates of anxiety, depression, and loneliness among people who have grown up in algorithmically managed information environments.
Regulation must require AI platform companies to conduct and publish independent assessments of the psychological effects of their systems. Engagement metrics cannot be the only measure of success. Platforms should be held to a duty of care, particularly for minors — and "our algorithm did it" should not be a viable defense.
Digital Addiction Is a Design Choice
Closely related to the mental health question is the matter of digital addiction — and the fact that it is, at least in significant part, an engineered outcome.
Variable reward schedules, infinite scroll, notification systems calibrated to maximize interruption — these are not accidents of design. They are deliberate mechanisms for capturing and holding attention. AI has made it possible to personalize these mechanisms to an extraordinary degree, making each user's addictive loop uniquely tailored to their particular psychology.
In other industries, designing products to be addictive, particularly for vulnerable populations, is already treated, by growing legal and ethical consensus, as an unacceptable business practice. The tobacco industry can no longer market cigarettes to children. Casinos face strict regulations about how they design their floors and their games. AI platforms, which are at least as capable of engineering compulsive behavior, operate under no comparable constraints.
Guardrails should include clear prohibitions on design patterns specifically intended to create compulsive use. Users should have meaningful — not cosmetic — controls over their algorithmic experience. And the burden of demonstrating that a system is not exploiting psychological vulnerabilities should fall on the platforms, not the public.
Who Pays for the Power?
Here is a dimension of the AI debate that rarely gets the attention it deserves: the physical infrastructure that makes all of this possible.
The large language models and AI systems being deployed today are extraordinarily energy-intensive. Training a single large model can consume as much electricity as hundreds of homes use in a year. The data centers that run inference at scale require a massive, continuous power supply, enormous quantities of water for cooling, and significant amounts of land. These are not abstract numbers — they translate into carbon emissions, strained power grids, environmental impacts on local communities, and competing demands on scarce resources.
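To put a rough number on that: one widely cited academic estimate placed the training run of GPT-3 at roughly 1,300 megawatt-hours of electricity. At the U.S. average of about 10.5 megawatt-hours per household per year, that single run matches the annual consumption of well over a hundred homes, and public estimates for today's larger frontier models run far higher still. These are back-of-envelope figures, and the companies themselves rarely confirm them, which is itself part of the transparency problem.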
Right now, the cost of this infrastructure is largely being socialized while the profits are being privatized. Local governments offer tax incentives to attract data centers. Discounted industrial electricity rates and the grid upgrades these facilities demand are frequently underwritten, in effect, by ordinary ratepayers. Environmental reviews are sometimes expedited under economic development pressure. The communities that host these facilities bear the environmental and infrastructure burden; the shareholders of AI companies collect the returns.
This is not sustainable, and it is not equitable. Regulation should require AI companies to be transparent about the environmental footprint of their systems, to meet meaningful carbon accountability standards, and to contribute fairly to the communities and infrastructure on which they depend. The computing power behind AI is not free — and the companies profiting from it should be honest about who is actually paying for it.
Equal Access or a New Digital Divide?
Finally, and perhaps most fundamentally: who benefits from this technology?
Right now, the most powerful AI tools are concentrated in the hands of well-funded companies, wealthy individuals, and elite institutions. The capability gap between those with access to cutting-edge AI and those without is widening rapidly — and it maps, almost perfectly, onto existing inequalities of income, geography, education, and power.
A student at a well-resourced school with AI tutoring tools has access to forms of personalized, on-demand educational support that would have seemed miraculous a decade ago. A student at an underfunded school in a rural or low-income area may have no meaningful access to these tools at all. The same disparity plays out in healthcare, legal services, entrepreneurship, and research.
If we allow AI to develop primarily as a premium product — something whose benefits flow to those who already have advantages — we will have built a technology that deepens inequality rather than alleviating it. That outcome is not inevitable, but avoiding it requires deliberate policy.
Regulatory frameworks should include provisions for public access to foundational AI capabilities, investment in AI literacy and education across income levels, and explicit attention to algorithmic bias that encodes and amplifies existing discrimination. Equal opportunity in an AI-powered world requires active effort, not passive optimism.
A Framework Worth Building
None of what I am proposing here is anti-technology. I believe AI can and should be one of the great tools of human flourishing. But tools do not deploy themselves responsibly — the people and institutions behind them bear responsibility for the choices they make.
The guardrails I am advocating for are not about limiting what AI can do. They are about ensuring that what AI does serves the public interest, not just the interests of those deploying it. They are about holding powerful systems accountable to the communities that host them, the workers they disrupt, the users whose minds they engage, and the planet whose resources they consume.
We have a window, right now, to shape the conditions under which this technology matures. That window will not stay open indefinitely. The patterns being established today — about labor, about attention, about infrastructure, about access — will harden into norms and lock-in effects that become increasingly difficult to undo.
The question is not whether to regulate AI. It is whether we will do it thoughtfully, with genuine regard for the society we want to build — or whether we will wait until the harms are too large to ignore, and regulate in crisis, poorly.
I'd rather not wait.
What guardrails do you think matter most? I'd love to hear your perspective in the comments.