The price we pay for intelligence
Weighing one of the biggest tradeoffs we're making in the gift we call life
November 24, 2025 · 31 minute read
Most of my writing has touched on AI in some specific context. Here I want to think about its impact more broadly.
Everything comes with a price. Milton Friedman's "there's no such thing as a free lunch" is one of those lines everyone knows and almost nobody applies consistently. We're good at weighing costs and benefits in daily scenarios but terrible at examining the price of innovation as a whole. In my economics classes, technology and progress are practically synonymous. When we talk about increased "innovation" or "productivity," we're almost always referring to a boost in technology, in one way or another. That's a fair assumption most of the time, but it means we sometimes skip past the full picture when discussing what technology actually costs us.
Right now, that technology is AI. We're drowning in think pieces and threads and arguments about it. We talk about AI like it's a feature added to our lives, but I think it's becoming the environment itself. Just like the internet did for my parents, and just like smartphones and social media did as I grew up, AI will move from something we use to a default platform that everything else sits inside of.
I should be clear that I don't really have a stance going into this. I don't have the expertise to declare the fate of civilization, and I don't think anybody does, regardless of how accredited they may be. Yes, this means I don't think we should take Sam Altman seriously when he says AI is going to lead to the end of the world. That may sound dismissive, but it doesn't feel reasonable to draw firm conclusions about something so life-altering that it's bound to be unpredictable. History makes this clear. People once thought television would rot society beyond repair, that the internet would either liberate humanity or destroy our attention spans beyond saving, and that nuclear technology would surely end us in a single war. Were we entirely wrong? Definitely not, especially that attention span part. The point is that our predictions have always been partial. They say more about what we're afraid of and what we want to signal than about what will actually happen.
With any large technology boom, the number of moving parts is enormous. Governments, companies, social norms, energy constraints, economic incentives, personal behavior. None of these are fixed, and they'll all interact with the technology in ways we can't script or accurately predict. Speaking in absolutes about these turning points feels like a coping mechanism. My goal isn't to cope, it's to understand. I'm not aiming to be conclusive here; this is really just an attempt to grasp what's happening around me. If I sound certain at any point, please assume I'm not.
What I mean when I say "AI"
When I use the term AI, I don't mean one single product or app. I mean the invisible architecture that sits behind so much of what we interact with. The word has been pulled in so many directions that it almost feels empty on its own, and I say this as someone aiming to study the subject in school. If I'm not careful about what I mean, I'll end up talking around a shadow. So let me try to draw the outline as clearly as I can.
When I say AI here, I'm thinking about the whole stack of systems being used to mimic and automate the kind of work we used to consider purely cognitive. The reading, writing, summarizing, recognizing, pattern-finding tasks that humans were responsible for and used to find fulfillment in. This includes:
- The large language models that swallow an incomprehensible amount of data and somehow learn to talk it back to us
- The datacenters that house those models, including the hardware, the cooling units, the power lines feeding them, and all the other components that come together to make them work
- The companies pouring capital into this infrastructure
- The engineers and researchers tuning the models and writing code
I'm also referring to the more boring systems that existed before this recent wave of generative models. Recommendation engines, credit scoring algorithms, search ranking systems, facial recognition at airports, fraud detection at banks. All of it lives on the same continuum of giving machines partial authority over what we see, what we're offered, and how we're judged. What feels different now, and what I'm most interested in, is the way these capabilities are becoming general rather than narrow. We're no longer writing an algorithm for one specific task. We're building large general models that can automate almost anything with some fixing up.
There are also things I'm not talking about here. I'm not focusing on hypothetical god-like entities that wake up one morning and decide to erase humanity. I don't feel qualified to speculate at that level of abstraction, and honestly, it doesn't help me make sense of the reality I actually live in. I'm much more interested in the forms of AI that already exist or are clearly forming in front of us: systems that generate text and images, systems that answer questions, systems that help companies cut costs, systems that change what one worker can do in a day. These will shape my lifetime, and probably yours too. They already have enough complexity without importing distant science-fiction scenarios on top of them.
This definition matters to me for many reasons. It's easy, especially for people who live mostly online, to speak about AI as if it's weightless, as if intelligence can simply float in "the cloud" without touching anything real. Intelligence at this scale is brutally physical. It sits in buildings the size of warehouses. It consumes amounts of electricity that entire towns would have used not long ago. The "smartness" that feels so effortless when I type into a computer is supported by layers of metal, heat, water, and human labor that most people will probably never see with their own eyes. It only feels fair to be as precise as possible before weighing the benefits and the costs, both the visible ones and the ones we may not want to look at yet.
Contemporary benefits
Thinking honestly about the benefits of AI right now is an insanely difficult task because the sheer number of ways it's being used to "improve" human life (or realistically, productivity) is unquantifiable.
Starting with daily life, the benefits are almost trivial to list. Cleaning up writing, translating text, summarizing, organizing spreadsheets, planning trips, debugging code, tutoring. None of that sounds grand because it's normalized now. Each of these functions used to require another human being, or a lot more time, or access to resources that many people simply didn't have. A person who struggled with English now has an editor in their pocket. Someone trying to learn a subject their school doesn't teach well can ask questions and get some form of explanation. A small business owner who can't afford a full marketing team can experiment with copy and design. There are justified arguments about the quality of the results, but the baseline availability of help has changed radically.
Accessibility has improved because of AI as well. AI-driven voice recognition, captioning, image description, and interface adjustments have made technology more usable for people with disabilities. Someone who is visually impaired can have images described in real time. Someone who is hard of hearing can have meetings or videos automatically transcribed. Someone with mobility issues can use voice to operate devices that once required a keyboard and a mouse. That's amazing. It changes what daily life looks like for entire groups of people.
There's an obvious impact on science and healthcare that sometimes feels almost unreal. Some headlines are literally unbelievable. There are AI models now that can help accurately detect tumors in medical images by flagging anomalies that a human eye might miss after looking at hundreds of scans in a single day. Words can't express how amazing I find that. There are systems using AI to assist in protein folding predictions or drug discovery, effectively narrowing the search space so researchers don't have to test billions of possibilities blindly. AI is being used to triage patients, summarize prior records, and surface the most relevant information during a short appointment, so the doctor can spend more of those fifteen minutes actually talking to the person in front of them instead of hunting through files. These are all on track to contribute to:
- Diagnoses becoming more accurate and faster
- Treatments being more personalized
- Research cycles for new drugs and therapies shortening
- … and much more that I'm probably missing
None of that should be dismissed when thinking about AI as a whole. I could wander through education and law and customer support and half a dozen other areas pulling example after example of how AI is, in some real way, making someone's day a little less burdensome or a little more possible. For now, the pattern is clear enough. There are real gains already, even if they aren't the most glamorous. I'll try to put some numbers around those gains later.
One of the most understated benefits is the spread of expertise. For much of history, access to high-quality information and advice depended on where you were born, what you could afford, and who you knew. This is still true in many ways and is part of a larger problem we need to address, but AI has its own way of pushing against that boundary. Because of how accessible many generative AI tools are, people in less privileged situations can get writing feedback, language practice, and context on nearly any issue at any hour of the day. It's obviously not a replacement for professionals, and it can be wrong, but as a first pass it helps in so many ways.
AI has also forced us to confront what we mean by intelligence, by work, by value. Models can sweep the SAT, ACT, LSAT, CFA, and nearly every other type of standardized exam. I don't see this as purely negative destabilization. There's something useful about being forced to re-examine our assumptions. Sometimes a tool that disrupts an old structure reveals how flimsy that structure was in the first place. I'm intrigued to see how society comes to terms with this.
The root benefit AI has to offer is productivity. Productivity is very simply how much output you can get from a given amount of input, and AI is pushing that boundary. We operate faster now. Some view this negatively (hello parents) in the sense that we don't use our brains as much. I think that's a choice individuals make for themselves. Applied correctly, AI can free up cognitive space for more complex work and let humans have more agency over their time. There's also the chance we simply cram more tasks into the same day, which is a human problem, but the underlying capability is real.
As I write this, I go back and forth between appreciation and suspicion. As positive as this section may sound, I'm not uncritical. The point of this whole piece is to think about the tradeoff society faces and how improvements can be seductive distractions from real costs. Still, if I focus on the present, I have to admit that AI has already made certain forms of knowledge, assistance, and capability more widely available than at any point in human history. None of this cancels out the harms, which is what I'm going to talk about now.
The costs we are facing
I already said I wasn't going to be certain about anything, but while brainstorming these costs I realized I'm probably not going to be neutral either. This might end up sounding more like criticism. I'll get to the point.
The vast majority of AI use right now is sloppy. So much of the current wave is being spent on content that nobody really needs more of. Automated blog posts about nothing in particular. Mass-generated product reviews. Auto-written emails that exist just to provoke auto-written replies. Derivative images and videos that remix the same aesthetic over and over again. Better ad targeting in feeds. Better ad targeting in search. More engaging thumbnails. More optimized hooks. It's an engine that takes in the internet and spits out more internet. There's just so much random, redundant AI-produced content out there. Companies like Meta feel, to me, like pure slop factories. Their business model has always been to harvest attention and sell it, and AI simply lets them do that at ten times the speed. Social feeds are dripping with AI content that looks like everything else but somehow emptier. I'm not going to lie and say I'm removed from engaging in it. I find a lot of it funny. Very funny. But there's a deeper cost in this becoming the default use case for a very powerful technology. When people follow incentives, the most profitable thing to do with a large model isn't to cure disease or rethink education. It's to keep people scrolling a little longer, clicking a little more, buying a few extra things they don't need, and feeding more data back into the system.
The deeper risk isn't even that this is immoral. It's that if we keep pouring capital and talent into slop production, we'll get locked into a world where the highest-return use of "intelligence" is selling more ads. The cost of computation will stay high, the energy footprint will stay heavy, and the only people who can afford to run the biggest models are the same platforms that already sit on top of our attention. Unfortunately this is shaping up to be the case. Intelligence is becoming a scarce, centralized resource being optimized for click-through rate. What a tragedy.
The most obvious physical cost, at least to me, is the infrastructure. We hear the word "cloud" in a software context and picture something almost weightless, like our files are hovering in the sky. The more I read about datacenters, the more I realize how misleading that mental image was. (The naming of the "cloud" concept was definitely deliberate.) There is nothing light about running these models. They sit in massive buildings full of racks and cables and cooling systems, pulling electricity from grids that are already under pressure.
Right now, even before speculating about how big this could get, the costs of datacenters are already very real. These buildings sit on the edge of towns and cities pulling an enormous, constant stream of electricity from grids that weren't designed to feed warehouses of machines running at full tilt all day and all night. They need heavy cooling to keep everything from overheating, and that usually means vast amounts of water moving through pipes and towers, or more energy spent on industrial-scale air conditioning. In some regions, that water comes from supplies already under stress, the same rivers and aquifers that households and farmers depend on. The heat pulled out of the servers doesn't disappear. It spills back into the air or into nearby waterways, adding another small layer of strain to environments that have been absorbing "small layers of strain" from human activity for decades. On top of that, there's the land itself, the physical footprint carved out for these facilities, the transmission lines built to reach them. None of this is abstract. It's steel, concrete, copper, water, and power plants working a little harder so that somewhere else a model can respond a little faster.
Then there's the cost to how we think. This is harder to talk about because it's not as measurable, but I feel the impacts firsthand. When I know I can ask AI to summarize or structure something, there's a quiet shift in how much effort I'm willing to spend before I reach for help. This article is about the tradeoff of AI as a whole, but there's also a tradeoff we face every single time we know AI can do something for us. Opportunity cost, if you will. I still like thinking. I still like writing. But the threshold for "this is too much work, I'll just ask" keeps getting lower, and I don't think I'm alone in that. If you grow up with a calculator, you eventually stop doing math in your head. (Unfortunately I experienced this too.) If you grow up with AI that can do your work quicker than you can, it changes the way you think about how you spend your time. I'm glad I'm at least aware of this.
I'm not saying struggle is always noble. A lot of busywork genuinely sucks. But there's a kind of mental friction that builds capacity. When you sit with a hard idea, or write through a messy paragraph five times before it feels right, or notice something doesn't add up and dig until you find out why, you're not just producing output. You're shaping the way your own mind works. If we outsource too much of that friction, we'll end up with answers that sound good sitting on top of brains that haven't really done the work. Critical thinking won't disappear overnight. It'll chip away slowly. We're in a game of attrition that we stand no chance in because we're playing against ourselves.
There are other costs that get overshadowed. The human labor that goes into making these models safe and aligned is usually outsourced to people in low-wage environments who spend hours labeling content, ranking model outputs, and filtering trauma through content moderation that most of us would never want to touch. When I say filtering trauma, I literally mean that some of these workers have to look at the worst parts of the internet all day. Graphic violence, hate speech, self-harm content, sexual exploitation, all the things the rest of us never see because the model has been trained not to surface them. Their job is to tag it, classify it, decide what should never be generated, and sometimes write the correct, safe response the model should learn to give back. They become the cushion that absorbs this constant exposure so that, on our side, we get to talk to a polite assistant that feels clean and helpful. We see a clean response from these chatbots, but much of it still sits on top of a global layer of invisible work done by people who are often underpaid and distant from the wealth being created. I didn't know anything about this for a really long time and had to do quite a bit of research to find out the extent of it, which should tell you enough about the ethics.
AI is also a net negative for mental health in almost every way. People use these systems as therapists, as friends, as late-night companions when they're already lonely or unstable, and the model will always respond, always engage, always mirror something back without ever actually understanding them.
For most people that just means slightly worse sleep and slightly more emotional dependence on a product. For the edge cases it can genuinely feel psychosis-inducing. Reality twists a bit when you spend hours a day talking to something that can imitate care without ever having stakes in your life. Even for "normal" users, there's a constant nudging away from learning how to think with other humans and toward outsourcing reflection and comfort. The more time we spend in those loops, the less practice we get in the messy, tiring, incredibly important work of actually being around other people, reading their faces, getting things wrong, apologizing, and trying again. There really is a mental health epidemic, and it's absurd to me that people can gloss over the fact that AI will contribute to it. I don't think we've even begun to see the full psychological price.
The last cost I'll cover here is attention. This was a big concern when social media platforms started using infinite scroll to keep users engaged for one more post. The same logic applies to AI. The promise is that each new tool will save us time or make things smoother. Sometimes it does. But each new layer of assistance also asks for a little more of our focus and dependence, which is indirectly just a bit more willingness to let something else think on our behalf.
Quantifying said benefits and costs
At a certain point in the sections above, listing out sectors where AI had an impact started to feel like it was missing the point. Healthcare, logistics, education, finance, entertainment, manufacturing, customer service, creative tools. All of them have some AI success story right now with a number attached. A bunch of costs minimized here, some lives saved there. It becomes repetitive to keep saying "look, it helps here too." So for this part I want to treat the economy like the huge mess it is and ask: if I aggregate everything, what do the benefits and costs of AI look like in rough numerical terms?
On the benefit side, the range of estimates is huge, but almost none of them are small. Global GDP sits somewhere around $110 trillion. If AI lifts global productivity growth by even half a percentage point per year for a sustained period, that alone compounds into extra output in the tens of trillions over a couple of decades. If it's closer to one percentage point and that persists, you can tell a reasonable story where AI is associated with something like an extra $10-15 trillion of yearly economic activity by the time my generation is around 30. Obviously I'm being very optimistic here, but when people talk about AI eventually being worth "another large economy," this is what they mean. The very boring math of growth compounding, nothing abstract or fantastical about it.
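Since that claim is really just compound growth, here's a quick back-of-the-envelope sketch of how those trillions fall out. The inputs are my own assumptions (roughly $110T of global GDP today and a 12-year horizon), not anyone's forecast:

```python
# Back-of-the-envelope: what a sustained AI productivity boost
# compounds into. All inputs are assumptions, not forecasts.
GDP_TRILLIONS = 110  # rough global GDP today, in $T

def extra_annual_output(boost_pp: float, years: int) -> float:
    """Extra yearly output (in $T) vs. a no-boost baseline."""
    return GDP_TRILLIONS * ((1 + boost_pp / 100) ** years - 1)

for boost in (0.5, 1.0):
    print(f"+{boost}pp for 12 years: ~${extra_annual_output(boost, 12):.0f}T/year extra")
# ~$7T/year at half a point, ~$14T/year at a full point
```

A half-point boost lands around $7 trillion of extra yearly output and a full point lands near $14 trillion, which is where that $10-15 trillion story comes from.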
On the cost side, the numbers are smaller right now but still very real, especially because they're contributing to problems that already exist. Datacenters already consume one to two percent of global electricity. AI workloads are quickly becoming a big share of that, and if current trends continue, total datacenter demand could plausibly double over the next decade. That would mean an extra few hundred terawatt-hours of electricity every year, which at current energy mixes maps into tens or hundreds of millions of tons of additional CO₂ unless the grid gets much cleaner at the same time. There's no version of the future where AI avoids consequences from an energy and emissions standpoint. Pretty much all the large AI providers have already pushed their annual water use up by billions of gallons to keep servers and power plants cool. At a planetary scale those numbers aren't yet dominant compared to agriculture or heavy industry, but they're not negligible either, and they're growing faster than almost anything else in the digital world.
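To make the emissions math concrete, here's the rough conversion I'm doing in my head. Both inputs are ballpark assumptions: a few hundred extra terawatt-hours per year of datacenter demand, and a world-average grid intensity somewhere near 0.45 kg of CO₂ per kWh:

```python
# Rough TWh-to-CO2 conversion. Grid intensity is an assumed
# world average; real values vary a lot by region and year.
GRID_KG_CO2_PER_KWH = 0.45

def extra_co2_megatons(extra_twh: float) -> float:
    kwh = extra_twh * 1e9  # 1 TWh = 1 billion kWh
    return kwh * GRID_KG_CO2_PER_KWH / 1e9  # kg -> megatons

for twh in (200, 400, 600):
    print(f"+{twh} TWh/year -> ~{extra_co2_megatons(twh):.0f} Mt CO2/year")
# 200 TWh -> ~90 Mt, 400 TWh -> ~180 Mt, 600 TWh -> ~270 Mt
```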
Imagine a rough but viable scenario where AI alone ends up adding something like half a billion to a billion extra tons of CO₂ per year by the 2030s and 2040s. Over those decades, that quietly eats a few percent of the remaining global carbon budget that's supposed to keep us near the 1.5°C limit. That doesn't mean AI will add a whole extra degree on its own. It looks more like a few hundredths of a degree nudged on top of whatever path we were already on. Honestly, that's less than I expected. But that's easy for me to say now. In the real world, a few hundredths of a degree can be the difference between certain places flooding every decade instead of every century, or certain heatwaves being unbearable for the people who have to stand in them.
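And to sanity-check that scenario, one last sketch. I'm assuming a remaining 1.5°C budget around 450 GtCO₂ and a warming response near 0.45°C per 1000 GtCO₂, both rough IPCC-style figures rather than precise values:

```python
# Sanity check: AI emissions vs. the remaining carbon budget.
# Both constants are rough IPCC-style assumptions, not forecasts.
BUDGET_GT = 450               # assumed remaining 1.5C budget, GtCO2
TCRE_C_PER_GT = 0.45 / 1000   # assumed transient warming per GtCO2

for gt_per_year in (0.5, 1.0):
    total = gt_per_year * 20  # ~two decades at that rate
    share = 100 * total / BUDGET_GT
    warming = total * TCRE_C_PER_GT
    print(f"{gt_per_year} Gt/yr over 20y: ~{share:.0f}% of budget, ~{warming:.3f}C")
# Two decades at these rates eats roughly 2-4% of the budget and
# adds ~0.005-0.009C; run it longer, or assume a dirtier grid, and
# you approach the "few hundredths of a degree" range above.
```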
Putting all of this together, the rough picture is that on the benefit side we're talking about somewhere between hundreds of billions and several trillion dollars a year in added or unlocked economic value over the coming decade or two, if AI actually permeates everything it can touch. On the cost side, we're talking about environmental externalities that could amount to a few percent of our remaining carbon budget, plus a huge unpriced drain on attention and cognitive health. That's really all I can do in terms of "quantifying." The magnitudes on both sides are large, and whatever judgment we make later about whether this was worth it can't pretend the stakes were small.
Is it worth it?
It's absurdly difficult to answer this question. Any attempt runs into the dilemma of defining value itself. How do we measure a life saved by an AI-assisted medical diagnosis against a job made obsolete by automation? Or the convenience of a personal AI assistant against the carbon emissions of the datacenters that power it? The task is muddled and imperfect by nature. But even imperfect considerations have their worth. We often try to quantify things that matter because the act of comparison helps us decide what we value. So, with as much humility as I can have about the limitations of whatever I'm about to write, here I go.
Tallying all of it up: the environmental toll, the labor issues, the job disruptions, the misallocated capital, the concentration of wealth and power, the erosion of attention. By most economic measures, AI is boosting productivity and will create more wealth than it destroys, at least in theory. But does that equate to real value? By humanistic measures it's more ambiguous. It improves quality of life in many ways and threatens it in others. Many of the costs aren't borne by the same people who enjoy the benefits, and some costs are spread across everyone on the planet and future generations. How do you even value that kind of tradeoff?
I guess it comes down to personal values. If you emphasize economic growth and technological progress, the benefits probably look enormous and very much worth the costs. You might argue we'll solve the problems, make AI greener, retrain workers, regulate misuse, and that slowing down would itself incur a cost in foregone innovation. If your values emphasize sustainability, equity, and psychological well-being, you might be more wary and see the current trajectory as too costly. One thing that feels a bit different with this issue is that many benefits of AI are immediate and concrete while many costs are deferred or diffuse. Humans favor the present over the future (economists call this present bias), so I wonder if that makes us less cognizant of our actions when we know the consequences are delayed.
What we can say is that right now, AI is delivering real, quantifiable benefits that are large and growing, while imposing real, quantifiable costs that are non-trivial. The balance isn't set in stone. It's something we collectively influence by how we guide AI development. If we invest in greener infrastructure, the environmental cost can be reined in. If we create new safety nets and training programs, the labor displacement can be softened and the productivity gains more broadly shared. If we put ethical guardrails in place and slow down certain applications, we might reduce some of the mental health and societal costs. Those decisions are being made now, often without us fully realizing their stakes. Our generation is making a monumental decision, somewhat implicitly through millions of individual actions and general optimism, to embrace AI widely. We're placing a bet that the upsides will continue to compound and the downsides can be managed. I don't have a definitive stance. Whether AI is worth the price we pay is up to one's personal beliefs, and my views are still evolving.
What our future might actually look like
I wrote a bit about this before when I was delirious at 4AM. I'll try to be more coherent now. When we picture the future with all of this in mind, we always jump to extremes. Either we end up in some sterile world where nothing meaningful is left for humans to do, or we snap back to a simpler time because AI turns out to be a short-lived obsession. I don't know about the reality of either take.
In the next decade or so, the main change will be that AI stops feeling special. It'll turn into infrastructure the same way the internet did. At school, kids will grow up assuming that having an AI assistant to explain things, rewrite things, and quiz them is normal. At work, people will slowly become AI-native the way some of us are internet-native. They won't remember what it was like to write an email without a draft pre-suggested or to evaluate a spreadsheet on their own. Jobs will obviously change. Being good at many white-collar roles will mean knowing how to think alongside these systems. On a day-to-day basis, more of our time will be spent asking systems to do things, checking what they did, and deciding whether it's acceptable. In some sense this already describes a lot of modern life. Managers manage, analysts review, editors edit. A junior hire who might have once done hours of manual grinding will instead be asked to sit on top of automated tools and make sure nothing goes off the rails. That sounds like an easier job, but I'm not sure it is. There's a risk we end up with people who know how to correct outputs but can't generate the underlying reasoning themselves. That's a problem for the human resources departments of the future, not me.
Thinking about the rest of this century, the patterns of our lives will largely stay the same. People will wake up, commute in some form, send their kids to school, make food, see friends, worry about money, and check whatever feeds exist. They're just going to go about life as normal. Humans adapt. Government services will probably be more automated. Almost everything that can be automated probably will be. Education will change in a big way because AI allows adaptive systems that can tailor content to students. That sounds great, but I don't know that we'll ever reach a point where AI can truly understand a human's thought process, so I don't know how these models will genuinely encode what "good learning" actually is. The climate, meanwhile, will keep reacting to all of the physical infrastructure we build to make these systems possible. I can easily imagine a world where people are using AI to optimize energy grids and mitigate disasters while those same AI workloads are part of the reason the grids are under so much pressure in the first place. Very ironic, but that is humanity.
Inequality will get worse because it's simply not possible to distribute the benefits of such a large technological shift evenly. In the near term, wealthier countries and communities will build and control the largest models, the most reliable datacenters, the best integrations. They'll get compounding benefits in productivity, research, and economic leverage. Poorer regions will grow more dependent on systems they don't own and can't easily influence. The future, from that angle, might look like a world where intelligence feels abundant and cheap at the surface, but access to real power remains as lopsided as ever. Owning the models and the energy that runs them could matter as much as owning land or oil did in earlier eras.
If I stretch my imagination beyond this century, which I admit feels impossible, the picture becomes more blurred. Over the next millennium, all of our current anxieties will either have resolved or been replaced by new ones we can't name yet. I really can't think specifically over that long a period, so I'm not going to try. History has a way of normalizing things. Everyday life will always consist of small rituals and worries. People in some future city might complain about their AI-powered transit being late in the same way we complain about traffic now. No matter how much I write or think about this, I can't truly grasp what it will feel like to live in those futures. I can guess at trends, say that work will be reorganized, that education will be reshaped, that the environment will be both supported and strained. But the texture of it, the actual detail, is something my brain can't reach. While I'm curious, I don't have strong feelings about that scale. And that's what I'm going to talk about now.
Zooming out to a universal scale
The first honest thing I have to admit is that none of this is going to "matter" in the way my writing implies it does. Not to the universe, at least. Stars aren't going to flare any differently because we shipped another model. The entire story of AI so far fits into a microscopic sliver of human history, and human history itself fits into a microscopic sliver of the planet's history, which is just another rock circling another star. When I sit with that for a second, the whole question of whether AI is "worth it" becomes kind of funny.
Pulling the frame in a bit and looking at my own life, it's easier to talk about. Within the world I live in today, AI will absolutely change things around me. The jobs that exist, the way I learn, the kinds of companies I work with, how my health is managed as I age. All of that will be shaped by these systems. I might get a diagnosis earlier because a model flagged something in my bloodwork. I might land a job I like more because I know how to think with these tools instead of against them. I might also get boxed out of certain opportunities because a firm realizes they can hire five people instead of ten. Those are the scales my brain actually understands. I can feel those potential futures when I think about my parents getting older, my own career, the city I live in. Global GDP projections and long-term climate curves are important, but they don't land in my body the same way.
And that's where the selfishness comes in. I don't really care, in any visceral way, about the year 2200. I can pretend I do. I can say I care about "future generations" and in a shallow sense I do, but if I'm honest with myself, my emotional bandwidth stops somewhere around the lifetime of the people I know or could plausibly imagine knowing. My kids, maybe their kids if I have a really sentimental moment. Beyond that it gets hazy. My brain wasn't wired to hold the feelings of billions of future strangers. Most people's weren't, and I think anyone who claims they care to that extent is being a bit dishonest. We're built to care deeply about certain things, and society has hacked that wiring to care about things at a national or global level, but it's always unstable. That's why it's so easy to scroll past news headlines and then spend hours thinking about a meaningless text.
When I say I don't really care, it's not some proud declaration of apathy. It's more like acknowledging the limits of my empathy and attention. I'll never truly "feel" the suffering of someone born in 2150. The best I can do is accept that my caring is short-sighted and then still try to act as if it extends further than it does. I might never emotionally internalize the entire future of humanity, but I can at least behave in a way that isn't a hindrance. I care about the state of the world while I inhabit it, and I care about it honestly. That alone is playing my part in making the world that future people inherit less of a disaster.