Over the past year — possibly even longer — I’ve been writing about artificial intelligence. How it’s changing the way we work. How it’s reshaping learning and development. How it’s influencing the way we think about design, knowledge management, and even how we communicate. If you’ve been following along, you know I’ve tried to approach this space thoughtfully, with an honest look at both what AI can do and where it falls short.
But here’s what I haven’t said out loud yet: I’ve been wondering if AI is replacing me.
Not in the dramatic, sci-fi, “robots are coming for your job” kind of way. More in the quiet, practical sense. The kind where you watch the tools get better and better, and you start to ask yourself whether the skills you’ve spent years building still matter as much as they used to. Whether the expertise you bring to the table is something a well-prompted language model can approximate. Whether the gap between what you do and what AI can do is getting smaller.
And my honest answer? Maybe.
The Conversation No One Wants to Have
I know this is a loaded topic. AI has generated more divided opinions than almost anything else in the professional world right now, and that tension is not lost on me.
On one side, you have people who see AI as a revolutionary tool — something that amplifies human capability, saves time, and opens doors that weren’t there before. On the other side, you have people who are genuinely concerned about what it’s taking. The way it scrapes and repurposes other people’s ideas, writing, and art without consent or credit. The way it creates a false sense of efficiency that can devalue the craft and thought behind real expertise. The unrealistic expectations it sets for what one person — or one team — should be able to produce.
I’ve tried to be mindful of both sides in everything I’ve written. And I want to be honest: I don’t think either side is entirely wrong.
The Environmental Cost We’re Not Talking About Enough
But the conversation about AI doesn’t stop at who benefits and who gets left behind. There’s a cost that’s harder to see — one that doesn’t show up in a product demo or a LinkedIn post about productivity. And it’s one I think we all need to sit with.
The environmental impact of AI is real, and it’s growing fast.
The electricity consumption alone is staggering. According to the International Energy Agency, a typical AI data center uses as much electricity as 100,000 households. By 2026, the total electricity consumption of data centers globally is expected to approach 1,050 terawatt-hours. That means data centers will consume more electricity than the entire country of Japan. AI systems specifically accounted for 15% of total data center electricity demand in 2024, and that share is projected to nearly double.
And then there’s water. AI data centers require enormous amounts of water for cooling. U.S. data centers directly consumed 66 billion liters of water in 2023 — more than triple what they used in 2014. A study by the Houston Advanced Research Center found that data centers in Texas alone will use 49 billion gallons of water in 2025, potentially growing to 399 billion gallons by 2030. Large facilities can consume up to 5 million gallons of water per day — roughly what 15,000 homes use in a day, consumed by a single facility. A peer-reviewed study published in Patterns estimated that the water footprint of AI systems could reach 312 to 764 billion liters globally — a range comparable to the world’s entire annual consumption of bottled water.
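For scale, that 5-million-gallon figure is easy to sanity-check. Here's a quick back-of-the-envelope calculation; note the 300-gallons-per-household number is my own assumption, based on the EPA's commonly cited estimate, not a figure from the sources above:

```python
# Rough sanity check on the water figures above.
# Assumption (mine, not from the cited studies): an average U.S.
# household uses about 300 gallons of water per day, a commonly
# cited EPA estimate.
AVG_HOME_GALLONS_PER_DAY = 300

facility_gallons_per_day = 5_000_000  # large data center, per the article

# One day of facility use, expressed in households' daily use
homes_equivalent = facility_gallons_per_day / AVG_HOME_GALLONS_PER_DAY
print(f"{homes_equivalent:,.0f} homes' worth of daily water use")
```

That works out to somewhere in the range of 16,000 to 17,000 households' daily water use from a single facility, which is why the day-to-day comparison is the right frame for these numbers.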
These aren’t abstract numbers. They represent real strain on energy grids, real pressure on water resources, and real environmental consequences that most of us — myself included — haven’t been thinking about nearly enough when we fire up a chatbot, generate an image, or use a chatbot as our therapist.
What the Research Actually Says About Replacement
So does AI actually replace people? I’ve looked into this, and the research is more nuanced than the headlines suggest.
A 2025 study from MIT Sloan — “The EPOCH of AI: Human-Machine Complementarities at Work” — found that AI is more likely to complement human workers than replace them. The researchers developed a framework for evaluating tasks across occupations and concluded that the most productive outcomes come from humans and AI working together, not from one replacing the other.
Carnegie Mellon University research from 2025 reached a similar conclusion: AI systems may best serve in partnership or facilitation roles rather than managerial ones. The technology excels at processing, pattern recognition, and speed — but it struggles with the kind of contextual judgment, ethical reasoning, and relational understanding that humans bring.
A PwC report from 2025 added another layer: in companies that embrace AI, adoption actually correlates with increased revenue, profits, and even employment. AI exposure can boost productivity in highly automatable jobs, making workers more valuable — not obsolete.
“AI won’t replace humans — but humans with AI will replace humans without AI.” — Harvard Business Review
That tracks with my own experience — especially working in Learning and Development. I hear it all the time: “AI created this SOP for me,” or “Scribe can just build a job aid for me.” And my response is always the same — okay, but let’s look at it. Let’s look at the flow. Let’s look at what’s actually covered, how it’s worded, whether it’s accessible. Because more often than not, what AI produces looks complete on the surface but is missing the parts that make it actually useful. And when we’re talking about sensitive information, there’s a whole other layer — where is that data going? Who has access to it? Those aren’t questions AI is asking for you.
I agree with the HBR quote, and I think it carries a responsibility with it. I don’t see a way we can avoid learning AI or integrating it into our roles at this point. But as leaders, we have to stay on top of the research. We have to be able to have useful, honest conversations about how AI is affecting our work, our organizations, our culture — and yes, even our environment, if that matters to your organization. That means doing the work to understand it, not just adopting it because everyone else is.
And here’s the other thing: we are not even close to utilizing AI to its full capacity. Studies continue to show that we’re still in the early stages of what this technology can do. We will keep seeing it develop. We will keep finding new uses for it. Which means we don’t even know what we don’t know yet — and that’s exactly why staying informed matters.
Where I’ve Seen AI Shine — and Where It Hasn’t
I’ve used AI extensively this past year. It’s helped me design and create learning solutions. It’s helped me think through curriculum design for specific roles, strategize core competencies, and even support assessment and performance development work. I’ve used it to brainstorm, organize, draft, and iterate. And I’ve written about all of that openly because I believe in being transparent about how these tools fit into my workflow.
But here’s what I keep coming back to: the one thing AI consistently could not do was the thinking that actually matters.
And there’s something else that doesn’t get talked about enough — consistency. Even when you use the exact same prompt, AI will give you different results each time. The outputs are consistently inconsistent. That might sound minor, but when you’re building learning solutions, writing procedures, or developing performance frameworks, consistency matters. You need to be able to trust that the foundation you’re building on is stable — and right now, AI doesn’t give you that.
It couldn’t identify the gaps. When I’d build out a learning path or a set of job aids, AI could help me generate content and structure it — but it couldn’t look at the finished product and tell me what was missing. It couldn’t recognize that a critical step had been skipped in a procedure, or that a prerequisite skill hadn’t been accounted for in the learning sequence. That required someone who knew the work, the audience, and the context — not just the content. Now, could it give me feedback to think through those things? Absolutely — and that was invaluable, especially when I was working with a limited team, tight timelines, and a stretched budget. But the gap identification itself? That was still on me.
It couldn’t provide the learning theory. AI can reference Bloom’s Taxonomy or Gagné’s Nine Events of Instruction if you ask it to. But it doesn’t inherently design with those frameworks in mind. It doesn’t instinctively know when a learning experience needs more scaffolding, when the cognitive load is too high, or when a formative assessment would be more effective than a summative one. That’s the kind of judgment that comes from years of studying and applying instructional design principles — and it’s not something you can prompt your way into. That said, if you already have that knowledge and can craft your prompts to reference the theories and strategies you know are necessary for successful learning delivery, AI can help you build. But it’s building from your expertise, not its own.
It couldn’t ensure SOPs and job aids were written clearly. AI can draft a procedure. But making sure that procedure is written in a way that someone in the field — at 2 a.m., under pressure, with limited experience — can actually follow? That requires empathy for the end user. It requires knowing how people actually read and process instructions in real work environments. AI can’t know your audience on its own. It can’t complete an audience analysis without someone who knows what to look for guiding it through the process. AI doesn’t have that lived knowledge or experience.
And it couldn’t double-check that nothing was missed. AI doesn’t know what it doesn’t know. It can generate a comprehensive-looking document that has blind spots you won’t catch unless you already have the expertise to recognize them.
That’s the gap. That’s where humans still matter — and I’d argue, matter more than ever.
So Where Does That Leave Us?
I don’t see an end to artificial intelligence. I don’t know where we’ll be with it in a year. The pace of change is genuinely hard to keep up with, and anyone who tells you they know exactly what’s coming is selling something.
That said, it’s not all doom and gloom. I do believe that companies and people will find ways to produce cleaner energy and reduce the electricity and water consumption of data centers. And if I’m being real — part of that optimism comes from a pretty practical place: corporations are motivated to cut costs. If cleaner, more efficient infrastructure saves money, they’ll invest in it. That might not be the most idealistic reason for progress, but it’s an effective one. And I’ll take it.
But here’s what I do know: I have yet to see AI fully replace what a skilled, thoughtful human brings to the table. Not in my field. Not in the work I do every day. The tools are getting better. They’re saving time. They’re expanding what’s possible. But they are not — at least not yet — doing the work that requires judgment, empathy, contextual awareness, and the ability to recognize what isn’t there.
Maybe AI will get there someday. Maybe it won’t. But right now, the most honest answer I can give to the question “Is AI replacing me?” is: maybe — but only if I stop doing the things AI can’t.
The thinking. The questioning. The gap-finding. The human judgment that turns good content into great learning experiences.
That’s still mine. And I intend to keep sharpening it.
For now, I’m going to keep doing what I know matters — practicing empathy, being mindful, and showing up for the humans I work with. I’m going to keep learning about AI, keep using it where it makes sense, and keep being honest about where it doesn’t. And I’m going to try to be more intentional about how I use it — not just in my work, but in my life.
If this resonated with you — or if you disagree — I’d love to hear your perspective. This is the kind of conversation we need to be having openly as professionals. Join the conversation on LinkedIn — what are you seeing in your own work? Where is AI helping, and where is it falling short?
Let’s keep the conversation going.
References
- International Energy Agency. (2025). Energy and AI. IEA. https://www.iea.org/reports/energy-and-ai
- de Vries, A. (2025). The carbon and water footprints of data centers and what this could mean for artificial intelligence. Patterns, 7(1), 101430. https://doi.org/10.1016/j.patter.2025.101430
- Undark. (2025, December 16). How much water do AI data centers really use? https://undark.org/2025/12/16/ai-data-centers-water/
- Loaiza, I., & Rigobon, R. (2025). The EPOCH of AI: Human-machine complementarities at work [Working paper]. MIT Sloan School of Management. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5028371
- Johnson, A. (2025, October 30). Researchers explore how AI can strengthen, not replace, human collaboration. Carnegie Mellon University News. https://www.cmu.edu/news/stories/archives/2025/october/researchers-explore-how-ai-can-strengthen-not-replace-human-collaboration
- PwC. (2025). Fearless future: 2025 global AI jobs barometer. https://www.pwc.com/gx/en/issues/artificial-intelligence/job-barometer/2025/report.pdf
- Lakhani, K. R. (2023, August 4). AI won’t replace humans — but humans with AI will replace humans without AI. Harvard Business Review. https://hbr.org/2023/08/ai-wont-replace-humans-but-humans-with-ai-will-replace-humans-without-ai
- Zewe, A. (2025, January 17). Explained: Generative AI’s environmental impact. MIT News. https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117