Tiankai Feng has been on a promotional tour for his new book, Humanizing AI Strategy, and he noticed something odd about the conference circuit. He keeps getting invited specifically because he talks about the human side of AI, which apparently makes him an outlier. Everyone else is presenting on architecture, model comparisons, and technical capabilities.

This feels like a replay of what happened with data strategy about five years ago. Companies invested in platforms and governance frameworks while assuming the technology itself would solve their problems. It didn't. Most data strategy failures traced back to human behavior: organizational dysfunction and poor communication patterns. The technical pieces were fine. The people pieces weren't.

Now we're running the same experiment with AI, except faster and with more confidence that this time will be different.

Tiankai applies the same five Cs framework from his first book (Humanizing Data Strategy) to AI, but the concepts expand in interesting ways. Competence, for example, shifts from "use AI more" to "use AI more wisely." That means understanding which tasks you should delegate to these systems and which should stay human-only. It requires at least a conceptual understanding of how the technology works so you don't make predictable mistakes.

Communication becomes more complex too. In data strategy, it was mostly human-to-human. Now you're managing human-to-machine, machine-to-human, machine-to-machine, and still human-to-human. Each combination has different requirements and failure modes.

One point that stood out above all others: AI doesn't judge whether the historical data it learns from reflects good decisions or mistakes. It just learns from whatever it's given. So if you're training on patterns from the past, and humans weren't always great in the past (we're still making plenty of mistakes now), you're potentially amplifying and accelerating dysfunction. The only thing preventing that is conscious human choice about what to feed these systems.
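To make that concrete, here's a toy sketch (my own illustration with made-up numbers, not something from the conversation): a "model" that learns approval rates from biased historical decisions and then faithfully reproduces the skew on new cases.

```python
import random
from collections import defaultdict

random.seed(42)

# Hypothetical history: group "a" was approved far more often than
# group "b" for reasons that had nothing to do with merit.
history = ([("a", 1)] * 80 + [("a", 0)] * 20 +
           [("b", 1)] * 30 + [("b", 0)] * 70)

# "Training" is just memorizing P(approved | group) from the past.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in history:
    counts[group][0] += approved
    counts[group][1] += 1

def predict(group: str) -> bool:
    approvals, total = counts[group]
    return random.random() < approvals / total

# The model never asks whether the old decisions were fair; it only
# imitates them, so the historical skew carries straight through.
for g in ("a", "b"):
    rate = sum(predict(g) for _ in range(10_000)) / 10_000
    print(f"group {g}: predicted approval rate ~ {rate:.2f}")
```

Nothing in that loop can tell a good precedent from a bad one. That judgment has to come from whoever curates the training data.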

We also talked about recursive training decay. If AI only learns from AI-generated output (the scenario dead internet theory describes), everything becomes bland and low quality: no fresh human input, no originality, no feedback loop. Which means AI actually depends on continued human contribution to stay relevant. That dependency gets forgotten in a lot of the automation rhetoric.
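A toy way to watch the decay (again my sketch, not Tiankai's, and it leans on one stylized assumption: that generated output over-represents the typical and drops the tails): fit a distribution to data, sample from the fit, keep only the "typical" samples, retrain, and repeat.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data with genuine variety.
data = [random.gauss(0.0, 1.0) for _ in range(1000)]

for gen in range(6):
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    print(f"generation {gen}: spread (stdev) = {sigma:.3f}")
    # Stylized assumption: model output over-represents typical
    # samples and clips the tails, so each generation trains on a
    # narrower slice of the previous one. Under this truncation the
    # spread shrinks by roughly a quarter per generation and never
    # recovers without fresh human input.
    sample = [random.gauss(mu, sigma) for _ in range(1000)]
    data = [x for x in sample if abs(x - mu) < 1.5 * sigma]
```

Each pass loses the outliers, and the outliers are where the originality was.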

For data analysts specifically, Tiankai expects the work to shift. If your stakeholders want conversational interfaces and chatbots, someone needs to build the right metadata models and metric structures underneath to make that work, backend work that data analysts haven't traditionally owned. At the same time, he hopes the customer-thinking and design-thinking parts of the role stay relevant, the parts where you're talking to stakeholders and understanding their needs. So the job might end up more bifurcated than before: some customer-facing work and more infrastructure work.
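As a sketch of what that backend work might look like (hypothetical names and schema, mine not his): a small metric registry that maps the phrasings stakeholders actually use to one governed definition, so a chatbot resolves against a vetted layer instead of improvising SQL.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    sql: str                       # canonical, reviewed definition
    grain: str                     # lowest level it can be sliced at
    synonyms: list[str] = field(default_factory=list)

REGISTRY = [
    Metric(
        name="monthly_active_users",
        sql=("SELECT COUNT(DISTINCT user_id) FROM events "
             "WHERE event_month = :month"),
        grain="month",
        synonyms=["MAU", "actives", "monthly actives"],
    ),
]

def resolve(question: str) -> Metric | None:
    """Map a user's phrasing to a governed metric, or return nothing."""
    q = question.lower()
    for metric in REGISTRY:
        terms = [metric.name.replace("_", " ")] + metric.synonyms
        if any(t.lower() in q for t in terms):
            return metric
    return None  # no answer beats a hallucinated definition

print(resolve("What was our MAU in March?"))
```

The interesting part is the `synonyms` list, which is the customer-thinking work, while the `sql` field is the infrastructure work: both halves of the bifurcation in one data structure.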

Tiankai Feng on why humanity is the glue that holds federated data teams together: in Episode 4 of Couch Confidentials Powered by Martech Therapy, host Matthew Niederberger welcomes Tiankai Feng, author of Humanizing Data Strategy, to explore the intersection of data, humanity, and leadership.

I asked him what human habit he'd delete from the codebase if he could. He said bad intentions: the deliberate choices to embed bias, create fake information, or pursue goals without regard for consequences. Those intentions are why we now need guardrails and why everyone has to be more cautious about what's real. We created the problem, then had to build controls around it.

The full conversation is on YouTube. We're also giving away five copies of Humanizing AI Strategy. If you want in, react to the LinkedIn post and I'll draw names.

Tiankai thinks 2026 will bring some correction to the current AI bubble. Smaller players, more nuanced approaches, less hype. I hope he's right. The gap between what AI can technically do and what organizations are actually ready to implement well keeps widening. Maybe that forces a more honest conversation about the human behaviors that need to change before the technology can deliver on its potential.
