There is a position in the current AI landscape that is more dangerous than either full adoption or principled resistance. It is the position of being AI-adjacent: close enough to the conversation to feel informed, far enough from the implementation to avoid accountability, and situated so that the consequences of AI deployment arrive at your door regardless of your engagement with the technology itself.

The Adjacent Executive

The AI-adjacent executive attends the conferences, uses the vocabulary, approves the budgets, and delegates the execution. They can speak convincingly about "transformation" and "capability." But they cannot distinguish between a genuine AI deployment and a relabelling exercise. They cannot audit an AI pipeline. They cannot assess whether a vendor's claims about their model are technically sound or commercially motivated.

This position is dangerous because it feels safe. It provides the social signalling of engagement without the cognitive cost of understanding. It allows the executive to believe they are participating in the AI transition when in reality they are merely observing it. The gap between observation and understanding is where the risk concentrates.

The AI-led position carries execution risk. The AI-resistant position carries strategic risk. But the AI-adjacent position carries both, and adds a risk that is uniquely its own: not knowing which of the two you are exposed to.