When I transitioned from analyzing bills at the Texas Legislative Council to working on EU AI Act compliance and AI governance at a tech startup, I expected a steep learning curve on the technical side. What I didn’t expect was how much my policy training would become the foundation for everything else.
The EU AI Act isn’t just a technical compliance challenge—it’s a policy implementation problem. And policy implementation is something public administration professionals have studied, practiced, and refined for decades. The skills that make someone effective at legislative analysis—parsing dense statutory language, understanding regulatory intent, anticipating implementation challenges—turn out to be exactly what AI governance demands.
Why Policy Expertise Matters in AI Governance
The AI Act is, at its core, a piece of legislation. It was drafted by policymakers, debated in legislative chambers, and written in the language of regulation. Yet much of the compliance conversation focuses on technical solutions: logging systems, monitoring dashboards, audit trails.
This technical focus misses something crucial. The AI Act identifies over 80 specific obligations for high-risk AI systems—each requiring interpretation and operational translation. Before you can build compliant systems, you need to understand what compliance actually means. That requires reading regulatory text the way a legislative analyst reads a bill—not just for what it says, but for what it implies, what it leaves ambiguous, and how enforcement authorities are likely to interpret it.
Consider Article 9’s requirement for “risk management systems.” The text calls for processes that identify, analyze, and mitigate risks throughout the AI system lifecycle. A purely technical reading might focus on building risk scoring algorithms. A policy reading asks different questions: What counts as a “risk” under this framework? How will regulators evaluate whether your risk management is adequate? What documentation will demonstrate compliance during an audit?
These aren’t technical questions. They’re policy questions. And answering them requires the interpretive skills that policy professionals develop through training and practice.
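Still, the answers to those policy questions eventually have to live somewhere operational and auditable. Below is a minimal sketch of what a risk-register entry capturing those determinations might look like; the schema and every field name are hypothetical illustrations, not anything the AI Act itself prescribes:

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskStatus(Enum):
    IDENTIFIED = "identified"
    MITIGATED = "mitigated"
    ACCEPTED = "accepted"  # residual risk judged acceptable, with documented rationale


@dataclass
class RiskRegisterEntry:
    """One entry in an Article 9-style risk register (illustrative schema only)."""
    risk_id: str
    description: str                # what could go wrong, in plain language
    affected_rights: list[str]      # e.g. ["non-discrimination", "privacy"]
    lifecycle_stage: str            # design, training, deployment, monitoring
    policy_rationale: str           # why this counts as a "risk" under your framework
    mitigation: str                 # the control actually implemented
    audit_evidence: list[str] = field(default_factory=list)  # documents you could show a regulator
    status: RiskStatus = RiskStatus.IDENTIFIED
    last_reviewed: date | None = None
```

The point is not the data structure; it is that a field like policy_rationale forces the policy judgment to be written down where an auditor can find it.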
The Translation Problem
Every regulatory framework faces what I think of as the translation problem: the gap between legislative intent and operational reality. Lawmakers write in abstractions—“appropriate levels of accuracy,” “effective human oversight,” “transparent operations.” Organizations must convert these abstractions into specific, measurable, implementable practices.
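To make the translation problem concrete: “appropriate levels of accuracy” only becomes operational once someone picks a metric, a threshold, and a justification. Here is a minimal sketch of one such conversion; the metric, the threshold, and the field names are all hypothetical choices, not anything the regulation specifies:

```python
# Hypothetical translation of "appropriate levels of accuracy" into a
# testable, documented check. The metric, the threshold, and the rationale
# are illustrative choices an organization would have to make and defend.
ACCURACY_REQUIREMENT = {
    "metric": "balanced_accuracy",  # chosen over raw accuracy if classes are imbalanced
    "threshold": 0.90,              # the number itself is a policy decision, not a constant
    "rationale": "Benchmarked against documented human reviewer performance on the same task.",
    "review_trigger": "Re-evaluate on data drift or new regulatory guidance.",
}


def check_accuracy_obligation(measured: float, requirement: dict) -> dict:
    """Return an audit-friendly record of whether the documented bar was met."""
    return {
        "metric": requirement["metric"],
        "threshold": requirement["threshold"],
        "measured": measured,
        "compliant": measured >= requirement["threshold"],
        "rationale": requirement["rationale"],
    }


audit_record = check_accuracy_obligation(measured=0.93, requirement=ACCURACY_REQUIREMENT)
```

Every value in that configuration is a policy choice dressed up as a parameter, which is exactly the point.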
This translation work is inherently interdisciplinary. It requires understanding both the regulatory context (why lawmakers chose particular language, what problems they were trying to solve) and the technical context (what’s actually possible, what trade-offs different approaches involve). Neither policy professionals nor engineers can do this translation alone.
In my experience, the most effective AI governance happens when policy professionals and technical teams work as genuine partners. Policy analysts bring interpretive frameworks, regulatory knowledge, and experience with how enforcement typically unfolds. Engineers bring understanding of system architectures, data flows, and implementation constraints. Together, they can develop compliance approaches that are both legally sound and technically feasible.
Lessons from Legislative Analysis
My time at the Texas Legislative Council taught me to read legislation with particular attention to several elements that prove directly relevant to AI Act compliance:
Definitions Matter Enormously
Legislative drafters devote considerable effort to definitions sections because they know definitions shape everything that follows. The AI Act is no exception. Its definitions of “AI system,” “deployer,” “provider,” and “high-risk” determine which obligations apply to which actors.
Policy professionals are trained to trace how definitions propagate through statutory text. When the AI Act defines “high-risk AI systems” by reference to Annex III categories, a policy analyst knows to read that annex carefully, understand the reasoning behind each category, and anticipate how edge cases might be classified.
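As a sketch of how that tracing might be made explicit in tooling, here is a deliberately naive classifier built on a few paraphrased Annex III-style categories. The category labels are paraphrases, the keyword matching is illustrative only, and in practice anything ambiguous should be routed to legal review rather than decided by code:

```python
from enum import Enum


class AnnexIIICategory(Enum):
    # Paraphrased, illustrative labels -- consult the actual annex text.
    BIOMETRICS = "biometric identification and categorisation"
    EMPLOYMENT = "employment and worker management"
    ESSENTIAL_SERVICES = "access to essential private and public services"
    LAW_ENFORCEMENT = "law enforcement"


def classify_high_risk(intended_purpose: str) -> tuple[AnnexIIICategory | None, bool]:
    """Naive keyword screen: returns a candidate category and an edge-case flag.

    A real process would escalate anything ambiguous to legal review
    rather than trust keyword matching.
    """
    keyword_map = {
        "hiring": AnnexIIICategory.EMPLOYMENT,
        "credit": AnnexIIICategory.ESSENTIAL_SERVICES,
        "face recognition": AnnexIIICategory.BIOMETRICS,
    }
    purpose = intended_purpose.lower()
    matches = [cat for kw, cat in keyword_map.items() if kw in purpose]
    if len(matches) == 1:
        return matches[0], False
    # No match or multiple matches: flag as an edge case for human review.
    return (matches[0] if matches else None), True
```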
Legislative History Informs Interpretation
Regulators interpreting ambiguous provisions will look to legislative history for guidance. The AI Act went through extensive revision during the legislative process, with significant debates over scope, definitions, and enforcement mechanisms. Understanding these debates—what alternatives were considered and rejected, what compromises shaped the final text—provides insight into how provisions are likely to be interpreted.
This is research that policy professionals know how to conduct. We’re trained to dig into committee reports, floor debates, and amendment histories to understand legislative intent. This same methodology applies to EU legislative processes, even if the specific sources differ.
Implementation Always Reveals Gaps
No statute anticipates every situation. Implementation inevitably surfaces ambiguities, edge cases, and practical challenges that drafters didn’t foresee. Policy professionals expect this. We’re trained to identify potential implementation problems before they become crises and to develop interpretive frameworks that provide reasonable guidance even when the text is unclear.
The AI Act will be no different. Organizations that approach compliance with rigid, literalistic interpretations will struggle when they encounter the inevitable gaps. Those with policy professionals who can reason about regulatory intent and develop defensible interpretations will navigate uncertainty more effectively.
Building Policy Capacity for AI Governance
For organizations taking AI Act compliance seriously, building internal policy capacity is as important as building technical infrastructure. This means:
Hiring Policy Expertise
AI governance teams need people who can read regulations, understand enforcement patterns, and translate legal requirements into operational guidance. This might mean hiring policy analysts, compliance specialists, or lawyers with regulatory experience. The specific title matters less than the skill set: interpretive sophistication, regulatory knowledge, and the ability to communicate across technical and legal domains.
Creating Cross-Functional Processes
Compliance can’t be siloed. Technical teams need ongoing input from policy experts as they design and implement systems. Policy teams need ongoing input from engineers to understand what’s technically feasible. Organizations should build processes—regular meetings, shared documentation, integrated project teams—that facilitate this collaboration.
Developing Interpretive Frameworks
Rather than treating each compliance question in isolation, organizations benefit from developing coherent interpretive frameworks that guide decision-making across situations. What principles will guide how you interpret ambiguous provisions? How will you document and defend your interpretations? What triggers will prompt you to revisit earlier conclusions?
These frameworks should be developed collaboratively, drawing on both policy and technical expertise, and should be documented clearly enough that they can be explained to regulators if necessary.
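One way to make that documentation habit concrete is to treat each interpretation as a record. A minimal sketch follows; the field names are hypothetical, and the substance of each field is the policy work itself:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class InterpretationRecord:
    """One documented, defensible interpretation of an ambiguous provision."""
    provision: str              # e.g. "Article 14 -- effective human oversight"
    question: str               # the ambiguity being resolved
    interpretation: str         # the position the organization is taking
    rationale: str              # why this reading is defensible (text, intent, guidance)
    alternatives_rejected: list[str] = field(default_factory=list)
    review_triggers: list[str] = field(default_factory=list)  # what reopens this decision
    decided_on: date | None = None
    approved_by: str = ""       # cross-functional sign-off: policy and engineering
```

When a regulator asks why you read a provision a particular way, a record like this is the difference between a defensible interpretation and an improvised one.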
The Opportunity for Policy Professionals
The AI Act represents an inflection point for policy careers. As AI governance becomes a permanent feature of the regulatory landscape, organizations will need sustained policy expertise—not just for initial compliance, but for ongoing adaptation as regulations evolve, enforcement patterns emerge, and new AI capabilities raise novel governance questions.
For policy professionals interested in technology, this is an extraordinary opportunity. The field is new enough that generalist policy skills—analytical rigor, interpretive sophistication, understanding of regulatory processes—provide genuine value. You don’t need a computer science degree to contribute meaningfully to AI governance. You need the ability to read complex regulatory text carefully, translate between technical and policy domains, and help organizations navigate uncertainty.
The AI Act is just the beginning. The EU is already developing additional AI-related regulations. The UK, Canada, Singapore, and other jurisdictions are advancing their own frameworks. The United States continues to develop sectoral approaches. Policy professionals who develop expertise in AI governance now will be positioned to contribute across these evolving regulatory landscapes.
The Global Dimension
AI governance is fundamentally an international challenge. The EU AI Act doesn’t exist in isolation—it’s part of an emerging global patchwork of AI regulations that organizations must navigate simultaneously.
My graduate work in international studies has shaped how I think about this landscape. The EU’s approach reflects distinctly European regulatory traditions: precautionary, rights-based, and comprehensive. The UK’s framework emphasizes sectoral flexibility and innovation-friendliness. China’s regulations prioritize state oversight and content control. The US continues its tradition of sector-specific regulation rather than comprehensive legislation.
For multinational organizations, this fragmentation creates complexity. A single AI system deployed globally may need to comply with multiple, sometimes conflicting, regulatory regimes. Policy professionals who understand comparative regulatory approaches—who can identify where frameworks converge and where they diverge—provide essential guidance for navigating this complexity.
Understanding the European Context
Understanding the EU AI Act also requires understanding the EU itself: how its institutions function, how regulations are enforced across member states, and how the European approach to technology governance differs from other jurisdictions. This is where international relations and comparative politics training proves valuable. The AI Act isn’t just a technical document—it’s an expression of European values about technology, rights, and the relationship between citizens and automated systems.
The Path Toward Harmonization
As AI governance matures, we’ll likely see increased international coordination—mutual recognition agreements, harmonization efforts, and cross-border enforcement cooperation. Policy professionals who can operate across these international dimensions will be increasingly valuable to organizations building AI systems for global markets.
What This Means for Your Career
The EU AI Act demands a new kind of professional fluency: comfort with both regulatory interpretation and technical systems, the ability to translate between policy and engineering, and an understanding of how governance frameworks actually operate in practice.
Policy professionals bring essential skills to this work. Our training in legislative analysis, regulatory interpretation, and implementation challenges provides foundations that purely technical approaches lack. The organizations that will navigate AI governance most effectively are those that recognize this value and build genuine partnerships between policy and technical expertise.
The gap between regulatory intent and technical implementation won’t bridge itself. That’s work for people who understand both sides—and who can help organizations build AI systems that are not just compliant, but genuinely trustworthy.
John Skelton is a Policy Analyst at Integrity Studio, where he brings a public policy perspective to AI governance and compliance. He is pursuing a Master of Public Administration at Colorado State University with a concentration in International/Global Studies, and previously served as a Bill Analyst at the Texas Legislative Council. His work focuses on bridging the gap between technology governance and data-driven decision-making. Connect on LinkedIn