ITAM Accelerate: You’ve publicly committed to safe, controlled, transparent AI. Can you talk about what “controlled AI” means in practical terms and how IT asset managers should apply these principles for themselves as they explore the benefits of AI in their own organisation and practice?
Alex Cojocaru: Controlled AI means designing systems where autonomy is intentional, bounded, and observable. It is about being explicit about what AI is allowed to say, decide, and recommend, and where human judgment is needed.
From our experience, this works best as a multi-layered approach, where a response or output to a query is built incrementally, coupled with rule-based validations to keep everything in check.
Transparency means outputs are auditable and business logic is reproducible, while reversibility ensures mistakes can be corrected. AI should be treated like any other high-impact enterprise system, with access controls, logging, validation points, and clear accountability.
For IT asset managers, this translates into starting with AI as an augmentation layer rather than a decision-maker. Use it to accelerate analysis, data transformation, normalization, and insight generation, while keeping humans accountable for outcomes.
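To make this concrete, here is a minimal sketch of that pattern in Python. The helper names and rules are illustrative only, not an actual implementation: the model may only propose a small set of allowed actions, a rule-based check validates the output, every recommendation is logged, and a human signs off on the result.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("controlled-ai")

# The AI is only allowed to return one of these actions; anything else is rejected.
ALLOWED_ACTIONS = {"flag_for_review", "suggest_reclaim", "no_action"}

def llm_suggest(asset: dict) -> dict:
    """Placeholder for a real model call; returns a canned recommendation here."""
    return {"action": "suggest_reclaim", "reason": "no logins in 90 days", "confidence": 0.72}

def validate(rec: dict) -> bool:
    """Rule-based checks that bound what the AI is allowed to say."""
    return (
        rec.get("action") in ALLOWED_ACTIONS
        and isinstance(rec.get("confidence"), (int, float))
        and 0.0 <= rec["confidence"] <= 1.0
        and bool(rec.get("reason"))
    )

def recommend(asset: dict) -> dict:
    rec = llm_suggest(asset)
    if not validate(rec):
        rec = {"action": "flag_for_review", "reason": "output failed rule-based validation", "confidence": 0.0}
    # Every output is logged (auditable) and nothing is executed automatically:
    # a human remains accountable for the final decision.
    log.info("asset=%s recommendation=%s", asset.get("id"), json.dumps(rec))
    rec["requires_human_approval"] = True
    return rec

print(recommend({"id": "LIC-0042", "product": "ExampleSuite", "last_login_days": 120}))
```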
IA: With ITAM moving into the governance of AI‑enabled products, what lessons do you have for practitioners based on your own experience of using AI day to day and incorporating it into your product and internal workflows?
AC: One of the first lessons is that AI introduces an entirely new class of licensing metrics, most notably tokens, and they are neither standardized nor predictable. Each vendor defines tokens differently, applies different burn rates depending on the model, feature, and task, and often changes pricing structures rapidly. From an ITAM perspective, this means token consumption must be treated as a first-class licensing metric, monitored continuously, and tied back to concrete use cases rather than left as an abstract usage cost.
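As a simple illustration of treating tokens as a first-class metric, a sketch like the following tags every model call with a vendor, model, and use case, then rolls the cost up per use case. The price table is an assumption for the example, not real vendor pricing.

```python
from collections import defaultdict
from dataclasses import dataclass

# Illustrative per-1K-token prices; real rates vary by vendor, model, and feature.
ASSUMED_PRICE_PER_1K = {
    ("vendor_a", "model_x"): 0.01,
    ("vendor_b", "model_y"): 0.03,
}

@dataclass
class TokenEvent:
    vendor: str
    model: str
    use_case: str        # e.g. "contract_review", "catalog_enrichment"
    input_tokens: int
    output_tokens: int

def cost_by_use_case(events: list[TokenEvent]) -> dict[str, float]:
    """Roll token consumption up to concrete use cases, not an abstract usage cost."""
    totals: dict[str, float] = defaultdict(float)
    for e in events:
        rate = ASSUMED_PRICE_PER_1K.get((e.vendor, e.model), 0.0)
        totals[e.use_case] += (e.input_tokens + e.output_tokens) / 1000 * rate
    return dict(totals)

events = [
    TokenEvent("vendor_a", "model_x", "contract_review", 12_000, 1_500),
    TokenEvent("vendor_b", "model_y", "catalog_enrichment", 4_000, 800),
]
print(cost_by_use_case(events))
```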
A second lesson is that safety and policy alignment matter just as much as cost control. Teams need to be explicit about not using free or consumer-grade AI tools for commercial work where data may be used to train models or shared with third parties. The same scrutiny applies to models themselves, especially open source models, where provenance, governance, and jurisdiction matter. In practice, this means validating whether a model and its vendor align with internal security, data residency, and geopolitical policies before adoption.
Another practical reality is fragmentation. There is no clear winner in AI today, and models constantly leapfrog each other in capability. Organizations that want to stay at the cutting edge often end up using different models for different tasks (e.g., Claude for coding, ChatGPT for research, Gemini for image generation). While this diversity can be a strength in creative and technical environments, it creates sprawl unless clear use cases, approved tools, and agreed processes are defined and enforced.
Finally, there is a strong human factor that is often underestimated. Modern models are generally capable, and many poor outcomes result from people not knowing how to build a prompt or formulate a logical question or request, rather than from weak AI. Educating teams on prompt engineering, logical problem formulation, and structured requests is essential. From a governance perspective, this is not just a productivity concern; better prompts lead to more predictable outputs, lower token waste, and more consistent results across teams.
IA: Based on your experience of the pros and cons of using AI, what opportunities does AI open for ITAM teams to become more strategic—beyond cost cutting and audit defence?
(e.g., helping organisations evaluate risks in vendor AI training policies.)
AC: One of the biggest strategic opportunities AI creates for ITAM is the ability to analyze and correlate volumes of unstructured data that would be impractical to handle through traditional methods. AI can digest contracts, usage data, vendor documentation, and policy language at scale, surfacing patterns, risks, and optimization opportunities that would otherwise remain hidden. This makes AI a powerful tool for identifying exposure, assessing compliance risk, and prioritizing actions based on impact rather than intuition.
AI also plays a role in procurement and technology strategy. It can be used to compare products, evaluate licensing models, and simulate migration scenarios between vendors or platforms. For ITAM teams, this shifts their contribution from validating purchases after the fact to actively informing decisions before commitments are made, helping the organization understand trade-offs around cost, flexibility, lock-in, and long-term risk.
At the same time, organizations need to be deliberate in their choice and governance of their AI stack. This includes educating and regulating the workforce on appropriate AI use, defining which tools are approved for which purposes, and setting expectations and best practices around quality and accountability. Over time, experienced users develop a strong instinct for spotting low-quality, generic AI output, and having that kind of “AI slop” associated with your brand, especially in a services-led organization, can erode trust. Used well, AI elevates strategic thinking; used carelessly, it creates noise and reputational risk.
IA: You say that “AI can’t fix your data” and that everything begins with reliable data.
How do you balance AI automation with the need for strong data foundations, especially in messy enterprise environments?
AC: AI is very good at storytelling. Given reliable, structured data, it can build strong narratives that align with a specific business context, explain complex situations clearly, and surface insights or create real-world connections in ways that resonate with stakeholders. The key point is that the story is only as good as the data behind it. If the inputs are inconsistent or ambiguous, AI will still produce a convincing output, but it will be confidently wrong.
License management, and ITAM more broadly, is fundamentally a deterministic discipline. Assets, entitlements, usage, and compliance outcomes must be grounded in strict, rule-based analysis. That foundation cannot be probabilistic or inferred. The balance comes from enforcing deterministic logic and trusted data at the core, and only then layering AI on top to interpret, explain, and enhance the results.
If AI is used for data enrichment (e.g. tell me <product specs> for <product x>), it must sit within a multi-layered approach. Deterministic rules, reference datasets, and strong validation logic need to come first, with AI acting as an augmentation layer rather than a source of truth. Each layer should reinforce the others, making uncertainty visible and preventing AI from filling gaps with assumptions.
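A minimal sketch of that layering, with illustrative field names and a hypothetical llm_lookup helper, might look like this: the curated reference dataset always wins, and anything AI fills in is explicitly marked as inferred, with lower confidence.

```python
# Illustrative only: deterministic reference data first, AI as a clearly labeled fallback.
REFERENCE_DATASET = {
    ("ExampleVendor", "ExampleDB", "19.3"): {"edition": "Enterprise", "end_of_support": "2027-04-30"},
}

def llm_lookup(publisher: str, product: str, version: str) -> dict:
    """Placeholder for an AI enrichment call; returns a canned answer here."""
    return {"edition": "Standard", "end_of_support": None}

def enrich(publisher: str, product: str, version: str) -> dict:
    # Deterministic reference data comes first and is treated as the source of truth.
    known = REFERENCE_DATASET.get((publisher, product, version))
    if known:
        return {**known, "source": "reference_dataset", "confidence": 1.0}
    # AI only fills the gap, and the uncertainty stays visible to downstream users.
    guess = llm_lookup(publisher, product, version)
    return {**guess, "source": "ai_inferred", "confidence": 0.6}

print(enrich("ExampleVendor", "ExampleDB", "19.3"))
print(enrich("ExampleVendor", "ExampleDB", "21.0"))
```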
Once those basics are in place, AI can make everything shine. It can translate complex compliance positions into clear business narratives, highlight trends, and support decision-making without compromising accuracy.
IA: Licenseware’s multi‑layer product recognition and software enrichment depend on public sources like vendor documentation and security advisories. How do you ensure quality, currency, and trustworthiness of these inputs?
AC: At the core of Licenseware’s approach is a multi-layered intelligence platform that doesn’t just ingest data, it validates and contextualizes it through structured, verifiable sources. The system is grounded in official vendor lifecycle pages, product documentation, support policy pages, and trusted curated sources as primary inputs, with secondary validation from trusted registries like the National Vulnerability Database (NVD), endoflife.date, official GitHub releases, and SPDX registries. We explicitly avoid unverified content such as random blogs, forums, speculation, or paywalled data, ensuring every piece of enrichment can be traced back to verifiable public documentation. Each catalog entry comes with source tracking and confidence scores, so both machine-driven and human users can see exactly where the information came from and how reliable it is. This layered design enables smart version inference when vendors omit patch data, dramatically improves recognition accuracy, and eliminates blind spots in lifecycle tracking.
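As an illustration of what source tracking and confidence scoring mean in practice, an enriched catalog entry might carry metadata along these lines. The field names here are illustrative, not Licenseware's actual schema.

```python
# Illustrative shape of an enriched catalog entry with source tracking and a confidence score.
catalog_entry = {
    "publisher": "ExampleVendor",
    "product": "ExampleDB Server",
    "version": "19.3",
    "end_of_support": "2027-04-30",
    "sources": [
        {"type": "vendor_lifecycle_page", "url": "https://example.com/lifecycle", "retrieved": "2025-01-15"},
        {"type": "endoflife.date", "url": "https://endoflife.date/exampledb", "retrieved": "2025-01-15"},
    ],
    "confidence": 0.95,     # drops when only secondary sources agree or a version is inferred
    "inferred_fields": [],  # e.g. ["end_of_support"] when the vendor omits patch-level data
}
```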
Beyond automated source validation, ongoing quality assurance comes from continual refreshes and cross-reference checks across thousands of public sources so the data remains current and credible. What’s perhaps most important is that this isn’t just an engineering exercise: it mirrors how human analysts have always ensured data quality in ITAM. It may sound funny because it’s so obvious, but the same measures we implement to keep our AI agents from making mistakes are used by human analyst teams to ensure data quality and prevent human error from slipping into the process.
IA: As generative AI increasingly influences how vendors license their products, how should ITAM teams prepare for the next wave of AI‑dependent licensing models?
AC: AI seems to be forcibly spoon-fed to everyone. Vendors are no longer offering AI as an optional add-on; they are embedding it directly into core products. Microsoft rebranding its Microsoft 365 (Office) apps as Microsoft 365 Copilot is a good example, effectively turning a massive installed base into AI users overnight. Adobe and others are following the same pattern, weaving AI features throughout their portfolios. For ITAM teams, this means AI usage is no longer a niche scenario but part of the default software estate.
These AI-enabled products can absolutely increase productivity, including within ITAM itself, but they also introduce significant complexity. Usage tracking becomes harder when AI features are bundled as consumption-based add-ons rather than explicitly licensed, and governance becomes more challenging when entitlements, consumption, and data rights are abstracted behind AI functionality. ITAM teams need to expand their focus beyond traditional metrics and understand how AI features are activated, measured, and governed across the organization.
The most practical advice is to get comfortable with uncertainty and fluidity, at least in the medium term. AI is evolving at an accelerated pace, and licensing models will continue to shift as vendors experiment and adapt. Some expect this to slow as current technological limits are reached, while others believe we are on a path toward AGI. Either way, a sense of predictability is not coming anytime soon.
IA: If you were to predict the next major AI capability that will disrupt ITAM, what would it be?
AC: The next big thing will be the emergence of base agentic AI systems with strong foundational reasoning, extended through modular, domain-specific skills. Instead of isolated tools with narrow contexts, or task-specific models, enterprises will adopt a core AI agent that understands user context, intent, and constraints, and can be equipped with ITAM-specific capabilities such as license interpretation and usage analysis across different vendors, contract reasoning, financial analysis, and policy enforcement.
In this paradigm, ITAM teams will no longer interact with multiple disconnected systems. They will work alongside a persistent AI companion that understands their environment, remembers decisions, applies rules consistently, and adapts as licensing models and vendor behavior evolve. These agents will not replace governance but embody it, operating within clearly defined boundaries and escalating decisions when human judgment is required.
This shift changes ITAM fundamentally. Rather than spending most of their time producing reports or dashboards, ITAM teams will guide and supervise intelligent agents that continuously reason over the estate. Everyone in the organization will effectively have a reliable AI companion, and ITAM’s role will be to ensure that companion is trained, governed, and aligned with organizational and contractual reality.
Here’s an example that shows a funny kind of abstraction being created, almost like a video game, in order to orchestrate and manage a fleet of AI agents: https://github.com/steveyegge/gastown#gas-town
IA: What is Licenseware doing to control its own AI costs? What advice would you give ITAM and FinOps practitioners who have been tasked with managing and optimising AI costs?
AC: If we leave aside everything except cost, the first step is to be very clear on the use case. Broadly, AI spend falls into two categories: internal productivity use, such as office work, research, or coding, and AI that is embedded into a client-facing product. These two behave very differently from a cost, risk, and scaling perspective, and mixing them usually leads to poor decisions.
For internal productivity use, most traditional FinOps principles still apply. Token usage is the new consumption metric, but it is not standardized. Each provider defines tokens differently and applies different burn rates depending on the model and features used. Practitioners should understand how token definitions map to real usage, what limits apply to the current plan, and where throttling or overage costs may appear. Treat tokens the same way you would CPU hours or cloud storage: measurable, attributable, and optimizable.
Cost management also sits on a spectrum of trade-offs. On one end, if the goal is near-zero marginal cost and minimal privacy or security concerns, it is entirely feasible to run capable open-source models locally using tools like LM Studio, provided the hardware is sufficient. On the other end, convenience and speed may justify paying for team subscriptions to hosted tools, such as general-purpose chatbots, coding-focused models, or image generation platforms. The key is to consciously choose where you sit on that spectrum rather than drifting into it by accident.
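As a sketch of the local end of that spectrum, assuming LM Studio's OpenAI-compatible local server is running on its default port with a model already loaded, and that the openai Python package is installed, a call looks like this:

```python
from openai import OpenAI

# LM Studio's local server speaks the OpenAI API; no API key or external traffic is involved.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="local-model",  # placeholder name; LM Studio serves whichever model is loaded
    messages=[
        {"role": "system", "content": "You are an assistant for internal ITAM research."},
        {"role": "user", "content": "Summarize the licensing metrics mentioned in this note: ..."},
    ],
)
print(response.choices[0].message.content)
```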
Additionally, creating a prompt library per department is a strong FinOps and governance lever because it reduces duplication, prompt bloat, and wasted tokens by encouraging reuse of well-tested, compact prompts tailored to real use cases. Standardized prompts lead to more predictable outputs, fewer retries, and lower overall consumption, while also embedding domain context that improves quality.
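A prompt library does not need to be sophisticated to pay off. Even a small, shared registry of compact, parameterized prompts per department, sketched below with purely illustrative entries, reduces retries and token waste:

```python
# Illustrative per-department prompt library; entries are examples, not real templates.
PROMPT_LIBRARY = {
    ("itam", "contract_summary"): (
        "Summarize the licensing terms below in five bullet points. "
        "List metric, quantity, term, and any usage restrictions. Contract:\n{contract_text}"
    ),
    ("finops", "cost_anomaly"): (
        "Given this month-over-month spend table, list the three largest increases "
        "and a one-line likely cause for each:\n{spend_table}"
    ),
}

def get_prompt(department: str, task: str, **kwargs: str) -> str:
    """Reuse a well-tested, compact prompt instead of writing a new one each time."""
    template = PROMPT_LIBRARY[(department, task)]
    return template.format(**kwargs)

print(get_prompt("itam", "contract_summary", contract_text="<contract text here>"))
```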
For AI embedded into products, cost management becomes much more complex and is deeply tied to engineering and product design decisions: how the surrounding infrastructure is designed and how AI is invoked within the product. Here, a multi-layered approach, with concise, controlled prompts and good prompt hygiene in general, is critical. Inputs should be sanitized, compressed, and transformed into efficient representations before being sent to models, for example, converting commonly used, verbose JSON, CSV, or even unstructured inputs into more compact, structured formats. Small improvements in prompt efficiency can have a material impact on cost at scale.
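As a simple example of that kind of input compression, verbose JSON records can be flattened into a compact delimited table before being sent to a model, so tokens are not spent repeating keys and punctuation. The field names below are illustrative only.

```python
import json

def compact_records(raw_json: str, fields: list[str]) -> str:
    """Convert verbose JSON records into a compact, pipe-delimited table,
    keeping only the fields the model actually needs."""
    records = json.loads(raw_json)
    header = "|".join(fields)
    rows = ["|".join(str(r.get(f, "")) for f in fields) for r in records]
    return "\n".join([header, *rows])

raw = json.dumps([
    {"device_name": "SRV-001", "publisher": "ExampleVendor", "product": "ExampleDB", "version": "19.3"},
    {"device_name": "SRV-002", "publisher": "ExampleVendor", "product": "ExampleDB", "version": "12.1"},
])

# The compact form typically costs a fraction of the tokens of the original JSON.
print(compact_records(raw, ["device_name", "product", "version"]))
```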