
Tony Maciel
Co-Founder & Head of Product Management
Apr 7, 2026
How to Keep Data Safe When Using AI in Project Controls
Tony Maciel holds a Bachelor of Science in Mechanical/Manufacturing Engineering from Worcester Polytechnic Institute and brings over two decades of experience in enterprise technology and capital-intensive project systems.
Before introducing LoadSpring Elsie AI™ in 2025, our team spent a lot of time with customers trying to understand why the industries we serve were lagging in AI adoption. One particular concern came up repeatedly: security.
Megaprojects generate massive amounts of sensitive data: capital allocations, contract values, resource plans, and regulatory commitments. Organizations knew they needed AI to make sense of it all, but they were understandably wary of what using it might cost them in terms of exposure. The real question wasn't whether to use AI; it was how to deploy it without introducing new risk.
Why Does Data Risk Increase When Using AI in Project Controls?
Risk goes up when AI tools operate outside governed environments, or when they generate responses without traceable database queries.
Common exposure drivers include:
Exporting schedule or cost data into public AI tools
Running AI outside the enterprise security perimeter
Allowing AI to infer or fabricate missing information
Weak alignment with role-based permissions
Lack of automated validation before system updates
In capital-intensive project environments, even small inconsistencies in reported metrics can influence executive decisions. AI systems that rely on pattern-based generation rather than direct database queries introduce ambiguity at the data layer, and that’s where it hurts the most.
How Do You Keep Data Safe When Using AI in Project Controls?
Secure AI in project controls comes down to architecture and governance, not just model capability. In practice, that means:
Deploying AI within governed environments
Requiring deterministic data retrieval from live systems
Enforcing role-based access controls
Preventing shadow AI data movement
Running continuous validation testing before system updates
Security posture is defined by how AI is deployed and controlled.
Practical Steps to Implement Secure AI in Project Controls
Organizations can take the following actions to use AI securely:
Deploy AI within your enterprise-controlled infrastructure. That means AI runs inside your security perimeter, not through a public tool or detached third-party service that moves your data outside it.
Require deterministic, traceable database queries for all numeric responses. In plain terms: AI should pull exact values from your database, not calculate or estimate answers on its own.
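To make the idea concrete, here is a minimal sketch of what deterministic retrieval can look like. The table and column names (`cost_items`, `wbs`, `budget`) are illustrative, not any real system's schema; the point is that the answer comes from a parameterized query against stored records, never from estimation.

```python
import sqlite3

# Hypothetical project-controls table; names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cost_items (wbs TEXT PRIMARY KEY, budget REAL)")
conn.execute("INSERT INTO cost_items VALUES ('1.2.3', 450000.0)")

def get_budget(wbs: str):
    """Return the exact stored budget, or None if no such row exists.

    The value comes straight from a parameterized query -- the AI layer
    never calculates or interpolates a number on its own.
    """
    row = conn.execute(
        "SELECT budget FROM cost_items WHERE wbs = ?", (wbs,)
    ).fetchone()
    return row[0] if row else None

print(get_budget("1.2.3"))  # 450000.0
print(get_budget("9.9.9"))  # None -- reported as missing, not guessed
```

Note the second call: when the record does not exist, the function says so rather than producing a plausible-looking number.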
Ensure AI respects existing role-based access controls. If a user can't see certain data in your project system, they shouldn't be able to surface it through AI either.
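A simple way to picture this is a permission check that sits between the question and the data. The roles and dataset names below are hypothetical; real deployments would map to the permissions already defined in the project system.

```python
# Hypothetical role-to-dataset permissions; names are illustrative.
ROLE_PERMISSIONS = {
    "executive": {"schedule", "cost", "contracts"},
    "scheduler": {"schedule"},
}

def answer_query(role: str, dataset: str, fetch) -> str:
    """Refuse to surface data the user's project-system role cannot see."""
    if dataset not in ROLE_PERMISSIONS.get(role, set()):
        return "Access denied: your role does not include this data."
    return fetch(dataset)

# A scheduler asking for cost data is blocked before any query runs.
print(answer_query("scheduler", "cost", lambda d: f"{d} data"))
# An executive with the right permissions gets the data.
print(answer_query("executive", "cost", lambda d: f"{d} data"))
```

The check happens before the retrieval call, so the AI layer never even touches data the role cannot see.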
Log AI queries and outputs for auditability. Every question asked and every answer returned should be recorded so you can trace where data came from.
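An audit record can be as simple as capturing who asked, what came back, and the exact query behind the answer. This sketch uses an in-memory list; a production system would write to an append-only store.

```python
import datetime
import json

audit_log = []  # stand-in for an append-only audit store

def log_interaction(user: str, question: str, answer: str, source: str) -> None:
    """Record who asked what, what was returned, and where it came from."""
    audit_log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "question": question,
        "answer": answer,
        "source": source,  # the exact query or report behind the answer
    })

log_interaction(
    "jdoe",
    "Total budget for WBS 1.2.3?",
    "450000.0",
    "SELECT budget FROM cost_items WHERE wbs = '1.2.3'",
)
print(json.dumps(audit_log[0], indent=2))
```

The `source` field is what makes the log useful for audits: every answer traces back to a specific query.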
Run automated validation and consistency testing before updates. Before any system update goes live, run automated checks to confirm AI responses haven't shifted. The same question should always return the same data.
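One common pattern for this kind of pre-release check is a "golden answer" regression suite: a fixed set of factual questions with known-correct answers, re-asked before every update. The questions and values here are invented for illustration.

```python
# Hypothetical golden-answer set, re-checked before each release.
GOLDEN_ANSWERS = {
    "Total budget for WBS 1.2.3?": "450000.0",
    "Contract value for PO-118?": "2100000.0",
}

def current_answer(question: str) -> str:
    # Stand-in for the deployed retrieval layer under test.
    return GOLDEN_ANSWERS[question]

def validate_release():
    """Return the questions whose answers drifted since the last release."""
    return [q for q, expected in GOLDEN_ANSWERS.items()
            if current_answer(q) != expected]

drifted = validate_release()
assert drifted == [], f"Blocked release: answers drifted for {drifted}"
print("Release validated: all golden answers unchanged.")
```

If any answer drifts, the release is blocked until the discrepancy is explained, which is exactly the "same question, same data" guarantee described above.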
Design the system to report missing data explicitly rather than infer it. An AI that says "that data isn't available" is far safer than one that fills the gap with a best guess.
Test repeatability regularly by asking the same questions across isolated sessions. Open a fresh session with no prior context and ask the same factual question. If the answer changes, something in the retrieval layer is inconsistent.
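The repeatability test above can itself be automated. In this sketch the fresh-session call is mocked with a fixed lookup; a real check would call the AI API with no prior context on each iteration.

```python
def ask_in_fresh_session(question: str) -> str:
    """Stand-in for asking the AI in a new session with no memory.

    A real implementation would open a clean API session each time.
    """
    return "450000.0" if "WBS 1.2.3" in question else "not available"

# Ask the same factual question five times in isolated sessions.
answers = {ask_in_fresh_session("Total budget for WBS 1.2.3?")
           for _ in range(5)}

# A deterministic retrieval layer yields exactly one distinct answer.
assert len(answers) == 1, f"Inconsistent answers across sessions: {answers}"
print("Repeatability check passed.")
```

Collecting the answers into a set makes the pass condition obvious: one distinct answer means the retrieval layer is consistent.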
Security posture improves when AI behavior is measurable, testable, and governed before release.
Common Mistakes to Avoid When Deploying AI
Even organizations with strong security practices can introduce risk if AI isn't deployed thoughtfully. Watch for these missteps:
Using public AI tools for project data. Pasting schedule data, cost figures, or contract details into a consumer AI tool immediately moves that data outside your security perimeter. Consumer AI is any tool your organization hasn't formally approved and doesn't govern; if it's a personal login rather than an enterprise-controlled environment, it's outside your perimeter. Enterprise-contracted AI, by contrast, is formally approved and deployed under governance, such as corporate-paid enterprise tenants or AI capabilities designed to operate inside controlled cloud environments with private endpoints and strong data isolation.
Assuming the AI is pulling live data when it isn't. Some AI implementations work from cached or exported snapshots rather than live database queries. If your AI isn't connected directly to your systems of record, the data it returns may already be outdated. Trustworthy AI surfaces the source system and “as of” timestamp with its answers, just like an executive report would. Transparency around data freshness isn’t a technical detail; it’s a governance requirement.
Letting AI fill gaps with inferences. An AI that confidently answers a question with estimated or pattern-based data is more dangerous than one that says, "I don't have that information." Make sure yours is designed to do the latter. If your AI doesn't offer a citations or evidence panel, or an explainability view that exposes the underlying logic or query used, treat its output as speculative. A simple but strong engineering test: ask the same factual question repeatedly in clean sessions (no conversational memory) and see whether the answer changes. Grounded answers should be stable on facts (numbers, entities). If "facts" vary, the model may be guessing, pulling inconsistent context, or mixing in narrative inference.
Overlooking access controls during AI deployment. Role-based permissions that exist in your project systems need to carry over to your AI layer. A junior team member shouldn't be able to ask an AI for data they wouldn't otherwise have access to.
Ensuring AI Reduces Risk Instead of Introducing It
Keeping data safe when using AI in project controls requires disciplined deployment architecture.
AI should operate as a deterministic retrieval layer across governed systems, returning data directly from live databases and undergoing continuous validation before release. When designed this way, AI reduces ambiguity, preserves auditability, and strengthens operational control.
In capital project environments, architecture is what determines whether AI introduces risk — or reduces it.
Start the Conversation About Secure AI Deployment
Exploring AI in your project systems? A conversation about deployment architecture and data governance is the right place to start. The LoadSpring team can share how deterministic, governed AI can operate inside your existing environment. Contact us today.
Frequently Asked Questions
Is it safe to use ChatGPT with project controls data? Generally, no — not without significant safeguards. Public AI tools are not designed to operate within your enterprise security perimeter, and submitting sensitive project data to them creates real exposure risk. If teams want to use AI more safely, a key step is to keep data inside a controlled environment by using enterprise AI solutions, private instances, or tools that connect directly to systems like P6, Autodesk, and EcoSys without exporting data externally.
What's the difference between deterministic AI and generative AI? Generative AI produces responses based on patterns in its training data, which means it can infer, estimate, or occasionally fabricate information. Deterministic AI retrieval, by contrast, pulls exact values from a live database and returns only what's actually there. For factual project data you want deterministic retrieval, not generative.
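The difference can be caricatured in a few lines. This is a toy contrast, not how any real model works internally: the "generative-style" function always produces a plausible-looking number, while the deterministic lookup reports honestly when no record exists. All names and values are invented.

```python
# Illustrative record store; values are invented.
RECORDS = {"WBS 1.2.3": 450000.0}

def deterministic(key: str):
    """Exact lookup: returns the stored value, or None if absent."""
    return RECORDS.get(key)

def generative_style(key: str):
    """Caricature of pattern-based generation: always returns something
    plausible, even when no record exists."""
    return RECORDS.get(key, 400000.0)  # fabricated 'typical' value

assert deterministic("WBS 9.9.9") is None          # honest gap
assert generative_style("WBS 9.9.9") == 400000.0   # confident fabrication
```

For factual project data, the honest gap is the safer behavior: a missing answer can be escalated, while a fabricated one propagates silently into reports.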
How do I make sure AI follows role-based access controls? Test it. Log in as a user with restricted permissions and ask the AI for data that role shouldn't be able to access. If it returns that data, your access controls aren't carrying over to the AI layer. This should be part of your validation process before any deployment goes live.
Why does AI give different answers to the same question? Inconsistency in factual responses is a red flag that the AI isn't retrieving data deterministically. It may be inferring answers rather than querying your database directly. Raise it with your vendor or implementation team and ask specifically how factual queries are resolved; they should be able to point to the exact database query behind any response. You can also run a quick test internally: ask the same factual question multiple times and see if the answer changes, then ask, "Where exactly did that number come from?" A reliable, enterprise-ready AI should consistently return the same answer and be able to trace it back to a specific record, report, or database query.
