An In-Depth Look at Arena’s Approach to Responsible AI and Data Protection
As product companies turn to cloud-based platforms and artificial intelligence (AI) to drive faster innovation, streamline manufacturing, and enhance service delivery, concerns about data security and privacy have reached new heights. According to a 2025 KPMG report, 69% of corporate leaders identified data privacy as their top AI-related concern, followed by data quality (56%) and regulatory compliance (55%).1
Arena by PTC continues to integrate AI-enhanced features to improve the customer experience. We understand that earning trust requires unwavering transparency, robust security measures, and a commitment to responsible AI. In this article, we’ll examine how our cloud-native product lifecycle management (PLM) and quality management system (QMS) solutions safeguard customer data while maximizing value.
Our Commitment to Data Privacy
Our philosophy is simple: your data belongs to you. Whether you’re uploading product designs, requirements, or technical specifications, Arena PLM and QMS are structured to ensure that your proprietary information remains confidential, isolated, and under your control. This commitment is reflected in every aspect of the platform—from its built-in responsible AI to its multi-tenant architecture, application design, and user access controls.
PTC’s AI Policy: Responsible, Transparent, and Customer-First
No Training on Customer Data
One of the most common concerns with AI-powered platforms is the potential for a customer’s data to be used in model training, inadvertently exposing proprietary information. Our policy is clear: Arena does not use a customer’s data to train or fine-tune AI models. When customers interact with Arena’s AI features, their data is referenced in real time to generate responses or insights, but it is never stored, learned from, or used to alter the AI’s underlying parameters.
How Arena Uses RAG and ReAct for Secure AI Responses
Arena leverages advanced AI techniques such as retrieval-augmented generation (RAG) and ReAct (reasoning and acting). These methods allow the AI to ground its responses in customer data at the time of the request, without retaining or learning from that data. This approach ensures that customer information remains private and is not incorporated into the AI’s knowledge base.
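To make the distinction between grounding and training concrete, the sketch below shows the general shape of a RAG call: documents are retrieved from a tenant-scoped index at request time and passed to the model as context for a single response, and nothing is written back to the model’s weights. This is a minimal, generic illustration; the object and function names (`tenant_index`, `llm`, `retrieve`) are hypothetical and do not represent Arena’s internal implementation.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    text: str

def retrieve(tenant_index, query: str, k: int = 5) -> list[Document]:
    """Fetch the k most relevant documents from the tenant's own index.
    The index is scoped to a single customer environment (hypothetical backend)."""
    return tenant_index.search(query, top_k=k)

def answer_with_rag(llm, tenant_index, question: str) -> str:
    """Ground the model's answer in customer data at request time only.
    The retrieved text is used as prompt context for this one response;
    it is never stored or used to update the model's parameters."""
    context = "\n\n".join(d.text for d in retrieve(tenant_index, question))
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return llm.generate(prompt)  # inference-only call; no training step
```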
Fine-Tuning With Arena Data Only
In cases where Arena fine-tunes AI models, it does so exclusively with Arena’s own domain data—not customer data. This enables Arena to embed deep industry expertise into its AI capabilities, enhancing value for all customers without compromising the confidentiality of any individual organization’s information. Most fine-tuning occurs with traditional machine learning models, while generative AI features rely on grounding techniques rather than training on external data.
Looking Ahead: Customer-Controlled Fine-Tuning
In future software releases, Arena plans to offer customers the ability to fine-tune select AI models with their own data—and only within their own tenant, under strict governance, and on an opt-in basis. Key guardrails will include the following (an illustrative configuration sketch follows the list):
- Tenant Isolation: Fine-tuned models and artifacts remain scoped to the customer’s environment.
- No Backpropagation: Arena does not ingest, reuse, or mix a customer’s fine-tuned models or training data with any other customer’s data or with its base models.
- Flexible Hosting: Customers can choose between Arena-managed and customer-managed cloud options, aligning with their data residency and security needs.
- Governance and Transparency: Fine-tuning is opt-in only, with clear approvals, licensing, and ongoing monitoring for performance and fairness.
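Purely as an illustration of how these guardrails might be expressed, the sketch below models an opt-in, tenant-scoped fine-tuning request. Every field and function name is hypothetical; the actual mechanism will be defined when the capability is released.

```python
# Hypothetical, illustrative representation of an opt-in fine-tuning request.
# Field names are invented for this sketch and are not an Arena API.
fine_tune_request = {
    "tenant_id": "acme-products",        # models and artifacts stay scoped here
    "opt_in": True,                      # fine-tuning never happens by default
    "training_data": "tenant://acme-products/approved-datasets/eco-history",
    "hosting": "customer-managed",       # or "arena-managed", per residency needs
    "share_with_base_models": False,     # customer fine-tunes are never ingested
    "monitoring": {"performance": True, "fairness": True},
}

def validate_request(req: dict) -> None:
    """Reject any request that would violate the stated guardrails."""
    assert req["opt_in"], "fine-tuning is opt-in only"
    assert not req["share_with_base_models"], "no mixing with other customers' data"
    assert req["training_data"].startswith(f"tenant://{req['tenant_id']}/"), \
        "training data must live inside the customer's own tenant"
```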
No Cross-Customer Data Sharing
Arena’s multi-tenant architecture ensures that each customer’s data is isolated and never mixed with data from other clients. Features such as semantic search or Q&A build tenant-specific indexes, guaranteeing that information remains within your own environment. Whether hosted in Arena’s managed cloud or a customer’s private cloud, there is no pooling or cross-contamination of data.
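The pattern behind tenant-specific indexes can be sketched as follows: every index name and every query is keyed to a single tenant, so one customer’s search can never resolve against another customer’s documents. The class and backend names are hypothetical and illustrate only the isolation pattern, not Arena’s actual implementation.

```python
class TenantSearchService:
    """Illustrative pattern: one semantic-search index per customer tenant."""

    def __init__(self, index_client):
        self.index_client = index_client  # hypothetical vector/index backend

    def _index_name(self, tenant_id: str) -> str:
        # Each tenant gets its own index; there is no shared pool of documents.
        return f"semantic-search-{tenant_id}"

    def ingest(self, tenant_id: str, doc_id: str, embedding: list[float]) -> None:
        self.index_client.upsert(self._index_name(tenant_id), doc_id, embedding)

    def query(self, tenant_id: str, embedding: list[float], top_k: int = 10):
        # Queries are resolved only against the caller's own tenant index.
        return self.index_client.search(self._index_name(tenant_id), embedding, top_k)
```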
Application Programming Interfaces (APIs), Licensing, and Fair Use
Customers own their data, while Arena owns the product schemas and access methods (UI and APIs). Large-scale data extraction via APIs requires appropriate licensing to ensure fairness, performance, and intellectual property protection.
Enterprise AI Governance
Every Arena AI feature undergoes rigorous enterprise governance and legal review. PTC’s AI Governance Policy enforces principles such as transparency, security, and human oversight, and aligns with evolving regulations like the EU AI Act and Data Act.
We do not use customer data to train or fine-tune AI models, nor do we mix data across customers. When our platforms use AI, they ground responses on customer data in real time, within the customer’s own tenant. If we fine-tune models, we do so only with Arena’s own domain data. We support large-scale API integrations under the appropriate license to protect performance and intellectual property.
All of this is governed by PTC’s enterprise AI policy and legal review.
End-to-End Data Security You Can Trust
Arena’s multilayered approach to security means your data is protected at every stage. Our comprehensive security policies and processes ensure only approved users can access information, with full audit trails for accountability. We meet SOC 2 Type 2 standards and undergo regular independent reviews.
How Does Arena Ensure Physical and Cloud Infrastructure Security?
Our physical infrastructure is tightly controlled: We own and operate North American production equipment in secure, monitored facilities, while AWS manages the physical layer for ITAR/EAR-regulated companies (via AWS GovCloud (US)) and for EMEA customers (via European-hosted services). Multiple firewalls, frequent security updates, and separate environments for different operations add extra layers of protection.
How Does Arena Protect Data at the Application and User Levels?
At the application level, data access is strictly controlled through encrypted interfaces. Direct database access is prohibited, and integrations use secure APIs with strict authentication and access controls.
User security is reinforced through strong authentication options, including two-factor authentication and single sign-on. Account administrators can set granular permissions, restrict access by group or location, and monitor all login attempts. Passwords are securely stored, and users can set recovery questions for added protection.
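As a generic illustration of these application- and user-level controls (not Arena’s actual API), the sketch below shows the typical pattern: every request travels over HTTPS, carries an authenticated session token rather than a database credential, and fails if the user lacks permission for the record. The host name and endpoint are placeholders.

```python
import requests  # all traffic goes over HTTPS (TLS); there is no direct database access

BASE_URL = "https://example-plm.invalid/api/v1"  # placeholder host for illustration only

def fetch_item(session_token: str, item_id: str) -> dict:
    """Read a record through the secured API layer only."""
    resp = requests.get(
        f"{BASE_URL}/items/{item_id}",
        headers={"Authorization": f"Bearer {session_token}"},  # authenticated access
        timeout=30,
    )
    resp.raise_for_status()  # 401/403 signals failed authentication or a missing permission
    return resp.json()
```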
Lastly, Arena encrypts data both at rest and in transit, with frequent backups to secure disaster recovery sites.
Building Trust by Design
Arena’s unwavering commitment to data protection is woven into every aspect of its platform and operations. By combining a customer-first AI policy, multilayered security, and transparent governance, Arena helps organizations realize the full potential of PLM and AI without compromising their competitive edge or privacy.
Click here to learn more about Arena’s approach to data security.
References