Key Takeaways
- Certis Group and Ensign InfoSecurity have signed an MOU at Milipol TechX 2026 to bolster cybersecurity, safety, and ethical governance for AI‑driven robotic systems.
- The partnership tackles risks that go beyond data breaches, including manipulated sensor inputs, loss of supervisory control, and unintended physical actions by autonomous machines.
- Certis will lead safety‑ethics fail‑safes and Human‑in‑the‑Loop interfaces; Ensign will own cybersecurity requirements for communications, AI reasoning, and decision‑making processes.
- Guardrails will be embedded across the full system lifecycle—design, development, testing, deployment, and decommissioning—through standards, threat analysis, adversarial testing, and incident response.
- A joint Safety and Security Review Board will oversee implementation and ensure alignment with evolving safety and ethical expectations.
Background of the Partnership
On April 29, 2026, Certis Group, Singapore’s leading integrated security and operations solutions provider, and Ensign InfoSecurity, the Asia‑Pacific region’s largest pure‑play cybersecurity services firm, formalized a Memorandum of Understanding (MOU) at the Milipol TechX (MTX) 2026 exhibition. The announcement highlighted both organizations’ recognition that as artificial intelligence moves from assisting humans to operating autonomously, the security landscape expands beyond traditional IT concerns. By joining forces, Certis and Ensign aim to create a unified approach that addresses cyber‑physical risks inherent in AI‑powered robotics deployed in security, logistics, and transport environments.
Emerging Risks of Autonomous AI Systems
The MOU acknowledges that conventional cybersecurity threats—such as data breaches or system outages—are no longer sufficient to capture the full spectrum of danger posed by autonomous robotic systems. Vulnerabilities in these platforms can lead to manipulated sensor feeds, loss of supervisory control, or robots executing unintended physical actions with tangible, real‑world consequences. As Singapore accelerates the use of autonomous machines for security patrols, warehouse automation, and transit management, the margin for error shrinks dramatically, necessitating robust governance that safeguards both digital infrastructure and the physical spaces these systems interact with.
Structure of the Collaboration Under the MOU
Under the agreement, Certis will assume responsibility for developing and implementing safety and ethics fail‑safes, as well as designing Human‑in‑the‑Loop (HITL) interfaces that allow operators to monitor, intervene, or override AI decisions when necessary. Ensign, for its part, will oversee the cybersecurity dimension: establishing requirements for the platform’s communication interfaces, securing the AI’s core reasoning and decision‑making processes, and crafting controls, standards, and frameworks that protect against unauthorized access, tampering, and malicious manipulation. Together, the partners intend to embed guardrails throughout the entire system lifecycle, from initial concept and design through testing, deployment, and eventual decommissioning.
Key Focus Areas Outlined in the MOU
The MOU delineates several priority domains. First, it calls for developing ethical, safety‑first approaches to AI‑driven robotics, supported by shared standards governing data handling, model training, and real‑world operations. Second, it emphasizes embedding security and risk management by design across the system lifecycle, including systematic threat identification, risk assessment, and safeguards against unauthorized access or tampering. Third, it stresses implementing human‑centric safety protocols and robust HITL controls to ensure effective oversight and the ability to intervene in real time. Fourth, it aims to strengthen cyber‑physical resilience through rigorous testing—such as adversarial testing of AI behaviors—continuous monitoring, and specialized capabilities like threat analysis, incident response, and penetration testing.
Governance Mechanism: Joint Safety and Security Review Board
To ensure that the agreed‑upon measures are consistently applied and remain aligned with evolving safety and ethical expectations, the MOU proposes the establishment of a joint Safety and Security Review Board. This board will comprise representatives from both Certis and Ensign, tasked with overseeing the implementation of the agreed safeguards, reviewing audit findings, updating standards as threats evolve, and providing guidance on best practices for ethical AI deployment. By institutionalizing oversight at this level, the partnership seeks to create a feedback loop that continuously improves the resilience and trustworthiness of autonomous robotic systems.
Leadership Perspectives on the Collaboration
Certis’ Chief Information Security Officer, Alex Ooi, emphasized that securing autonomous systems goes beyond data protection; it is about guaranteeing the operational integrity and safety of the physical environments Certis is entrusted to guard. He noted that embedding cyber‑resilience into every technology, robot, and algorithm fosters a future where innovation and absolute trust coexist. Paul Tan, Executive Vice President of Government and Singapore Enterprises at Ensign InfoSecurity, echoed this sentiment, observing that as autonomy scales rapidly, control, governance, and resilience must scale in tandem. He stressed that cybersecurity must be woven into design and deployment from the outset, rather than treated as an afterthought.
Strategic Implications for the Industry
The MOU signals a broader shift in how organizations deploying autonomous systems must conceptualize risk: no longer merely an IT issue to be addressed downstream, but a foundational design requirement that permeates every layer of the system. By integrating safety, ethics, and cybersecurity into the core architecture of AI‑driven robotics, Certis and Ensign aim to set a benchmark for responsible innovation in sectors where machines interact directly with people and critical infrastructure. This collaborative model may inspire similar partnerships across the Asia‑Pacific region and beyond, encouraging a holistic approach to cyber‑physical security as autonomous technologies become ubiquitous in everyday operations.