12th Feb, 2026

A look at the HSE’s regulatory approach to Artificial Intelligence (AI)

In January 2026, the Health and Safety Executive (HSE) released an outline of how it is approaching AI from a regulatory perspective. The statement makes clear that AI is not treated as a separate or exceptional category of risk. Instead, it sits squarely within existing health and safety law, particularly the framework established under the Health and Safety at Work etc Act 1974.


AI falls within existing health and safety law

The HSE emphasises that most of the legislation it enforces is goal-setting in nature. Rather than prescribing exactly how employers must achieve safety outcomes, the law sets objectives. This means it is flexible enough to apply to new and emerging technologies, including AI. For businesses, this confirms that introducing AI tools into the workplace does not create a regulatory vacuum. Employers must still assess risks and put proportionate controls in place, whether those risks arise from traditional machinery or from machine learning systems.

This approach reflects a broader principle within UK regulation. Technology may evolve, but the duty to manage risk remains constant.

Risk assessment remains central

A key message from the HSE is that those who create risks are best placed to manage them. Where AI systems affect workplace health and safety, organisations are required to carry out suitable and sufficient risk assessments and reduce risks so far as is reasonably practicable.
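What counts as "suitable and sufficient" will depend on the organisation and the system, but the underlying discipline is familiar. As a minimal sketch, a risk-register entry for an AI system might use a conventional likelihood-times-severity scoring; the scales, example hazard and acceptance threshold below are illustrative assumptions, not values the HSE prescribes.

```python
# A minimal, illustrative risk-register entry for an AI system, using a
# conventional likelihood x severity scoring. The scales, threshold and
# example hazard are hypothetical, not HSE-prescribed values.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    hazard: str          # e.g. "vision system fails to detect a worker"
    likelihood: int      # 1 (rare) to 5 (almost certain)
    severity: int        # 1 (negligible) to 5 (major/fatal)
    controls: list[str]  # measures applied so far as is reasonably practicable

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

    def tolerable(self, threshold: int = 8) -> bool:
        # Hypothetical acceptance criterion; real criteria belong to the
        # organisation's own safety management system.
        return self.score <= threshold

entry = RiskEntry(
    hazard="AI-guided robot arm moves while an operator is within reach",
    likelihood=2,
    severity=5,
    controls=["physical guarding", "safety-rated speed monitoring"],
)
print(entry.score, entry.tolerable())  # 10 False -> further controls needed
```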

Importantly, the HSE highlights cyber security as part of this obligation. AI-driven systems connected to networks can introduce new vulnerabilities. A compromised AI system controlling machinery or monitoring safety conditions could have direct physical consequences.
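One common control worth showing concretely is artefact integrity: pin a known-good digest of the model file when it is validated, and refuse to run if the file no longer matches. The file name and digest below are hypothetical placeholders; the pattern is what matters.

```python
# Illustrative sketch: refuse to load a model artefact whose digest no longer
# matches the value recorded at validation. The file name and pinned digest
# are hypothetical placeholders.
import hashlib
from pathlib import Path

PINNED_SHA256 = "<sha256 digest recorded when the model was validated>"

def model_is_intact(path: Path, expected: str = PINNED_SHA256) -> bool:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected

model_file = Path("fault_detector.onnx")  # hypothetical artefact
if not model_is_intact(model_file):
    raise RuntimeError("Model failed integrity check; refusing to start")
```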

This risk-based approach mirrors how other high-hazard sectors operate. In industries such as oil and gas, aviation and rail, safety cases and structured risk assessments are already standard practice. AI systems introduced into these sectors will need to be integrated into existing safety management systems rather than treated as standalone digital tools.

For example:

  • In manufacturing, AI used for predictive maintenance must be validated to ensure it does not overlook critical faults (a sketch of such a validation gate follows this list).

  • In logistics and warehousing, AI-driven robotics must be assessed for collision risk and interaction with human workers.

  • In construction, AI-based monitoring tools that track worker movement or site hazards must be reliable and secure to avoid creating false reassurance.
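To make the manufacturing example concrete, the validation gate referred to above might require a minimum recall on safety-critical fault classes before a model can be released. The fault classes and threshold below are hypothetical; real acceptance criteria would come out of the risk assessment.

```python
# Illustrative release gate: block deployment unless recall on safety-critical
# fault classes meets a minimum. Class names and threshold are hypothetical.
from collections import Counter

def recall_by_class(y_true: list[str], y_pred: list[str]) -> dict[str, float]:
    hits, totals = Counter(), Counter()
    for actual, predicted in zip(y_true, y_pred):
        totals[actual] += 1
        hits[actual] += int(actual == predicted)
    return {cls: hits[cls] / totals[cls] for cls in totals}

CRITICAL = {"bearing_failure", "overpressure"}  # hypothetical fault classes
MIN_RECALL = 0.99                               # hypothetical gate

def release_gate(y_true: list[str], y_pred: list[str]) -> bool:
    recalls = recall_by_class(y_true, y_pred)
    # A critical class absent from the test set counts as recall 0.0,
    # so the gate fails conservatively.
    return all(recalls.get(cls, 0.0) >= MIN_RECALL for cls in CRITICAL)

y_true = ["bearing_failure", "normal", "overpressure", "normal"]
y_pred = ["bearing_failure", "normal", "normal", "normal"]
print(release_gate(y_true, y_pred))  # False: the overpressure fault was missed
```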

Regulating AI in products and machinery

The HSE also clarifies its role as a Market Surveillance Authority under the product safety regulatory framework. This means it regulates AI embedded in workplace machinery, equipment and products placed on the market.

As AI becomes more common in industrial machinery, from autonomous vehicles to smart lifting equipment, manufacturers and suppliers will need to ensure their products meet safety requirements before reaching end users.

This principle could extend to sectors such as:

  • Healthcare, where AI-enabled diagnostic tools and robotics must meet safety and performance standards.

  • Agriculture, where autonomous tractors and drone systems operate in dynamic environments.

  • Energy, particularly in technologies supporting net zero targets, such as smart grids and AI-optimised storage systems.

In each case, the combination of software intelligence and physical systems creates hybrid risks that must be addressed through design, testing and ongoing monitoring.
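One simple shape that ongoing monitoring can take is an envelope check: flag when live readings drift outside the range seen during validation, so the system can fall back to a safe state rather than trust a model on inputs it was never tested against. The sensor channel, baseline values and threshold below are illustrative assumptions.

```python
# Illustrative envelope monitor: flag live readings that fall outside the
# range observed during validation. Baseline values and the k-sigma threshold
# are hypothetical.
import statistics

class EnvelopeMonitor:
    def __init__(self, baseline: list[float], k: float = 4.0):
        self.mean = statistics.fmean(baseline)
        self.stdev = statistics.stdev(baseline)
        self.k = k  # readings beyond k standard deviations are flagged

    def in_envelope(self, value: float) -> bool:
        return abs(value - self.mean) <= self.k * self.stdev

# e.g. a temperature channel validated around 20 degrees
monitor = EnvelopeMonitor(baseline=[20.1, 19.8, 20.4, 20.0, 19.9])
if not monitor.in_envelope(35.2):
    print("Reading outside validated envelope: revert to safe state")
```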

Supporting innovation while protecting people

The HSE states that its regulatory approach is proportionate and risk-based. The aim is not to slow innovation, but to ensure it is introduced safely.

To support this, the statement outlines that the organisation is:

  • Coordinating internal expertise through an AI common interest group.

  • Working with government departments to shape the broader regulatory framework.

  • Collaborating with other regulators through forums such as the AI Standards Forum for UK Regulators and the United Kingdom Health and Safety Regulators Network AI and Innovation Sub Group.

  • Contributing to the development of national and international standards for safe AI use.

  • Building internal scientific and specialist capability in AI.

  • Conducting horizon scanning to monitor emerging developments.

  • Supporting aligned research and operating an Industrial Safetytech Regulatory Sandbox to explore practical barriers to adoption in construction.

  • Undertaking a feasibility study, funded by the AI Security Institute, on AI use in supply chains for new and emerging net zero technologies.

This collaborative model reflects how regulation has evolved in other transformative periods. During the rise of biotechnology, nuclear energy and advanced robotics, regulators worked alongside industry and academia to understand risks before setting clearer benchmarks.

The sandbox approach in particular is notable. By trialling technologies in a controlled regulatory environment, the HSE can identify unintended consequences early. This model could be replicated in other areas, such as fintech or autonomous transport, where real-world testing under regulatory oversight helps balance innovation and safety.

Building benchmarks and normalising AI risk

The HSE’s longer-term ambition is that AI risk should no longer be seen as novel. As benchmarks and standards develop, AI should be managed like any other workplace risk.

This is significant. It suggests that the future of AI regulation in the UK will not necessarily involve entirely new layers of sector-specific safety law, but rather the adaptation of existing frameworks supported by clearer technical standards.

Other regulators may take note. Environmental regulators, financial services supervisors and building safety authorities are all grappling with how AI intersects with their remits. The HSE’s model demonstrates how a principles-based legal framework can remain resilient in the face of technological change.

Looking ahead

The HSE confirms that it will continue to refine its approach as AI capabilities expand. Engagement with stakeholders will remain central, and the regulator will draw on its experience of helping Great Britain adapt safely to previous waves of technological change.

For employers, the message is clear. The adoption of AI does not reduce legal responsibility. Risk assessment, proportionate controls and continuous monitoring remain the foundation of compliance.

For innovators, the signal is equally clear. Safe design, transparency and robust cyber security are not optional extras but core components of bringing AI enabled products and systems into the workplace.

As AI moves from pilot projects to embedded infrastructure across sectors, the HSE’s approach sets out a pragmatic blueprint: protect people and places first, enable growth through collaboration, and ensure that new technology is subject to the same disciplined risk management as the systems it replaces.
