Hisense Europe AI Policy Guide
Official Policy Document

Safe & Responsible Use
of Artificial Intelligence

This guide establishes the policy and guidelines for ethical, secure, and responsible AI use across Hisense Europe. It applies to all employees, contractors, and third parties.

Section 1

Introduction

1.1 Purpose

This document establishes the policy and guidelines for the safe, responsible, and ethical use of Artificial Intelligence (AI) at Hisense Europe. It is designed to empower employees to leverage AI's transformative capabilities while safeguarding our corporate assets, intellectual property, and customer data.

This policy provides a clear framework for the usage, development, and integration of AI technologies, ensuring alignment with our security standards, legal obligations, and ethical principles.

1.2 Scope

This policy applies to all Hisense Europe employees, contractors, and third parties who use, develop, or manage AI systems. It covers all forms of AI, including generative AI, machine learning models, and AI-powered features within software applications.

Section 2

Core Principles of AI Engagement

All AI activities at Hisense Europe must adhere to these fundamental principles:

Security First

Protecting our data and systems is non-negotiable. All AI usage must prevent data leakage and unauthorised access.

Lawful & Ethical Conduct

AI must be used in full compliance with all applicable laws, including GDPR and the EU AI Act, and in alignment with our employee code of conduct.

Human Accountability

You are ultimately responsible for any content or action produced by an AI tool. All AI outputs must be verified for accuracy and appropriateness.

Transparent Operations

The use of AI should be disclosed wherever it could affect stakeholders, ensuring honesty and trust.

Risk-Managed Innovation

We encourage innovation with AI, but it must be pursued within a structured risk assessment framework.

Cautious Approach

Always ask: if someone gained access to this information, what could they do with it?

Section 3

Guidelines for Using AI Tools

This section provides clear do's and don'ts for day-to-day AI usage across all roles.

What To Do

Use Approved Tools Only

Only use AI tools that have been formally vetted and approved by the IT Department. These tools are confirmed to meet our security and data privacy standards.

Verify Before Using

Treat AI-generated content as a working draft. Critically review, fact-check, and edit all outputs for accuracy, relevance, and tone before use.

Protect Sensitive Information

Anonymise any data you input into AI tools. Avoid using specific names, project details, or figures that could be considered sensitive.
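For illustration only, a simple pre-processing step along these lines can catch obvious identifiers before a prompt is sent. The patterns and placeholders below are assumptions made for this sketch, not an approved anonymisation tool, and they do not replace careful human review:

```python
import re

# Illustrative redaction patterns -- examples only, not exhaustive
# and not an approved anonymisation method.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[AMOUNT]": re.compile(r"€\s?\d[\d,.]*"),
    "[ID]": re.compile(r"\b(?:Customer|Order)\s+ID:\s*\d+\b", re.IGNORECASE),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with neutral placeholders."""
    for placeholder, pattern in PATTERNS.items():
        text = pattern.sub(placeholder, text)
    return text

# Hypothetical prompt; the name and address are invented for the example.
prompt = "Draft an email to john.smith@example.com about his €1,299 refrigerator purchase."
print(redact(prompt))
# → Draft an email to [EMAIL] about his [AMOUNT] refrigerator purchase.
```

Even with such a filter in place, a human must still check the final prompt: automated redaction misses context-dependent details such as project names or dates.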

Report Issues Promptly

If you encounter any security vulnerability, biased output, or other issue with an approved AI tool, report it immediately to [email protected].

Invest in AI Literacy

Complete the recommended "Elements of AI" course and all mandatory internal training to deepen your understanding of AI capabilities and risks.

What NOT To Do

Don't Use Unauthorised AI Services

Do not use free, public AI tools (e.g., online translators, PDF converters, public chatbots) for any work-related tasks. Do not log in with your work email.

Don't Upload Confidential Data

Never input, upload, or paste confidential company information, personal data, customer details, financial records, or strategic plans into any public or unapproved AI tool.

Don't Share AI Outputs Without Review

Do not copy and paste AI-generated content directly into reports, emails, or other materials without rigorous review and editing.

Don't Accept AI Outputs as Fact

Do not accept AI-generated information as fact without cross-referencing it with reliable sources.

Don't Infringe on Intellectual Property

Do not use AI to generate content that infringes on third-party intellectual property. Ensure all final work is original and properly sourced.

Don't Share Credentials

Do not share your passwords, API keys, PINs, IP addresses, or any internal URLs with any AI system.

Don't Use for Personal Purposes

Do not use company-provided AI tools or licences for private or personal use.

Section 3.1

Special Guidance: Using the HiGPT Application

The HiGPT application provides convenient, on-the-go access to our in-house AI platform. While this flexibility is valuable, the mobile context increases the risk of inadvertently entering confidential information. Employees must exercise heightened caution whenever using HiGPT outside of a controlled desktop environment.

⚠ CRITICAL WARNING

All confidential details MUST be anonymised before being entered into the HiGPT application. Anonymisation means removing or replacing any information that could identify an individual, a specific project, a client, or any proprietary detail. Think before you type. If you are uncertain whether information is confidential, do not enter it.

3.2 Prompting Best Practices

The examples below contrast prompts that expose confidential information with safer, anonymised alternatives. The safe versions are illustrative; adapt them to your own context rather than reusing them verbatim.

Example 1

Original — Confidential

Help me draft an email to John Smith about his refrigerator purchase on May 5th for €1,299.

Safe alternative (illustrative)

Help me draft an email to a customer about their recent refrigerator purchase.

Example 2

Original — Confidential

Survey from Maria Novak (Customer ID: 12345, [email protected]). Purchased: Refrigerator Model HZS3669 on March 15, 2025. Feedback: Very satisfied with energy efficiency, but door seal issues after 2 weeks. Analyse this feedback.

Safe alternative (illustrative)

A customer survey reports high satisfaction with a refrigerator's energy efficiency but a door seal issue appearing after two weeks of use. Analyse this feedback.

Example 3

Original — Confidential

Attached is our full HR exit interview dataset, including resignation letters and performance reviews for employees like Jane Doe and John Wick. Analyse why they left.

Safe alternative (illustrative)

Act as an HR innovation consultant. Research and propose creative initiatives to improve employee well-being and retention in a hybrid work environment.
Section 4

Guidelines for Developing and Integrating AI

The following requirements apply to all teams involved in designing, building, or integrating AI-powered solutions.

MANDATORY HOSTING REQUIREMENT

All AI systems and associated data — whether developed internally or by a third party — MUST be hosted exclusively within the Hisense Europe environment.

You Should

Conduct a Risk Assessment First

Before beginning any new AI development project, complete and submit a Risk Assessment Report to the IT Department identifying security, ethical, and operational risks.

Follow Secure Development Practices

Adhere to secure software development lifecycle (SDLC) standards when building AI models or applications.

Embed Privacy by Design

Embed data protection principles into the architecture of the AI system from the very beginning.

Maintain a Human-in-the-Loop

For AI systems that make significant decisions, design a process for human review and intervention to prevent errors and mitigate bias.
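As a minimal sketch of this pattern, low-confidence outputs can be routed to a reviewer instead of acted on automatically. The threshold, field names, and scenario below are assumptions for illustration, not a prescribed design:

```python
from dataclasses import dataclass

# Hypothetical threshold -- in practice this would come from the
# project's Risk Assessment Report, not be hard-coded.
REVIEW_THRESHOLD = 0.9

@dataclass
class Decision:
    subject: str
    outcome: str
    confidence: float
    needs_human_review: bool = False

def gate(decision: Decision) -> Decision:
    """Flag low-confidence decisions for human review instead of
    letting the system act on them automatically."""
    if decision.confidence < REVIEW_THRESHOLD:
        decision.needs_human_review = True
    return decision

d = gate(Decision("warranty-claim-42", "reject", confidence=0.62))
print(d.needs_human_review)  # True: routed to a reviewer, not auto-rejected
```

A real system would also log every gated decision and record the reviewer's final verdict, which supports the documentation and auditing requirements above.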

Document Everything

Keep detailed records of the data, algorithms, and models used in development for transparency and auditing purposes.

Define Clear Roles

Establish clear roles and responsibilities for AI development, deployment, and monitoring.

Test Robustly Before Deployment

Perform robust testing before production deployment to ensure reliability and safety.

Implement Abuse Prevention

Apply rate limiting and throttling to prevent misuse of AI systems.
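One common way to meet this requirement is a token-bucket limiter, which allows short bursts while capping the sustained request rate. The rate and capacity below are placeholder values for the sketch, not mandated limits:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows `rate` requests per
    second on average, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens accrued since the last call, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # assumed limits, not policy values
results = [bucket.allow() for _ in range(12)]
print(results.count(True))  # bursts are capped near the bucket capacity
```

In production this per-caller state would typically live in shared infrastructure (for example, an API gateway) rather than in application code.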

Establish an Incident Response Plan

Maintain incident response plans for AI failures or misuse to ensure rapid and effective remediation.

You Should Not

Don't Train on Unvetted Data

Do not train AI models on data that has not been approved or that may contain biases related to race, gender, age, or other protected characteristics.

Don't Deploy Without Approval

Do not integrate or launch any AI system into production without formal approval from IT, CISO, Compliance, and Legal.

Don't Build Opaque Systems

Avoid developing AI systems where the decision-making process is opaque. Strive for explainability, transparency, and interpretability.

Section 5

Approved & Recommended AI Tools

Hisense Europe provides access to the following approved AI tools. Only use tools listed below for work-related tasks.

Microsoft 365 Copilot

Approved

An AI assistant integrated into the Microsoft 365 suite. Useful for tasks within the Microsoft ecosystem including Word, Excel, Teams, and Outlook.

Approved for all employees.

SharePoint Agents / Copilot Agents

Approved

AI-driven assistants for finding information within SharePoint or the Microsoft environment.

Ideal for internal knowledge retrieval.

GitHub Copilot

Recommended

AI-powered code completion and suggestion tool for software developers. Assists with writing, reviewing, and documenting code.

For development teams only.

HiStar / HiGPT

Approved

Hisense's internal, privatised AI capability platform. Offers a secure environment for Q&A, document analysis, and content generation.

Recommended for Hisense Global access users.

Section 6

Governance & Reporting

Risk Assessment Reports

All new AI projects must begin with a Risk Assessment Report. This report must be submitted to [email protected] for review before development commences. The report should outline the intended purpose of the AI system, the data it will use, and an analysis of potential risks, including:

Data privacy and security vulnerabilities
Potential for biased or unfair outcomes
Operational risks if the AI fails or produces incorrect results
Reputational risks

Contact

Contact & Incident Reporting

IT Security Team

For all questions, to submit a risk assessment, or to report a security incident, please contact the IT Security team directly.

[email protected]

Questions & Guidance

General AI policy questions and clarifications

Risk Assessments

Submit reports before starting new AI projects

Security Incidents

Report vulnerabilities, breaches, or misuse immediately