Ethical AI Policy

Last updated 22 April 2024

This Ethical AI Policy for Kreoh Limited (“Kreoh”) sets out our policies and procedures for developing, deploying, and managing safe, responsible, and ethical AI solutions in compliance with EU regulations, including the General Data Protection Regulation (“GDPR”) and the forthcoming AI Act, as well as any relevant national laws governing digital and AI technologies.

  1. Introduction

At Kreoh, we believe the arrival of generative AI is a lightbulb moment for the legal industry. We are delighted to work with innovative legal teams, helping them embrace generative AI technologies as tools that give them a competitive edge. But the power of AI comes with a responsibility to use it ethically and transparently.

That is why we have developed this Ethical AI Policy - to provide clear guidance for our team and clients on how to harness AI in a way that aligns with our values.

Kreoh is committed to ethical practices in the development and application of artificial intelligence in the legal sector. This Ethical AI Policy outlines our approach and principles in ensuring that our AI solutions are developed and used responsibly.

Our focus is on maintaining transparency, accountability, and adherence to ethical standards throughout our AI systems' lifecycle. This document serves as a guide for our team and a commitment to our clients, ensuring that all AI technologies we engage with are managed with the utmost consideration for ethical implications.

In the following sections, we detail our core ethical principles, our methodologies in AI development and deployment, and our strategies for ongoing improvement and engagement with stakeholders. This statement is a reflection of our dedication to ethical responsibility in the evolving landscape of AI in legal technology.

  2. Core Values and Principles

Transparency - We are committed to clarity about how our AI systems function, the nature of the data they use, and their decision-making processes. This includes providing understandable explanations of AI outputs and decisions.

Accountability - We take full responsibility for our AI systems. This involves ensuring proper oversight, addressing any adverse impacts promptly, and being responsive to feedback and concerns related to our AI applications.

Fairness and Non-Discrimination - Our AI solutions are designed and tested to avoid biased outcomes. We strive to:

  • Regularly review and update our algorithms to prevent discriminatory biases.
  • Ensure diverse data sets to minimise skewed results.
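To illustrate the kind of automated check such algorithm reviews can include, the sketch below computes a simple demographic-parity gap - the largest difference in positive-outcome rates between groups. The data, group labels, and review threshold are purely hypothetical; in practice the metrics and thresholds would be agreed with clients and reviewed by our ethics oversight team.

```python
# Hypothetical sketch of one automated bias check: comparing
# positive-outcome rates across groups (demographic parity).
from collections import defaultdict

def positive_rates(records):
    """Return the share of positive outcomes per group.

    records: iterable of (group, outcome) pairs, outcome in {0, 1}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = positive_rates(records)
    return max(rates.values()) - min(rates.values())

# Illustrative records only: (group, outcome), 1 = positive decision.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = parity_gap(sample)
# A system would be flagged for human review if the gap exceeded
# an agreed threshold, e.g. 0.1.
```

A single metric like this is never sufficient on its own; it is one input into the broader review and testing processes described in this policy.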

Privacy and Data Governance - We adhere to stringent data privacy and security standards, respecting user confidentiality and ensuring compliance with GDPR, national data protection legislation and other relevant data protection laws. Key measures include:

  • Secure data handling and storage practices.
  • Clear data usage policies and user consent protocols.

Safety and Reliability - We prioritise the safety and reliability of our AI systems, ensuring they perform as intended and are resilient to manipulation and errors. Continuous monitoring and testing are integral to this commitment.

  3. Development and Deployment

Our process for developing and deploying AI solutions is guided by ethical considerations at every stage:

Responsible Development -

  • We involve a diverse range of perspectives in our development teams to anticipate and address a broad spectrum of ethical concerns.
  • Ethical risk assessments are integral to our development process, ensuring that potential issues are identified and mitigated early.

Impact Assessment -

  • In line with the EU's AI Act, we aim to conduct thorough assessments to understand the potential societal, ethical, and legal impacts of our AI solutions, taking particular care to avoid creating high-risk AI systems.
  • We engage with external experts and stakeholders to gain a comprehensive view of the implications of our technologies.

Testing and Validation -

  • Rigorous testing for accuracy, fairness, and safety is a cornerstone of our deployment strategy.
  • We employ both automated and manual testing methods, and our validation processes are transparent and open to scrutiny.

In these stages, our goal is to ensure that our AI solutions not only meet the highest standards of technical excellence but also align with our ethical commitments and societal expectations.

  4. Security

Our AI security practice includes:

  • Rigorous vetting of any third-party AI applications or language models by our IT team before using such applications as part of our AI systems. Only authorised tools may be used.

  • Ongoing cybersecurity training for all staff using AI - including how to spot AI prompt injections, phishing, social engineering, and other attacks targeting these systems.

  • Working closely with any third-party AI vendors to understand their evolving security protocols for protecting against threats like data poisoning, model extraction, and adversarial examples.

  • Conducting in-house audits and risk assessments of our AI systems to catch any vulnerabilities and quickly patch them.

  • Monitoring emerging cyber risks associated with artificial intelligence and adjusting our infosec procedures accordingly.

  • Having an incident response plan in place in the unlikely event an AI-related breach occurs.

  5. Stakeholder Engagement

Engaging with stakeholders is a critical aspect of our ethical AI framework. Our strategies include:

Collaboration with Clients - We work closely with our clients to understand their ethical concerns and requirements. This collaborative approach ensures that our AI solutions are aligned with their values and ethical standards.

User Education - We are committed to educating our users about the capabilities and limitations of our AI technology. This includes providing clear guidelines on the effective and responsible use of our AI tools.

Feedback Mechanisms -

  • We encourage feedback from users, clients, and other stakeholders to continuously improve our AI solutions.
  • We have established channels for reporting concerns or suggestions related to AI ethics, ensuring that feedback is reviewed and acted upon.

  6. Compliance with Laws and Regulations

Our commitment to ethical AI is complemented by strict adherence to legal and regulatory standards. We have internal lawyers who undertake the following:

  • We ensure full compliance with all relevant local, national, and international laws and regulations, including but not limited to GDPR, data protection, and privacy laws.

  • Our team stays abreast of evolving legal landscapes to anticipate and adapt to changes in regulations that impact AI technologies.

  • We conduct regular audits to verify compliance and implement necessary adjustments in a timely manner.

In this regard, our ethical AI practices are not only about adhering to the current standards but also about being proactive in responding to new legal and ethical challenges in the AI domain.

  7. Continuous Improvement and Monitoring

Our commitment to ethical AI is an ongoing process, characterised by continuous improvement and vigilant monitoring:

Ongoing Monitoring - We regularly monitor our AI systems to ensure they operate as intended and adhere to our ethical standards. This includes periodic reviews to identify and rectify any unintended biases or errors.

Adapting to Technological Advances - AI technology is rapidly evolving. We stay informed about the latest developments and research in AI to continually enhance our ethical practices and technological capabilities. We align our practices with the evolving EU framework for trustworthy AI, including adherence to the guidelines set forth by the European Commission's High-Level Expert Group on Artificial Intelligence.

Stakeholder Engagement - Continuous dialogue with stakeholders, including legal professionals, clients, and technology experts, helps us refine and update our AI solutions in line with ethical considerations and societal needs.

  8. Accountability and Reporting

Accountability is a key pillar of our ethical AI framework:

Internal Accountability Mechanisms - We have established clear internal processes for decision-making and accountability in the development and deployment of AI technologies. This includes designated teams responsible for ethical AI oversight.

External Reporting - We are committed to transparency in our operations and will provide regular reports on our AI ethics initiatives and compliance. These reports are accessible to our clients and the public, fostering trust and accountability. Our feedback channels are designed in accordance with the GDPR's requirements for transparency and user engagement.

Incident Response - In the event of any ethical concerns or breaches, we have a robust incident response plan to address and rectify issues promptly and transparently.

Contact Information

For more information, questions, or concerns about our approach to AI ethics, please contact us at:


© 2024 Kreoh AI Limited. All rights reserved.