The Challenges and Opportunities of Artificial Intelligence for Businesses

How to navigate the ethical, legal, business and social implications of AI, and how the EU Act on AI will influence them

Artificial intelligence (AI) is transforming the world of business, creating new opportunities for growth, efficiency, new ways of working and competitiveness. However, AI also poses significant challenges and risks, such as ethical dilemmas, human rights violations, social disruption and legal uncertainty. How can businesses ensure that they use AI responsibly and in compliance with the law, including the EU Act on AI? How can they benefit from the EU Act on AI, which aims to establish a common framework for trustworthy and human-centric AI in the European Union? As a general-purpose technology, AI is global and affects everyone. In this blog post, we explore these questions and provide guidance for businesses that want to embrace AI ethically and turn it into a business asset.

The Threats and Risks of Artificial Intelligence
AI is a powerful and fast-evolving technology that can be applied to almost any domain, industry or sector, such as health, education, transport, finance, security, entertainment or the military. However, AI also poses significant threats and risks for individuals, organizations, businesses and society. Some of the main challenges and concerns related to AI are:
  • Ethical issues: AI can raise ethical questions about the values, principles and norms that guide its development and use. For example, how can we ensure that AI respects human dignity, autonomy, privacy and fairness? How can we prevent AI from discriminating, manipulating or harming people? How can we balance the benefits and risks of AI for different stakeholders and groups?
  • Human rights issues: AI can affect the protection of human rights, such as liberty, security, privacy, expression, education and work. For example, how can we ensure that AI does not infringe on people's rights to privacy and data protection, especially when it involves personal or sensitive data? How can we ensure that AI does not violate people's rights to freedom of expression and information, especially when it involves content moderation, censorship or disinformation, including AI-generated hallucinations?
  • Social issues: AI can have social impacts, such as changing the nature of work, education, communication and culture. For example, how can we ensure that AI does not create or strengthen social inequalities, such as the digital divide, the gender gap or the skills gap? How can we ensure that AI does not erode social cohesion, trust and democracy, especially when it involves polarization, radicalization or manipulation? How can we ensure that AI does not threaten social diversity, identity and creativity, especially when it involves homogenization, stereotyping or imitation?
  • Legal issues: AI can create legal uncertainty, such as the lack of clarity, consistency, and accountability in the regulation and governance of AI. For example, how can we ensure that AI complies with the existing laws and regulations, such as the EU General Data Protection Regulation (GDPR), the EU Charter of Fundamental Rights or the EU Product Liability Directive? How can we ensure that AI is subject to effective oversight, audit and enforcement mechanisms, such as the EU Data Protection Authorities, the EU Fundamental Rights Agency or the EU Court of Justice? Who is liable and accountable for AI actions and outcomes, especially when it involves autonomy, complexity or unpredictability?

These are many questions, and there are still more. As the technology continues to evolve, new questions will keep arising.

How to Address the Threats and Risks of Artificial Intelligence
To address the threats and risks of AI, businesses need to adopt a responsible and ethical approach to AI development and use, based on the principles of trustworthiness and human-centricity. Some of the key steps and measures that businesses can take to achieve this are:
  • Ensure an AI governance system: Businesses should establish an AI governance system, a set of policies and standards that regulates and monitors AI development and use within the company or organization. It should also include the mechanisms and tools that enable oversight, auditing and enforcement of the AI framework and the AI impact assessment. AI governance should be transparent, accountable and participatory, involving the relevant stakeholders, such as employees, customers, partners, regulators, civil society and academia.
  • Implement an AI framework: Businesses should implement an AI framework, a set of guidelines and procedures that reflects the values, principles and norms that should guide AI development and use. The AI framework should also specify the roles and responsibilities of the different actors involved in the AI lifecycle, such as developers, providers, users and regulators. It should be aligned with the applicable ethical and legal frameworks, such as the EU Ethics Guidelines for Trustworthy AI, the OECD Principles on AI or the UN Guiding Principles on Business and Human Rights.
  • Conduct an AI impact assessment: Before developing or deploying an AI solution, businesses should conduct an AI impact assessment, a systematic and comprehensive analysis of the solution's potential impacts and risks. The assessment should also identify the mitigation measures and safeguards that can be implemented to prevent or minimize negative impacts and risks. Existing AI solutions should undergo an impact assessment as well, not only new ones.
  • Implement, deploy and monitor AI risk mitigation measures and safeguards: Based on the three activities described above, several risk mitigation measures and safeguards need to be implemented, deployed and monitored throughout the entire use of AI within the business, in order to increase the trustworthiness and adoption of the AI solutions.

What is Good Governance of Artificial Intelligence for Businesses
Good governance of AI for businesses is not only a matter of compliance and risk management, but also a matter of vision, strategy, opportunity and competitive advantage. By adopting a responsible and ethical approach to AI, businesses can benefit from the following advantages:
  • Enhance trust and reputation: By demonstrating that they use AI in a trustworthy and human-centric way, businesses can build trust and enhance their reputation among their customers, partners, staff, regulators and the public. This can also increase customer loyalty, satisfaction and retention, as well as market share, revenue and profitability.
  • Improve quality and performance: By ensuring that their AI solutions are aligned with the ethical and legal standards, businesses can improve the quality and performance of their AI solutions, such as their accuracy, reliability, robustness and security. This can also reduce the costs and risks associated with the AI solutions, such as the errors, failures, breaches or litigation.
  • Innovate and differentiate: By incorporating the ethical and social aspects into their AI development and use, businesses can innovate and differentiate their AI solutions, creating new value propositions and competitive advantages. This can also foster their creativity and diversity, as well as their collaboration and co-creation with the different stakeholders and experts.

How the EU Act on Artificial Intelligence Will Affect Businesses
The EU Act on AI is a proposed regulation that aims to establish a common framework for trustworthy and human-centric AI in the European Union. The EU Act on AI will affect businesses in several ways, such as:
  • Classify AI solutions according to their risk level: The EU Act on AI will classify AI solutions into four categories, based on their potential impact and risk regarding ethical, human rights, social and legal aspects. The four categories are: prohibited AI solutions, high-risk AI solutions, limited-risk AI solutions and minimal-risk AI solutions. Depending on the category, different rules and obligations will apply to the AI solutions, their providers and their users.
  • Set requirements and obligations for AI solutions: The EU Act on AI will set specific requirements and obligations for AI solutions, such as data quality, technical documentation, human oversight, transparency, accuracy, robustness and security. The providers and users of AI solutions will have to comply with these requirements and obligations, and also conduct an AI impact assessment, implement an AI framework and ensure an AI governance system.
  • Establish a European AI Board and national AI authorities (work in progress): The EU Act on AI will establish a European AI Board and national AI authorities, which will be responsible for the coordination, supervision, and enforcement of the EU Act on AI. The European AI Board and the national AI authorities will also provide guidance, support and advice to the providers and users of AI solutions, as well as to the other stakeholders and experts.
  • Create a European AI Trustmark and a European (Regulatory) AI Sandbox: The EU Act on AI will create a European AI Trustmark and a European AI Sandbox, voluntary schemes that aim to promote and facilitate the development and use of trustworthy and human-centric AI in the European Union. The European AI Trustmark will be a label certifying that an AI solution complies with the EU Act on AI and the EU Ethics Guidelines for Trustworthy AI. The European AI Sandbox will be a testing environment that provides access to data, infrastructure and expertise for the experimentation and validation of AI solutions.

The EU Act on AI is expected to enter into force in 2024, after approval by the European Parliament and the Council of the European Union. As an EU regulation it will apply directly in the member states, with its obligations taking effect in phases (target: April 2026). The EU Act on AI will have a significant impact on the AI landscape in the European Union, as well as on the global market. Therefore, businesses should start preparing for the EU Act on AI by assessing their current and future AI solutions, implementing best practices and standards for trustworthy and human-centric AI, and engaging with the relevant stakeholders and experts.
Are you ready to navigate the challenges and opportunities of AI for your business? Contact us today to learn how our consulting, coaching, and training services can help you ensure responsible and ethical AI development and use, comply with the EU Act on AI, and benefit from the advantages of good AI governance. Don't miss out on the chance to enhance your business's trust, reputation, performance, and competitiveness with our expert guidance.

About the author: Steven Ackx knows so much about AI, he's probably a bot himself!

Steven Ackx, DX Gladiators, February 2024
