August 22, 2023

From hype to action: charting your AI course

Strategy, values and regulation as guidance

It is critical for decision makers to gain a solid functional understanding of AI.

With the growing maturity of AI models and techniques, the technical art of the possible is no longer the most important ceiling.

Company-specific boundaries will come from a company's strategy and the values it adheres to.

Companies also need to take societal values and beliefs into account.

ChatGPT has fixed the public spotlight firmly on artificial intelligence

The launch of ChatGPT in November 2022 fixed the public spotlight firmly on artificial intelligence. Making the tool available to everyone, free of charge, through a web browser turned out to be a powerful move, allowing a very broad audience to play around with it and discover what it can do. Within five days of its launch ChatGPT hit the one million user mark, and by January 2023 it had surpassed the 100 million user milestone.

ChatGPT made quite a first impression as a very intuitive and easy-to-use search tool, with users able to interact with it by asking questions in normal, natural language. While not perfect, the accuracy of its answers across a very broad range of topics is convincing testimony to the significant leaps AI models are making in terms of maturity.

Those leaps did not come overnight. They are the result of years of research, training, testing and retraining: making the underlying GPT model knowledgeable on vast amounts of information, teaching it to communicate in human language, and guiding it to avoid toxic content as much as possible.

[Figure: Large language models - a brief view of the development history]

Corporate leaders across industries are taking notice 

The broad media coverage of ChatGPT has also drawn the attention of business leaders to the potential value, and especially the rapidly growing maturity, of artificial intelligence. They are taking notice of the ability of AI-based tools to serve as an instrument for digital transformation: whether to improve the current way of working (for example to increase productivity, raise customer intimacy, meet sustainability targets, improve commercial excellence, or boost expertise building and sharing), or to transform the company business model and open up new revenue streams by leveraging the strengths of AI.

However, the risks associated with the use of AI at scale in a corporate environment have not gone unnoticed either. Large language models (LLMs), of which GPT is an example, are not perfect. Even with the significant investment made in reinforcement learning from human feedback (RLHF), a residual risk of toxic or biased answers remains. And while training LLMs on vast amounts of data enables these models to answer very varied questions, the answers are not always fully accurate and can sometimes feel rather generic. It is also important to keep in mind that - depending on the choice of model and type of access - data provided by users to the model may be stored and used for retraining it, a particular concern in a corporate environment with strategically sensitive data.

Understanding AI as a necessary foundation for corporate decision making

So while LLM-based chat and search solutions are becoming an automatic go-to for many people in their private lives, and while the value of these solutions is significant for companies as well, integrating LLMs - and AI in general - into our professional daily lives requires more caution and conscious decision making.

Before taking a stance on how to use AI and what business value to aim for, it is critical for decision makers to gain a solid functional understanding of how AI models work, how they can be shaped to generate benefits for the company, and also what their drawbacks and attention points are. 

This understanding will set business leaders up for informed decision making, ensuring they understand the potential of AI and adopt a business-value-driven approach, focusing on the use cases with the highest potential for their company. In addition, it allows them to grasp the risks and challenges associated with AI - based not on hearsay but on objective information and knowledge - and to design risk mitigation fitted to their company and intended AI use. Finally, a solid understanding of AI is a necessity to outline, communicate and advocate a company AI story that employees are willing to commit to, and hence to lay the foundation for AI adoption.

Company and societal values as guidance on how to use AI

With the growing maturity of AI models and techniques, the technical art of the possible is no longer the most important ceiling to the corporate ambition in applying AI. 

Europe is working hard to draft the European AI Act, which will put boundaries in place on corporate AI use. The act calls for classification of AI use cases by risk profile: a number of use cases will be legally prohibited, and high-risk use cases will require a mandatory conformity assessment as well as compliance with requirements on the design, development and documentation of the solutions. Triggered by the GPT release and the resulting LLM wave, this use-case-based assessment is supplemented with more generic obligations for providers of foundation models and/or generative AI models, including open source providers.
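The tiered logic of the act can be illustrated with a minimal sketch. The tier names echo the draft act, but the mapping of example use cases to tiers below is purely illustrative - a hypothetical assumption, not a legal classification.

```python
# Illustrative sketch of risk-tier-based deployment gating.
# The use-case-to-tier mapping is hypothetical, not legal advice.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"  # legally banned use cases
    HIGH = "high"              # conformity assessment + documentation required
    LIMITED = "limited"        # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"        # no additional obligations

# Hypothetical example mapping, for illustration only.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def deployment_gate(use_case: str) -> str:
    """Return the compliance action implied by the use case's risk tier."""
    # Unknown use cases default conservatively to the high-risk tier.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.PROHIBITED:
        return "do not deploy"
    if tier is RiskTier.HIGH:
        return "conformity assessment required before deployment"
    if tier is RiskTier.LIMITED:
        return "deploy with transparency notice"
    return "deploy"
```

The conservative default for unclassified use cases reflects the spirit of the act: when in doubt, assess before deploying.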

At the same time, additional guidance and potential company-specific boundaries will come from a company's strategy and the values it adheres to. The use of AI to support, enhance or automate certain activities may be technically feasible, but a company may explicitly opt not to do so. A company with a brand image built around personal touch and human interaction would jeopardise its unique identity by adopting AI to fully automate its client-facing activities. Media companies that build their business model on opinionated and validated news reporting by professional journalists are declining the use of AI in the creation of new articles.

In addition to their own strategic choices and values, companies will be expected to take into account societal values and beliefs about what constitutes acceptable AI - something that differs by geography and is a subjective, evolving perception of individuals. Using AI algorithms to understand and influence human behaviour - for example aiming to increase online gaming over time, or to influence political voting behaviour - is generally deemed unacceptable. HR, employment and education related use cases are also receiving close scrutiny.

Building robust AI solutions that are reliable in various scenarios

Having defined the company's AI ambition, intended AI use and boundaries to that use, the next challenge lies in setting up AI solutions that are fit for purpose and will remain so over time, even when exposed to changing and potentially adverse situations.

This means first of all that solutions need to be accurate in executing the tasks they are set up to do, and that they need to remain accurate when the context in which they are used changes over time, or when they face explicit adversarial attacks.
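One common way to guard against such gradual degradation is to monitor accuracy on a rolling window of recent predictions and raise an alert when it drops. The sketch below illustrates the idea; the window size and threshold are illustrative choices to be tuned per use case.

```python
# Minimal sketch of drift monitoring: track rolling accuracy of a
# deployed model and flag when it falls below a chosen threshold.
from collections import deque

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def rolling_accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def drift_alert(self) -> bool:
        # Only alert once the window holds enough observations.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rolling_accuracy() < self.threshold)
```

In practice such an alert would trigger investigation and possibly retraining, closing the loop between deployment and model maintenance.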

Even if varying by use case, AI solutions are typically designed such that they provide insight into the drivers of a model outcome, as well as a quantified indication of the reliability of that outcome. This model transparency and explainability enable users to understand and trust the AI solution.
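For a simple linear scoring model, such transparency can be built in directly: each feature's contribution to the score can be reported alongside a confidence estimate. The feature names and weights below are illustrative assumptions, not taken from any real model.

```python
# Sketch of a transparent prediction: return the model's confidence
# together with per-feature contributions, so a user can see which
# drivers pushed the outcome up or down. Weights are illustrative.
import math

WEIGHTS = {"tenure_years": 0.8, "open_complaints": -1.2, "monthly_usage": 0.05}
BIAS = -1.0

def explain_prediction(features: dict) -> dict:
    # Contribution of each feature = weight * feature value.
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-score))  # confidence of positive outcome
    # Rank drivers by the magnitude of their contribution.
    drivers = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {"probability": probability, "drivers": drivers}
```

For complex non-linear models the same kind of attribution requires dedicated explainability techniques, but the output presented to the user - a confidence figure plus ranked drivers - stays the same.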

When training AI models on historical data, attention needs to be paid to detecting and eliminating any bias or lack of diversity present in that historical training data. Doing so safeguards the fairness and objectivity of the resulting model.

Finally it is important to keep in mind that an AI solution does not need to automate a full process or activity to be of value. Making conscious decisions around human oversight and intervention is a powerful way to ensure model reliability, as well as to build user trust in the solution - in addition to the choice to maintain human execution for strategic, company value and/or societal reasons.
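Such a human-in-the-loop design can be as simple as routing low-confidence predictions to a human reviewer while automating the confident ones. The confidence cutoff below is an illustrative assumption to be tuned per use case.

```python
# Sketch of conscious human oversight: automate only confident
# predictions and route uncertain ones to a human reviewer.
def route(prediction: str, confidence: float, cutoff: float = 0.8) -> dict:
    if confidence >= cutoff:
        return {"decision": prediction, "handled_by": "model"}
    # Below the cutoff, the model's output becomes a suggestion,
    # and the final decision is left to a human.
    return {"decision": None, "handled_by": "human_review",
            "model_suggestion": prediction}
```

Raising or lowering the cutoff is a direct lever on the trade-off between automation rate and human control, which makes it a business decision as much as a technical one.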

We are in the middle of the third AI summer, as is clear from the high-paced technological innovation and the exuberant expectations for corporate applications. To realise the business potential of AI, it will be critical for decision makers to gain a solid functional understanding of AI as a basis for decision making - especially since the growing maturity of AI models and techniques is lifting the ceiling of the technical art of the possible, and company strategy and values, together with societal values and beliefs, are becoming more important guidance in defining a company's AI ambition, intended use, and potential boundaries on that use.