The Large Language Model (LLM) behind GPT applications should be made to work for enterprise needs too. Image Credit: REUTERS

It’s good to chat. Of late, it would appear that it’s good to be good at chatting. Especially if you are an AI bot.

People everywhere - many of whom were not aware of how far Natural-Language Processing (NLP) had come - marveled at the capabilities of OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard. The underlying technology of these AI superstars is the Large Language Model (LLM), a neural network built by churning through volumes of text to create a system able to synthesize human-like text in response to prompts from users.

The interest we have seen, however, is in products that are largely for consumers. They allow one-to-one, real-time interaction through ad-hoc typing or pasting of text in a Web interface. But the potential for models that can be plugged into enterprise platforms is phenomenal.

How do we approach adoption of these powerful technologies so they can become part of our Everyday AI culture?

There are two ways to accomplish this. The first would be APIs (application programming interfaces, which allow bespoke code to make calls to an external service at run-time) exposed by cloud-native services. The second would be self-managed open-source models.

Let’s chat

Providers like OpenAI, AWS, and GCP already offer public model-as-a-service APIs. These have low entry barriers, and junior developers can get up to speed with their code frameworks within minutes. API models tend to be the largest and most capable versions of LLM, allowing more sophisticated and accurate responses on a wider range of topics.
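
To illustrate how low that barrier is, here is a minimal sketch of a hosted API call using OpenAI's official Python client. The model name and prompts are placeholders, not recommendations.

```python
# A minimal sketch of calling a hosted model-as-a-service API.
# Assumes the official OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever hosted model fits
    messages=[
        {"role": "system", "content": "You draft customer-support replies."},
        {"role": "user", "content": "Summarise this complaint in two lines: ..."},
    ],
)
print(response.choices[0].message.content)
```

A few lines of bespoke code, and the full weight of a frontier model is available at run-time. That is exactly the appeal.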

The hosted nature of the API may mean that data residency and privacy problems arise — a significant issue for privately owned GCC companies when it comes to regulatory compliance. There are also cost premiums to an API, as well as the risk of a smaller provider going out of business and the API therefore ceasing to operate.

What about an open-source model, managed by the organization itself? There is a wide range of such models, each of which can be run on premises or in the cloud. Enterprise stakeholders have full control over the availability of the system.

While costs may be lower for the LLM itself, setting up and maintaining one necessitates the onboarding of expensive talent, such as data scientists and engineers.
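
By way of comparison, here is a minimal sketch of self-hosting with the open-source Hugging Face transformers library. The model name is a placeholder for whichever open-weight LLM the organization has vetted and licensed.

```python
# A minimal sketch of running a self-managed open-source model locally.
# Assumes pip install transformers torch; the model name is a placeholder
# for whichever open-weight LLM the organization has vetted and licensed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative choice only
)

result = generator(
    "Summarise this complaint in two lines: ...",
    max_new_tokens=100,
)
print(result[0]["generated_text"])
```

Everything here runs on infrastructure the organization controls, which is precisely the data-residency benefit, and precisely where the staffing cost comes from.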

Different use cases within a single organization may require different approaches. Some may end up using APIs for one use case and self-managed, open-source models for another. For each project, decision-makers must look to a range of factors.

They must consider risk tolerance, particularly when using the technology for the first time, and choose a business challenge where the department can absorb some failure. Looking to apply LLM tech in an operations-critical area is ill-advised. Instead, look to provide a convenience or efficiency gain to a team.

Finally, traditional NLP techniques that don’t rely on LLMs are widely available and can be well adapted to specific problems.
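
For example, a routine ticket-routing task can often be handled with a classical pipeline instead of an LLM. The sketch below uses scikit-learn; the tickets and labels are invented purely for illustration.

```python
# A minimal sketch of a traditional NLP approach: TF-IDF features plus a
# linear classifier for routing support tickets. Assumes scikit-learn;
# the tickets and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tickets = [
    "My invoice total looks wrong",
    "The app crashes when I log in",
    "How do I update my billing address?",
    "Login page shows a blank screen",
]
labels = ["billing", "technical", "billing", "technical"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

print(model.predict(["The invoice shows the wrong billing address"]))
# expected: ['billing']
```

A model like this is cheap to train, easy to audit, and entirely under the organization's control. Not every text problem needs a billion-parameter answer.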

Importance of moderation

Every LLM product should be subject to human review. In other words, the technology should be seen as an extraordinary time-saver for first drafts, but organizations should retain their review structure to ensure accuracy and quality.
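
One lightweight way to encode that review structure is an approval gate in the publishing workflow. The sketch below is entirely hypothetical; Draft, review, and publish stand in for whatever systems an organization already uses.

```python
# A hypothetical human-in-the-loop gate: model output stays a draft until
# a named reviewer signs it off. These names are invented for illustration,
# not taken from any real library.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def review(draft: Draft, reviewer: str, approve: bool) -> Draft:
    # The human reviewer, not the model, makes the publish/reject decision.
    draft.approved = approve
    draft.reviewer = reviewer
    return draft

def publish(draft: Draft) -> None:
    if not draft.approved:
        raise RuntimeError("Unreviewed LLM output must not be published.")
    print(f"Published (signed off by {draft.reviewer}): {draft.text}")

draft = Draft(text="Model-generated first draft of a customer reply ...")
publish(review(draft, reviewer="j.smith", approve=True))
```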

Let LLMs work to their strengths: they are best used for generating sentences or paragraphs of text. It is also necessary to have a clear definition of what success looks like. What business challenge is being addressed, and what is the preferred (and preferably measurable) outcome? Does LLM technology deliver it?

Discussions of business value bring us neatly to a further consideration that applies to the entire field of AI and to matters of ESG — responsible use. Organizations that build or use LLMs are duty-bound to understand how the model was built.

Every machine-learning and neural-network model that has ever existed was only as accurate, equitable, and insightful as the data used in its construction. If there was bias in the data, then there will be bias in the LLM products.

Responsible AI does not just cover the general public. What of the employee? LLM builders must have an appreciation of the model’s impact on end users, whether these are customers or employees.

For example, ensuring that users know they are interacting with an AI model is critical. It is helpful to be very plain with users on how and where models are used and be open with them about drawbacks, such as those regarding accuracy and quality. The principles of responsible AI dictate that users have the right to full disclosure so that they can make informed decisions on how to treat the product of a model.

Governance and accountability

Many of these issues are addressed through a robust governance framework. Processes for approving which applications are appropriate for each technology are an indispensable part of an Everyday AI culture.

The rules of responsible AI make it plain that individual data scientists are not the right decision-makers for which models to apply to which use cases. Their technical expertise is invaluable input, but they may not have the right mindset to account for wider concerns.

As with all business decisions, it is important not to run and join the LLM procession just because you hear the band playing. Wait, watch, evaluate. And then make the moves that are right for your organization.

LLM has a place in the modern enterprise. Make sure you place it well.