Five Challenges in Implementing AI in Automation

Implementing AI in automation and process orchestration workflows has tremendous potential to drive efficiency, process effectiveness, and better business outcomes. At the same time, real-world applications of AI carry real challenges and risks alongside the rewards. Beyond the deeper, existential threats identified by some AI leaders, which are a topic for an entirely different conversation, there are practical business considerations to keep in mind before adding AI to your hyperautomation tech stack.
However, with the right expertise, partnerships and human supervision, AI systems can be incredibly effective in an automation context. Here are some of the key challenges that come along with AI implementations in automation, and how to solve them.

1. Availability of machine learning-ready data

While it sounds like a simple problem, preparing data for a machine learning model is a huge challenge. Data scientists and data engineers typically spend 80% of their effort on data preparation. Without clean data designed for AI, it's impossible to train a model to production readiness. Unfortunately, failures happen all too often: Gartner found that 85% of AI projects fail to deliver, and only 53% of projects make it from prototype to production.
Many organizations offer machine learning-ready datasets for automation purposes, such as process execution data used to continuously improve process models. These datasets shorten training time and significantly reduce the effort required from data scientists and data engineers. Any time these high-value employees can reclaim is valuable: they are often in short supply and high demand.
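To make the data-preparation burden concrete, here is a minimal sketch of the kind of cleanup that consumes so much of that 80%: deduplicating and filtering raw process-execution events before they can be used for training. The record layout and field names are hypothetical, chosen only to illustrate the step.

```python
from datetime import datetime

# Hypothetical raw process-execution records, as they might be exported
# from an orchestration engine. Field names are illustrative only.
raw_events = [
    {"case_id": "c1", "activity": "Review",  "ts": "2023-05-01T10:00:00"},
    {"case_id": "c1", "activity": "Review",  "ts": "2023-05-01T10:00:00"},  # exact duplicate
    {"case_id": "c2", "activity": "",        "ts": "2023-05-01T11:30:00"},  # missing label
    {"case_id": "c2", "activity": "Approve", "ts": "2023-05-01T12:15:00"},
]

def prepare(events):
    """Drop duplicates and incomplete rows, and parse timestamps."""
    seen, clean = set(), []
    for e in events:
        key = (e["case_id"], e["activity"], e["ts"])
        if key in seen or not e["activity"]:
            continue  # skip exact duplicates and rows missing an activity label
        seen.add(key)
        clean.append({**e, "ts": datetime.fromisoformat(e["ts"])})
    return clean

cleaned = prepare(raw_events)
print(len(cleaned))  # → 2 usable rows out of 4
```

Even in this toy version, half the raw rows are unusable; real event logs add inconsistent timestamp formats, schema drift, and cross-system joins on top.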

2. Accuracy issues and bias in models

Accuracy and bias are two critical, recurring issues in AI that require human supervision. Generative AI applications, for example, are prone to hallucination: confidently presenting fabricated facts derived from their training data. In the same vein, a biased dataset fed into a machine learning model will produce biased results. If a financial services firm uses an AI-driven automated system to accept or reject credit applications, for example, it is essential to guard against the well-documented, systemic biases against women and people of color that may be embedded in the training dataset.
As we progress toward AI-driven decision-making, it's critical for humans to remain in the loop, verifying the results generated by machine learning algorithms and checking for bias and other forms of inaccuracy. Keeping humans in the loop is also a critical step toward re-training algorithms to perform more effectively in a production environment.
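One common way to keep humans in the loop is to act automatically only on high-confidence model outputs and route everything else to a reviewer. The sketch below illustrates the pattern; the threshold value and decision labels are assumptions, not a reference to any particular system.

```python
# Minimal human-in-the-loop routing sketch (illustrative only).
# Predictions below the confidence threshold go to a human queue
# instead of being executed automatically. The 0.90 cutoff is an
# assumption; real systems tune it per use case and risk appetite.
REVIEW_THRESHOLD = 0.90

def route(prediction: str, confidence: float) -> str:
    """Return 'auto' for high-confidence results, 'human_review' otherwise."""
    return "auto" if confidence >= REVIEW_THRESHOLD else "human_review"

decisions = [("approve", 0.97), ("reject", 0.62), ("approve", 0.91)]
routed = [(pred, route(pred, conf)) for pred, conf in decisions]
print(routed)  # the 0.62-confidence rejection is flagged for human review
```

The reviewer's corrections on the flagged cases become exactly the labeled examples needed to re-train the model, which is why this loop improves production performance over time.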

3. Security

Many large language models and other machine learning models have been trained using a massive corpus of online data generated by internet users. For example, publicly available Amazon and Yelp reviews are used to train sentiment analysis algorithms.
In an enterprise context, using publicly available models like ChatGPT may put sensitive data, such as personally identifiable information (PII) or intellectual property, at risk. It's important to adhere to company data security policies when using these tools. To avoid these issues, many organizations develop their own proprietary machine learning models based on internal datasets, which reduces the risk that corporate data falls into the wrong hands.
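As one illustration of such a policy in practice, sensitive fields can be scrubbed from a prompt before it ever leaves the company network. The sketch below uses deliberately naive regular expressions; a real deployment would rely on a vetted data-loss-prevention tool, and the patterns here are assumptions for illustration only.

```python
import re

# Naive PII-scrubbing sketch (illustrative, not production-grade):
# mask email addresses and long digit runs that look like account
# numbers before a prompt is sent to an external model.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
LONG_NUMBER = re.compile(r"\b\d{9,}\b")

def scrub(prompt: str) -> str:
    """Replace likely PII with placeholder tokens."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return LONG_NUMBER.sub("[NUMBER]", prompt)

print(scrub("Summarize the ticket from jane.doe@example.com about account 1234567890."))
# → Summarize the ticket from [EMAIL] about account [NUMBER].
```

Regex matching catches only pattern-shaped identifiers; names, addresses, and free-text secrets need dedicated tooling, which is one reason many enterprises prefer models hosted entirely on internal data.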

4. Legal risks

Regulating AI is an ongoing issue globally, and the legal field continues to be shaped by emerging technologies including generative AI. For example, many have raised copyright concerns around the use of AI-generated text and images. In the open source world, automated code generators have raised concerns around licensing. Some of the key concerns lie in the lack of traceability of generative AI systems—in other words, it’s hard to know where the code came from and how to attribute it to its original creator.
If organizations are using automated code generators to develop code for process models, for example, it’s best to proceed with caution when it comes to entering proprietary code or leveraging open source software.

5. Maturity

Certain technologies, such as augmented intelligence systems that automate decision-making, may not be fully ready for prime time quite yet. These technologies often require blended datasets from multiple sources to make effective decisions. Many teams don’t have the capacity to use these systems in production, whether that’s due to resource limitations or a lack of applicable training data.
However, as organizations mature and can use augmented intelligence in a human-supervised setting, these systems will become more effective at automating certain decisions. Such systems can help improve human workflows, allowing employees to allocate their time more efficiently.

While these challenges should certainly weigh into any decision to implement AI, they shouldn't inhibit an organization's willingness to experiment. AI, when leveraged alongside process orchestration, has the potential to increase the degree of automation and, in turn, improve business operations and customer experiences. From continuous process improvement, to automating decisions, to augmenting or accelerating human workflows, the possibilities for AI in this sector are exciting and wide open.

About The Author

Jakob Freund is co-founder and CEO of Camunda. He is responsible for the company’s vision and strategy. He’s also the driving force behind Camunda’s global growth and takes responsibility for the company culture. As well as holding an MSc in Computer Science, he co-authored the book “Real-Life BPMN” and is a sought-after speaker at technology and industry events.
