The pandemic has forced executives to rethink how they deploy new AI initiatives and optimize AI resources.
AI has the potential to drive tremendous business results, but far too many organizations have treated it as an isolated, shiny new tool rather than integrating it at scale.
A PwC survey shows that 52% of executives are ramping up their AI adoption in the wake of Covid-19. As they rethink how they deploy new initiatives and optimize resources, building a framework to operationalize AI has become a top priority.
Operationalizing AI is all about taking the small AI projects you’ve been experimenting with and applying them to real business problems by replicating them at speed and scale. It’s about going from data to insights. Here are three steps organizations can take to rethink how they operationalize AI in 2021:
Automating AI (AutoAI and AutoML)
Though automating AI might sound redundant, the analogy of baking a pizza can bring a lot of clarity to the concept. Using today’s ovens, you can automate much of the pizza-baking process in terms of timing and temperature, but you still have to keep an eye on it. There’s a certain art to balancing the “automated” baking elements while still manually ensuring that the pizza is baking properly.
When building an ML model, there is a lot of raw data that needs to be orchestrated and systematically processed. Just as with the pizza, the art is in automating much of this process while still maintaining human oversight and quality control.
AutoML uses machine learning algorithms and real-time data to continuously improve the target ML model’s performance. Humans can then ensure that bias is not creeping into the model and that it maintains certain performance standards. As humans and machines continue to collaborate to improve the model over time, they create a virtuous feedback loop.
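The idea can be sketched in a few lines of code. The example below is a minimal, illustrative take on the AutoML loop described above: the machine automatically searches candidate hyperparameters and ranks them by held-out error, while a human reviews the ranking before anything ships. The forecaster, data, and parameter names are all hypothetical stand-ins for a real pipeline.

```python
# Minimal sketch of the AutoML idea: automatically score candidate
# hyperparameters on held-out data and surface a ranking for human
# review. The model and data here are toy placeholders.

def train_forecaster(alpha, series):
    """Fit a one-step exponential smoothing forecaster (toy model)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level  # the forecast for the next step

def evaluate(alpha, train, holdout):
    """Mean absolute error of one-step forecasts over the holdout set."""
    history = list(train)
    error = 0.0
    for actual in holdout:
        error += abs(train_forecaster(alpha, history) - actual)
        history.append(actual)
    return error / len(holdout)

def auto_select(train, holdout, candidates):
    """The 'automated' part: score every candidate and rank them."""
    return sorted((evaluate(a, train, holdout), a) for a in candidates)

if __name__ == "__main__":
    train = [10, 12, 11, 13, 12, 14, 13, 15]
    holdout = [14, 16, 15]
    ranking = auto_select(train, holdout, [0.1, 0.3, 0.5, 0.7, 0.9])
    best_error, best_alpha = ranking[0]
    # Human oversight: inspect the full ranking (and check for bias
    # or degenerate fits) before promoting best_alpha to production.
    print(f"best alpha={best_alpha}, MAE={best_error:.2f}")
```

In a production setting, the search space, scoring metric, and review checkpoints would come from an AutoML platform rather than hand-rolled code, but the division of labor is the same: the machine explores, the human approves.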
Make AI Explainable (XAI)
Responsible AI is a growing field of research within AI, and its frameworks help build trust in your AI deployments. Ignoring such a framework can undermine your relationships with customers and employees alike. For example, a recent rift at Google erupted when an employee published a paper highlighting bias in its AI systems.
One exciting solution that could help identify and root out such bias is called explainable AI (XAI), and it’s making AI more easily understandable. XAI is a technique that justifies an algorithm’s outputs in a simple and clear way. For example, when it comes to image classification, explainable AI would classify an image and explain its rationale for doing so, helping researchers understand how their algorithm works while increasing trust that the system is bias-free.
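To make the concept concrete, here is a deliberately simple illustration of the XAI idea, not any particular XAI library: a linear classifier whose score decomposes exactly into per-feature contributions, so every prediction comes with a ranked rationale. The features, weights, and "cat" label are invented for the example.

```python
# Toy illustration of explainable AI (XAI): a linear model whose
# prediction decomposes into per-feature contributions, making the
# "why" behind each decision visible. Weights/features are made up.

def predict_with_explanation(weights, features):
    """Return (score, explanation), where the explanation ranks each
    feature's contribution to the score by absolute impact."""
    contributions = {
        name: weights[name] * value for name, value in features.items()
    }
    score = sum(contributions.values())
    # The top entries are the model's "justification" for its output.
    explanation = sorted(
        contributions.items(), key=lambda kv: abs(kv[1]), reverse=True
    )
    return score, explanation

if __name__ == "__main__":
    weights = {"has_fur": 2.0, "whisker_count": 0.1, "barks": -3.0}
    features = {"has_fur": 1.0, "whisker_count": 12.0, "barks": 0.0}
    score, why = predict_with_explanation(weights, features)
    print("cat" if score > 0 else "not cat")
    for name, impact in why:
        print(f"  {name}: {impact:+.2f}")  # rationale, biggest first
```

Real XAI techniques such as attribution methods for deep networks tackle far more complex models, where the mapping from input to output is not linear, but the goal is the same: attach a human-readable rationale to each prediction.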
As deep learning algorithms ingest increasingly massive amounts of data, it will become even more important to understand how and why they make decisions. For instance, the natural language processing model called GPT-3 has made waves for its sheer scale. It has over 175 billion weighted connections between words known as parameters. The model was trained on much of the English language on the internet, but, as a result, it has recently come under scrutiny for widespread bias. For example, the OpenAI researchers themselves noted that 83% of 388 occupations sampled were more likely to be associated with males than females.
Explainable AI could help researchers understand why the model is behaving in this way, so they can begin to fix the problem.
Adopt An AIOps Framework
Around 20 years ago, DevOps forever changed the way applications were developed, deployed and managed. DevOps standardized workflow procedures and pipelines, leading to dramatically improved efficiency and delivery times.
Today, AIOps and MLOps are doing the same thing for artificial intelligence. When Gartner first coined the term AIOps four years ago, it likely had no idea how much of an impact AIOps and MLOps would have on business operations in such a short time. In fact, Cognilytica predicts that the MLOps market will expand to nearly $4 billion by 2025.
These platforms help make the entire AI/ML lifecycle more structured and optimized. They streamline AI workflows and benchmark the success of new initiatives so that organizations better understand their AI operations and can iterate improvements over time.
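As a rough sketch of what "structured and benchmarked" means in practice, the hypothetical pipeline below wraps each lifecycle stage and records per-stage timings for every run, so successive runs can be compared over time. Stage names and the toy "model" are placeholders; real MLOps platforms track far richer metadata.

```python
# Hypothetical sketch of the MLOps idea: run each lifecycle stage
# through a pipeline that records per-stage timings, so every run
# is benchmarked and comparable. Stage contents are illustrative.
import time

class Pipeline:
    def __init__(self):
        self.stages = []
        self.runs = []  # history of per-stage timings, one dict per run

    def stage(self, name, fn):
        self.stages.append((name, fn))
        return self  # allow chaining stage definitions

    def run(self, data):
        record = {}
        for name, fn in self.stages:
            start = time.perf_counter()
            data = fn(data)
            record[name] = time.perf_counter() - start
        self.runs.append(record)  # retained history enables benchmarking
        return data

if __name__ == "__main__":
    pipe = (
        Pipeline()
        .stage("prepare", lambda xs: [x / max(xs) for x in xs])
        .stage("train", lambda xs: sum(xs) / len(xs))  # toy "model"
    )
    model = pipe.run([2.0, 4.0, 8.0])
    print(model, list(pipe.runs[-1]))
```

The value is less in any single run than in the accumulated history: once every initiative flows through the same instrumented pipeline, teams can see where time and compute actually go and iterate on the bottlenecks.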
For example, organizations are adopting containerization and automating workflows on platforms such as Red Hat on an infrastructure level. This allows them to deploy and automate microservices to bring new products to market much faster.
From Isolated AI Use Cases To Frameworks Of Success
According to PwC’s 2021 AI predictions survey, 33% of companies have started implementing limited AI use cases, while 25% have fully enabled AI processes with widespread adoption. I expect the latter to increase significantly by 2022, as organizations see the potential of fully integrated AI across their business units and teams.
To get there, they must adopt automated processes that can simplify how they build new AI/ML models, adopt responsible AI techniques such as XAI, and invest in AI and MLOps. Businesses simply cannot afford to move forward with expensive AI initiatives without these frameworks of success in place to enable lasting change.
Venkat Viswanathan is the Founder and Chairman of LatentView Analytics, a marketing analytics and decision science company.