
The 3 Es of AI: How to deploy efficient AI

This is the final piece in a three-part series.

How can companies deploy AI efficiently? It’s an important question to answer, since industry sources estimate that as few as 29% of U.S. companies regularly use AI. In this article, I’ll explain why efficient AI is essential to your company realizing the benefits of ethical AI and explainable AI.

Efficient development resources

Computing power and software are the bread and butter of AI development. It makes sense to optimize both resources so your organization builds AI solutions as efficiently as possible.

  • Compute resources: How easily can compute resources be provisioned? If it’s a challenge to get large amounts of computational horsepower and storage on demand from the IT organization, cloud computing is a better strategy. In the cloud, development and testing environments can coexist seamlessly, with compute power and storage dialed up or down as needed.
  • Software resources: How many shared assets can analytic scientists leverage? Rather than writing code, including training algorithms, from scratch, open source AI tools give you pre-packaged building blocks for developing models quickly; a brief sketch of what that looks like follows this list. An open source-heavy development environment in the cloud makes for maximum efficiency.
  • Buying decisions: Even with on-demand compute resources and open source software available, a variety of purpose-built machine learning application enhancements can super-charge AI adoption. These are often advantageous in analytic areas with years or even decades of development effort and purpose-built technology, such as fraud detection, which would be very difficult to reproduce with commodity AI tools. These technologies often come in the form of machine learning studios, where APIs provide access to purpose-built machine learning algorithms, often with example models that help data scientists get familiar and become productive quickly.
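
To make the open source point above concrete, here is a minimal sketch of what "pre-packaged" looks like in practice. It assumes the scikit-learn library and uses synthetic data purely for illustration; your own data, model choice and evaluation metric will differ.

    # A minimal sketch of building a model from pre-packaged open source parts,
    # assuming scikit-learn; the data and model choice are illustrative only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for whatever labeled data the team already has.
    X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    # The training algorithm is pre-packaged; no need to implement boosting from scratch.
    model = GradientBoostingClassifier()
    model.fit(X_train, y_train)

    print("Holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

The point is not this particular model; it is that the expensive algorithmic work is already done and shared, so the team's effort goes into the data and the business problem.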

Implementing best practices

In any aspect of life, not learning from mistakes makes for an inefficient future. AI development is no different. Data scientists who have managed analytics, AI and machine learning development teams have the experience to know the mistakes every new scientist makes, many of which can have significant negative impact.

However, truly efficient AI development is gained through the experience of not just individual managers, but the entire organization. The most efficient organizations provide access to lessons learned and AI assets beyond open source, and they have taken the time and invested the resources to implement methodologies and resources for training, collaboration, code testing, unit testing and ongoing knowledge sharing.

Specifically, organizations that take best practices seriously have shared code assets, automated testing and entire development processes that tie directly into AI governance. These processes help ensure that costly mistakes are not made in the rush to be efficient. Best-practice processes come with valuable code, tools and testing that allow models to be built quickly and used with confidence.
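
As one illustration of what an automated check on a shared code asset might look like, here is a minimal sketch. It assumes pytest as the test runner, and scale_amounts() is a hypothetical shared preprocessing function invented for this example, not a real library call.

    # A minimal sketch of an automated test over a shared code asset, assuming
    # pytest; scale_amounts() is a hypothetical function from a team library.
    import numpy as np

    def scale_amounts(amounts, cap=10_000.0):
        """Hypothetical shared asset: cap and normalize transaction amounts to [0, 1]."""
        clipped = np.clip(np.asarray(amounts, dtype=float), 0.0, cap)
        return clipped / cap

    def test_scale_amounts_stays_in_unit_range():
        # Guard against a common new-scientist mistake: unscaled or negative
        # feature values silently reaching the model.
        scaled = scale_amounts([-50.0, 0.0, 2_500.0, 1_000_000.0])
        assert scaled.min() >= 0.0 and scaled.max() <= 1.0

Checks like this are cheap to write once and then protect every model that reuses the asset, which is exactly where the organizational efficiency comes from.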

Operationalizing AI

Ultimately, efficient AI is AI that solves a business problem quickly; it’s not about who builds the model fastest. Efficient AI organizations focus on deployment and understand that AI runs within a larger software cradle, one that is metaphorically rocking 24/7. Thinking about the end game of where the model will be deployed should be the starting point of any AI development; without it, the model build should not proceed. Once the constraints, such as data, latency, storage and software, are understood, AI can be built so the model can live and operate in an environment where its value can be achieved.

In other words, you can develop an effective model quickly, but it has to be designed from inception to run within an operational environment. The model must process a required number of transactions per second, within latency limits, and with clear expectations for every lifecycle stage, such as production, support and maintenance. From my meetings with data science organizations around the globe, this is the biggest challenge I see: teams can’t deploy a model because it’s built in such a way that it can’t be operationalized. Operationalization is hard, and it takes deep architecture discussions among software developers, IT and data scientists to achieve success.
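
To make the latency point concrete, here is a minimal sketch of a pre-deployment check against a per-transaction latency budget. The score() function and the 20-millisecond budget are hypothetical placeholders; the real figures come out of the architecture discussions described above.

    # A minimal sketch of checking a model against an operational latency budget
    # before deployment; score() and the 20 ms budget are hypothetical placeholders.
    import time
    import statistics

    LATENCY_BUDGET_MS = 20.0  # assumed per-transaction limit from the deployment environment

    def score(transaction):
        # Placeholder for the real model's scoring call.
        return sum(transaction) / len(transaction)

    transactions = [[0.1 * i, 0.2 * i, 0.3 * i] for i in range(1, 1001)]

    timings_ms = []
    for txn in transactions:
        start = time.perf_counter()
        score(txn)
        timings_ms.append((time.perf_counter() - start) * 1000.0)

    p99 = statistics.quantiles(timings_ms, n=100)[98]  # 99th-percentile latency in ms
    print(f"p99 latency: {p99:.3f} ms (budget {LATENCY_BUDGET_MS} ms)")
    assert p99 <= LATENCY_BUDGET_MS, "Model cannot be operationalized within the latency budget"

Running a check like this early, on the same class of hardware the model will ship on, surfaces operationalization problems while they are still cheap to fix.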

Steps toward efficiency

Building efficient AI is an attainable goal, but it doesn’t happen overnight. Here are two steps to focus on to get your organization moving in the right direction:

  1. First, understand that data scientist and software developer roles are merging. Many of the people on my team are very effective at software; they understand Kubernetes, Docker, Java and APIs. Our professional service teams are nearly at the same level of expertise as senior enterprise software developers. Today’s AI and machine learning scientists need an extremely granular understanding of the software implications of all their work products, and the most successful will identify with, or as, software engineers.
  2. Second, from a business perspective, companies must take advantage of institutional knowledge from across their organization. Organizations that don’t have deep AI experience available should develop a strong peer review focus and agile development processes for AI projects, with important checkpoints such as unit tests, functional tests and what-if model tests (a sketch of a what-if test follows this list). Those lacking these abilities should work to bring that expertise in through consultation or, even better, a board of advisers that reviews AI initiatives and brings critical feedback into the process.
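
As an illustration of the what-if checkpoint in step two, here is a minimal sketch. The toy model and the expectation that risk scores rise with transaction amount are hypothetical; in practice the expectations come from domain experts during peer review.

    # A minimal sketch of a what-if model test; the toy model and the assumed
    # relationship (risk rises with amount) are placeholders for a real checklist.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    amounts = rng.uniform(0, 1, size=(500, 1))
    labels = (amounts[:, 0] + rng.normal(0, 0.1, 500) > 0.7).astype(int)

    model = LogisticRegression().fit(amounts, labels)

    def test_risk_increases_with_amount():
        low = model.predict_proba([[0.1]])[0, 1]
        high = model.predict_proba([[0.9]])[0, 1]
        # What-if check: the score should move in the direction domain experts expect.
        assert high > low

    test_risk_increases_with_amount()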

At the end of the day, efficient AI development comes down to people, process and technology. You’ve got to inventory the building blocks you have right now and develop plans to acquire the rest.

All IoT Agenda network contributors are responsible for the content and accuracy of their posts. Opinions are of the writers and do not necessarily convey the thoughts of IoT Agenda.
