Running artificial intelligence (AI) and machine learning (ML) at the edge to turn your data into actionable insights isn’t rocket science. I should know; I’ve had a hand in building rockets! But nor is it a walk in the park.
In this blog, I’ll share my experiences of building, deploying, and operating intelligent apps (let’s call them ML apps) at the edge – in other words, in the physical world that’s outside the data centre.
Just before take-off, a couple of basics. Your ML apps need a stable and secure infrastructure incorporating the near edge, the far edge, or both. They also need high-quality data, which must be pre-processed in a way that retains its rich context. And they must be easy to deploy, redeploy and operate.
So, who should be on board to run your ML apps (let’s call this activity ML ops)? It’s helpful to think of three groups of people – some of whom belong to your organisation and others who are partners or provide consulting services to your organisation. They are the IT team, the data scientists and the business folk.
The IT team: What are these people up to? They’re working hard to provide a stable and secure edge-to-cloud infrastructure, and they need full visibility of the edge, especially where there are physical and/or cyber risks. If something goes wrong, these people need to know immediately.
These techies like to select ML apps according to their intended purpose. If a new ML app will be critical to local business processes, the IT team picks one capable of functioning locally at the edge (it delivers business continuity). If the new ML app will be used for overall management, monitoring or trending analysis, the team picks one that can communicate with the wider world to provide an overview (it delivers business availability).
The data scientists: What do these people want? A massive, central warehouse of well-organised, high-quality data that’s easily queryable and downloadable. This is the liquid hydrogen essential to every rocket launch!
Over the years, I’ve talked to many data scientists working in transportation (and many other industries) here on Planet Earth. They all say very similar things: “90% of failing AI projects fail because of data-related issues”; “Data is the hardest part of machine learning and the most important to get right” (Uber); and “No other activity in the machine learning lifecycle has a higher return on investment than improving the data a model has access to” (Gojek).
What else could knock your ML ops off-course? Data scientists insist high-quality data includes context – the “where, when and how” of its collection, also known as its metadata – information that must be packaged and stored alongside data wherever it travels.
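To make that concrete, here is a minimal, illustrative sketch in Python of how a single reading might be packaged with its “where, when and how” so the metadata travels with the data. The field names and values are assumptions for illustration, not a prescribed schema:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EdgeRecord:
    """A single measurement bundled with its collection context (metadata)."""
    value: float      # the raw reading itself
    unit: str         # how to interpret the value
    sensor_id: str    # how the data was collected
    location: str     # where it was collected
    captured_at: str  # when it was collected (UTC, ISO 8601)

record = EdgeRecord(
    value=42.7,
    unit="celsius",
    sensor_id="cam-03-thermal",
    location="platform-2-gate-B",
    captured_at=datetime.now(timezone.utc).isoformat(),
)

# Serialise data and metadata together, so the context stays with the record
# wherever it travels.
payload = json.dumps(asdict(record))
print(payload)
```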
The business folk: These people make significant demands: they want fluid deployment and operation of ML apps. They want everything immediately and want to make changes almost daily – whatever it takes to solve business problems and remain agile in a competitive market.
Needing to deploy new ML apps all the time can bring business folk into conflict with the IT team. In most business cases, no single app vendor or creator can fulfil all requirements, so the business team wants the freedom to compose its own bouquet of apps from different sources – a risk for the IT team, who are tasked with minimising ML ops costs and maintaining clear T&Cs, SLAs, support services, pricing and more. This also introduces more work: training and deploying each new ML app is not a one-shot task, because the physical world changes and evolves, so ML app models require regular re-training on regularly updated data, followed by regular re-deployment.
Whoever takes the decision about how to run AI at the edge, it’s important to keep in mind the varying needs of the IT team, data scientists and business folk.
Hyperscalers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) are attracting attention right now, so one of these might seem the right choice. But hyperscale providers have a cloud-first business model. This means they focus on selling cloud storage and compute resources and giving access to tools for data exploration as well as AI and ML model training.
The triumvirate rocket crew would do better with an edge-first solution. Only this puts them all in full control of their data processing chain – from data capture, crunching and training, right through to inference (aka running the model on fresh data). And only an edge-first solution can deliver the required level of maturity, control and flexibility.
Thinking about the infrastructure level, edge-first allows you to adopt container technology at the edge. Your IT team will be able to re-use, re-purpose and modernise much of your existing infrastructure, knowledge and processes. Edge-first also allows you to operate different types and generations of assets and devices at the edge in a coherent way. You can manage them with secure and intelligent edge computing software like our hardware-agnostic, container-native NuvlaEdge software.
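As a rough illustration of what container technology at the edge looks like in practice, here is a sketch using the Docker SDK for Python. This is plain Docker rather than NuvlaEdge itself, and the image name and port are placeholders; it simply shows how an inference service can be started the same way on any container-capable edge device:

```python
import docker

# Connect to the local container runtime on the edge device.
client = docker.from_env()

# Start a hypothetical ML inference service as a container.
# The image name and port mapping are illustrative placeholders.
container = client.containers.run(
    image="registry.example.com/ml-apps/defect-detector:1.2.0",
    name="defect-detector",
    detach=True,                                 # run in the background
    restart_policy={"Name": "unless-stopped"},   # survive reboots at the edge
    ports={"8080/tcp": 8080},                    # expose the local inference endpoint
)

print(container.status)
```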
Thinking about your data needs, edge-first allows you to use an edge-to-cloud management platform, such as our B2B SaaS Nuvla.io, to accelerate app deployment at the edge, reducing risk, removing management complexity and delivering full control over data. For companies operating in a regulated industry, there’s an on-premises version of Nuvla.io.
Once curated, the data can be pushed to your favourite backend service. For example, Nuvla.io will deliver the data to your own servers, Amazon’s, Azure’s or Google’s. We also provide this service from Switzerland. What’s important here is your data scientists get easy and safe access to their rocket fuel!
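By way of example, here is a minimal sketch of that final hop, assuming an S3-compatible object store as the backend. The endpoint, bucket and object key are illustrative placeholders (not Nuvla.io specifics), and credentials are expected to come from the environment:

```python
import boto3

# Push a curated, metadata-rich batch file to an S3-compatible object store.
s3 = boto3.client("s3", endpoint_url="https://objects.example.com")

s3.upload_file(
    Filename="curated/2024-06-26-batch.jsonl",       # local curated data
    Bucket="ml-training-data",                        # placeholder bucket
    Key="transport/2024/06/26/batch.jsonl",           # placeholder object key
)
```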
Thinking about ML app deployment and operation, an edge-first management platform is likely to support apps from multiple vendors and third parties. Our Nuvla.io platform certainly does, while also providing an excellent selection of ML apps and app bundles in the Nuvla.io Marketplace. This allows your organisation to move fast while controlling risks.
If you’ve read this far, I hope you now agree that running AI at the edge isn’t rocket science! There’s a lot to consider, but there’s no need to journey alone. Before lift-off, please get in touch to discuss your specific needs.