SecDevOps.com
From Cloud Native To AI Native: Where Are We Going?

The New Stack · Updated 2 weeks ago

Has the cloud native era now fully morphed into the AI-native era? If so, what does that mean for the future of both cloud native and AI technology? These are the questions a panel of experts took up at KubeCon + CloudNativeCon North America in Atlanta earlier this month. The occasion was one of The New Stack’s signature pancake breakfasts, sponsored by Dynatrace. TNS Founder and Publisher Alex Williams probed the panelists’ current obsessions in this time of fast-moving change.

For Jonathan Bryce, the new executive director of the Cloud Native Computing Foundation, inference is claiming a lot of his attention these days. “What are the future AI-native companies going to look like? Because it’s not all going to be chatbots,” Bryce said. “If you just look at the fundamentals and how you build towards every form of AI productivity, you have to have models where you’re taking a large dataset, turning it into intelligence, and then you have to have the inference layer where you’re serving those models to answer questions, make predictions.

“And at some level, we sort of have skipped that layer,” he added, because the attention is now focused on chatbots and agents. “Personally, I’ve always been a plumber, an infrastructure guy, and inference is my obsession.”

Inference is coming to the fore as organizations depend more on edge computing and on personalizing websites, said Kate Goldenring, senior software engineer at Fermyon Technologies. WebAssembly, the technology Fermyon focuses on, can help users who are finding they now need to make “extra hops,” as she put it, because of the new need for rapid inferencing.

“There [are] interfaces out there where you can basically package up your model with your WebAssembly component and then deploy that to some hardware with the GPU and directly do inferencing and other types of AI compute, and have that all bundled and secure,” Goldenring noted.
“Whenever you get a new technology, the next question is, how do we use it really, really quickly? And then the following question is, how [do] we do it securely? And WebAssembly provides the opportunity to do that by sandboxing those executions as well.”

Observability and Infrastructure

The issue of security brings up observability. The tsunami of data that AI uses and generates has major implications for how we approach observability in the AI-native era, according to panelist Sean O’Dell, principal product marketing manager at Dynatrace.

“If you’ve been training your data in a predictive manner for eight, nine, 10 years now, we have the ability to add a [large language model] and intelligence on top and over inference in that situation,” O’Dell said.

That “value add” carries pros and cons, he said. “It’s very nice to be able to at least say we have this information from an observability perspective. However, on the other side, it’s a lot of data. So now there’s a fundamental shift of, what do I need to get the right information about an end user?”

One of the biggest differences between the cloud native and the AI-native eras is infrastructure, suggested Shaun O’Meara, CTO of Mirantis. “One of the key things that [we] keep forgetting about all of this [is], the stuff has to run somewhere,” he said. “We have to orchestrate the infrastructure that all of these components run on top of.”

A big trend he’s noticing, he said, “is we’re moving away from the abstraction that we were beginning to accept as normal in cloud native. You know, we go to a public cloud. We run our workloads. We have no idea what infrastructure is underneath that. With … workloads [running on GPUs], we have to be aware of the deep infrastructure,” including network speed and performance.
“One of the things that behooves us, as we start to look at all of these great tools that we’re running on top of these platforms, [is] to remember to run them securely, to be efficient, to manage infrastructure efficiently.”

This, O’Meara said, “is going to be one of the key challenges of the next few years. We have a power problem. We’re running out of power to run these data centers, and we’re building them as fast as we can. We have to manage that infrastructure efficiently.”

Check out the full recording to hear how the panel digs into the questions, opportunities and challenges the “AI native” era will bring.

Source: This article was originally published on The New Stack
