Ethernet in the Age of AI Q&A

AI is having a transformative impact on networking. It’s a topic that the SNIA Data, Storage & Networking Community covered in our live webinar, “Ethernet in the Age of AI: Adapting to New Networking Challenges.” The presentation explored various use cases of AI, the nature of traffic for different workloads, the network impact of these workloads, and how Ethernet is evolving to meet these demands. The webinar audience was highly engaged and asked many interesting questions. Here are the answers to them all.

Q. What is the biggest challenge when designing and operating an AI scale-out fabric?

A. The biggest challenge in designing and operating an AI scale-out fabric is achieving low latency and high bandwidth at scale. AI workloads, like training large neural networks, demand rapid, synchronized data transfers between thousands of GPUs or accelerators. This requires specialized interconnects, such as RDMA, InfiniBand, or NVLink, and optimized topologies like fat-tree or dragonfly to minimize communication delays and bottlenecks.

Balancing scalability with performance is critical; as the system grows, maintaining consistent throughput and minimizing congestion becomes increasingly complex. Additionally, ensuring fault tolerance, power efficiency, and compatibility with rapidly evolving AI workloads adds to the operational challenges.

Unlike standard data center networks, AI fabrics handle intensive east-west traffic patterns that require purpose-built infrastructure. Effective software integration for scheduling and load balancing is equally essential. The need to align performance, cost, and reliability makes designing and managing an AI scale-out fabric a multifaceted and demanding task.
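To make the bandwidth pressure concrete, here is a rough back-of-envelope sketch (all numbers hypothetical, not from the webinar) of per-step communication time for a ring all-reduce, one of the synchronized collective operations mentioned above. It shows why link bandwidth dominates fabric design as GPU counts grow.

```python
# Rough sketch (hypothetical numbers): estimate per-step communication time
# for a ring all-reduce across N GPUs. Latency terms are ignored and the
# bandwidths below are illustrative, not tied to any specific fabric.

def ring_allreduce_time(gradient_bytes: float, num_gpus: int,
                        link_bandwidth_gb_s: float) -> float:
    """Seconds for a ring all-reduce of `gradient_bytes` per GPU.

    A ring all-reduce moves 2 * (N - 1) / N of the data over each link,
    so the per-link traffic is nearly independent of GPU count.
    """
    bytes_on_wire = 2 * (num_gpus - 1) / num_gpus * gradient_bytes
    return bytes_on_wire / (link_bandwidth_gb_s * 1e9)

# Example: 10 GB of gradients synchronized across 1024 GPUs
slow = ring_allreduce_time(10e9, 1024, 50)    # ~50 GB/s per link
fast = ring_allreduce_time(10e9, 1024, 100)   # ~100 GB/s per link
print(f"50 GB/s link:  {slow:.3f} s per step")
print(f"100 GB/s link: {fast:.3f} s per step")
```

Because the model stays fixed while step time scales inversely with link bandwidth, halving the fabric speed doubles the time every GPU in the collective spends waiting.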

Q. What are the most common misconceptions about AI scale-out fabrics? Read More

SNIA Fosters Industry Knowledge of Collaborative Standards Engagements

November 2024 was a memorable month to engage with audiences at The International Conference for High Performance Computing, Networking, Storage, and Analysis (SC24) and Technology Live! to provide the latest on collaborative standards development and discuss high performance computing, artificial intelligence, and the future of storage.

At SC24, seven industry consortiums participated in an Open Standards Pavilion to discuss their joint activities in memory and interconnect standards, storage standards, networking fabric standards, and management and orchestration. Technology leaders from DMTF, Fibre Channel Industry Association, OpenFabrics Alliance, SNIA, Ultra Accelerator Link Consortium, Ultra Ethernet Consortium, and Universal Chiplet Interconnect Express™ Consortium shared how these organizations are collaborating to foster innovation as technology trends accelerate. CXL® Consortium, NVM Express®, and PCI-SIG® joined these groups in a lively panel discussion, moderated by Richelle Ahlvers, Vice Chair of the SNIA Board of Directors, on their cooperation in standards development. Read More

Storage for AI Q&A

Our recent SNIA Data, Networking & Storage Forum (DNSF) webinar, “AI Storage: The Critical Role of Storage in Optimizing AI Training Workloads,” was an insightful look at how AI workloads interact with storage at every stage of the AI data pipeline, with a focus on data loading and checkpointing. Attendees gave this session a 5-star rating and asked a lot of wonderful questions. Our presenter, Ugur Kaynar, has answered them here. We’d love to hear your questions or feedback in the comments field.

Q. Great content on File and Object Storage. Are there any use cases for Block Storage in AI infrastructure requirements?

A. Today, by default, AI frameworks cannot directly access block storage and need a file system to interact with block storage during training. Block storage provides raw storage capacity, but it lacks the structure needed to manage files and directories. Like most AI frameworks, PyTorch depends on a file system to manage and access data stored on block storage.

Q. Do high-speed networks make significant enhancements to the I/O and checkpointing process? Read More
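The file-system dependency described in the answer can be illustrated with a minimal, framework-free sketch. The `FileDataset` class below mirrors the shape of PyTorch's Dataset protocol (`__len__`/`__getitem__`) but is not a real framework API; it shows that the data loader's index is the directory listing itself, something raw block storage cannot provide without a file system on top.

```python
# Illustrative sketch only: a Dataset-style class backed by files in a
# directory. The class and file names are hypothetical, not a real API.
import os
import tempfile

class FileDataset:
    """Indexable dataset whose index is a directory listing."""
    def __init__(self, root: str):
        # The file system supplies names, ordering, and boundaries --
        # structure that raw block storage does not have.
        self.paths = sorted(os.path.join(root, f) for f in os.listdir(root))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, i):
        with open(self.paths[i], "rb") as f:
            return f.read()

# Demo with a throwaway directory holding three one-byte "samples"
with tempfile.TemporaryDirectory() as root:
    for i in range(3):
        with open(os.path.join(root, f"sample_{i}.bin"), "wb") as f:
            f.write(bytes([i]))
    ds = FileDataset(root)
    print(len(ds))   # 3
    print(ds[1])     # b'\x01'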

AIOps Q&A

Moving well beyond “fix it when it breaks,” AIOps introduces intelligence into the fabric of IT thinking and processes. The impact of AIOps and the shift in IT practices were the focus of a recent SNIA Cloud Storage Technologies Initiative (CSTI) webinar, “AIOps: Reactive to Proactive – Revolutionizing the IT Mindset.” If you missed the live session, it’s available on-demand, together with the presentation slides, at the SNIA Educational Library. The audience asked several intriguing questions. Here are answers to them all:

Q. How do you align your AIOps objectives with your company’s overall AI usage policy when it is still fairly restrictive in terms of AI use and acceptance? Read More

Q&A for Accelerating Gen AI Dataflow Bottlenecks

Generative AI is front page news everywhere you look. With advancements happening so quickly, it is hard to keep up. The SNIA Networking Storage Forum recently convened a panel of experts from a wide range of backgrounds to talk about Gen AI in general and specifically discuss how dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. If you missed this session, “Accelerating Generative AI: Options for Conquering the Dataflow Bottlenecks,” it’s available on-demand at the SNIA Educational Library. We promised to provide answers to our audience questions, and here they are.

Q: If ResNet-50 is a dinosaur from 2015, which model would you recommend using instead for benchmarking?

A: Setting aside the unfair aspersions being cast on the venerable ResNet-50, which is still used for inferencing benchmarks 😊, Read More

Hidden Costs of AI Q&A

At our recent SNIA Networking Storage Forum webinar, “Addressing the Hidden Costs of AI,” our expert team explored the impacts of AI, including sustainability and areas where there are potentially hidden technical and infrastructure costs. If you missed the live event, you can watch it on-demand in the SNIA Educational Library. Questions from the audience ranged from training Large Language Models to fundamental infrastructure changes from AI and more. Here are answers to the audience’s questions from our presenters.

Q: Do you have an idea of where the best tradeoff is between high I/O speed cost and GPU working cost? Is it always best to spend the maximum and get the highest I/O speed possible?

A: It depends on what you are trying to do. If you are training a Large Language Model (LLM), then you’ll have a large collection of GPUs communicating with one another regularly (e.g., all-reduce) and doing so at throughput rates of up to 900 GB/s per GPU! For this kind of use case, it makes sense to use the fastest network option available. Any money saved by using a cheaper, slightly less performant transport will be more than offset by the cost of GPUs that are idle while waiting for data. If you are more interested in fine-tuning an existing model or using Retrieval-Augmented Generation (RAG), then you won’t need quite as much network bandwidth and can choose a more economical connectivity option. It’s worth noting Read More
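The idle-GPU argument in that answer can be sketched numerically. All figures below are hypothetical placeholders (transfer size, GPU count, and a notional $2/GPU-hour rate), chosen only to show the shape of the tradeoff, not actual prices or webinar data.

```python
# Back-of-envelope sketch (all numbers hypothetical) of the tradeoff:
# money saved on a slower fabric vs. money burned on GPUs idling while
# they wait for collective communication to finish.

def idle_cost_per_step(transfer_gb: float, bandwidth_gb_s: float,
                       num_gpus: int, gpu_cost_per_hour: float) -> float:
    """Cost of GPU time spent waiting on one communication step."""
    wait_seconds = transfer_gb / bandwidth_gb_s
    return wait_seconds / 3600 * num_gpus * gpu_cost_per_hour

# 20 GB on the wire per step, 1024 GPUs, notional $2 per GPU-hour
fast = idle_cost_per_step(20, 100, 1024, 2.0)  # ~100 GB/s fabric
slow = idle_cost_per_step(20, 25, 1024, 2.0)   # ~25 GB/s fabric
print(f"fast fabric idle cost per step: ${fast:.4f}")
print(f"slow fabric idle cost per step: ${slow:.4f}")
```

Multiplied over the millions of steps in a large training run, the slower fabric's idle-GPU cost compounds, which is why the answer recommends the fastest available network for LLM training but a more economical option for fine-tuning or RAG.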

AIOps: The Undeniable Paradigm Shift

AI has entered every aspect of today’s digital world. For IT, AIOps is creating a dramatic shift that redefines how IT approaches operations. On April 9, 2024, the SNIA Cloud Storage Technologies Initiative will host a live webinar, “AIOps: Reactive to Proactive – Revolutionizing the IT Mindset.” In this webinar, Pratik Gupta, one of the industry’s leading experts in AIOps, will delve beyond the tools of AIOps to reveal how AIOps introduces intelligence into the very fabric of IT thinking and processes, discussing:
  • From Dev to Production and Reactive to Proactive: Revolutionizing the IT Mindset: We’ll move beyond the “fix it when it breaks” mentality, embracing a future-proof approach where AI analyzes risk, anticipates issues, prescribes solutions, and learns continuously.
  • Beyond Siloed Solutions: Embracing Holistic Collaboration: AIOps fosters seamless integration across departments, applications, and infrastructure, promoting real-time visibility and unified action.
  • Automating the Process: From Insights to Intelligent Action: Dive into the world of self-healing IT, where AI-powered workflows and automation resolve issues and optimize performance without human intervention.
Read More

Accelerating Generative AI

Workloads using generative artificial intelligence trained on large language models are frequently throttled by insufficient resources (e.g., memory, storage, compute or network dataflow bottlenecks). If not identified and addressed, these dataflow bottlenecks can constrain Gen AI application performance well below optimal levels. Given the compelling uses across natural language processing (NLP), video analytics, document resource development, image processing, image generation, and text generation, being able to run these workloads efficiently has become critical to many IT and industry segments. The resources that contribute to generative AI performance and efficiency include CPUs, DPUs, GPUs, FPGAs, plus memory and storage controllers. Read More

Edge AI Q&A

At our recent SNIA Cloud Storage Technologies Initiative (CSTI) webinar, “Why Distributed Edge Data is the Future of AI,” our expert speakers, Rita Wouhaybi and Heiko Ludwig, explained what’s new and different about edge data, highlighted use cases and phases of AI at the edge, covered Federated Learning, discussed privacy for edge AI, and provided an overview of the many other challenges and complexities being created by increasingly large AI models and algorithms. It was a fascinating session. If you missed it, you can access it on-demand along with a PDF of the slides at the SNIA Educational Library. Our live audience asked several interesting questions. Here are answers from our presenters.

Q. With the rise of large language models (LLMs), what role will edge AI play? Read More
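The Federated Learning mentioned above can be summarized with a minimal sketch of federated averaging (FedAvg), its core aggregation step: each edge site trains locally, and only model weights, never raw data, are shared and averaged. Weights here are plain Python lists for illustration; a real system would use tensors and a full training loop.

```python
# Minimal, illustrative FedAvg aggregation step. Names and numbers are
# hypothetical; this is a sketch of the idea, not a production system.

def federated_average(client_weights, client_sizes):
    """Average per-client model weights, weighted by local dataset size.

    client_weights: list of weight vectors (one list of floats per site)
    client_sizes:   number of local training samples at each site
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size for w, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three edge sites with different amounts of local data; only these
# weight vectors leave the sites -- the training data stays put.
weights = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [100, 100, 200]
print(federated_average(weights, sizes))  # [3.5, 4.5]
```

The size-weighted average lets sites with more data pull the global model harder, while the privacy benefit the speakers discussed comes from raw data never leaving the edge.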

Addressing the Hidden Costs of AI

The latest buzz around generative AI ignores the massive costs to run and power the technology. Understanding what the sustainability and cost impacts of AI are and how to effectively address them will be the topic of our next SNIA Networking Storage Forum (NSF) webinar, “Addressing the Hidden Costs of AI.” On December 12, 2023, our SNIA experts will offer insights on the potentially hidden technical and infrastructure costs associated with generative AI. You’ll also learn best practices and potential solutions to be considered as they discuss: Read More