What is the place of GPUs in the modern cloud environment? How did they become so integral to AI, and is the name now a bit confusing for the general public? What can we expect from this area? Our guest for episode 50 of DevOps Accents is Veronica Nigro, the co-founder of mkinf, a company providing access to distributed GPUs worldwide.
- How GPUs are used in cloud infrastructure;
- Should we rename GPUs?
- What specific skills and knowledge does GPU-based infrastructure require?
- Is there competition with the hyperscalers?
- Cost management for GPUs;
- What will the future look like?
Check out our website to learn more about the services mkdev offers for companies.
Subscribe to our newsletter.
Buy mkdev merchandise here.
Show Notes:
- Our guest, Veronica Nigro, the co-founder of mkinf, on LinkedIn.
- mkinf, distributed inference for GenAI companies. mkinf provides access to distributed GPUs worldwide, reducing latency for businesses that need real-time AI responses. Their platform aggregates fragmented computational power across data centers, making it easier to scale and optimize AI infrastructure for model deployment and inference.
- Evolving Drivers of AI Infrastructure Optimization, an interview with Veronica for more insights on inference.
Podcast editing: Mila Jones, [email protected]