300 episodes • Length: 25 min • Weekly: Wednesdays
The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software.
For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack
The New Stack Podcast is created by The New Stack. The podcast and its artwork are embedded on this page via the public podcast feed (RSS).
Jetstack’s cert-manager, a leading open-source project in Kubernetes certificate management, began as a job interview challenge. Co-founder Matt Barker recalls asking a prospective engineer to automate Let’s Encrypt within Kubernetes. By Monday, the candidate had created kube-lego, which evolved into cert-manager, now downloaded over 500 million times monthly.
Cert-manager’s journey to CNCF graduation, achieved in September, began with its donation to the foundation four years ago. Relaunched as cert-manager, the project grew under engineer James Munnelly, becoming the de facto standard for certificate lifecycle management. The thriving community and ecosystem around cert-manager highlighted its suitability for CNCF stewardship. However, maintainers, including Ashley Davis, noted challenges in navigating differing opinions within its vast user base.
With graduation achieved, cert-manager’s roadmap includes sub-projects like trust-manager, addressing TLS trust bundle management and Istio integration. Barker aims to streamline enterprise-scale deployments and educate security teams on cert-manager’s impact. Cert-manager has become integral to cloud-native workflows, promising to simplify hybrid, multicloud, and edge deployments.
Learn more from The New Stack about cert-manager:
Jetstack’s cert-manager Joins the CNCF Sandbox of Cloud Native Technologies
Jetstack Secure Promises to Ease Kubernetes TLS Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The tech industry faces a paradox: despite high demand for skills, many developers and engineers are unemployed. At KubeCon + CloudNativeCon North America in Salt Lake City, Utah, Andela and the Cloud Native Computing Foundation (CNCF) announced an initiative to train 20,000 technologists in cloud native computing over the next decade. Andela senior program manager O'Neill and Chris Aniszczyk, the CNCF's CTO, highlighted the lack of Kubernetes-certified professionals in regions like Africa and emphasized the need for global inclusivity to make cloud native technology ubiquitous.
Andela, operating in over 135 countries and founded in Nigeria, views this program as a continuation of its mission to upskill African talent, aligning with its partnerships with tech giants like Google, AWS, and Nvidia. This initiative also addresses the increasing employer demand for Kubernetes and modern cloud skills, reflecting a broader skills mismatch in the tech workforce.
Aniszczyk noted that companies urgently seek expertise in cloud native infrastructure, observability, and platform engineering. The partnership aims to bridge these gaps, offering opportunities to meet evolving global tech needs.
Learn more from The New Stack about developer talent, skills and needs:
Top Developer Skills for AI and Cloud Jobs
5 Software Development Skills AI Will Render Obsolete
Cloud Native Skill Gaps are Killing Your Gains
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
When open source projects shift to proprietary licensing, forks and new communities often emerge. Such was the case with MapLibre, born from Mapbox's 2020 decision to make its map rendering engine proprietary. In conjunction with All Things Open 2024, Seth Fitzsimmons, a principal engineer at AWS, and Tarus Balog, principal technical strategist for open source at AWS, shared that this engine, popular for its WebGL-powered vector maps and dynamic customization features, was essential for organizations like BMW, The New York Times, and Instacart. However, Mapbox's move disappointed its open-source user base by tying the upgraded Mapbox GL JS library to proprietary products.
In response, three users forked the engine to create MapLibre, committing to modernizing and preserving its open-source ethos. Despite challenges—forking often struggles to sustain momentum—MapLibre has thrived, supported by contributors and corporate sponsors like AWS, Meta, and Microsoft. Notably, a community member transitioned the project from JavaScript to TypeScript over nine months, showcasing the dedication of unpaid contributors.
Thanks to financial backing, MapLibre now employs maintainers, enabling it to reciprocate community efforts while fostering equality among participants. The project illustrates the resilience of open-source communities when proprietary shifts occur.
Learn more from The New Stack about forking open source projects:
Why Do Open Source Projects Fork?
OpenSearch: How the Project Went From Fork to Foundation
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
At All Things Open in October, Anandhi Bumstead, AWS’s director of software engineering, highlighted OpenSearch's journey and the advantages of the Linux Foundation's stewardship. OpenSearch, an open source data ingestion and analytics engine, was transferred by Amazon Web Services (AWS) to the Linux Foundation in September 2024, seeking neutral governance and broader community collaboration. Originally forked from Elasticsearch after a licensing change in 2021, OpenSearch has evolved into a versatile platform likened to a “Swiss Army knife” for its broad use cases, including observability, log and security analytics, alert detection, and semantic and hybrid search, particularly in generative AI applications.
Despite criticism over slower indexing speeds compared to Elasticsearch, significant performance improvements have been made. The latest release, OpenSearch 2.17, delivers 6.5x faster query performance and a 25% indexing improvement due to segment replication. Future efforts aim to enhance indexing, search, storage, and vector capabilities while optimizing costs and efficiency. Contributions are welcomed via opensearch.org.
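For a concrete sense of what "ingestion and analytics" means in practice, here is a minimal sketch using the opensearch-py client to index a log event and run a full-text query. It assumes a local OpenSearch cluster on localhost:9200 with the demo credentials; the index name and document are illustrative.

```python
# Minimal sketch: indexing and searching log events with opensearch-py.
# Assumes a local OpenSearch cluster on localhost:9200 with demo
# credentials; adjust hosts/auth for a real deployment.
from opensearchpy import OpenSearch

client = OpenSearch(
    hosts=[{"host": "localhost", "port": 9200}],
    http_auth=("admin", "admin"),
    use_ssl=True,
    verify_certs=False,
)

# Ingest a single log event.
client.index(
    index="app-logs",
    body={"service": "checkout", "level": "ERROR", "message": "payment timeout"},
    refresh=True,  # make the document searchable immediately
)

# Full-text search over the ingested logs.
hits = client.search(
    index="app-logs",
    body={"query": {"match": {"message": "timeout"}}},
)
print(hits["hits"]["total"])
```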
Learn more from The New Stack about deploying applications on OpenSearch:
AWS Transfers OpenSearch to the Linux Foundation
From Flashpoint to Foundation: OpenSearch’s Path Clears
Semantic Search with Amazon OpenSearch Serverless and Titan
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Is Apache Spark too costly? Amazon Principal Engineer Patrick Ames tackled this question during an interview with The New Stack Makers, sharing insights into transitioning from Spark to Ray for managing large-scale data. Ames, described as a "go-to" engineer for exabyte-scale projects, emphasized a goal-driven approach to solving complex engineering problems, from simplifying daily chores to optimizing software solutions.
Initially, Spark was chosen at Amazon for its simplicity and open-source flexibility, allowing efficient merging of data with minimal SQL code. The team leveraged Spark in a decoupled architecture over S3 storage, scaling it to handle thousands of jobs daily. However, as data volumes grew to hundreds of terabytes and beyond, Spark’s limitations became apparent. Long processing times and high costs prompted a search for alternatives.
Enter Ray—a unified framework designed for scaling AI and Python applications. After experimentation, Ames and his team noted significant efficiency improvements, driving the shift from Spark to Ray to meet scalability and cost-efficiency needs.
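The episode does not walk through code, but a minimal sketch of Ray's task model shows why the framework appeals for this kind of work: ordinary Python functions become parallel tasks. The function and data below are illustrative, not Amazon's actual pipeline.

```python
# Minimal sketch of Ray's task model: fan out Python functions as
# parallel tasks across a cluster (here, a local one).
import ray

ray.init()  # connects to an existing cluster, or starts one locally

@ray.remote
def transform(chunk):
    # Stand-in for per-partition work such as merging or compacting data.
    return sum(chunk)

# Schedule four tasks in parallel and gather the results.
futures = [transform.remote(list(range(i, i + 1000))) for i in range(4)]
print(ray.get(futures))
```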
Learn more from The New Stack about Apache Spark and Ray:
Amazon to Save Millions Moving From Apache Spark to Ray
How Ray, a Distributed AI Framework, Helps Power ChatGPT
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this episode of The New Stack Makers, Codiac co-founders Ben Ghazi and Mark Freydl discuss how their company aims to simplify app deployment on Kubernetes by offering a unified interface that minimizes complexity. Kubernetes is powerful but challenging for teams because of its intricate configurations and extensive manual coding. Codiac gives engineers infrastructure on demand, container management, and advanced software development life cycle (SDLC) tools, making Kubernetes more accessible.
Codiac’s interface streamlines continuous integration and deployment (CI/CD), reducing deployment steps to a single line of code within CI/CD pipelines. Developers can easily deploy, manage containers, and configure applications without mastering Kubernetes' esoteric syntax. Codiac also offers features like "cabinets" to organize assets across multi-cloud environments and enables repeatable processes through snapshots, making cluster management smoother.
For experienced engineers, Codiac alleviates the burden of manually managing YAML files and configuring multiple services. With ephemeral clusters and repeatable snapshots, Codiac supports scalable, reproducible development workflows, giving engineers a practical way to manage applications and infrastructure seamlessly across complex Kubernetes environments.
Learn more from The New Stack about deploying applications on Kubernetes:
Kubernetes Needs to Take a Lesson from Portainer on Ease-of-Use
Three Common Kubernetes Challenges and How to Solve Them
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Valkey, an open-source fork of Redis launched in March, introduced its multithreaded Version 8.0 in September, now available through AWS ElastiCache. At All Things Open 2024 in Raleigh, AWS's Kyle Davis explains that Valkey was developed after Redis changed to a restrictive license, drawing contributors from companies like AWS, Google, Alibaba, and Oracle. Notably, some contributors emerged independently, including a significant contributor from Vietnam. Version 8.0 differentiates itself from Redis by leveraging multithreaded CPUs, addressing the efficiency of I/O operations in modern hardware. Additionally, data structure refinements were made to improve memory efficiency by up to 20%, particularly benefiting large-key databases.
Looking ahead, Valkey plans two annual updates, with the next release expected in 2025. New modules are anticipated, including a JSON module for efficient data manipulation and a Bloom filter for probabilistic data presence checks. Version 9.0 may bring substantial changes to clustering, updating it to better leverage modern technologies. The Valkey project aims to continue evolving its capabilities to meet the demands of advanced data storage needs.
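Because Valkey preserves protocol compatibility with open source Redis, existing clients work unchanged. A minimal sketch with the redis-py client, assuming a Valkey server listening locally on the default port:

```python
# Valkey speaks the same protocol as open source Redis, so redis-py
# works as-is. Assumes a Valkey server on localhost:6379.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("greeting", "hello from valkey")
print(r.get("greeting"))  # -> "hello from valkey"
```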
Learn more from The New Stack about Valkey:
Valkey Is a Different Kind of Fork
AWS Adds Support, Drops Prices, for Redis-Forked Valkey
Valkey: A Redis Fork With a Future
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Deb Nicholson, executive director of the Python Software Foundation, attributes Python’s popularity to its minimal syntactical complexity, which appeals to beginners and seasoned developers alike. Python allows flexibility for those exploring coding without a specific focus, unlike purpose-built languages. Since her leadership began in 2022, Nicholson has overseen the foundation’s role in managing Python’s fiscal and operational needs, including the package index that hosts over half a million add-ons. This open ecosystem enables contributions from large corporations and individual developers while demanding vigilant security measures.
Nicholson envisions Python's future advancements, particularly in improving multi-threading and expanding usage in mobile development. She acknowledges Python’s critical role in AI and data science but remains cautious about AI’s pervasive application, likening it to a temporary trend. On open source in the enterprise, Nicholson critiques companies profiting from open-source tools while adopting restrictive licenses. Instead, she admires models like Red Hat’s, which leverage open source sustainably without compromising accessibility or innovation.
Learn more from The New Stack about Python:
Python 3.13: Blazing New Trails in Performance and Scale
The Top 5 Python Packages and What They Do
Python Mulls a Change in Version Numbering
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Platform engineering will be a key focus at KubeCon this year, with a special emphasis on AI platforms. Priyanka Sharma, executive director of the Cloud Native Computing Foundation, highlighted the convergence of platform engineering and AI during an interview on The New Stack Makers with Adobe's Joseph Sandoval. KubeCon will feature talks from experts like Chen Goldberg of CoreWeave and Aparna Sinha of Capital One, showcasing how AI workloads will transform platform operations.
Sandoval emphasized the growing maturity of platform engineering over the past two to three years, now centered on addressing user needs. He also discussed Adobe's collaboration on CNOE, an open-source initiative for internal developer platforms. The intersection of platform engineering, Kubernetes, cloud-native technologies, and AI raises questions about scaling infrastructure management with AI, potentially improving efficiency and reducing toil for roles like SRE and DevOps. Sharma noted that reference architectures, long requested by the CNCF community, will be highlighted at the event, guiding users without dictating solutions.
Learn more from The New Stack about Kubernetes:
Cloud Native Networking as Kubernetes Starts Its Second Decade
Primer: How Kubernetes Came to Be, What It Is, and Why You Should Care
How Cloud Foundry Has Evolved With Kubernetes
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
Rohit Choudhary, co-founder and CEO of Acceldata, placed an early bet on data observability, which has proven prescient. In a New Stack Makers podcast episode, Choudhary discussed three key insights that shaped his vision: First, the exponential growth of data in enterprises, further amplified by generative AI and large language models. Second, the rise of a multicloud and multitechnology environment, with a majority of companies adopting hybrid or multiple cloud strategies. Third, a shortage of engineering talent to manage increasingly complex data systems.
As data becomes more essential across industries, challenges in data observability have intensified. Choudhary highlights the complexity of tracking where data is produced, used, and its compliance requirements, especially with the surge in unstructured data. He emphasized that data's operational role in business decisions, marketing, and operations heightens the need for better traceability. Moving forward, traceability and the ability to manage the growing volume of alerts will become areas of hyper-focus for enterprises.
Learn more from The New Stack about data observability:
What Is Data Observability and Why Does It Matter?
The Looming Crisis in the Observability Market
The Growth of Observability Data Is Out of Control!
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Rust has maintained its place among the top 15 programming languages and has been the most admired language for nine consecutive years. In a New Stack Makers podcast, Joel Marcey, director of technology at the Rust Foundation, discussed the language's growing importance, including initiatives to improve its security, performance, and adoption in various domains. While Rust is widely used in systems and backend programming, it’s also gaining traction in embedded systems, safety-critical applications, game development, and even the Linux kernel.
Marcey highlighted Rust’s strengths as a safe and fast systems language, noting its use on the web through WebAssembly (Wasm), though adoption there is still early. He also addressed Rust vs. Go, explaining that Rust excels in performance-critical applications. Marcey discussed recent updates, such as Rust 1.81, and project goals for 2024, which include a new edition and async improvements.
He also touched on government interest in Rust, including DARPA’s initiative to convert C code to Rust, and the Rust Security Initiative, aimed at maintaining the language’s strong security reputation.
Learn more from The New Stack about Rust:
Could Rust be the Future of JavaScript Infrastructure?
Rust Growing Fastest, But JavaScript Reigns Supreme
Rust vs. Zig in Reality: A (Somewhat) Friendly Debate
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In a New Stack Makers episode, Ashley Williams, founder and CEO of axo, highlights how the software world depends on open-source code, which is largely maintained by unpaid volunteers. She likens this to a CVS relying on volunteer-run shipping companies, pointing out how unsettling that might be for customers. The conversation focuses on open-source maintainers’ reluctance to be seen as "suppliers" of software, an idea explored in a 2022 blog post by Thomas Depierre. Many maintainers reject the label, as there is no contractual obligation to support the software they provide.
Williams critiques the industry's response to this, noting that instead of involving maintainers in software supply chain security, companies have relied on third-party vendors. However, these vendors have no relationship with the maintainers, leading to increased vulnerabilities. Williams advocates for better engagement with maintainers, especially at build time, to improve security. She also reflects on the growing pressures on maintainers and the underappreciation of release teams.
Learn more from The New Stack about the open source software supply chain:
2023: The Year Open Source Security Supply Chain Grew Up
Fortifying the Software Supply Chain
The Challenges of Securing the Open Source Supply Chain
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers podcast, Xun Wang, CTO of Bloomreach, brings insights from his time at Nvidia, particularly lessons from its founder, Jensen Huang, to his current role in e-commerce personalization. Wang emphasizes structuring organizations to reflect the architecture of the products they build, applying a hands-on, detail-oriented approach that encourages deep understanding of engineering challenges.
He credits Huang for teaching him the importance of focusing on fundamental architecture rather than relying on iterative testing alone. Wang highlights the impact of generative AI (GenAI) on Bloomreach, explaining how AI-driven search is essential to understanding human language and user intent. As GenAI reshapes application development, Wang stresses the need for engineers to adopt new skills in AI manipulation, while still maintaining traditional coding expertise. He advocates for continuous learning, acknowledging the challenge of staying updated in a rapidly evolving field. Wang himself reads extensively to keep pace with innovations, underscoring the importance of staying curious and adaptable in today's tech landscape.
Learn more from The New Stack about Entrepreneurship for Engineers:
Engineering Leaders: Switch to Wartime Management Now
How Teleport’s Leader Transitioned from Engineer to CEO
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Code reviews can be highly beneficial but tricky to execute well due to the human factors involved, says Adrienne Braganza Tacke, author of *Looks Good to Me: Actionable Advice for Constructive Code Review.* In a recent conversation with *The New Stack*, Tacke identified three challenges teams must address for successful code reviews: ambiguity, subjectivity, and ego.
Ambiguity arises when the goals or expectations for the code are unclear, leading to miscommunication and rework. Tacke emphasizes the need for clarity and explicit communication throughout the review process. Subjectivity, the second challenge, can derail reviews when personal preferences overshadow objective evaluation. Reviewers should justify their suggestions based on technical merit rather than opinion. Finally, ego can get in the way, with developers feeling attached to their code. Both reviewers and submitters must check their egos to foster a constructive dialogue.
Tacke encourages programmers to first review their own work, as self-checks can enhance the quality of the code before it reaches the reviewer. Ultimately, code reviews can improve code quality, mentor developers, and strengthen team knowledge.
Learn more from The New Stack about code reviews:
The Anatomy of Slow Code Reviews
One Company Rethinks Diff to Cut Code Review Times
How Good Is Your Code Review Process?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers episode, Adi Polak, director of advocacy and developer experience engineering at Confluent, discusses the operational and analytical estates in data infrastructure. The operational estate focuses on fast, low-latency event-driven applications, while the analytical estate handles long-running data-crunching tasks. Challenges arise because "schema evolution" from upstream operational changes impacts downstream analytics, creating complexity for developers.
Apache Iceberg and Flink help mitigate these issues. Iceberg, a table format developed by Netflix, optimizes querying by managing file relationships within a data lake, reducing processing time and errors. It has been widely adopted by major companies like Airbnb and LinkedIn.
Apache Flink, a versatile data processing framework, is driving two key trends: shifting some batch processing tasks into stream processing and transitioning microservices into Flink streaming applications. This approach enhances system reliability, lowers latency, and meets customer demands for real-time data, like instant flight status updates. Together, Iceberg and Flink streamline data infrastructure, addressing developer pain points and improving efficiency.
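As a rough illustration of the pattern described above, the PyFlink sketch below reads events from a Kafka topic and streams them into an Iceberg table. It assumes the Flink Kafka and Iceberg connector jars are on the classpath; the topic, hosts, and warehouse path are placeholders.

```python
# Illustrative PyFlink sketch: a streaming job reading events from Kafka
# and appending them to an Iceberg table. Connector jars for Kafka and
# Iceberg must be on the Flink classpath.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Kafka source table (placeholder topic and brokers).
t_env.execute_sql("""
    CREATE TABLE orders (
        order_id STRING,
        amount DOUBLE,
        ts TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'localhost:9092',
        'format' = 'json',
        'scan.startup.mode' = 'latest-offset'
    )
""")

# Iceberg catalog backed by a local warehouse path (placeholder).
t_env.execute_sql("""
    CREATE CATALOG lake WITH (
        'type' = 'iceberg',
        'catalog-type' = 'hadoop',
        'warehouse' = 'file:///tmp/warehouse'
    )
""")
t_env.execute_sql("CREATE DATABASE IF NOT EXISTS lake.db")
t_env.execute_sql("""
    CREATE TABLE IF NOT EXISTS lake.db.orders_iceberg (
        order_id STRING, amount DOUBLE, ts TIMESTAMP(3)
    )
""")

# Continuously stream Kafka events into the Iceberg table.
t_env.execute_sql("INSERT INTO lake.db.orders_iceberg SELECT * FROM orders")
```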
Learn more from The New Stack about Apache Iceberg and Flink:
Unfreeze Apache Iceberg to Thaw Your Data Lakehouse
Apache Flink: 2023 Retrospective and Glimpse into the Future
4 Reasons Why Developers Should Use Apache Flink
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Bob Wise, CEO of Heroku, discussed the impact of generative AI (GenAI) coding tools on software development in a recent episode of The New Stack Makers. He compared the rise of these tools to adding an "infinite number of interns" to development teams, noting that while they accelerate code writing, they don't yet simplify testing, deployment, or production operations. Wise likened this to the early days of Kubernetes, which focused on improving operations rather than the frontend experience. He emphasized that Kubernetes' success was due to its focus on easing the operational burden, something current GenAI tools have yet to achieve.
Heroku, acquired by Salesforce in 2010, is positioned to benefit from these changes by helping teams transition to more automated systems. Wise highlighted Heroku’s strategic bet on Postgres, a database technology that's gaining traction, especially for GenAI workloads. He also discussed Heroku's ongoing migration to Kubernetes, aligning with industry standards to enhance its platform.
Learn more from The New Stack about Heroku:
The Data Stack Journey: Lessons from Architecting Stacks at Heroku and Mattermost
Kubernetes and the Next Generation of PaaS
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
After the XZ Utils backdoor vulnerability was uncovered in March, the OpenJS Foundation saw a surge in inquiries from potential open source JavaScript contributors. Robin Ginn, executive director of the foundation, noted that volunteer-led JavaScript communities often face challenges in managing these contributions. The discovery that a single contributor, "Jia Tan," planted the backdoor heightened vigilance, especially when new contributors requested admin privileges. Ginn emphasized that trust is not synonymous with security, especially in open source projects where maintainers must be vigilant about who can access their repositories.
The XZ vulnerability highlighted broader concerns about the security of open source software, particularly in projects with only a single maintainer. Despite receiving a significant grant from Germany's Sovereign Tech Fund, the foundation remains under-resourced, with just two full-time staffers supporting 35 projects. Ginn urged companies that rely on open source software to invest in it by hiring maintainers, ensuring these critical projects are properly supported.
Learn more from The New Stack about open source vulnerabilities:
Linux xz Backdoor Damage Could Be Greater Than Feared
Unzipping the XZ Backdoor and Its Lessons for Open Source
Linux xz and the Great Flaws in Open Source
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Paige Bailey, who began coding at age 9 in rural Texas, now leads the GenAI developer experience at Google. In a conversation with Chris Pirillo on The New Stack Makers, Bailey reflected on the evolving role of software development in the era of generative AI. While she once urged her nieces and nephews to pursue computer science degrees, Bailey now believes that critical thinking and problem-solving may be more crucial for future tech careers.
She emphasized that generative AI is democratizing software development, making it more accessible and enabling developers to focus on creative tasks rather than the minutiae of coding. Bailey's experience at Google highlights this shift, as she now acts more as a reviewer and overseer of AI-generated code. She sees GenAI not as a replacement for developers but as a tool to accelerate their creativity and tackle longstanding backlogs. Bailey believes the key is ensuring everyone understands how to effectively apply generative AI to their work.
Learn more from The New Stack about the future of development:
7 Ways to Future Proof Your Developer Job in the Age of AI
The Future of Developer Careers
4 Forecasts for the Future of Developer Relations
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Anne Currie, a leading expert in sustainable tech and part of the Green Software Foundation, discusses practical steps for building resilient, sustainable software in an episode of The New Stack Makers. With 30 years of experience, Currie co-authored Building Green Software, emphasizing the tech industry's role in the energy transition. She highlights the complexity of adapting technology to renewable energy, involving extensive research and debunking misinformation. Currie discusses the importance of energy proportionality—the idea that increased utilization improves a computer's energy efficiency—and how this concept aligns with modern DevOps practices that reduce carbon emissions while enhancing speed, cost efficiency, and security.
Currie also emphasizes architecting systems to operate on renewable power and draws parallels between managing variable grid power and internet bandwidth. Using examples like video conferencing, she illustrates how software can adapt to fluctuating resources. The episode also touches on potential pitfalls like greenwashing and the challenges in accurately naming concepts like energy proportionality.
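Energy proportionality is easy to see with a toy linear power model and made-up numbers: because idle power is drawn regardless of load, the energy cost per unit of work falls as utilization rises.

```python
# Back-of-the-envelope illustration of energy proportionality with
# made-up numbers: a server that idles at 100 W and peaks at 200 W.
IDLE_W, PEAK_W = 100.0, 200.0

def power(utilization):
    # Simple linear power model: idle draw plus load-dependent draw.
    return IDLE_W + (PEAK_W - IDLE_W) * utilization

for u in (0.1, 0.5, 0.9):
    # Energy per unit of work is proportional to power / utilization,
    # so busier machines do each unit of work more efficiently.
    print(f"utilization {u:.0%}: {power(u) / u:.0f} W per unit of work")
# 10% -> 1100 W per unit; 50% -> 300 W; 90% -> ~211 W
```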
Learn more from The New Stack about sustainability:
Sustainability: How Did Amazon, Azure, Google Perform in 2023?
Sustainability Focus: Cloud Efficiency, Not Carbon Emissions
Developers Should Press Cloud Providers on Sustainability
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
In an era marked by complexity, the golden path is essential for software architects, asserts James Watters, senior director of R&D for VMware Tanzu at Broadcom. This approach, emphasizing fewer application patterns, simplifies life for security personnel, developers, and infrastructure teams. VMware defines the golden path as streamlining software development, crucial in today's economic climate. Watters highlights this in the Broadcom report State of Cloud Native App Platforms 2024, noting that 55% of organizations favor this method for its consistency and security.
Watters, a pioneer in platform as a service since 2009, helped establish Cloud Foundry and now drives VMware Tanzu. Tanzu's golden operations offer standardized, consistent processes across platforms, crucial for efficiency and security. Watters advocates for minimal DIY in favor of operational consistency, providing commands for building, deploying, and scaling applications.
Tanzu’s focus is on integrating AI to enhance user interfaces and data access, impacting platform engineering significantly in the coming years. This integration aims to offer a better developer experience while maintaining security and efficiency.
Learn more from The New Stack about golden paths:
Golden Paths Start with a Shift Left
Platform Engineering Not Working Out? You’re Doing It Wrong.
How to Pave Golden Paths That Actually Go Somewhere
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Maintaining and ensuring the success of a microservice-based system can be challenging. Sarah Wells, a seasoned tech consultant with over 20 years of experience, offers valuable insights in her book "Enabling Microservices Success" and a discussion on The New Stack Makers podcast. Drawing from her tenure at the Financial Times (FT), Wells illustrates how transitioning to microservices and adopting DevOps and SRE practices enabled FT to accelerate software releases from 12 annually to over 20,000.
This transformation required merging IT organizations, investing in automation, and fostering team autonomy. Wells emphasizes that successful microservices adoption depends not only on developer expertise but also on organizational structures. She highlights the importance of continuous delivery and proactive communication, especially during critical periods like major news events. Additionally, she discusses the evolving roles of senior engineers and the need for flexibility in defining architectural responsibilities. Wells advocates for "engineering enablement" over "platform teams" to better support effective service management and evolution.
Learn more from The New Stack about enabling successful outcomes of microservices:
What Is Microservices Architecture?
4 Strategies for Migrating Monolithic Apps to Microservices
Continuous Improvement Metrics for Scaling Engineering Teams
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
In August 2023, the open source community rallied to create OpenTofu, an alternative to Terraform, after HashiCorp, now owned by IBM, adopted a restrictive Business Source License for Terraform. Ohad Maislish, co-founder and CEO of env0, explained on The New Stack Makers how this move sparked the initiative. A few hours after HashiCorp's license change, Maislish secured the domain opentf.org and began developing the new project, eventually named OpenTofu, which was donated to The Linux Foundation to ensure its license couldn't be altered.
Maislish highlighted the importance of distinguishing between vendor-backed and foundation-backed open source projects to avoid sudden licensing changes. Before coding, the community created a manifesto, gathering significant support and pledges, but received no response from HashiCorp. Consequently, they proceeded with the fork and development of OpenTofu. Despite accusations of intellectual property theft from HashiCorp, OpenTofu gained traction and was adopted by organizations like Oracle. The community continues to prioritize user feedback through GitHub.
Learn more from The New Stack about OpenTofu:
OpenTofu vs. HashiCorp Takes Center Stage at Open Source Summit
OpenTofu Amiable to a Terraform Reconciliation
OpenTofu 1.6 General Availability: Open Source Infrastructure as Code
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In the early days, the internet was a decentralized space created by enthusiasts. However, it has since transformed into a centralized, commerce-driven entity dominated by a few major players. The promise of the fediverse, a decentralized social networking concept, offers a refreshing alternative.
Evan Prodromou, OpenEarth Foundation's director of open technology, has been advocating for decentralized social networks since 2008, starting with his creation, Identi.ca. Unlike Twitter, Identi.ca was open source and federated, allowing independent networks to interconnect.
Prodromou, a co-author of ActivityPub—the W3C standard for decentralized networking used by platforms like Mastodon—discusses the evolution of the fediverse on The New Stack Makers podcast. He notes that small social networks dwindled to a few giants, such as Twitter and Facebook, which rarely interconnected. The acquisition of Twitter by Elon Musk disrupted the established norms, prompting users to reconsider their dependence on centralized platforms.
The fediverse aims to address these issues by allowing users to maintain relationships across different instances, ensuring a smoother transition between networks. This decentralization fosters community management and better control over social interactions.
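Under the hood, the fediverse interoperates by exchanging ActivityStreams JSON objects over ActivityPub. The sketch below builds the kind of Follow activity one server would POST (with an HTTP signature) to another actor's inbox; the actor URLs are hypothetical.

```python
# A minimal sketch of the kind of JSON object ActivityPub servers
# exchange: one actor following another. Actor URLs are hypothetical.
import json

follow_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Follow",
    "actor": "https://example.social/users/alice",
    "object": "https://other.example/users/bob",
}

# A server would POST this (with an HTTP signature) to the target
# actor's inbox endpoint to request the follow.
print(json.dumps(follow_activity, indent=2))
```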
Check out the full podcast episode to explore how tech giants like Meta are engaging with the fediverse and how to join decentralized social networks.
Learn more from The New Stack about the fediverse:
FediForum Showcases New Fediverse Apps and Developer Network
One Login: Towards a Single Fediverse Identity on ActivityPub
Web Dev 2024: Fediverse Ramps Up, More AI, Less JavaScript
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
In a recent episode of The New Stack Makers, recorded at the Open Source Summit North America, Matt Hartley, Linux support lead at Framework, discusses the importance of the "right to repair" movement. This initiative seeks to allow consumers to repair and upgrade their own electronic devices, countering the trend of disposable electronics that contribute to environmental damage. Framework, a company offering modular and customizable laptops, embodies this philosophy by enabling users to replace outdated components easily.
Hartley, interviewed by Chris Pirillo, highlights how Framework’s approach helps reduce electronic waste, likening obsolete electronics to a form of "technical debt." He shares his personal struggle with old devices, like an ASUS Eee, illustrating the need for repairable technology. Hartley also describes his role in fostering a DIY community, collaborating closely with Fedora Linux maintainers and creating user-friendly support scripts. Framework’s community is actively contributing to the platform, developing new features and hardware integrations.
The episode underscores the growing momentum of the right to repair movement, advocating for consumer empowerment and environmental sustainability.
Learn more from The New Stack about repairing and upgrading devices:
New Linux Laptops Come with Right-to-Repair and More
Troubling Tech Trends: The Dark Side of CES 2024
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Blockchain technology continues to drive innovation despite declining hype, with distributed ledger technologies (DLTs) offering secure, decentralized digital asset transactions. In an On the Road episode of The New Stack Makers recorded at Open Source Summit North America, Andrew Aitken of Hedera and Dr. Leemon Baird of Swirlds Labs discussed DLTs with Alex Williams.
Baird highlighted the Hashgraph Consensus Algorithm, an efficient, secure distributed consensus mechanism he created, leveraging a hashgraph data structure and gossip protocol for rapid, robust transaction sharing among network nodes. This algorithm, which has been open source under the Apache 2.0 license for nine months, aims to maintain decentralization by involving 32 global organizations in its governance. Aitken emphasized building an ecosystem of DLT contributors, adhering to open source best practices, and developing cross-chain applications and more wallets to enhance exchange capabilities. This collaborative approach seeks to ensure transparency in both governance and software development. For more insight into the DLT 2.0 era, listen to the full episode.
Learn more from The New Stack about distributed ledger technologies (DLTs):
IOTA Distributed Ledger: Beyond Blockchain for Supply Chains
Why I Changed My Mind About Blockchain
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The Linux xz utils backdoor exploit, discussed in an interview at the Open Source Summit 2024 on The New Stack Makers with John Kjell, director of open source at TestifySec, highlights critical vulnerabilities in the open-source ecosystem. This exploit involved a maintainer of the Linux xz utils project adding malicious code to a new release, discovered by a Microsoft engineer. This breach demonstrates the high trust placed in maintainers and how this trust can be exploited. Kjell explains that the backdoor allowed remote code execution or unauthorized server access through SSH connections.
The exploit reveals a significant flaw: the human element in open source. Maintainers, often under pressure from company executives to quickly address vulnerabilities and updates, can become targets for social engineering. Attackers built trust within the community by contributing to projects over time, eventually gaining maintainer status and inserting malicious code. This scenario underscores the economic pressures on open source, where maintainers work unpaid and face demands from large organizations, exposing the fragility of the open-source supply chain. Despite these challenges, the community's resilience is also evident in their rapid response to such threats.
Learn more from The New Stack about Linux xz utils:
Linux xz Backdoor Damage Could Be Greater Than Feared
Unzipping the XZ Backdoor and Its Lessons for Open Source
The Linux xz Backdoor Episode: An Open Source Mystery
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Suman Debnath, principal developer advocate for machine learning at Amazon Web Services, emphasized the advantages of using Python in machine learning during a New Stack Makers episode recorded at PyCon US. He noted Python's ease of use and its foundational role in the data science ecosystem as key reasons for its popularity. However, Debnath highlighted that building generative AI applications doesn't necessarily require deep data science expertise or Python.
Amazon Bedrock, AWS's generative AI framework introduced in September, exemplifies this flexibility by allowing developers to use any programming language via an API-based service. Bedrock supports various languages like Python, C, C++, and Java, enabling developers to leverage large language models without intricate knowledge of machine learning. It also integrates well with open-source libraries such as LangChain and LlamaIndex. Debnath recommends visiting the AWS community platform and GitHub for resources on getting started with Bedrock. The episode includes a demonstration of Bedrock's capabilities and its benefits for Python users.
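As a taste of that API-based flexibility, here is a hedged sketch of invoking a Bedrock-hosted model with boto3. It assumes AWS credentials with Bedrock access in us-east-1 and uses a Titan text model as the example; each model family defines its own request and response schema.

```python
# Hedged sketch of calling a Bedrock-hosted model through boto3.
# Assumes AWS credentials with Bedrock access in us-east-1; the Titan
# model ID is one example, and each model family has its own body schema.
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({"inputText": "Summarize what a vector database does."}),
)

payload = json.loads(response["body"].read())
print(payload["results"][0]["outputText"])
```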
Learn more from The New Stack about Amazon Bedrock:
Amazon Bedrock Expands Palette of Large Language Models
Build a Q&A Application with Amazon Bedrock and Amazon Titan
10 Key Products for Building LLM-Based Apps on AWS
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Nathan Peck, a senior developer advocate for generative AI at Amazon Web Services (AWS), shares his experiences working with Python in a recent episode of The New Stack Makers, recorded at PyCon US. Although not a Python expert, Peck frequently deals with Python scripts in his role, often assisting colleagues in running scripts as cron jobs. He highlights the challenge of being a T-shaped developer, possessing broad knowledge across multiple languages and frameworks but deep expertise in only a few.
Peck introduces Amazon Q, a generative AI coding assistant launched by AWS in November, and demonstrates its capabilities. The assistant can be used within an integrated development environment (IDE) such as VS Code. It assists in explaining, refactoring, fixing, and even developing new features for Python codebases. Peck emphasizes Amazon Q's ability to surface best practices from extensive AWS documentation, making it easier for developers to navigate and apply.
Amazon Q Developer is available for free to users with an AWS Builder ID, without requiring an AWS cloud account. Peck's demo showcases how this tool can simplify and enhance the coding experience, especially for those handling complex or unfamiliar codebases.
Learn more from The New Stack about Amazon Q and Amazon’s Generative AI strategy:
Amazon Q, a GenAI to Understand AWS (and Your Business Docs)
Decoding Amazon’s Generative AI Strategy
Responsible AI at Amazon Web Services: Q&A with Diya Wynn
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Mike Fiedler, a PyPI safety and security engineer at the Python Software Foundation, prefers the title “code gardener,” reflecting his role in maintaining and securing open source projects. Recorded at PyCon US, Fiedler explains his task of “pulling the weeds” in code—handling unglamorous but crucial aspects of open source contributions. Since August, funded by Amazon Web Services, Fiedler has focused on enhancing the security of the Python Package Index (PyPI). His efforts include ensuring that both packages and the pipeline are secure, emphasizing the importance of vetting third-party modules before deployment.
One of Fiedler’s significant initiatives was enforcing mandatory two-factor authentication (2FA) for all PyPI user accounts by January 1, following a community awareness campaign. This transition was smooth, thanks to proactive outreach. Additionally, the foundation collaborates with security researchers and the public to report and address malicious packages.
In late 2023, a security audit by Trail of Bits, funded by the Open Technology Fund, identified medium-severity vulnerabilities that were quickly resolved, improving PyPI's overall security. More details on Fiedler's work are available in the full interview video.
Learn more from The New Stack about PyPI:
PyPI Strives to Pull Itself Out of Trouble
Poisoned Lolip0p PyPI Packages
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The name "Falcon" for the UAE’s large language model (LLM) symbolizes the national bird's qualities of courage and perseverance, reflecting the vision of the Technology Innovation Institute (TII) in Abu Dhabi. TII, launched in 2020, addresses AI’s rapid advancements and unintended consequences by fostering an open-source approach to enhance community understanding and control of AI. In this New Stack Makers, Dr. Hakim Hacid, Executive Director and Acting Chief Researcher, Technology Innovation Institute emphasized the importance of perseverance and innovation in overcoming challenges. Falcon gained attention for being the first truly open model with capabilities matching many closed-source models, opening new possibilities for practitioners and industry.
Last June, Falcon introduced a 40-billion parameter model, outperforming the LLaMA-65B, with smaller models enabling local inference without the cloud. The latest 180-billion parameter model, trained on 3.5 trillion tokens, illustrates Falcon’s commitment to quality and efficiency over sheer size. Falcon’s distinctiveness lies in its data quality, utilizing over 80% RefinedWeb data, based on CommonCrawl, which ensures cleaner and deduplicated data, resulting in high-quality outcomes. This data-centric approach, combined with powerful computational resources, sets Falcon apart in the AI landscape.
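Since the Falcon checkpoints are openly published on Hugging Face, they can be run locally with standard tooling. Here is an illustrative sketch with the Transformers pipeline API, using the smaller 7B variant because the 40B and 180B models require substantial GPU memory:

```python
# Illustrative sketch of running an open Falcon checkpoint locally with
# Hugging Face Transformers; the 7B variant is shown because the larger
# 40B/180B models need far more GPU memory.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="tiiuae/falcon-7b",
    device_map="auto",       # place weights on available GPUs/CPU
    trust_remote_code=True,  # Falcon originally shipped custom model code
)

result = generator("Open source models matter because", max_new_tokens=40)
print(result[0]["generated_text"])
```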
Learn more from The New Stack about Open Source AI:
Open Source Initiative Hits the Road to Define Open Source AI
Linus Torvalds on Security, AI, Open Source and Trust
Transparency and Community: An Open Source Vision for AI
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Crash-level bugs continue to pose a significant challenge due to the lack of memory safety in programming languages, an issue persisting since the punch card era. This enduring problem, described as "the Joker to the Batman" by Anil Dash, VP of developer experience at Fastly, is highlighted in a recent episode of The New Stack Makers. The White House has emphasized memory safety, advocating for the adoption of memory-safe programming languages and better software measurability. The Office of the National Cyber Director (ONCD) noted that languages like C and C++ lack memory safety traits and are prevalent in critical systems. They recommend using memory-safe languages, such as Java, C#, and Rust, to develop secure software. Memory safety is particularly crucial for the US government due to the high stakes, especially in space exploration, where reliability standards are exceptionally stringent. Dash underscores the importance of resilience and predictability in missions that may outlast their creators, necessitating rigorous memory safety practices.
Learn more from The New Stack about Memory Safety:
White House Warns Against Using Memory-Unsafe Languages
Can C++ Be Saved? Bjarne Stroustrup on Ensuring Memory Safety
Bjarne Stroustrup's Plan for Bringing Safety to C++
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In the push to integrate data into development, time series databases have gained significant importance. These databases capture time-stamped data from servers and sensors, enabling the collection and storage of valuable information. InfluxData, the company behind InfluxDB, a leading open-source time series database, has partnered with Amazon Web Services (AWS) to offer a managed open-source service for time series databases.
Brad Bebee, general manager of Amazon Neptune and Amazon Timestream, highlighted the challenges customers face in managing open-source InfluxDB instances, despite appreciating the database's API and performance. To address this, AWS initiated a private beta offering a managed service tailored to customer needs. Paul Dix, co-founder and CTO of InfluxData, joined Bebee and highlighted InfluxDB's utility in tracking measurements, metrics, and sensor data in real time.
AWS's Timestream complements this by providing managed time series database services, including Timestream for LiveAnalytics and Timestream for InfluxDB. Bebee emphasized the growing relevance of time series data and customers' preference for managed open-source databases, aligning with AWS's strategy of offering such services. This partnership aims to simplify database management and improve performance for customers using time series databases.
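As a small illustration of the workload Timestream for InfluxDB targets, the sketch below writes one time-stamped measurement with the InfluxDB 2.x Python client; the URL, token, org, and bucket are placeholders.

```python
# Sketch of writing a time-stamped measurement with the InfluxDB 2.x
# Python client; URL, token, org, and bucket are placeholders.
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One point: measurement "cpu", a host tag, and a usage field.
point = Point("cpu").tag("host", "server01").field("usage", 0.64)
write_api.write(bucket="metrics", record=point)
```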
Learn more from The New Stack about time series databases:
What Are Time Series Databases, and Why Do You Need Them?
Amazon Timestream: Managed InfluxDB for Time Series Data
Install the InfluxDB Time-Series Database on Ubuntu Server 22.04
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Amazon Web Services (AWS) has added support for pgvector, an open-source extension that brings generative AI and vector capabilities to PostgreSQL databases. Sirish Chandrasekaran, general manager of Amazon Relational Database Services, explained at Open Source Summit 2024 in Seattle that pgvector lets users store vector types in Postgres and perform similarity searches, a key feature for generative AI applications.
The extension, developed by Andrew Kane and offered by AWS in services like Aurora and RDS, originally used an indexing scheme called IVFFlat but has since added Hierarchical Navigable Small World (HNSW) indexing for improved query performance.
HNSW offers a graph-based approach, enhancing the ability to find nearest neighbors efficiently, which is crucial for generative AI tasks. AWS emphasizes customer feedback and continuous innovation in the rapidly evolving field of generative AI, aiming to stay responsive and adaptive to customer needs.
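A minimal sketch of the workflow Chandrasekaran describes, using pgvector's SQL surface from Python via psycopg2: store vectors, build an HNSW index, and query nearest neighbors. It assumes a Postgres instance with the pgvector extension available; the table and values are illustrative.

```python
# Minimal pgvector similarity-search sketch, assuming a Postgres
# instance with the pgvector extension available and psycopg2 installed.
import psycopg2

conn = psycopg2.connect("dbname=demo user=postgres")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector")
cur.execute(
    "CREATE TABLE IF NOT EXISTS items (id serial PRIMARY KEY, embedding vector(3))"
)

# HNSW index (the graph-based scheme mentioned above) for L2 distance.
cur.execute(
    "CREATE INDEX IF NOT EXISTS items_hnsw ON items USING hnsw (embedding vector_l2_ops)"
)

cur.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[4,5,6]')")

# Nearest neighbors to a query vector, ordered by L2 distance (<->).
cur.execute("SELECT id FROM items ORDER BY embedding <-> '[2,3,4]' LIMIT 2")
print(cur.fetchall())
conn.commit()
```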
Learn more from The New Stack about vector databases:
Top 5 Vector Database Solutions for Your AI Project
Vector Databases Are Having a Moment – A Chat with Pinecone
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
Valkey, a Redis fork supported by the Linux Foundation, challenges Redis' new license. In this episode, recorded at Open Source Summit 2024 in Seattle, Madelyn Olson, a lead contributor to the Valkey project and former Redis core contributor, along with Ping Xie, staff software engineer at Google, and Dmitry Polyakovsky, consulting member of technical staff at Oracle, highlight concerns about the shift to a more restrictive license.
Despite Redis' free license for end users, many contributors may not support it. Valkey, with significant industry backing, prioritizes continuity and a smooth transition for Redis users. AWS, along with Google and Oracle maintainers, emphasizes the importance of open, permissive licenses for large tech companies. Valkey plans incremental updates and module development in Rust to enhance functionality and attract more engineers. The focus remains on compatibility, continuity, and consolidating client behaviors for a robust ecosystem.
Learn more from The New Stack about the Valkey project and changes to open source licensing:
Linux Foundation Backs 'Valkey' Open Source Fork of Redis
Redis Pulls Back on Open Source Licensing, Citing Stingy Cloud Services
HashiCorp's Licensing Change is only the Latest Challenge to Open Source
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
A virtual cluster, described by Loft Labs CEO Lukas Gentele at KubeCon + CloudNativeCon Paris, is a Kubernetes control plane running inside a container within another Kubernetes cluster. In this New Stack Makers episode, Gentele explained that this approach eliminates the need for numerous separate control planes, allowing virtual clusters to run in lightweight, quickly deployable containers. Loft Labs' open-sourced vcluster technology enables virtual clusters to spin up in about six seconds, significantly faster than traditional Kubernetes clusters, which can take over 30 minutes to start in services like Amazon EKS or Google GKE.
The integration of vCluster into Rancher at KubeCon Paris enables users to manage virtual clusters alongside real clusters seamlessly. This innovation addresses challenges faced by companies managing multiple applications and clusters, advocating for a multi-tenant cluster approach for improved sharing and security, contrary to the trend of isolated single-tenant clusters that emerged due to complexities in cluster sharing within Kubernetes.
Learn more from The New Stack about virtual clusters:
Navigating the Trade-Offs of Scaling Kubernetes Dev Environments
Managing Kubernetes Clusters for Platform Engineers
Join our community of newsletter subscribers to stay on top of the news and at the top of your game. https://thenewstack.io/newsletter/
When Weaveworks, known for pioneering "GitOps," shut down, concerns arose about the future of Flux, a critical open-source project. However, in this episode of The New Stack Makers podcast, recorded at Open Source Summit in Paris, Puja Abbassi, Giant Swarm's VP of product, reassured Alex Williams, founder and publisher of The New Stack, that Flux's maintenance is secure.
Major vendors, including Microsoft Azure and GitLab, have pledged support. Giant Swarm, an avid Flux user, also contributes to its development, ensuring its vitality alongside related projects like infrastructure code plugins and UI improvements. Abbassi highlighted the importance of considering a project's sustainability and integration capabilities when choosing open-source tools. He noted Argo CD's advantage in UI, emphasizing that projects like Flux must evolve to meet user expectations and avoid being overshadowed. This underscores the crucial role of community support, diversity, and compatibility within the Cloud Native Computing Foundation's ecosystem for long-term tool adoption.
Learn more from The New Stack about Flux:
End of an Era: Weaveworks Closes Shop Amid Cloud Native Turbulence
Why Flux Isn't Dying after Weaveworks
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The use of large language models (LLMs) has become widespread, but there are significant security risks associated with them. LLMs with millions or billions of parameters are complex and challenging to fully scrutinize, making them susceptible to exploitation by attackers who can find loopholes or vulnerabilities. On an episode of The New Stack Makers, Chris Pirillo, tech evangelist, and Lance Seidman, backend engineer at Atomic Form, discussed these security challenges, emphasizing the need for human oversight to protect AI systems.
One example highlighted was malicious AI models on Hugging Face, which exploited the Python pickle module to execute arbitrary commands on users' machines. To mitigate such risks, Hugging Face implemented security scanners to check every file for security threats. However, human vigilance remains crucial in identifying and addressing potential exploits.
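The pickle risk is easy to demonstrate benignly: unpickling runs whatever callable an object's __reduce__ method returns, which is exactly the behavior the malicious Hugging Face models abused.

```python
# Benign demonstration of why unpickling untrusted files is dangerous:
# pickle will happily call whatever __reduce__ tells it to.
import pickle

class Innocuous:
    def __reduce__(self):
        # A malicious file could return os.system here instead of print.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Innocuous())
pickle.loads(payload)  # prints the message: code executed on load
```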
Seidman also stressed the importance of technical safeguards and a culture of security awareness within the AI community. Developers should prioritize security throughout the development life cycle to stay ahead of evolving threats. Overall, the message is clear: while AI offers remarkable capabilities, it requires careful management and oversight to prevent misuse and protect against security breaches.
Learn more from The New Stack about AI and security:
Artificial Intelligence: Stopping the Big Unknown in Application, Data Security
Cyberattacks, AI and Multicloud Hit Cybersecurity in 2023
Will Generative AI Kill DevSecOps?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
The Kubernetes community primarily focuses on improving the development and operations experience for applications and infrastructure, emphasizing DevOps and developer-centric approaches. In contrast, the data science community historically moved at a slower pace. However, with the emergence of the AI engineer persona, the pace of advancement in data science has accelerated significantly.
Alex Williams, founder and publisher of The New Stack, co-hosted a discussion with Sanjeev Mohan, an independent analyst, which highlighted the challenges faced by data-related tasks on Kubernetes due to the stateful nature of data. Unlike applications, restarting a database node after a failure may lead to inconsistent states and data loss. This discrepancy in pace and needs between developers and data scientists led to Kubernetes and the Cloud Native Computing Foundation initially overlooking data science.
Nevertheless, Mohan noted that the pace of data engineers has increased as they explore new AI applications and workloads. Kubernetes now plays a crucial role in supporting these advancements by helping manage resources efficiently, especially considering the high cost of training large language models (LLMs) and using GPUs for AI workloads. Mohan also discussed the evolving landscape of AI frameworks and the importance of aligning business use cases with AI strategies.
Learn more from The New Stack about data development and DevOps:
AI Will Drive Streaming Data Use — But Not Yet, Report Says
The Paradigm Shift from Model-Centric to Data-Centric AI
AI Development Needs to Focus More on Data, Less on Models
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
LLM observability focuses on maximizing the utility of large language models (LLMs) by monitoring key metrics and signals. Alex Williams, founder and publisher of The New Stack, and Janakiram MSV, principal of Janakiram & Associates and an analyst and writer for The New Stack, discuss the emergence of the LLM stack, which encompasses components like LLMs, vector databases, embedding models, retrieval systems, re-ranker models, and more. The objective of LLM observability is to ensure that users can extract desired outcomes effectively from this complex ecosystem.
Similar to infrastructure observability in DevOps and SRE practices, LLM observability aims to provide insights into the LLM stack's performance. This includes monitoring metrics specific to LLMs, such as GPU/CPU usage, storage, model serving, change agents in applications, hallucinations, span traces, relevance, retrieval models, latency, and user feedback. MSV emphasizes the importance of monitoring resource usage, model catalog synchronization with external providers like Hugging Face, vector database availability, and the inference engine's functionality.
He also mentions peer companies in the LLM observability space like Datadog, New Relic, Signoz, Dynatrace, LangChain (LangSmith), Arize.ai (Phoenix), and Truera, hinting at a deeper exploration in a future episode of The New Stack Makers.
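As a rough illustration of the signals MSV describes, the following Python sketch wraps an LLM call and records latency, token counts and a slot for user feedback. The stand-in model and the crude whitespace "tokenizer" are assumptions made so the example runs on its own; they are not any vendor's instrumentation:

```python
import time
from dataclasses import dataclass


@dataclass
class LLMCallRecord:
    prompt: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    user_feedback: int | None = None  # e.g., +1 / -1 collected later


records: list[LLMCallRecord] = []


def observed_call(llm, prompt: str) -> str:
    """Wrap an LLM call and record latency and token usage."""
    start = time.perf_counter()
    completion = llm(prompt)
    latency = time.perf_counter() - start
    records.append(LLMCallRecord(
        prompt=prompt,
        latency_s=latency,
        prompt_tokens=len(prompt.split()),       # crude stand-in tokenizer
        completion_tokens=len(completion.split()),
    ))
    return completion


# Stand-in model so the sketch runs without any provider:
fake_llm = lambda p: "stub completion for: " + p
observed_call(fake_llm, "What is LLM observability?")
print(records[-1])
```

In a production stack these records would flow into a metrics backend rather than a Python list, but the shape of the data is the same.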
Learn more from The New Stack about LLM and observability
Observability in 2024: More OpenTelemetry, Less Confusion
How AI Can Supercharge Observability
Next-Gen Observability: Monitoring and Analytics in Platform Engineering
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In a conversation on The New Stack Makers, Alex Williams, TNS founder and publisher, and Charles Humble, an industry expert who has served as a software engineer, architect and CTO and is now a podcaster, author and consultant at Conissaunce Ltd., discussed why software developers and engineers should care about their impact on climate change. Humble emphasized that building software sustainably starts with better operations, leading to cost savings and improved security. He cited past successes in combating environmental issues like acid rain and the ozone hole through international agreements and emissions reduction strategies.
Despite modest growth since 2010, data centers remain significant electricity consumers, comparable to countries like Brazil. The power-intensive nature of AI models exacerbates these challenges and may lead to scarcity issues. Humble mentioned the Green Software Foundation's Maturity Matrix with goals for carbon-free data centers and longer device lifespans, discussing their validity and the role of regulation in achieving them. Overall, software development's environmental impact, primarily carbon emissions, necessitates proactive measures and industry-wide collaboration.
Learn more from The New Stack about sustainability:
What is GreenOps? Putting a Sustainable Focus on FinOps
Unraveling the Costs of Bad Code in Software Development
Can Reducing Cloud Waste Help Save the Planet?
How to Build Open Source Sustainability
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
This New Stack Makers podcast, co-hosted by Alex Williams, TNS founder and publisher, and Adrian Cockcroft, partner and analyst at OrionX.net, discussed Nvidia's GH200 Grace Hopper superchip. Industry expert Sunil Mallya, co-founder and CTO of Flip AI, weighed in on how it is revolutionizing the hardware industry for AI workloads by centralizing GPU communication, reducing networking overhead, and creating a more efficient system.
Mallya noted that despite its innovative design, challenges remain in adoption due to interface issues and the need for software to catch up with hardware advancements. However, optimism persists for the future of AI-focused chips, with Nvidia leading the charge in creating large-scale coherent memory systems. Meanwhile, Flip AI's DevOps large language model aims to interpret observability data to troubleshoot incidents effectively across various cloud platforms. In discussing the latest chip innovations and the challenges of training large language models, the episode sheds light on the evolving landscape of AI hardware and software integration.
Learn more from The New Stack about Nvidia and the future of chip design
Nvidia Wants to Rewrite the Software Development Stack
Nvidia GPU Dominance at a Crossroads
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
This New Stack Makers podcast, co-hosted by TNS founder and publisher Alex Williams and Joan Westenberg, founder and writer of Joan’s Index, discussed Copilot. Westenberg highlighted its integration with Microsoft 365 and its role as a coding assistant, showcasing its potential to streamline various tasks.
However, she also revealed its limitations, particularly in reliability. Despite being designed to assist with tasks across Microsoft 365, Copilot's performance fell short during Westenberg's tests, failing to retrieve necessary information from her email and Microsoft Teams meetings. While Copilot proves useful for coding, providing helpful code snippets, its effectiveness diminishes for more complex projects. Westenberg's demonstrations underscored both the strengths and weaknesses of Copilot, emphasizing the need for improvement, especially in reliability, to fulfill its promise as a versatile work companion.
Learn more from The New Stack about Copilot
Microsoft One-ups Google with Copilot Stack for Developers
Copilot Enterprises Introduces Search and Customized Best Practices
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
This New Stack Makers podcast, co-hosted by Adrian Cockcroft, analyst at OrionX.net, and TNS founder and publisher Alex Williams, discusses the importance of monitoring services that use large language models (LLMs) and the emergence of tools like LangChain and LangSmith to address this need. Cockcroft, formerly of Netflix, highlights the challenges posed by slow and expensive API calls to LLMs.

LangChain acts as middleware, connecting LLMs with services in much the way JDBC (Java Database Connectivity) connects applications with databases. LangChain's monitoring capabilities led to the development of LangSmith, a dedicated monitoring tool; another tool, LangKit by WhyLabs, offers similar functionality but is less tightly integrated. This reflects the typical evolution of open source projects into commercial products. LangChain recently secured funding, indicating growing interest in such monitoring solutions, and Cockcroft emphasizes the importance of enterprise-level support and tooling for integrating them into commercial environments.
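To illustrate the middleware idea, here is a minimal Python sketch of a chain that formats a prompt, calls a model, and logs the timing signals a tool like LangSmith would collect. It deliberately uses a generic Chain class rather than LangChain's actual API, and the stand-in model is an assumption for the sake of a runnable example:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-middleware")


class Chain:
    """Generic middleware: format a prompt, call the model, emit telemetry."""

    def __init__(self, template: str, llm):
        self.template = template
        self.llm = llm

    def run(self, **variables) -> str:
        prompt = self.template.format(**variables)
        start = time.perf_counter()
        answer = self.llm(prompt)
        # The latency and size signals here are exactly what makes slow,
        # expensive LLM calls worth monitoring.
        log.info("llm call took %.3fs, prompt=%d chars, answer=%d chars",
                 time.perf_counter() - start, len(prompt), len(answer))
        return answer


# Stand-in model; swap in a real provider client in practice.
echo_llm = lambda p: "stubbed answer to: " + p
chain = Chain("Summarize for an SRE audience: {text}", echo_llm)
print(chain.run(text="LangChain acts as middleware between apps and LLMs."))
```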
Learn more from The New Stack about LangChain:
LangChain: The Trendiest Web Framework of 2023, Thanks to AI
How Retool AI Differs from LangChain (Hint: It's Automation)
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers podcast, Martin Parker, a solutions architect for UST, spoke with TNS editor-in-chief Heather Joslyn about the significance of internal developer platforms (IDPs), emphasizing that their benefits extend beyond frontend developers to backend engineers and site reliability engineers (SREs).
Parker highlighted the role of IDPs in automating repetitive tasks, allowing SREs to focus on optimizing application performance. Standardization is key, ensuring observability and monitoring solutions align with best practices and cater to SRE needs. By providing standardized service level indicators (SLIs) and key performance indicators (KPIs), IDPs enable SREs to maintain reliability efficiently. Parker stresses the importance of avoiding siloed solutions by establishing standardized practices and tools for effective monitoring and incident response. Overall, the deployment of IDPs aims to streamline operations, reduce incidents, and enhance organizational value by empowering SREs to concentrate on system maintenance and improvements.
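As a sketch of what a standardized SLI looks like in practice, the following Python snippet computes an availability SLI and the remaining error budget using the standard SRE formulas; the request counts and the 99.9% target are hypothetical, not figures from UST:

```python
def availability_sli(good_requests: int, total_requests: int) -> float:
    """Fraction of requests that met the target, e.g., non-5xx responses."""
    return good_requests / total_requests


def error_budget_remaining(sli: float, slo_target: float = 0.999) -> float:
    """Share of the error budget left: 1.0 = untouched, 0.0 = exhausted."""
    allowed_failure = 1.0 - slo_target
    actual_failure = 1.0 - sli
    return 1.0 - (actual_failure / allowed_failure)


sli = availability_sli(good_requests=999_500, total_requests=1_000_000)
print(f"SLI: {sli:.4%}, error budget left: {error_budget_remaining(sli):.1%}")
```

When an IDP publishes these definitions once, every team computes reliability the same way, which is the standardization Parker describes.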
Learn more from The New Stack about UST:
Cloud Cost-Unit Economics — A Modern Profitability Model
Cloud Native Users Struggle to Achieve Benefits, Report Says
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers podcast, Ben Wilcock, a senior technical marketing architect for Tanzu, spoke with TNS editor-in-chief Heather Joslyn at KubeCon + CloudNativeCon North America about the challenges organizations face when building internal developer platforms, particularly the issue of scope.
He emphasized the difficulty for platform engineering teams to select and integrate various Kubernetes projects amid a plethora of options. Wilcock highlights the complexity of tracking software updates, new features, and dependencies once choices are made. He underscores the advantage of having a standardized approach to software deployment, preventing errors caused by diverse mechanisms.
Tanzu aims to simplify the adoption of platform engineering and internal developer platforms, offering a turnkey approach with the Tanzu Application Platform. This platform is designed to be flexible, malleable, and functional out of the box. Additionally, Tanzu has introduced the Tanzu Developer Portal, providing a focal point for developers to share information and facilitating faster progress in platform engineering without the need to integrate numerous open source projects.
Learn more from The New Stack about Tanzu and internal developer platforms:
VMware Unveils a Pile of New Data Services for Its Cloud
VMware Expands Tanzu into a Full Platform Engineering Environment
VMware Targets the Platform Engineer
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
In this New Stack Makers podcast, Mike Stefaniak, senior product manager at NGINX, and Kate Osborn, a software engineer at NGINX, discuss challenges associated with network ingress in Kubernetes clusters and introduce the Kubernetes Gateway API as a solution. Stefaniak highlights the issues that arise when multiple teams work on the same ingress, leading to friction and incidents. NGINX has also introduced the NGINX Gateway Fabric, implementing the Kubernetes Gateway API as an alternative to network ingress.
The Kubernetes Gateway API, proposed four years ago and recently made generally available, offers advantages such as extensibility. It allows referencing policies with custom resource definitions for better validation, avoiding the need for annotations. Each resource has an associated role, enabling clean application of role-based access control policies for enhanced security.
While network ingress is prevalent and mature, the Kubernetes Gateway API is expected to find adoption in greenfield projects initially. It has the potential to unite North-South and East-West traffic, offering a role-oriented API for comprehensive control over cluster traffic. The episode encourages exploring the Kubernetes Gateway API and engaging with the community to contribute to its development.
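For a sense of what the role-oriented model looks like in practice, here is a hedged Python sketch that creates an HTTPRoute with the official Kubernetes client. The Gateway, hostname and service names are hypothetical; the key point is that an application team owns only its HTTPRoute, while the referenced Gateway is a separate resource owned by the platform team, which is what makes clean role-based access control possible:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()
api = client.CustomObjectsApi()

# Hypothetical route: the app team attaches its own routing rules to a
# Gateway ("shared-gateway") that the platform team manages separately.
route = {
    "apiVersion": "gateway.networking.k8s.io/v1",
    "kind": "HTTPRoute",
    "metadata": {"name": "store-route", "namespace": "default"},
    "spec": {
        "parentRefs": [{"name": "shared-gateway"}],
        "hostnames": ["store.example.com"],
        "rules": [{
            "matches": [{"path": {"type": "PathPrefix", "value": "/cart"}}],
            "backendRefs": [{"name": "cart-svc", "port": 8080}],
        }],
    },
}

api.create_namespaced_custom_object(
    group="gateway.networking.k8s.io",
    version="v1",
    namespace="default",
    plural="httproutes",
    body=route,
)
```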
Learn more from The New Stack about NGINX and the open source Kubernetes Gateway API:
Kubernetes API Gateway 1.0 Goes Live, as Maintainers Plan for The Future
API Gateway, Ingress Controller or Service Mesh: When to Use What and Why
Ingress Controllers or the Kubernetes Gateway API? Which is Right for You?
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
TNS publisher Alex Williams spoke with Ben Kramer, co-founder and CTO of Monterey.ai, and Cole Hoffer, senior software engineer at Monterey.ai, to discuss how the company utilizes vector search to analyze user voices, feedback, reviews, bug reports, and support tickets from various channels to provide product development recommendations. Monterey.ai connects customer feedback to the development process, bridging customer support and leadership to align with user needs. Figma and Comcast are among the companies using this approach.
In this interview, Kramer discussed the challenges of building products based on large language models (LLMs), the importance of diverse skills at AI web companies, and how Monterey employs Zilliz for vector search, leveraging Milvus, an open-source vector database.
Kramer highlighted Zilliz's flexibility, underlying Milvus technology, and choice of algorithms for semantic search. The decision to choose Zilliz was influenced by its performance in the company's use case, privacy and security features, and ease of integration into their private network. The cloud-managed solution and Zilliz's ability to meet their needs were crucial factors for Monterey AI, given its small team and preference to avoid managing infrastructure.
Learn more from The New Stack about Zilliz and vector database search:
Improving ChatGPT’s Ability to Understand Ambiguous Prompts
Create a Movie Recommendation Engine with Milvus and Python
Using a Vector Database to Search White House Speeches
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
TNS host Heather Joslyn sits down with Ron Masas to discuss trade-offs when it comes to creating fast, secure applications and APIs. He notes a common issue of neglecting documentation and validation, leading to vulnerabilities. Weak authorization is a recurring problem, with instances where changing an invoice ID could expose another user's data.
Masas, an ethical hacker, highlights the risk posed by "zombie" APIs—applications that have become disused but remain potential targets. He suggests investigating frameworks, checking default configurations, and maintaining robust logging to enhance security. Collaboration between developers and security teams is crucial, with "security champions" in development teams and nuanced communication about vulnerabilities from security teams being essential elements for robust cybersecurity.
For further details, the podcast discusses case studies involving TikTok and DigitalOcean, Masas's views on AI and development, and anticipated security challenges.
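The invoice ID example Masas gives is a classic insecure direct object reference (IDOR). The following Flask sketch, invented for illustration, contrasts the vulnerable pattern with an object-level ownership check:

```python
from flask import Flask, abort, g

app = Flask(__name__)

# Toy data store; in reality this would be a database.
INVOICES = {1: {"owner": "alice", "total": 120}, 2: {"owner": "bob", "total": 75}}


def current_user() -> str:
    # Stand-in for real session/token authentication.
    return getattr(g, "user", "alice")


# Vulnerable pattern: any authenticated user can walk invoice IDs.
@app.get("/v1/invoices/<int:invoice_id>")
def get_invoice_insecure(invoice_id: int):
    invoice = INVOICES.get(invoice_id) or abort(404)
    return invoice  # no ownership check -> IDOR


# Fixed pattern: authorize the object, not just the endpoint.
@app.get("/v2/invoices/<int:invoice_id>")
def get_invoice_secure(invoice_id: int):
    invoice = INVOICES.get(invoice_id) or abort(404)
    if invoice["owner"] != current_user():
        abort(403)  # authenticated, but not authorized for this record
    return invoice
```

The fix is a one-line authorization check, which is exactly why weak authorization keeps recurring: nothing breaks when it is missing.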
Learn more from The New Stack about Imperva and API security:
What Developers Need to Know about Business Logic Attacks
Why Your APIs Aren’t Safe — and What to Do about It
The Limits of Shift-Left: What’s Next for Developer Security
Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.
This structure is important for individual contributors, Grünberg said, as well as backend engineers: “if you look at the operation teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users."
This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.
Learn more from The New Stack about Platform Engineering and Humanitec:
Platform Engineering Overview, News, and Trends
The Hype Train Is Over. Platform Engineering Is Here to Stay
Is the end of programming nigh? That's the big question posed in this episode recorded earlier in 2023. It was very popular among listeners, and with the topic being as relevant as ever, we wanted to wrap up the year by highlighting this conversation again.
If you ask Matt Welsh, he'd say yes, the end of programming is upon us. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.
Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.
Welsh is now the founder of fixie.ai, a platform that lets companies build applications on top of large language models and extend them with different capabilities.
For 40 to 50 years, programming language design has had one goal, Welsh said in the interview: make it easier to write programs.
Still, programming languages remain complex, Welsh said, and no amount of work is going to make them simple.
Learn more from The New Stack about AI and the future of software development:
Top 5 Large Language Models and How to Use Them Effectively
KubeVirt, a relatively new capability within Kubernetes, signifies a shift in the virtualization landscape, allowing operations teams to run KVM virtual machines nested in containers behind the Kubernetes API. This integration means that the Kubernetes API now encompasses the concept of virtual machines, enabling VM-based workloads to operate seamlessly within a cluster behind the API. This development addresses the challenge of transitioning traditional virtualized environments into cloud-native settings, where certain applications may resist containerization or require substantial investments for adaptation.
The emerging era of virtualization simplifies the execution of virtual machines without concerning the underlying infrastructure, presenting various opportunities and use cases. Noteworthy advantages include simplified migration of legacy applications without the need for containerization, thereby reducing associated costs.
KubeVirt 1.1, discussed at KubeCon in Chicago by Red Hat's Vladik Romanovsky and Nvidia's Ryan Hallisey, introduces features like memory hotplug and vCPU hotplug. The platform's stability now allows for the implementation of features that were previously constrained.
Learn more from The New Stack about KubeVirt and the Cloud Native Computing Foundation:
The Kubernetes landscape is evolving, shifting from the domain of visionaries and early adopters to a more mainstream audience. Tigera, represented by CEO Ratan Tipirneni at KubeCon North America in Chicago, recognizes the changing dynamics and the demand for simplified Kubernetes solutions. Tigera's open-source Calico security platform has been updated with a focus on mainstream users, presenting a cohesive and user-friendly solution. This update encompasses five key capabilities: vulnerability scoring, configuration hardening, runtime security, network security, and observability.
The aim is to provide users with a comprehensive view of their cluster's security through a zero to 100 scoring system, tracked over time. Tigera's recommendation engine suggests actions to enhance overall security based on the risk profile, evaluating factors such as egress traffic controls and workload isolation within dynamic Kubernetes environments. Tigera emphasizes the importance of understanding the actual flow of data across the network, using empirical data and observed behavior to build accurate security measures rather than relying on projections. This approach addresses the evolving needs of customers who seek not just vulnerability scores but insights into runtime behavior for a more robust security profile.
Learn more from The New Stack about Tigera and Cloud Native Security:
Cloud Native Network Security: Who’s Responsible?
Turbocharging Host Workloads with Calico eBPF and XDP
3 Observability Best Practices for Cloud Native App Security
Boeing, with around 6,000 engineers, is emphasizing open source engagement by focusing on three main themes, according to Damani Corbin, who heads Boeing's Open Source office. He joined our host, Alex Williams, for a discussion at KubeCon+CloudNativeCon in Chicago.
The first priority Corbin talks about is simplifying the consumption of open source software for developers. Second, Boeing aims to facilitate developer contributions to open source projects, fostering involvement in communities like the Cloud Native Computing Foundation and the Linux Foundation. The third theme involves identifying opportunities for "inner sourcing" to share internally developed solutions across different groups.
Boeing is actively working to break down barriers and encourage code reuse across the organization, promoting participation in open source initiatives. Corbin highlights the importance of separating business-critical components from those that can be shared with the community, prioritizing security and extending efforts to enhance open source security practices. The organization is consolidating its open source strategy by collaborating with legal and information security teams.
Corbin emphasizes the goal of making open source involvement accessible and attractive, with a phased approach to encourage meaningful contributions and ultimately enabling the compensation of engineers for open source work in the future.
Learn more from The New Stack about Boeing and CNCF open source projects:
How Open Source Has Turned the Tables on Enterprise Software
At KubeCon + CloudNativeCon North America 2022, Amazon Web Services (AWS) revealed plans to mirror Kubernetes assets hosted on Google Cloud, addressing Cloud Native Computing Foundation's (CNCF) egress costs. A year later, the project, led by AWS's Davanum Srinivas, redirects image requests to the nearest cloud provider, reducing egress costs for users.
AWS's Todd Neal and Jonathan Innis discussed this on The New Stack Makers podcast recorded at KubeCon North America 2023. Neal explained the registry's functionality, allowing users to pull images directly from the respective cloud provider, avoiding egress costs.
The discussion also highlighted AWS's recent open source contributions, including beta features in kubectl, a prerelease of containerd 2.0, and Microsoft's support for Karpenter on Azure. Karpenter, an AWS-developed Kubernetes cluster autoscaler, simplifies node group configuration, dynamically selecting instance types and availability zones based on running pods.
The AWS team encouraged developers to contribute to Kubernetes ecosystem projects and join the sig-node CI subproject to enhance kubelet reliability. The conversation in this episode emphasized the benefits of open development for rapid feedback and community collaboration.
Learn more from The New Stack about AWS and Open Source:
Powertools for AWS Lambda Grows with Help of Volunteers
Amazon Web Services Open Sources a KVM-Based Fuzzing Framework
In the past year, developers have faced both promise and uncertainty, particularly in the realm of generative AI. Heath Newburn, global field CTO for PagerDuty, joins TNS host Heather Joslyn to talk about the impact AI and other topics will have on developers in 2024.
Newburn anticipates a growing emphasis on DevSecOps in response to high-profile cyber incidents, noting a shift in executive attitudes toward security spending. The rise of automation-centric tools like Backstage signals a changing landscape in the link between development and operations tools. Notably, there's a move from focusing on efficiency gains to achieving new outcomes, with organizations seeking innovative products rather than marginal coding speed improvements.
Newburn highlights the importance of experimentation, encouraging organizations to identify areas for trial and error, learning swiftly from failures. The upcoming year is predicted to favor organizations capable of rapid experimentation and information gathering over perfection in code writing.
Listen to the full podcast episode as Newburn further discusses his predictions related to platform engineering, remote work, and the continued impact of generative AI.
Learn more from The New Stack about PagerDuty and trends in software development:
How AI and Automation Can Improve Operational Resiliency
Why Infrastructure as Code Is Vital for Modern DevOps
Operationalizing AI: Accelerating Automation, DataOps, AIOps
In this episode of The New Stack Makers, Rob Skillington, co-founder and CTO of Chronosphere, discusses the challenges engineers face in building tools for their organizations. Skillington emphasizes that the "build or buy" decision oversimplifies the issue of tooling and suggests that understanding the abstractions of a project is crucial. Engineers should consider where to build and where to buy, creating solutions that address the entire problem. Skillington advises against short-term thinking, urging innovators to consider the long-term landscape.
Drawing from his experience at Uber, Skillington highlights the importance of knowing the audience and customer base, even when they are colleagues. He shares a lesson learned when building a visualization platform for engineers at Uber, where understanding user adoption as a key performance indicator upfront could have improved the project's outcome.
Skillington also addresses the "not invented here syndrome," noting its prevalence in organizations like Microsoft and its potential impact on tool adoption. He suggests that younger companies, like Uber, may be more inclined to explore external solutions rather than building everything in-house. The conversation provides insights into Skillington's experiences and the considerations involved in developing internal tools and platforms.
Learn more from The New Stack about Software Engineering, Observability, and Chronosphere:
Cloud Native Observability: Fighting Rising Costs, Incidents
Jean Yang, founder of API observability company Akita Software, emphasizes that programming languages should be shaped by software development needs and data, rather than philosophical ideals. Yang, a former assistant professor at Carnegie Mellon University, believes that programming tools and processes should be influenced by actual use and data, prioritizing the developer experience over the language creator's beliefs. With a background in programming languages, Yang advocates for a shift away from the outdated notion that language developers are building solely for themselves.
In this discussion on The New Stack Makers, Yang underscores the importance of understanding the reality of developers' needs, especially as developer tools have evolved into a full-time industry. She argues for a focus on UX design and product fundamentals in developing tools, moving beyond the traditional mindset where developer tools were considered side projects.
Yang founded Akita to address the challenges of building reliable software systems in a world dominated by APIs and microservices. The company transitioned to API observability, recognizing the crucial role APIs play in enhancing the understandability of complex systems. Yang's commitment to improving software correctness and the belief in APIs as key to abstraction and ease of monitoring align with Postman's direction after acquiring Akita. Postman aims to serve developers worldwide, emphasizing the significance of APIs in complex systems.
Check out more episodes from The Tech Founder Odyssey series:
How Byteboard’s CEO Decided to Fix the Broken Tech Interview
Docker CTO Justin Cormack reveals that Docker has been a go-to tool for data scientists in AI and machine learning for years, primarily in specialized areas like image processing and prediction models. However, the release of OpenAI's ChatGPT last year sparked a significant surge in Docker's popularity within the AI community.
The focus shifted to large language models (LLMs), with a growing interest in the retrieval-augmented generation (RAG) stack. Docker's collaboration with Ollama enables developers to run Llama 2 and Code Llama locally, simplifying the process of starting and experimenting with AI applications. Additionally, partnerships with Neo4j and LangChain allow for enhanced support in storing and retrieving data for LLMs.
Cormack emphasizes the simplicity of getting started locally, addressing challenges related to GPU shortages in the cloud. Docker's efforts also include building an AI solution using its data, aiming to assist users in Dockerizing applications through an interactive notebook in Visual Studio Code. This tool leverages LLMs to analyze applications, suggest improvements, and generate Docker files tailored to specific languages and applications.
Docker's integration with AI technologies demonstrates a commitment to making AI and Docker more accessible and user-friendly.
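As a taste of the local-first workflow Cormack describes, here is a small Python sketch that queries a locally running Ollama server over its REST API. It assumes Ollama is installed and the llama2 model has already been pulled (e.g., with `ollama pull llama2`); the prompt is invented for illustration:

```python
import requests  # pip install requests

# Ollama serves a local HTTP API on port 11434 by default.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama2",
        "prompt": "Why run LLMs locally instead of in the cloud?",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

Because everything runs on the developer's machine, there is no GPU quota to wait on and no per-token bill while experimenting.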
Learn more from The New Stack about AI and Docker:
Artificial Intelligence News, Analysis, and Resources
In this episode, Stefano Maffulli, Executive Director of the Open Source Initiative, discusses the need for a new definition as AI differs significantly from open source software. The complexity arises from the unique nature of AI, particularly large language models and transformers, which challenge traditional copyright frameworks. Maffulli emphasizes the urgency of establishing a definition for open source AI and discusses an ongoing effort to release a set of principles by the year's end.
The concept of "open" in the context of AI is undergoing a significant transformation, reminiscent of the early days of open source. The recent upheaval at OpenAI, resulting in the removal of CEO Sam Altman, reflects a profound shift in the technology community, prompting a reconsideration of the definition of "open" in the realm of AI.
The conversation highlights the parallels between the current AI debate and the early days of software development, emphasizing the necessity for a cohesive approach to navigate the evolving landscape. Altman's ousting underscores a clash of belief systems within OpenAI, with a "safetyist" community advocating caution and transparency, while Altman leans towards experimentation. The historical significance of open source, with a focus on trust preservation over technical superiority, serves as a guide for defining "open" and "AI" in a rapidly changing environment.
Learn more from The New Stack about AI and Open Source:
Artificial Intelligence News, Analysis, and Resources
Open Source Development Threatened in Europe
The AI Engineer Foundation: Open Source for the Future of AI
DockerCon showcased a commitment to enhancing the developer experience, with a particular focus on addressing the challenge of debugging containers in Kubernetes. The newly launched Docker Debug offers a language-independent toolbox for debugging both local and remote containerized applications.
By abstracting Kubernetes concepts like pods and namespaces, Docker aims to simplify debugging processes and shift the focus from container layers to the application itself. Our guest, Docker Principal Engineer Ivan Pedrazas, emphasized the need to eliminate unnecessary complexities in debugging, especially in the context of Kubernetes, where developers grapple with unfamiliar concerns exposed by the API.
Another Docker project, Tape, simplifies deployment by consolidating Kubernetes artifacts into a single package, streamlining the process for developers. The ultimate goal is to facilitate debugging of slim containers with minimal dependencies, optimizing security and user experience in Kubernetes development.
While progress is being made, bridging the gap between developer practices and platform engineering expectations remains an ongoing challenge.
Learn more from The New Stack about Kubernetes and Docker:
Kubernetes Overview, News, and Trends
TNS host Alex Williams is joined by Florian Valeye, a data engineer at Back Market, to shed light on the evolving landscape of data engineering, particularly focusing on Delta Lake and his contributions to open source communities. As a member of the Delta Lake community, Valeye discusses the intersection of data warehouses and data lakes, emphasizing the need for a unified platform that breaks down traditional barriers.
Delta Lake, initially created by Databricks and now under the Linux Foundation, aims to enhance reliability, performance, and quality in data lakes. Valeye explains how Delta Lake addresses the challenges posed by the separation of data warehouses and data lakes, emphasizing the importance of providing ACID transactions, real-time processing, and scalable metadata.
Valeye's involvement in Delta Lake began as a response to the challenges faced at Back Market, a global marketplace for refurbished devices. The platform manages large datasets, and Delta Lake proved to be a pivotal solution in optimizing ETL processes and facilitating communication between data scientists and data engineers.
The conversation delves into Valeye's journey with Delta Lake, his introduction to the Rust programming language, and his role as a maintainer of the Rust-based library for Delta Lake. Valeye emphasizes Rust's importance in providing a high-level API with reliability and efficiency, offering a balanced approach for developers.
Looking ahead, Valeye envisions Delta Lake evolving beyond traditional data engineering, becoming a platform that seamlessly connects data scientists and engineers. He anticipates improvements in data storage optimization and envisions Delta Lake serving as a standard format for machine learning and AI applications.
The conversation concludes with Valeye reflecting on his future contributions, expressing a passion for Rust programming and an eagerness to explore evolving projects in the open-source community.
Learn more from The New Stack about Delta Lake and The Linux Foundation:
Delta Lake: A Layer to Ensure Data Quality
Liam Crilly, Senior Director of Product Management at NGINX, discussed the potential of WebAssembly (Wasm) during this recording at the Open Source Summit in Bilbao, Spain. With over three decades of experience, Crilly highlighted WebAssembly's promise of universal portability, allowing developers to build once and run anywhere across a network of devices.
While Wasm is more mature on the client side in browsers, its deployment on the server side is less developed, lacking sufficient runtimes and toolchains. Crilly noted that WebAssembly acts as a powerful compiler target, enabling the generation of well-optimized instruction set code. Despite the need for a virtual machine, WebAssembly's abstraction layer eliminates hardware-specific concerns, providing near-native compute performance through additional layers of optimization.
Learn more from The New Stack about WebAssembly and NGINX:
WebAssembly Overview, News and Trends
Why WebAssembly Will Disrupt the Operating System
Jonathan Katz, a principal product manager at Amazon Web Services, discusses the evolution of PostgreSQL in an episode of The New Stack Makers. He notes that PostgreSQL's uses have expanded significantly since its inception and now cover a wide range of applications and workloads. Initially considered niche, it faced competition from both open-source and commercial relational database systems. Katz's involvement in the PostgreSQL community began as an app developer, and he later contributed by organizing events.
PostgreSQL originated from academic research at the University of California at Berkeley in the mid-1980s, becoming an open-source project in 1994. In the mid-1990s, proprietary databases like Oracle, IBM DB2, and Microsoft SQL Server dominated the market, while open-source alternatives such as MySQL, SQLite and, later, MariaDB emerged.
PostgreSQL 16 introduces logical replication from standby servers, enhancing scalability by offloading work from the primary server. The meticulous design process within the PostgreSQL community leads to stable and reliable features. Katz mentions the development of Direct I/O as a long-term feature to reduce latency and improve data writing performance, although it will take several years to implement.
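As a rough sketch of the feature, the following Python snippet issues the standard logical replication DDL via psycopg2. The host names are hypothetical; the point of the PostgreSQL 16 change is that the subscription's connection string can target a standby rather than the primary, offloading that work:

```python
import psycopg2  # pip install psycopg2-binary

# On the publishing database (the publication also exists on its standbys):
pub = psycopg2.connect("dbname=app host=primary.example.com user=repl")
pub.autocommit = True
with pub.cursor() as cur:
    cur.execute("CREATE PUBLICATION orders_pub FOR TABLE orders;")

# On the subscribing database. CREATE SUBSCRIPTION cannot run inside a
# transaction block, hence autocommit.
sub = psycopg2.connect("dbname=analytics host=consumer.example.com user=repl")
sub.autocommit = True
with sub.cursor() as cur:
    cur.execute(
        "CREATE SUBSCRIPTION orders_sub "
        "CONNECTION 'host=standby.example.com dbname=app user=repl' "
        "PUBLICATION orders_pub;"  # with PG16, this host can be a standby
    )
```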
Amazon Web Services has built Amazon RDS on PostgreSQL to simplify application development for developers. This managed service handles operational tasks such as deployment, backups, and monitoring, allowing developers to focus on their applications. Amazon RDS supports multiple PostgreSQL releases, making it easier for businesses to manage and maintain their databases.
Learn more from The New Stack about PostgreSQL and AWS:
PostgreSQL 16 Expands Analytics Capabilities
The practice of "shift left," which involves moving security concerns to the code level and increasing developers' responsibility for security, is facing a backlash, with both developers and security professionals expressing concerns. Peter Klimek, director of technology at Imperva, discusses the reasons behind this backlash in this episode.
Some organizations may have exhausted the benefits of shift left, while the main challenge for many isn't finding vulnerabilities but finding time to address them. Security attacks are now targeting business logic vulnerabilities rather than dependencies, which shift left tools are better at identifying. These business logic vulnerabilities are often tied to authorization decisions, making them harder to address through code-level tools. Additionally, attacks increasingly focus on the front end, targeting areas such as APIs and shopping carts.
Klimek emphasizes the need for development and security teams to collaborate and advocates for using DORA metrics to assess the impact of security efforts on the development pipeline. Some organizations may reach a point where the tools added to the development lifecycle become counterproductive, he notes. DORA metrics can help determine when this occurs and provide valuable insights for security teams.
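As an illustration of the DORA idea, this small Python sketch derives two of the four metrics, deployment frequency and change failure rate, from a hypothetical deployment log; watching these numbers before and after adding a security tool is one way to spot when the pipeline has become counterproductive:

```python
from datetime import datetime, timedelta

# Hypothetical deployment log: (timestamp, caused_incident)
deploys = [
    (datetime(2023, 10, 2), False),
    (datetime(2023, 10, 4), True),
    (datetime(2023, 10, 6), False),
    (datetime(2023, 10, 9), False),
]

window = deploys[-1][0] - deploys[0][0]
deploy_frequency = len(deploys) / (window / timedelta(weeks=1))
change_failure_rate = sum(failed for _, failed in deploys) / len(deploys)

print(f"Deployment frequency: {deploy_frequency:.1f}/week")
print(f"Change failure rate: {change_failure_rate:.0%}")
```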
Learn more from The New Stack about Developer Security and Imperva:
Why Your APIs Aren’t Safe — and What to Do about It
What Developers Need to Know about Business Logic Attacks
Are Your Development Practices Introducing API Security Risks?
Operational resiliency, as explained by Dormain Drewitz of PagerDuty, involves the ability to bounce back and recover from setbacks, not only technically but also in terms of organizational recovery. True resiliency means maintaining the willingness to take risks even after facing challenges. In a conversation with Heather Joslyn on the New Stack Makers podcast, Drewitz discussed the role of AI and automation in achieving operational resiliency, especially in a context where teams are under pressure to be more productive.
Automation, including generative AI code completion tools, is increasingly used to boost developer productivity. However, this may lead to shifting bottlenecks from developers to operations, creating new challenges. Drewitz emphasized the importance of considering the entire value chain and identifying areas where AI and automation can assist. For instance, automating repetitive tasks in incident response, such as checking APIs, closing ports, or database checks, can significantly reduce interruptions and productivity losses.
PagerDuty's AI-powered platform leverages generative AI to automate tasks and create runbooks for incident handling, allowing engineers to focus on resolving root causes and restoring services. This includes drafting status updates and incident postmortem reports, streamlining incident response and saving time. Having an operations platform that can generate draft reports at the push of a button simplifies the process, making it easier to review and edit without starting from scratch.
Learn more from The New Stack about AI, Automation, Incident Response, and PagerDuty:
Operationalizing AI: Accelerating Automation, DataOps, AIOps
Three Ways Automation Can Improve Workplace Culture
In this episode, Scott Johnston, CEO of Docker, highlights the evolving role of developers, emphasizing their increasing importance in architectural decision-making and tool development for applications. This shift in prioritizing a great developer experience and rapid tool development has led to substantial spending in the industry.
Johnston expressed confidence that integrating generative AI into the developer experience will drive business growth and expand the customer base. He downplayed concerns about AI taking jobs, explaining that it would alleviate repetitive tasks, enabling developers to focus on more complex problem-solving. Johnston likened this evolution to expanding bike lanes in a city, leading to increased bike traffic, equating it to the development of more apps due to increased speed and efficiency.
In his talk with TNS host Alex Williams, Johnston emphasized that each advancement in programming languages and tools has expanded the developer market and driven greater demand for applications. Notably, the demand for over 750 million apps in the next two years, as reported by IDC, demonstrates the ever-increasing appetite for creative solutions from developers.
Overall, Johnston sees the integration of generative AI and increasing development velocity as a multifaceted expansion that benefits developers and meets growing demand for applications in the market.
Learn more from The New Stack about Generative AI and Docker:
Generative AI News, Analysis, and Resources
This episode of The New Stack Makers was recorded on the road at the Linux Foundation’s Open Source Summit Europe in Bilbao, Spain. A pair of technologists from Amazon Web Services (AWS) join us to discuss the development of Powertools for AWS Lambda. Andrea Amorosi, a senior solutions architect at AWS, and Leandro Damascena, a specialist solutions architect, share insights into how Powertools evolved from an observability tool to support more advanced use cases like ensuring workload safety, batch processing, streaming data, and idempotency.
Powertools primarily supports Python, TypeScript, Java, and .NET. The latest feature, idempotency for TypeScript, was introduced to help customers achieve best practices for developing resilient and fault-tolerant workloads. By integrating these best practices during the development phase, Powertools reduces the need for costly re-architecting and rewriting of code.
The success of Powertools can be attributed to its strong open source community, which fosters collaboration and contributions from users. AWS ensures transparency by conducting all project activities in the open, allowing anyone to understand and influence feature prioritization and contribute in various ways. Furthermore, the project's international support team offers assistance in multiple languages and time zones.
A noteworthy aspect is that 40% of new Powertools features have been contributed by the community, providing contributors with valuable networking opportunities at a prominent tech giant like AWS. Overall, Powertools demonstrates how open source principles can thrive within a major corporation, offering benefits to both the company and the open source community.
Learn more from The New Stack about Powertools, Lambda, and Amazon Web Services:
AWS Offers a TypeScript Interface for Lambda Observability
How Donating Open Source Code Can Advance Your Career
Turn AWS Lambda Functions Stateful with Amazon Elastic File System
KubeCon 2023 is set to feature three hot topics, according to Taylor Dolezal from the Cloud Native Computing Foundation. Firstly, GenAI and Large Language Models (LLMs) are taking the spotlight, particularly regarding their security and integration with legacy infrastructure. Platform engineering is also on the rise, with over 25 sessions at KubeCon Chicago focusing on its definition and how it benefits internal product teams by fostering a culture of product proliferation. Lastly, WebAssembly is emerging as a significant topic, with a dedicated day during the conference week. It is maturing and finding its place, potentially complementing containers, especially in edge computing scenarios. Wasm allows for efficient data processing before data reaches the cloud, adding depth to architectural possibilities.
Overall, these three trends are expected to dominate discussions and presentations at KubeCon NA 2023, offering insights into the future of cloud-native technology.
See what came out of the last KubeCon event in Amsterdam earlier this year:
Digital.ai, an AI-powered DevSecOps platform, serves large enterprises such as financial institutions, insurance companies, and gaming firms. The primary challenge faced by these clients is scaling their DevOps practices across vast organizations. They aim to combine modern development methodologies like agile DevOps with the need for speed and intimacy with end-users on a large scale.
This episode features a discussion between Wing To of Digital.ai and TNS host Heather Joslyn about platform engineering and the role of AI in enhancing automation. It delves into the dilemma of whether increased code production and release frequency driven by DevOps practices are inherently beneficial. Additionally, it explores the emerging challenge of AI-assisted development and how large enterprises are striving to realize productivity gains across their organizations.
Digital.ai is focused on incorporating AI into automation to assist developers in creating and delivering code while helping organizations derive more business value from their software in production. The company employs templates to capture and replicate key aspects of software delivery processes and uses AI to automate the rapid setup of developer environments and tooling. These efforts contribute to the concept of the internal developer platform, which consists of multiple toolsets for tasks like creating pipelines and setting up various components.
Learn more from The New Stack about Platform Engineering, DevSecOps and Digital.ai:
Platform Engineering Overview, News, and Trends
Moving workloads to the cloud presents cost prediction challenges. Traditional setups with on-premises hardware offer predictability, but cloud costs are usage-based and granular. In this podcast episode, Matt Stellpflug, a senior FinOps specialist at ProsperOps, discusses the complexities of forecasting cloud expenses with TNS host Heather Joslyn.
Cloud users face fluctuating costs due to continuous deployments and changing workloads. There are additional expenses for data access and transfer. Stellpflug emphasizes the importance of establishing reference workloads and benchmarks for accurate forecasting.
Engineers play a vital role in FinOps initiatives since they ensure application availability and system integrity. Stellpflug suggests collaborating with engineering teams to identify essential metrics. He co-authored an "Engineer's Guide to Cloud Cost Optimization," highlighting the distinction between resource and rate optimization. Best practices involve addressing high-impact, low-risk areas first, engaging subject matter experts for complex issues, and maintaining momentum. This episode also provides further insights into implementing FinOps for effective cloud cost management.
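A minimal sketch of the reference-workload approach Stellpflug describes, with invented numbers: measure the unit cost of a known workload, then scale it by projected usage to turn granular, usage-based pricing back into a forecast:

```python
# Hypothetical reference workload: measured cost and request volume.
reference = {"monthly_cost_usd": 4200.0, "monthly_requests": 12_000_000}

unit_cost = reference["monthly_cost_usd"] / reference["monthly_requests"]

# Forecast next month from projected traffic.
projected_requests = 15_500_000
forecast = unit_cost * projected_requests
print(f"Unit cost: ${unit_cost:.6f}/request -> forecast: ${forecast:,.0f}")
```

Real forecasts would add line items for data transfer and storage, which Stellpflug flags as commonly overlooked, but the benchmark-then-scale shape stays the same.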
Learn more from The New Stack about FinOps and ProsperOps:
FinOps Overview, News, and Trends
ProsperOps Wants to Automate Your FinOps Strategy
Engineer’s Guide to Cloud Cost Optimization: Manual DIY Optimization
Engineer’s Guide to Cloud Cost Optimization: Engineering Resources in the Cloud
Engineer’s Guide to Cloud Cost Optimization: Prioritize Cloud Rate Optimization
In her keynote address at the Linux Foundation's Open Source Summit Europe, Fatima Sarah Khalid emphasized that being an ally is more than just superficial gestures like wearing pronouns on badges or correctly pronouncing coworkers' names. True allyship involves taking meaningful actions to support and uplift individuals from underrepresented or marginalized backgrounds. This support is essential, not only in obvious ways but also in everyday interactions, which collectively create a more inclusive community.
Open source communities typically lack diversity, with only a small percentage of women, non-binary contributors, and individuals from underrepresented backgrounds. Khalid stressed the importance of improving diversity and inclusion through various means, including using inclusive language, facilitating asynchronous communication to accommodate global contributors, and welcoming non-technical contributions such as documentation.
Khalid also provided insights on making open source events more inclusive, like welcoming newcomers and marginalized groups, providing quiet spaces and enforcing a code of conduct, and partnering newcomers with mentors. Moreover, she highlighted GitLab's unique approach to allyship within the organization, including the Ally Lab, which pairs employees from different backgrounds to learn about and understand each other's experiences.
To encourage the audience to embrace allyship, Khalid shared a set of commitments to keep in mind, such as educating oneself about the experiences of marginalized groups, speaking up against inappropriate behavior, using one's voice to amplify marginalized voices, donating to support such groups, and advocating for equity and justice through social networks and connections. She also shared real-life examples of allyship, illustrating how meaningful actions can create positive change in communities.
Khalid's discussion with host Jennifer Riggins emphasizes the significance of meaningful, everyday actions to promote allyship in open source communities and organizations, ultimately contributing to a more diverse, inclusive, and equitable tech industry.
Learn more from The New Stack about Open Source, Allyship, and GitLab:
Embracing Open Source for Greater Business Impact
Leadership and Inclusion in the Open Source Community
How Implicit Bias Impacts Open Source Diversity and Inclusion
In a recent conversation at the Open Source Summit in Bilbao, Spain, Gabriel Colombo, the General Manager of the Linux Foundation Europe and the Executive Director of the Fintech Open Source Foundation, discussed the potential impact of the Cyber Resilience Act (CRA) on the open source community. The conversation shed light on the challenges and opportunities that the CRA presents to open source and how individuals and organizations can respond.
The conversation began by addressing the Cyber Resilience Act and its significance. Gabriel Colombo explained that while the Act is being touted as a measure to bolster cybersecurity and national security, it could have unintended consequences for the open source ecosystem, particularly in Europe. The Act, currently in the legislative process, aims to address cybersecurity concerns but could inadvertently hinder open source development and collaboration.
Jim Zemlin, the Executive Director of the Linux Foundation, had previously mentioned the importance of forks in open source development, emphasizing that they are a healthy aspect of the ecosystem. However, Colombo pointed out that the CRA could create a sense of unease, as it might deter people and companies from participating in open source projects or using open source software due to potential legal liabilities.
To grasp the implications of the CRA, Colombo explained some of the key provisions. The initial drafts of the Act proposed potential liability for individual developers, open source foundations, and package managers. This raised concerns about the open source supply chain's potential vulnerability and the distribution of liability.
As the Act evolves, the liability landscape has shifted somewhat. Individual developers may not be held liable unless they consistently receive donations from commercial companies. However, for open source foundations, especially those accepting recurring donations from commercial entities, there remains a concern about potential liabilities and the need to conform to the CRA's requirements.
Colombo emphasized that this issue isn't limited to Europe. It could impact the entire global open source ecosystem and affect the ability of European developers and small to medium-sized businesses to participate effectively.
The conversation highlighted the challenges open source communities face when engaging with policymakers. Open source is not structured like traditional corporations or industry consortiums, making it more challenging to present a unified front. Additionally, the legislative process can be slow and complex, which may not align with the rapid pace of technology development.
The lack of proactive engagement from the European Commission and the absence of open source communities in the initial consultations on the Act are concerning. The understanding of open source, its nuances, and the role it plays in the broader software supply chain appears limited within policy-making circles.
What Can Be Done?
Gabriel Colombo stressed the importance of awareness and education. It is vital for individuals, businesses, and open source foundations to understand the implications of the CRA. The Linux Foundation and other organizations have launched campaigns to provide information and resources to help stakeholders comprehend the Act's potential impact.
Being vocal and advocating for open source within your network, organization, and through public affairs channels can also make a difference. Engagement with policymakers, especially as the Act progresses through the legislative process, is crucial. Colombo encouraged businesses to emphasize the significance of open source in their operations and supply chains, making policymakers aware of how the CRA might affect their activities.
In the face of the Cyber Resilience Act, the open source community must unite and actively engage with policymakers. It's essential to educate and raise awareness about the potential impact of the Act and advocate for a balanced approach that strengthens cybersecurity without stifling open source innovation.
The Act's development is ongoing, and there is time for stakeholders to make their voices heard. With a united effort, the open source community can help shape the legislation to ensure that open source remains vibrant and resilient in the face of evolving cybersecurity challenges.
Learn more from The New Stack about open source and Linux Foundation Europe:
At Open Source Summit: Introducing Linux Foundation Europe
In this episode of The New Stack Makers podcast, Uma Daniel, a product manager at UST, discusses the current complexities in the global economy, marked by low unemployment except in the tech industry, high inflation, high interest rates, a volatile stock market, and the looming threat of recession. Amid these challenges, organizations are seeking ways to enhance their operational efficiency.
Daniel introduces the concept of FinOps, which goes beyond just managing cloud costs. Instead, it focuses on leveraging the cloud to generate revenue. This represents a cultural shift in many organizations, emphasizing the need for a mindset change across different departments, including business, finance, and procurement.
She dispels misconceptions, such as the belief that only certain teams should be involved in the FinOps process. Daniel stresses that it's a collaborative effort involving various teams, and it's best to adopt FinOps at the beginning of a cloud journey. Once an organization is already established in the cloud, implementing FinOps becomes more challenging.
To foster collaboration, Daniel suggests identifying team members willing to champion FinOps and forming cross-functional teams to lead the initiative. Regular committee meetings and the establishment of generic policies, such as project budgets, help control cloud spending.
This episode, hosted by Heather Joslyn, provides insights into how to initiate and implement a FinOps strategy and highlights common ways in which organizations waste cloud resources.
Learn more from The New Stack about FinOps and UST:
Cloud Cost-Unit Economics — A Modern Profitability Model
What Is FinOps? Understanding FinOps Best Practices for Cloud
Since the release of OpenAI's ChatGPT in late 2022, various industries have been actively exploring its applications. Madhukar Kumar, CMO of SingleStore, discussed his experiments with large language models (LLMs) in this podcast episode with TNS host Heather Joslyn. He mentioned a specific LLM called Gorilla, which is trained on APIs and can generate APIs based on specific tasks. Kumar also talked about SingleStore Now, an AI conference, where they plan to teach attendees how to build generative AI applications from scratch, focusing on enterprise applications.
Kumar highlighted a limitation of current LLMs: they are "frozen in time" and cannot provide real-time information. To address this, a method called "retrieval augmented generation" (RAG) has emerged, and SingleStore is using it to keep LLMs updated. In this approach, a user query is first matched against up-to-date enterprise data to provide context, and the LLM is then asked to generate answers based on that context. This method helps prevent factually incorrect responses and relies on storing data as vectors for efficient real-time processing, which SingleStore enables.
This strategy ensures that LLMs can provide current and contextually accurate information, making AI applications more reliable and responsive for enterprises.
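To make that flow concrete, here is a minimal, self-contained sketch of the RAG pattern Kumar describes. The toy word-overlap retriever is a stand-in for the embedding model and vector store a production system would use; the documents and names are invented for the example.

```python
import re

# Minimal RAG sketch: retrieve context first, then ground the LLM in it.

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    # Toy retriever: rank documents by word overlap with the query.
    # A real system would compare embedding vectors in a vector store.
    q = tokens(query)
    return sorted(docs, key=lambda d: len(q & tokens(d)), reverse=True)[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Step 1: match the query against up-to-date enterprise data for context.
    context = "\n".join(retrieve(query, docs))
    # Step 2: ask the LLM to answer only from that context, curbing stale
    # or fabricated answers.
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = ["Q3 revenue grew 12 percent.", "The Berlin office opened in May."]
print(build_prompt("How did Q3 revenue change?", docs))
```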
Learn more from The New Stack about LLMs and SingleStore:
Top 5 Large Language Models and How to Use Them Effectively
Observability in multi-cloud environments is becoming increasingly complex, as highlighted by Martin Mao, CEO and co-founder of Chronosphere. This challenge has two main components: a rise in customer-facing incidents, which demand significant engineering time for debugging, and the ineffectiveness and high cost of existing tools. These issues are creating a problematic return on investment for the industry.
Mao discussed these observability challenges on The New Stack Makers podcast with host Heather Joslyn, emphasizing the need to help teams prioritize alerts and encouraging a shift left approach for security responsibility among developers. With the adoption of distributed cloud architectures, organizations are not only dealing with a surge in data but also facing a cultural shift towards DevOps, where developers are expected to be more accountable for their software in production.
Historically, operations teams handled software in production, but in the cloud-native world, developers must take on these responsibilities themselves. Many current observability tools were designed for centralized operations teams, which creates a gap in addressing developer needs.
Mao suggests that cloud-native observability tools should empower developers to run and maintain their software in production, providing insights into the complex environments they work in. Moreover, observability tools can assist developers in understanding the intricacies of their software, such as its dependencies and operational aspects.
To streamline the data obtained from observability efforts and manage costs, Chronosphere introduced the "Observability Data Optimization Cycle." This framework starts with establishing centralized governance to set budgets for teams generating data. The goal is to optimize data usage to extract value without incurring unnecessary costs. This approach applies financial operations (FinOps) concepts to the observability space, helping organizations tackle the challenges of cloud-native observability.
Learn more from The New Stack about Observability and Chronosphere:
Observability Overview, News and Trends
4 Key Observability Best Practices
Platform engineering is gaining prominence due to the need for faster application deployment, which directly impacts business velocity. Valentina Alaria, Senior Director of Product at VMware, emphasizes that not all organizations pursuing platform engineering have the same goals, context, or pain points. They tailor solutions to each organization's specific needs. Some focus on rapid onboarding for junior developers, while others aim to reduce complexity, friction, and support larger development teams with fewer operational staff.
Platform engineering aims to streamline collaboration between developers and operations engineers. Developers want portable code and the ability to focus on coding without worrying about production requirements. Operations engineers and platform teams seek a seamless environment for deploying applications in different contexts.
Successful platform engineering initiatives involve strong collaboration models, fostering a cooperative approach rather than a siloed one. The goal is to create applications and value for the organization by facilitating effective interaction between developers and operations engineers.
This podcast episode, hosted by Alex Williams of TNS, also delves into VMware Tanzu's latest tools for supporting platform engineering.
Learn more from The New Stack about platform engineering and VMware Tanzu:
Platform Engineering Overview, News and Trends
6 Patterns for Platform Engineering Success
ByConity is an open source project that emerged from ByteDance's use of ClickHouse, an open-source database system, to address its growing data volume. ByConity focuses on enhancing the separation of compute and storage, improving multitenancy support, and optimizing query performance in cloud-native environments.
Vini Jaiswal, a principal developer advocate at ByteDance, TikTok's parent company, highlights the power of open source in fostering innovation and collaboration. She shares her personal experience of leveraging open source to solve problems quickly and efficiently. She emphasizes the importance of getting involved in open source, even for those who might be hesitant, and suggests starting by identifying a pain point and making small contributions.
ByConity's architecture, which separates compute and storage, offers benefits like preventing data lake corruption, read and write separation, elasticity, and scalability. Jaiswal also mentions her previous experience with open source during her time at Citibank, where she realized how open source accelerated digital transformations.
Throughout the conversation, Jaiswal underscores the strength of open source communities in collectively addressing challenges. She encourages listeners to embrace open source and start contributing, emphasizing how even small contributions can lead to significant impacts over time.
The episode also delves into Jaiswal's involvement with other open source projects, such as PyTorch, and explores the intersection of open source and generative AI.
Learn more from The New Stack about open source and cloud native environments:
What Is 'Cloud Native' (and Why Does It Matter)?
Along with the emergence and ascension of platform engineering, this episode discusses the role that Humanitec plays in helping organizations establish platforms for developers, as well as Backstage, a popular open source internal developer platform that Spotify built for its own developers.
An IDP, our guest Kaspar Von Grünberg explained, is a standardized interface for developers to build applications using a golden path of vetted tools and libraries, allowing a high degree of efficiency for both the developers and the engineers who support them. An IDP can include an integration and delivery plane, a continuous integration registry, a platform orchestrator, observability tools and a resource plane.
"How you're consuming this is a little bit up to the individual preference of the user, and what the platform team has configured for you. So we're seeing some teams like to use a user interface and some teams like to use code based interactions," Von Grünberg explained.
In some ways, an IDP is reminiscent of the platform-as-a-service packages of a decade ago. Those were also designed to improve developer efficiency, though devs chafed at the limited number of tools they were allowed to use in such walled gardens. That was a mistake, Von Grünberg said.
Those platforms required developers to use a small set of pre-defined tools.
"We don't want to get back to those times, which is why we want to provide sensible defaults," Von Grünberg said. A good IDP will provide developers with "golden paths" or "paved roads" as Netflix calls them.
"Developers can stay on those paths if they want," Von Grünberg said. They can enjoy the security default and service-level agreements (SLAs) from the engineers. But developers are also free to leave the path and make low-level configurations on their own as well.
"Good platform engineering is never about covering all the use cases," he said.
Learn more from The New Stack about platform engineering and Humanitec:
Platform Engineering Overview, News, and Trends
How to Pave Golden Paths That Actually Go Somewhere
Build Your IDP at Light Speed with a Platform Reference Architecture
In this episode of The New Stack Makers podcast, technologist and author John Willis emphasized caution when considering AI solutions from vendors. He advised against blindly following vendor recommendations for "one-size-fits-all" AI products, likening it to vendors of the past discouraging customers from learning Java in favor of purchasing a product.
Willis stressed that DevOps serves as an example of how human expertise, not just products, solves problems. He urged C-level executives to first understand AI's intricacies and then make informed purchasing decisions, suggesting a "DevOps redo" to encourage experimentation and collaboration, similar to the early days of the DevOps movement.
Willis highlighted that early adopters of DevOps, like successful banks, heavily invested in developing their human capital. He cautioned against hasty product purchases, as the AI landscape is rife with startups that may quickly disappear or be acquired by larger companies.
Instead, Willis advocated for educating teams on effective data management techniques, including retrieval augmentation, to fine-tune large language models. He emphasized the need for data cleansing to build robust data pipelines and prevent LLMs from generating undesirable code or sensitive information.
According to Willis, the process becomes enjoyable when done correctly, especially for companies using LLMs at scale with retrieval augmentation. To ensure success, he suggested adding governance and structure, including content moderation and red-teaming of data, which vendors may not prioritize in their offerings.
Learn more from The New Stack about DevOps and AI:
AIOps: Is DevOps Ready for an Infusion of Artificial Intelligence?
Deliveroo, a prominent food delivery company, relies on Apache Flink, a distributed processing engine, to enhance its three-sided marketplace, connecting delivery drivers, restaurants, and customers. Seeking to improve real-time data streaming and gain insights into customer behavior, Deliveroo transitioned to Flink, comparing it to alternatives like Apache Spark and Kafka Streams. Flink offered feature parity with their previous platform, along with stability and scalability. They initially experimented with Flink on Kubernetes but turned to the Amazon Managed Service for Flink (MSF) for enhanced support and maintenance.
Engineers from Deliveroo, Felix Angell and Duc Anh Khu, emphasized the need for flexibility in data modeling to accommodate their fast-paced product development. However, flexibility can be complex, often requiring data model adjustments. They expressed the desire for a self-serve configuration feature in MSF, allowing easy customization of low-level settings and auto-scaling based on application metrics. This move to Flink and MSF has empowered Deliveroo to focus on core responsibilities like continuous integration and delivery while efficiently managing their data processing needs.
Learn more from The New Stack about Apache Flink and AWS:
Kinesis, Kafka and Amazon Managed Service for Apache Flink
Over the past five to ten years, the testing of microservices has seen significant growth. This surge in testing can be attributed to the increasing adoption of microservices and Kubernetes, which signify a shift away from monolithic application architectures. Bruno Lopes, a leader at Kubeshop, an incubator for Kubernetes projects, noted this trend. Kubeshop has initiated six Kubernetes projects, including Testkube, a Kubernetes-native testing framework led by Lopes.
This rise in testing is making it more accessible to a wider audience and is enhancing the developer experience through automation. Developers now have more time to focus on innovation rather than manual testing. However, there is often a disconnect between development and testing, as developers move quickly, outpacing organizational adaptation to modern testing methods.
Lopes emphasized the importance of testing before production deployment and advocated for creating production-resembling testing environments that allow for rapid deployment without waiting for manual tests. This approach is particularly critical for Site Reliability Engineering (SRE) teams who need to respond quickly to issues and minimize downtime for customers. In some cases, it's necessary to run tests within Kubernetes itself, a concept that may take time for companies to fully embrace as the developer experience continues to improve.
Learn more from The New Stack about Kubernetes, Testing and Testkube:
Testkube: A Cloud Native Testing Framework for Kubernetes
Apache Flink is an open-source framework and distributed processing engine designed for data analytics. It excels at handling tasks such as data joins, aggregations, and ETL (Extract, Transform, Load) operations. Moreover, it supports advanced real-time techniques like complex event processing.
In this episode, Deepthi Mohan and Nagesh Honnalii from AWS discussed Apache Flink and the Amazon Managed Service for Apache Flink (MSF) with our host, Alex Williams. MSF is a service that caters to customers with varying infrastructure preferences. Some prefer complete control, while others want AWS to handle all infrastructure-related aspects.
Use cases for MSF can be grouped into three categories. First, there's streaming ETL, which involves tasks like log aggregation for later auditing. Second, it supports real-time analytics, enabling customers to create dashboards for tasks like fraud detection. Third, it handles complex event processing, where data from multiple sources is joined and aggregated to extract meaningful insights.
The origins of MSF trace back to the evolution of real-time data services within AWS. In 2013, AWS introduced Amazon Kinesis, while the open-source community developed Apache Kafka. These services paved the way for MSF by highlighting the need for real-time data processing.
To provide more flexibility, AWS launched Kinesis Data Analytics in 2016, allowing customers to write code in JVM-based languages like Java and Scala. In 2018, AWS decided to incorporate Apache Flink into its Kinesis Data Analytics offering, leading to the birth of MSF.
Today, thousands of customers use MSF, and AWS continues to enhance its offerings in the real-time data processing space, including the launch of Amazon MSK (Managed Streaming for Apache Kafka). To align with its foundation on Flink, AWS rebranded Kinesis Data Analytics for Apache Flink to Amazon Managed Service for Apache Flink, making it clearer for customers.
Learn more from The New Stack about AWS and Apache Flink:
Apache Flink for Real Time Data Analysis
Modern developer conferences like the upcoming Infobip Shift Conference in Croatia are centered around themes. At this particular event for developers, you can expect a lot of focus to be on the developer experience and artificial intelligence (AI).
Ivan Burazin, Chief Development Experience Officer at Infobip, joined us on the show and emphasized that developers spend a substantial portion of their time not coding, often losing 50% to 70% of their productive hours to non-coding activities such as setting up environments, running tests, and building code. This highlights the importance of improving the developer experience to enhance productivity.
The developer experience has both internal and external dimensions. Externally, it impacts customer experience, while internally, it influences development velocity. A better developer experience translates to faster and more efficient coding.
The Shift Conference will feature talks on six stages, one of which will focus on the developer experience, addressing its internal and external aspects. Additionally, AI will take center stage at another segment of the conference.
Although there may not be an abundance of true AI experts taking the stage, the focus will be on how individuals and companies can leverage AI to create products and services. It's recognized that AI will play a pivotal role in the future of every industry, and the conference aims to explore practical applications and strategies for integrating AI into various businesses.
Overall, the Shift Conference aims to address the challenges developers face in optimizing their productivity and explore the growing importance of AI in shaping the future of businesses and products.
Learn more from The New Stack about the developer experience and Infobip Shift:
7 Principles and 10 Tactics to Make You a 10x Developer
This episode delves into Apache Flink, a versatile platform for executing both batch and real-time streaming data analysis tasks. This session marks the beginning of a three-part series unveiling Amazon Web Services' (AWS) new managed service built on Flink. Future episodes will explore this service in detail and examine customer experiences.
The podcast features insights from Danny Cranmer, a principal engineer at AWS and an Apache Flink PMC and Committer, along with Hong Teoh, a software development engineer at AWS.
Flink stands out as a high-level framework for defining data analytics jobs, accommodating both batch and streaming data sets. It offers APIs for building analysis jobs in various languages, including Java, Python, and SQL. Flink also provides a distributed job execution engine with fault tolerance and horizontal scaling capabilities.
One prominent use case is Extract-Transform-Load (ETL), where raw data is swiftly processed for specific workloads. Flink excels in delivering low-latency transformations for unbounded data streams. Additionally, Flink supports event-driven applications, responding immediately to triggers such as user requests for weather data.
Flink ensures exactly-once processing, critical for scenarios like financial transactions. It employs checkpoints to maintain data integrity in case of node failures.
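For readers who want to see that knob in code, here is a minimal sketch, assuming the PyFlink (apache-flink) Python package, of a job opting into checkpointing with exactly-once mode; the interval and job are illustrative.

```python
# Minimal PyFlink sketch: enable checkpointing for exactly-once state guarantees.
from pyflink.datastream import StreamExecutionEnvironment, CheckpointingMode

env = StreamExecutionEnvironment.get_execution_environment()
# Snapshot job state every 10 seconds; after a node failure, Flink restores
# the latest checkpoint so each record affects state exactly once.
env.enable_checkpointing(10_000, CheckpointingMode.EXACTLY_ONCE)

env.from_collection([1, 2, 3]).map(lambda i: i * 2).print()
env.execute("checkpointed-job")
```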
The podcast also touches on AWS's role in supporting the open-source Flink project and the future outlook for this powerful data processing framework.
Learn more from The New Stack about Apache Flink:
3 Reasons Why You Need Apache Flink for Stream Processing
In an interview with The New Stack, renowned technologist Adrian Cockcroft discussed the process of fine-tuning Large Language Models (LLMs) through prompt engineering. Cockcroft, known for his roles at Netflix and Amazon Web Services, explained how to obtain tailored programming advice from an LLM. By crafting specific prompts like asking the model to provide code in the style of a certain expert programmer, such as Java's James Gosling, users can guide the AI's output.
Prompt engineering involves setting up conversations to bias the AI's responses. These prompts are becoming more advanced with plugins and loaded information that shape the model's behavior before use. Cockcroft highlighted the concept of fine-tuning, where models are adapted beyond what a prompt can contain. Companies are incorporating vast amounts of their internal data, like wiki pages and corporate documents, to train the model to understand their specific domain and processes.
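As a rough illustration of that setup step, the sketch below primes a conversation with a system message before the user's request is seen; the complete() function is a hypothetical stand-in for whichever chat-completion API is in use.

```python
# Prompt engineering as conversation setup: the system message biases
# every subsequent answer the model gives.
messages = [
    {"role": "system",
     "content": "You are an expert Java programmer. Answer in the style of "
                "James Gosling: clear, defensive, well-commented code."},
    {"role": "user", "content": "Write a method that reverses a linked list."},
]

def complete(msgs: list[dict]) -> str:
    # Hypothetical stand-in for a real chat-completion call.
    return f"[reply biased by system prompt: {msgs[0]['content'][:45]}...]"

print(complete(messages))
```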
Cockcroft pointed out the efficacy of ChatGPT within certain tasks, illustrated by his experience using it for data analysis and programming assistance. He also discussed the growing need for improved results from LLMs, which has led to the demand for vector databases. These databases store word meanings as vectors with associated weights, enabling fuzzy matching for enhanced information retrieval from LLMs. In essence, Cockcroft emphasized the multifaceted process of shaping and optimizing LLMs through prompt engineering and fine-tuning, reflecting the evolving landscape of AI-human interactions.
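As a toy illustration of that idea, the numpy-only sketch below ranks stored vectors by cosine similarity, so near meanings match even when keys differ; the three-dimensional vectors are invented for the example (real embeddings have hundreds or thousands of dimensions).

```python
import numpy as np

# Word meanings stored as vectors; similar meanings point in similar directions.
store = {
    "feline": np.array([0.9, 0.1, 0.0]),
    "canine": np.array([0.8, 0.3, 0.1]),
    "engine": np.array([0.0, 0.2, 0.9]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.85, 0.15, 0.05])  # an embedding for, say, "cat"
ranked = sorted(store, key=lambda k: cosine(query, store[k]), reverse=True)
print(ranked)  # ['feline', 'canine', 'engine']: fuzzy match by meaning
```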
Learn more from The New Stack about LLMs and Prompt Engineering:
Top 5 Large Language Models and How to Use Them Effectively
The Pros (And Con) of Customizing Large Language Models
Prompt Engineering: Get LLMs to Generate the Content You Want
TechWorld with Nana is one of the most popular resources for people looking to get into or progress a DevOps career. Nana Janashia, the creator of TechWorld with Nana, is a DevOps trainer and consultant who joined us to discuss why DevOps is needed now more than ever and how this is the perfect time to begin a career in DevOps.
Host Alex Williams and Nana go over the key concepts of DevOps. Then they talk about how the complexity of tools can sidetrack and complicate the learning process for those new to DevOps, and why focusing on concepts rather than tools is the way to go. Before wrapping up the conversation, they also cover the best ways for newcomers to get involved in DevOps.
Nana's journey into DevOps commenced during her time as an engineer in Austria, where she began exploring Kubernetes. As inquiries from colleagues poured in, she recognized her knack for demystifying complex topics, catalyzing her passion for teaching. Viewers attest to switching to DevOps careers after watching her videos.
Throughout the conversation, we learned how people can discover the world of DevOps through TechWorld with Nana as an expert guide. With a large YouTube audience, online courses, workshops, and corporate training, Nana has empowered countless individuals in advancing their DevOps expertise. The six-month boot camps from TechWorld with Nana encompass a comprehensive curriculum, starting with fundamentals and culminating in hands-on programming abilities, Python automation, configuration management, and Prometheus-based monitoring.
Nana underscores that DevOps, still a relatively nascent profession, suffers from role ambiguity both among engineers and within companies aspiring to implement it. This confusion stems from differing workflows and environments when engineers switch jobs. Nana's insights bring clarity to these challenges, acknowledging the evolving chaos of the DevOps culture and its driving force for innovation in managing intricate distributed technologies.
Learn more about DevOps from TNS, Roadmap (our sister site), and TechWorld with Nana:
TechWorld with Nana - DevOps Bootcamp
Explore the complex intersection of AI and open source with insights from experts in this illuminating discussion. Amanda Brock, CEO of OpenUK, reveals the challenges in labeling AI as open source amidst legal ambiguities. The dialogue, led by TNS host Alex Williams, delves into the evolution of open source licensing, its departure from traditional models, and the complications arising from applying open source principles to AI, which encompasses sensitive data governed by privacy laws.
The focus turns to "Llama 2," a contentious example where Meta labeled their language model as open source, sparking confusion. Notable guests Erica Brescia, Managing Director at Redpoint Ventures, and Steven Vaughan-Nichols, founder of Open Source Watch, weigh in on this topic. Brock emphasizes that AI's complexity prevents it from aligning with the Open Source Definition, necessitating a clear distinction between open innovation and open source.
Amidst these debates, the Open Source Initiative (OSI) is crafting a new definition tailored for AI, sparking anticipation and discussion about its implications. The necessity for an evolved understanding of open source and its licenses is underscored, as the rapid evolution of technology challenges established norms. The journey concludes with reflections on vendors transitioning from open source licenses to Server Side Public License (SSPL) due to cloud-related considerations, raising questions about the future of open source in a dynamically changing tech landscape.
Learn more from The New Stack about open source and AI:
Open Source May Yet Eat Google's and OpenAI's AI Lunch
Discover how large language models and generative AI are revolutionizing DevOps with PromptOps. The company, initially known as CtrlStack, introduces its unique process engine that comprehends human requests, reads knowledge bases, and generates code on the fly to accomplish tasks. Dev Nag, the CEO, explains how PromptOps saves users time and money by automating routine operations in this podcast episode with The New Stack.
Dev Nag is joined by GK Brar, PromptOps' founding engineer, and our host Joab Jackson as they delve into the concept of generative AI and its potential benefits for DevOps. Traditionally, DevOps tasks often involve repetitive troubleshooting and reporting, making automation essential. PromptOps specializes in intent matching, understanding nuanced requests and providing the right solutions.
Notably, PromptOps employs generative AI offline to prepare for automating common actions and enhancing the user experience. Unlike others, PromptOps aims beyond simple enhancements. It aspires to transform the entire DevOps landscape by leveraging this groundbreaking technology.
Tune in to the podcast to gain deeper insights into this transformative approach that PromptOps brings to DevOps thanks to the power and possibilities of generative AI.
Learn more from The New Stack about DevOps and PromptOps:
DevOps News, Trends, Analysis and Resources
In this episode, Matt Butcher, CEO of Fermyon Technologies, discusses the potential impact of the component model on WebAssembly (Wasm) and its integration into the cloud-native landscape. WebAssembly is a binary instruction format enabling code to run anywhere, written in developers' preferred languages. The component model aims to provide a common way for WebAssembly libraries to express their needs and connect with other modules, reducing the barriers and maintenance of existing libraries. Butcher believes this model could be a game changer, allowing new languages to compile to WebAssembly and utilize existing libraries seamlessly.
WebAssembly also shows promise in delivering on the long-awaited potential of serverless computing. Unlike traditional virtual machines and containers, WebAssembly boasts a rapid startup time and addresses various developer challenges. Butcher states that developers have been eagerly waiting for a platform with these characteristics, hinting at a potential resurgence of serverless. He clarifies that WebAssembly is not a "Kubernetes killer" but can coexist with container technologies, evident from the Kubernetes ecosystem's interest in supporting WebAssembly.
The episode explores further developments in WebAssembly and its potential to play a central role in the cloud-native ecosystem.
Learn more from The New Stack about WebAssembly and Fermyon Technologies:
WebAssembly Overview, News, and Trends
Fermyon Cloud: Save Your WebAssembly Serverless Data Locally
Building and deploying applications in the cloud offers significant advantages, primarily driven by the scalability it provides. Developers appreciate the speed and ease with which cloud-based infrastructure can be set up, allowing them to scale rapidly as long as they have the necessary resources. However, the very scale that makes cloud computing attractive also poses serious risks.
The risk lies in the potential for developers to make mistakes in application building, which can lead to widespread consequences when deployed at scale. Cloud-focused attacks have seen a significant increase, tripling from 2021 to 2022, as reported in the Cloud Risk Report by CrowdStrike.
The challenges in securing the cloud are exacerbated by its relative novelty, with organizations still learning about its intricacies. The newer generation of adversaries is adept at exploiting cloud weaknesses and finding ways to attack multiple systems simultaneously. Cultural issues within organizations, such as the tension between security professionals and developers, can further complicate cloud protection.
To safeguard cloud infrastructure, best practices include adopting the principle of least privilege, regularly evaluating access rights, and avoiding hard-coding credentials. Ongoing hygiene and assessments are crucial in ensuring that access levels are appropriate and minimizing risks of cloud-focused attacks.
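As a small illustration of the no-hard-coded-credentials practice, here is a brief Python sketch; the variable name and error message are illustrative, and a real deployment would typically populate the environment from a secrets manager.

```python
import os

def get_db_password() -> str:
    # Secrets come from the environment (injected by a secrets manager or
    # CI pipeline), never from source control.
    password = os.environ.get("DB_PASSWORD")
    if not password:
        # Fail loudly instead of falling back to a baked-in default.
        raise RuntimeError("DB_PASSWORD is not set; fetch it from your secrets store")
    return password
```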
Overall, understanding and addressing the risks associated with cloud deployments are vital as cloud-native adversaries grow increasingly sophisticated. Implementing proper security measures, along with staying up-to-date on runtime security and avoiding misconfigurations, are essential in safeguarding cloud-based applications and data.
Elia Zaitsev of CrowdStrike joined TNS host Heather Joslyn for this conversation on the heels of the release of their Cloud Risk Report.
Learn more from The New Stack about cloud security and CrowdStrike:
Cloud-Focused Attacks Growing More Frequent, More Brazen
In this episode of The New Stack Makers, Purnima Padmanabhan, a senior vice president at VMware, discusses three common mistakes organizations make when trying to move faster in meeting customer needs. The first mistake is equating application modernization with solely moving to the cloud, often resulting in a mere lift and shift of applications, without reaping the full benefits. The second mistake is a lack of automation, particularly in operations, which hinders the development process's speed. The third mistake involves adding unnecessary complexity by adopting new technologies or procedures, which slows down developers.
As a solution, Padmanabhan introduces the concept of platform engineering, which not only accelerates development but also reduces toil for operations engineers and architects. However, many organizations struggle with implementing it effectively, as they often approach platform engineering in fragmented ways, investing in separate components without fully connecting them.
To succeed in adopting platform engineering, Padmanabhan emphasizes the need for a mindset shift. The platform team must treat platform engineering as a continuously evolving product rather than a one-time delivery, ensuring that service-level agreements are continuously met, and regularly updating and improving features and velocity. The episode discusses the benefits of a well-implemented "golden path" for entire organizations and provides insights on how to start a platform engineering team.
Learn more from The New Stack about Platform Engineering and VMware:
Platform Engineering Overview, News and Trends
In this episode of The New Stack Makers, Peter Klimek, director of technology in the Office of the CTO at Imperva, discusses the vulnerability of business logic in a distributed, cloud-native environment. Business logic refers to the rules and processes that govern how applications function and how users interact with them and other systems. Klimek highlights the increasing attacks on APIs that exploit business logic vulnerabilities, with 17% of attacks on APIs in 2022 coming from malicious bots abusing business logic.
The attacks on business logic take various forms, including credential stuffing attacks, carding (testing stolen credit cards), and newer forms like influence fraud, where algorithms are manipulated to deceive platforms and users. Klimek emphasizes that protecting business logic requires a cross-functional approach involving developers, operations engineers, security, and fraud teams.
To enhance business logic security, Klimek recommends conducting a threat modeling exercise within the organization, which helps identify potential risk vectors. Additionally, he suggests referring to the Open Web Application Security Project (OWASP) website's list of automated threats as a checklist during the exercise.
Ultimately, safeguarding business logic is crucial in securing cloud-native environments, and collaboration among various teams is essential to effectively mitigate potential threats and attacks.
More from The New Stack, Imperva, and Peter Klimek:
Why Your APIs Aren’t Safe — and What to Do about It
Zero-Day Vulnerabilities Can Teach Us About Supply-Chain Security
In this episode of The New Stack Makers podcast, the focus is on the challenges of handling unstructured data in today's data-rich world and the potential solutions offered by vector databases and vector searches. The use of relational databases is limited when dealing with text, images, and voice data, which makes it difficult to uncover meaningful relationships between different data points.
Vector databases, which facilitate vector searches, have become increasingly popular for addressing this issue. They allow organizations to store, search, and index data that would be challenging to manage in traditional databases. Semantic search and Large Language Models have sparked interest in vector databases, providing developers with new possibilities.
Beyond standard applications like information search and recommendation bots, vector searches have also proven useful in combating copyright infringement. Social media companies like Facebook have pioneered this approach by using vectors to check copyrighted media uploads.
Vector databases excel at finding similarities between data objects, as they operate in vector spaces and perform approximate nearest neighbor searches, sacrificing a bit of accuracy for increased efficiency. However, developers need to understand their specific use cases and the scale of their applications to make the most of vector databases and search.
Frank Liu, the director of operations at Zilliz, advised listeners to educate themselves about vector databases, vector search, and machine learning to leverage the existing ecosystem of tools effectively. One notable indexing strategy for vectors is Hierarchical Navigable Small Worlds (HNSW), a graph-based algorithm created by Yury Malkov, a distinguished software engineer at VerSE Innovation who also joined us along with Nils Reimers of Cohere.
It's crucial to view vector databases and search as additional tools in the developer's toolbox rather than replacements for existing database management systems or document databases. The ultimate goal is to build applications focused on user satisfaction, not just optimizing clicks. To delve deeper into the topic and explore the gaps in current tooling, check out the full episode.
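For a hands-on feel for that approximate-search trade-off, here is a short sketch using the hnswlib library, which implements Malkov's HNSW algorithm; the dimensions and random data are arbitrary.

```python
import hnswlib
import numpy as np

dim, n = 32, 10_000
vectors = np.random.rand(n, dim).astype(np.float32)

index = hnswlib.Index(space="cosine", dim=dim)  # similarity in vector space
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(vectors, np.arange(n))
index.set_ef(50)  # higher ef: better recall, slower queries

# Query the graph instead of scanning all n vectors: a little accuracy
# traded for much faster lookups.
labels, distances = index.knn_query(vectors[0], k=5)
print(labels[0], distances[0])
```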
Learn more about vector databases at thenewstack.io
Vector Databases: What Devs Need to Know about How They Work
Vector Primer: Understand the Lingua Franca of Generative AI
Sargun Kaur, co-founder of Byteboard, aims to revolutionize the tech interview process, which she believes is flawed and ineffective. In an interview with The New Stack for our Tech Founder Odyssey podcast series, Kaur compared assessing technical skills during interviews to evaluating the abilities of basketball star Steph Curry by asking him to draw plays on a whiteboard instead of watching him perform on the court. Kaur, a former employee of Symantec and Google, became motivated to change the interview process after a talented engineer she had coached failed a Google interview due to its impractical format.
Kaur believes that traditional tech interviews overly emphasize theoretical questions that do not reflect real-world software engineering tasks. This not only limits the talent pool but also leads to mis-hires, where approximately one in four new employees is unsuitable for their roles or teams. To address these issues, Kaur co-founded Byteboard in 2018 with Nicole Hardson-Hurley, another former Google employee. Byteboard offers project-based technical interviews, adopted by companies like Dropbox, Lyft, and Robinhood, to enhance the efficiency and fairness of their hiring processes. In recognition of their work, Kaur and Hardson-Hurley received Forbes magazine's "30 Under 30" award for enterprise technology.
Kaur's journey into the tech industry was unexpected, considering her initial disinterest in her father's software engineering career. However, exposure to programming and shadowing a female engineer at Microsoft sparked her curiosity, leading her to study computer science at the University of California, Berkeley. Overcoming initial challenges as a minority in the field, Kaur eventually joined Google as an engineer, content with the work environment and mentorship she received. However, her dissatisfaction with the interview process prompted her to apply to Google's Area 120 project incubator, leading to the creation of Byteboard. Kaur's experience with Byteboard's development and growth taught her valuable lessons about entrepreneurship, the power of founders in fundraising meetings, and the potential impact of AI on tech hiring processes.
Check out more episodes in The Tech Founder Odyssey series:
A Lifelong ‘Maker’ Tackles a Developer Onboarding Problem
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year
Shanea Leven, co-founder and CEO of CodeSee, shared her journey as a tech founder in an episode of the Tech Founder Odyssey podcast series. Despite coming to programming later than many of her peers, Leven always had a creative spark and a passion for making things. She initially pursued fashion design but taught herself programming in college and co-founded a company building custom websites for book authors. This experience eventually led her to a job at Google, where she worked in product development.
While at Google, Leven realized the challenge of deciphering legacy code and onboarding developers to it. Inspired by a presentation by Bret Victor, she came up with the idea for CodeSee—a developer platform that helps teams understand and review code bases more effectively. She started working on CodeSee in 2019 as a side project, but it soon received venture capital funding, allowing her to quit her job and focus on the startup full-time.
Leven candidly discussed the challenges of juggling a day job and a startup, particularly after receiving funding. She also shared advice on raising money from venture capitalists and building a company culture.
Listen to the full episode and check out more installments from The Tech Founder Odyssey.
How Teleport’s Leader Transitioned from Engineer to CEO
How 2 Founders Sold Their Startup to Aqua Security in a Year
In deploying cloud-native sustainable foundation AI models, there are five key steps outlined by Huamin Chen, an R&D professional at Red Hat's Office of the CTO. The first two steps involve using containers and Kubernetes to manage workloads and deploy them across a distributed infrastructure. Chen suggests employing PyTorch for programming and Jupyter Notebooks for debugging and evaluation, with Docker community files proving effective for containerizing workloads.
The third step focuses on measurement and highlights the use of Prometheus, an open-source tool for event monitoring and alerting. Prometheus enables developers to gather metrics and analyze the correlation between foundation models and runtime environments.
Analytics, the fourth step, involves leveraging existing analytics while establishing guidelines and benchmarks to assess energy usage and performance metrics. Chen emphasizes the need to challenge assumptions regarding energy consumption and model performance.
Finally, the fifth step entails taking action based on the insights gained from analytics. By optimizing energy profiles for foundation models, the goal is to achieve greater energy efficiency, benefitting the community, society, and the environment.
Chen underscores the significance of this optimization for a more sustainable future.
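As a rough sketch of the measurement step, the snippet below exposes an illustrative energy metric with the open-source prometheus_client library; the metric name and the power reading are stand-ins for whatever a real exporter would report.

```python
import random
import time

from prometheus_client import Gauge, start_http_server

# Illustrative metric: estimated energy draw of a model-serving process.
energy_joules = Gauge("model_energy_joules", "Estimated energy draw of the model")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        energy_joules.set(random.uniform(40, 60))  # stand-in for a real reading
        time.sleep(5)
```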
PyTorch Takes AI/ML Back to Its Research, Open Source Roots
PyTorch Lightning and the Future of Open Source AI
Jupyter Notebooks: The Web-Based Dev Tool You've Been Seeking
The concept of a software bill of materials (SBOM) aims to provide consumers with information about the components inside a software, enabling better assessment of potential security issues. Justin Hutchings, Senior Director of Product Management at GitHub, emphasizes the importance of SBOMs and their potential to facilitate patching without relying solely on the vendor. He spoke with Alex Williams in this episode of The New Stack Makers.
Creating a comprehensive SBOM poses challenges. Each software package is unique, such as an Android application that combines the developer's code with numerous open-source dependencies obtained through Maven packages. The SBOM should ideally serve as a machine-readable inventory of all these dependencies, enabling developers to evaluate their security.
Hutchings notes that many SBOMs fall short in being fully machine-readable, and the vulnerability landscape is even more problematic. To achieve the standards Hutchings envisions, several actions are necessary. For instance, certain programming languages make it difficult to inspect build contents, while the lack of a centralized distribution point for dependencies in languages like C and C++ complicates the enumeration and standardization of machine-readable names and versions. Addressing these issues across the entire software supply chain is imperative.
SBOMs hold potential for enhancing software security, but the current state of implementation and machine-readability needs improvement, particularly concerning diverse programming languages and dependency management.
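To make the machine-readable inventory idea concrete, here is a short sketch that lists components from a CycloneDX-style JSON SBOM; the file name and schema assumption are illustrative.

```python
import json

# Read a CycloneDX-style SBOM and enumerate its component inventory.
with open("sbom.json") as f:
    sbom = json.load(f)

for component in sbom.get("components", []):
    # Each entry names a dependency and its version, the raw material for
    # checking known vulnerabilities.
    print(component.get("name"), component.get("version"))
```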
Learn more at thenewstack.io
Creating a 'Minimum Elements' SBOM Document in 5 Minutes
Angel Diaz, Vice President of Technology, Capabilities, and Innovation at Discover Financial Services, spoke with TNS Host Alex Williams at the Open Source Summit in Vancouver, BC. Diaz emphasizes the importance of learning and collaboration among software engineers. He leads The Discover Technology Academy, a community of 15,000 engineers, which he describes as a place where craftsmen come together rather than an ivory tower institution.
Developers and engineers at Discover define and develop processes for software development. They start their journey by contributing atomic elements of knowledge, such as articles, blogs, videos, and tutorials, and then democratize that knowledge. Open source principles, communities, guilds, and established practices play a vital role in their work and discovery process.
Discover's developer experience revolves around the concept of the golden path, which goes beyond consuming content and includes aspects like code, automation, and setting up development environments. Pair programming and a cultural approach to learning are also incorporated into Discover's talent system.
Diaz highlights that Discover's work extends beyond their financial services company, as they share their knowledge and open source work with the external community through platforms like technology.discovered.com. This enables engineers to gain merit badges, such as maintainers or contributors, and showcase their expertise on professional platforms like LinkedIn.
Learn more at thenewstack.io
The Future of Developer Careers
The Linux Foundation's Open Source Security Foundation (OSSF) is addressing the challenge of timely software component updates to prevent security vulnerabilities like Log4J. In an interview with Alex Williams of The New Stack at the Open Source Summit in Vancouver, Omkhar Arasaratnam, the new general manager of OSSF, and Brian Behlendorf, CTO of OSSF, discuss the importance of making software secure from the start and the need for rapid response when vulnerabilities occur.
In this conversation, they highlight the significance of Software Bill of Materials (SBOMs), which provide a complete list of software components and supply chain relationships. SBOMs offer data that can aid decision-making and enable reputation tracking of repositories. The interview also touches on the issues with package managers and the quantification of software vulnerability risks. Overall, the goal is to improve the efficiency and effectiveness of software component updates and leverage data to enhance security in enterprise and production environments.
Learn more from The New Stack:
Apache Airflow is an open-source platform for building machine learning pipelines. It allows users to author, schedule, and monitor workflows, making it well-suited for tasks such as data management, model training, and deployment. In a discussion on The New Stack Makers, three technologists from Amazon Web Services (AWS) highlighted the improvements and ease of use in Apache Airflow.
Dennis Ferruzzi, a software developer at AWS, is working on updating Airflow's logging and metrics backend to the OpenTelemetry standard. This update will provide more granular metrics and better visibility into Airflow environments. Niko Oliveria, a senior software development engineer at AWS, focuses on reviewing and merging pull requests as a committer/maintainer for Apache Airflow. He has worked on making Airflow a more pluggable architecture through the implementation of AIP-51.
Raphaël Vandon, also a senior software engineer at AWS, is contributing to performance improvements and leveraging async capabilities in AWS Operators, which enable seamless interactions with AWS. The simplicity of Airflow is attributed to its Python base and the operator ecosystem contributed by companies like AWS, Google, and Databricks. Operators are like building blocks, each designed for a specific task, and can be chained together to create workflows across different cloud providers.
The latest version, Airflow 2.6, introduces sensors that wait for specific events and notifiers that act based on workflow success or failure. These additions aim to simplify the user experience. Overall, the growing community of contributors continues to enhance Apache Airflow, making it a popular choice for building machine learning pipelines.
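For a feel of that building-block pattern, here is a minimal sketch of an Airflow DAG chaining two Python tasks; the DAG ID, schedule, and task bodies are invented for the example.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Two operators chained into a tiny workflow: extract, then train.
with DAG(dag_id="train_model", start_date=datetime(2023, 1, 1),
         schedule="@daily", catchup=False) as dag:
    extract = PythonOperator(task_id="extract",
                             python_callable=lambda: print("pulling data"))
    train = PythonOperator(task_id="train",
                           python_callable=lambda: print("training model"))
    extract >> train  # run extract first, then train
```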
Check out the full article on The New Stack:
How Apache Airflow Better Manages Machine Learning Pipelines
In this episode featuring Nima Negahban, CEO of Kinetica, the potential impact of generative AI tools like ChatGPT on businesses and organizations is discussed. Negahban highlights the transformative potential of generative AI when combined with data analytics. One use case he mentions is an "Alexa for all your data," where real-time queries can be made about store performance or product underperformance in specific weather conditions. This could provide organizations with a new level of visibility into their operations.
Negahban identifies two major challenges in the generative AI space. The first is security, especially when using internal data to train AI models. The second challenge is ensuring accuracy in AI outputs to avoid misleading information. However, he emphasizes that generative AI tools, such as GitHub Copilot, can bring a new expectation of efficiency and innovation for developers.
The future of generative AI in the enterprise involves discovering how to orchestrate these models effectively and leverage them with organizational data. Negahban mentions the growing interest in vector search and vector database capabilities to generate embeddings and perform embedding search. Kinetica's processing engine, coupled with OpenAI technology, aims to enable ad hoc querying against natural language without extensive data preparation, indexing, or engineering.
Check out the episode to hear more about how the integration of generative AI and data analytics presents exciting opportunities for businesses and organizations, providing them with powerful insights and potential for creativity and innovation.
Read more about Generative AI on The New Stack
Is Generative AI Augmenting Our Jobs, or About to Take Them?
Generative AI: How to Choose the Optimal Database
How Will Generative AI Change the Tech Job Market?
Generative AI: How Companies Are Using and Scaling AI Models
In this episode of The New Stack Makers from KubeCon EU 2023, Rob Barnes, a senior developer advocate at HashiCorp, discusses how their networking service, Consul, allows users to incorporate containers or virtual machines into their workflows without imposing container usage. Consul, an early implementation of service mesh technology, offers a full-featured control plane with service discovery, configuration, and segmentation functionalities. It supports various environments, including traditional applications, VMs, containers, and orchestration engines like Nomad and Kubernetes.
Barnes explains that Consul can dictate which services can communicate with each other based on rules. By leveraging these capabilities, HashiCorp aims to make users' lives easier and software more secure.
Barnes emphasizes that there are misconceptions about service mesh, with some assuming it is exclusively tied to container usage. He clarifies that service mesh adoption should be flexible and meet users wherever they are in their technology stack. The future of service mesh lies in educating people about its role within the broader context and addressing any knowledge gaps.
Join Rob Barnes and our host, Alex Williams, in exploring the evolving landscape of service mesh and understanding how it can enhance workflows.
Find out more about HashiCorp or the biggest news from KubeCon on The New Stack:
HashiCorp Vault Operator Manages Kubernetes Secrets
What did software engineers at KubeCon say about how AI is coming up in their work? That's a question we posed to Taylor Dolezal, head of ecosystem for the Cloud Native Computing Foundation, at KubeCon in Amsterdam.
Dolezal said AI did come up in conversation.
"I think that when it's come to this, typically with KubeCons, and other CNCF and LF events, there's always been one or two topics that have bubbled to the top," Dolezal said.
At its core, AI surfaces a data issue for users that correlates to data sharing issues, said Dolezal in this latest episode of The New Stack Makers.
Read more about AI and Kubernetes on The New Stack:
3 Important AI/ML Tools You Can Deploy on Kubernetes
Flyte: An Open Source Orchestrator for ML/AI Workflows
Overcoming the Kubernetes Skills Gap with ChatGPT Assistance
Kubernetes release 1.27 is boring, says Xander Grzywinski, a senior product manager at Microsoft.
It's a stable release, Grzywinski said on this episode of The New Stack Makers from KubeCon Europe in Amsterdam.
"It's reached a level of stability at this point," said Grzywinski. "The core feature set has become more fleshed out and fully realized.
The release has 60 total features, Grzywinski said. The features in 1.27 are solid refinements of features that have been around for a while. It's helping Kubernetes be as stable as it can be.
Examples?
It has a better developer experience, Grzywinski said. Storage primitives and APIs are more stable.
The mystery and miracle of flight sparked Ev Kontsevoy’s interest in engineering as a child growing up in the Soviet Union.
“When I was a kid, when I saw like airplane flying over, I was having a really hard time not stopping and staring at it until it's gone,” said Kontsevoy, co-founder and CEO of Teleport, in this episode of the Tech Founders Odyssey podcast series. “I really wanted to figure out how to make it fly.”
Inevitably, he said, the engineering path led him to computers, where he was thrilled by the power he could wield through programming. “You're a teenager, no one really listens to you yet, but you tell a computer to go print number 10 ... and then you say, do it a million times. And the stupid computer just prints 10 million. You feel like a magician that just bends like machines to your will.”
In this episode of the series, part of The New Stack Makers podcast, Kontsevoy discussed his journey to co-founding Teleport, an infrastructure access platform, with TNS co-hosts Colleen Coll and Heather Joslyn.
Developer tool integration and AI are differentiating workflows, helping developers achieve the "fluid" state they strive for in their work.
Amazon CodeCatalyst and Amazon CodeWhisperer exemplify how developer workflows are accelerating and helping to create these fluid states. That's a big part of the story we hear from Harry Mower, director of AWS DevOps Services, and Doug Seven, director of software development for Amazon CodeWhisperer, from our recording in Seattle earlier in April for this week's AWS Developer Innovation Day.
CodeCatalyst serves as an end-to-end integrated DevOps toolchain that provides developers with everything they need to go from planning through to deployment, Mower said. CodeWhisperer is an AI coding companion that generates whole-line and full-function code recommendations in an integrated development environment (IDE).
CodeWhisperer is part of the IDE, Seven said. The acceleration is twofold: CodeCatalyst speeds the end-to-end integration process, and CodeWhisperer accelerates writing code through generative AI.
Just as everyone was heading out to the New Year's holidays last year, CTO Rob Zuber got a surprise of a most unwelcome sort. A customer alerted CircleCI to suspicious GitHub OAuth activity. Although the scope of the attack appeared limited, there was still no telling if other customers of the DevOps-friendly continuous integration and continuous delivery platform were impacted.
This notification kicked off a deeper review by CircleCI’s security team with GitHub, and they rotated all GitHub OAuth tokens on behalf of their customers. On January 4, the company also made the difficult but necessary decision to alert customers of this “security incident,” asking them to immediately rotate any and all stored secrets and review internal logs for any unauthorized access.
In this latest episode of The New Stack Makers podcast, we discuss with Zuber the attack and how CircleCI responded. We also talk about what other companies should do to avoid the same situation, and what to do should it happen again.
Feature flags, the toggles in software development that allow you to turn certain features on or off for certain customers or audiences, offer release management at scale, according to Karishma Irani, head of product at LaunchDarkly.
But they also help unleash innovation, as she told host Heather Joslyn of The New Stack in this episode of The New Stack Makers podcast. And that points the way to a future where the potential for easy testing can inspire new features and products, Irani said.
“We've observed that when the risk of releasing something is lowered, when the risk of introducing bugs in production or breaking, something is reduced, is lowered, our customers feel organically motivated to be more innovative and think about new ideas and take risks,” she said.
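As a rough, hand-rolled sketch of that pattern (not the LaunchDarkly SDK), the snippet below shows a flag gating a new code path for a chosen audience; the flag name and groups are invented for the example.

```python
# Feature flags decouple deploying code from releasing it: the new path
# ships dark and is toggled on per audience.
FLAGS = {"new-checkout": {"enabled_for": {"beta-testers"}}}

def flag_on(flag: str, user_groups: set[str]) -> bool:
    rule = FLAGS.get(flag, {})
    return bool(rule.get("enabled_for", set()) & user_groups)

def checkout(user_groups: set[str]) -> str:
    if flag_on("new-checkout", user_groups):
        return "new checkout flow"  # toggled on for this audience only
    return "legacy checkout flow"   # everyone else keeps the stable path

print(checkout({"beta-testers"}))  # new checkout flow
print(checkout({"customers"}))     # legacy checkout flow
```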
Hoi Europe and beyond!
Once again it is time for cloud native enthusiasts and professionals to converge and discuss cloud native computing in all its efficiency and complexity. The Cloud Native Computing Foundation's KubeCon+CloudNativeCon 2023 is being held later this month in Amsterdam, April 18-21, at the RAI Convention Centre.
In this latest edition of The New Stack podcast, we spoke with two of the event's co-chairs who helped define this year's themes for the show, which is expected to draw over 9,000 attendees: Aparna Subramanian, Shopify's Director of Production Engineering for Infrastructure; and Cloud Native Infra and Security Enterprise Architect Frederick Kautz.
Is the end of programming nigh?
If you ask Matt Welsh, he'd say yes. As Richard MacManus wrote on The New Stack, Welsh is a former professor of computer science at Harvard who spoke at a virtual meetup of the Chicago Association for Computing Machinery (ACM), explaining his thesis that ChatGPT and GitHub Copilot represent the beginning of the end of programming.
Welsh joined us on The New Stack Makers to discuss his perspectives about the end of programming and answer questions about the future of computer science, distributed computing, and more.
Welsh is now the founder of fixie.ai, a platform his company is building to let businesses develop applications on top of large language models and extend them with different capabilities.
For 40 to 50 years, programming language design has had one goal: make it easier to write programs, Welsh said in the interview.
Still, programming languages are complex, Welsh said. And no amount of work is going to make it simple.
Speed is a recurring theme in this episode of The Tech Founder Odyssey. Also, timing.
Eilon Elhadad and Eylam Milner, who met while serving in the Israeli military, discovered that source code leaks were a hazardous side effect of businesses’ need to move fast and break things in order to stay competitive.
“Every new business challenge leads to a new technological solution,” said Elhadad in this episode of The New Stack's podcast series. “The business challenge was to deliver product faster to the business; the solution was to build off the supply chain. And then it leads to a new security attack surface.”
Discovering this problem, and finding a solution to it, put Milner and Elhadad in the right place at the right time — just as the tech industry was beginning to rally itself to deal with this issue and give it a name: software supply chain security.
It led them to co-found Argon Security, which was acquired by Aqua Security in late 2021, Elhadad told The New Stack, a year after Argon started.
Given the vulnerability of so many systems, it’s not surprising that cyberattacks on applications and APIs increased 82% in 2022 compared to the previous year, according to a report released this year by Imperva’s global threat researchers.
What might rattle even the most experienced technologists is the sheer scale of those attacks. Digging into the data, Imperva, an application and data security company, found that the largest Layer 7 distributed denial-of-service (DDoS) attack it mitigated during 2022 involved — you might want to sit down for this — more than 3.9 million API requests per second.
“Most developers, when they think about their APIs, they’re usually dealing with traffic that’s maybe 1,000 requests per second, not too much more than that. Twenty thousand, for a larger API,” said Peter Klimek, director of technology at Imperva, in this episode of The New Stack Makers podcast. “So, to get to 3.9 million, it’s really staggering.”
Klimek spoke to Heather Joslyn of TNS about the special challenges of APIs and cybersecurity and steps organizations can take to keep their APIs safe.
The episode was sponsored by Imperva.
The 20th Annual Southern California Linux Expo (SCALE) runs Thursday through Sunday at the Pasadena Convention Center in Pasadena, Calif., featuring keynotes from notables such as Ken Thompson, the creator of Unix, said Ilan Rabinovitch, a co-founder and chair of the conference, on this week's edition of The New Stack Makers.
"Honestly, most of the speakers we've had, you know, we got at SCALE in the early days, we just, we, we emailed them and said: 'Would you come to speak at the event?' We ran a call for proposals, and some of them came in as submissions, but a lot of it was just cold outreach. I don't know if that succeeded, because that's the state of where the community was at the time and there wasn't as much demand or just because or out of sheer dumb luck. I assure you, it wasn't skill or any sort of network that we like, we just, you know, we just we managed to, we managed to do that. And that's continued through today. When we do our call for papers, we get hundreds and hundreds of submissions, and that makes it really hard to choose from."
Thompson, who turned 80 on February 4 (Happy Birthday, Mr. Thompson), created Unix at Bell Labs. He worked with people like Robert Griesemer and Rob Pike on developing the Go programming language and other projects over the years, including Plan 9, UTF-8, and more.
Rabinovitch is pretty humble about the keynote speakers the conference attracts. He and the conference organizers scoured the internet for Thompson's email address, and Thompson said he'd love to join them. That's also how they attracted Lawrence Lessig, the creator of the Creative Commons license, who spoke at SCALE12x in 2014 about the legal side of open source, content sharing, and free software.
"I wish I could say, we have this very deep network of connections," Rabinovich said. "It's just, these folks are surprisingly approachable, despite, you know, even after years and years of doing amazing work."
SCALE is the largest community-run open-source and free software conference in North America, with roots befitting an event that started with a group of college students wanting to share their learnings about Linux.
Rabinovitch was one of those college students attending UCSB, the University of California, Santa Barbara.
"A lot of the history of SCALE comes from the LA area back when open source was still relatively new and Linux was still fairly hard to get up and running," Rabinovitch said. "There were LUGS (Linux User Groups) on every corner. I think we had like 25 LUGS in the LA area at one point. And so so there was a vibrant open source community.'
Los Angeles's freeways and traffic made it difficult to get the open source community together. So they started LUGFest. They held the day-long event at a Nortel building until the telco went belly up.
So, as open source people tend to do, they decided to scale, so to speak, the community gatherings. And so SCALE came to be – led by students like Rabinovitch. The conference started with a healthy community of 200 to 250 people. By the pandemic, 3,500 people were attending.
For more about SCALE, listen to the full episode of The New Stack Makers wherever you get your podcasts.
When she was a student in her native Israel, Shira Shamban was a self-proclaimed “geek.”
But, unusually for a tech company founder and CEO, not a computer geek.
Shamban was a science nerd, with her sights set on becoming a doctor. But first, she had to do her state-mandated military service. And that’s where her path diverged.
In the military, she was not only immersed in computers but spent years working in intelligence; she stayed in the service for more than a decade, eventually rising to become head of an intelligence sector for the Israeli Defense Forces. At home, she began building her own projects to experiment with ideas that could help her team.
“So that kind of helped me not to be intimidated by technology, to learn that I can learn anything I want by myself,” said Shamban, co-founder of Solvo, a company focused on data and cloud infrastructure security. “And the most important thing is to just try out things that you learn.”
To date, Solvo has raised about $11 million through investors like Surround Ventures, Magenta Venture Partners, TLV Partners and others. In this episode of The New Stack Makers podcast series The Tech Founder Odyssey, Shamban talked to Heather Joslyn and Colleen Coll of TNS about her journey.
Shamban opted to stay in the technology world, nurturing a desire to eventually start her own company. It was during a stint at Dome9, a cloud security company, that she met her future Solvo co-founder, David Hendri — and built a foundation for entrepreneurship.
“After that episode, I got the guts,” she said. “Or I got stupid enough.”
Hendri, now Solvo’s chief technology officer, struck Shamban as having the right sensibility to be a partner in a startup. At Dome9, she said, “very often, I used to stay up late in the office, and I would see him as well. So we'd grab something to eat.”
Their casual conversations quickly revealed that Hendri was often staying late to troubleshoot issues that were not his or his team’s responsibility, but simply things that someone needed to fix. That sense of ownership, she realized, “is exactly the kind of approach one would need to bring to the table in a startup.”
The mealtime chats that started Solvo have carried over into its current organizational culture. The company employs 20 people; workers based in Tel Aviv are expected to come to the office four days a week.
Hendri and Shamban started their company in the auspicious month of March 2020, just as the Covid-19 pandemic started. While many companies have moved to all-remote work, Solvo never did.
“We knew we wanted to sit together in the same room, because the conversations you have over a cup of coffee are not the same ones that you have on a chat, and on Slack,” the CEO said. “So that was our decision. And for a long time, it was an unpopular decision.”
As the company scales, finding employees who align with its culture can make recruiting tricky, Shamban said.
"It's not only about your technical expertise, it's also about what kind of person you are," she said. "Sometimes we found very professional people that we didn't think would make a good fit to the culture that we want to build. So we did not hire them. And that was in the boom times, when it was really hard to hire engineers.
"These were tough decisions. But we had to make them because we knew that building a culture is easier, in a way, than fixing a culture."
Listen to the full episode to hear more about Shamban's journey.
At Cloud Native Security Con, we sat down with Solo.io's Marino Wijay and Jim Barton, who discussed how service mesh technologies have matured, especially with the removal of sidecars in Ambient Mesh, which Solo.io developed with Google.
Ambient Mesh is, according to the Solo.io site, "a new proxy architecture that moves the proxy to the node level for mTLS and identity." It also allows a policy enforcement point to manage Layer 7 security filters and policies.
A sidecar is a mini-proxy, a mini-firewall, like an all-in-one router, said Wijay, who does developer relations and advocacy for Solo. A sidecar receives instructions from an upstream control plane.
"Now, one of the things that we started to realize with different workloads and different patterns of communication is that not all these workloads need a sidecar or can take advantage of the sidecar," Wijay said. "Some better operate without the sidecar."
Ambient Mesh reflects the maturity of service mesh and the difference between day one and day two operations, said Barton, a field engineer with Solo.
"Day one operations are a lot about understanding concepts, enabling developers, initial configurations, that sort of thing," Barton said. "The community is really much more focused and Ambient Mesh is a good example of this on day two concerns. How do I scale this? How do I make it perform in large environments? How can I expand this across clusters, clusters in multiple zones in multiple regions, that sort of thing? Those are the kinds of initiatives that we're really seeing come to the forefront at this point."
With the maturity of service mesh comes the users. In the context of security, that means the developer security operations person, Barton said. It's not the developer's job to connect services. Their job is to build out the services.
"It's up to the platform operator, or DevSecOps engineers to create that, that fundamental plane or foundation for where you can deploy your services, and then provide the security on top of it," Barton said.
The engineers then have to configure it and think it through. "How do I know who's doing what and who's talking to who, so that I can start forming my zero trust posture?" Barton said.
Everyone in the community was surprised by ChatGPT last year, a web service that responded to any and all user questions with surprising fluidity.
ChatGPT is a variant of the powerful GPT-3 large language model created by OpenAI, a company closely partnered with, and heavily funded by, Microsoft. It is still a demo, though it is pretty clear that this type of generative AI will be rapidly commercialized. Indeed, Microsoft is embedding the generative AI in its Bing search service, and Google is building a rival offering.
So what are smaller businesses to do to ensure their messages are heard by these machine learning giants?
For this latest podcast from The New Stack, we discussed these issues with Ryan Johnston, chief marketing officer for Writer, which has enjoyed early success in generative AI technologies. The company's service is dedicated to a single mission: making sure its customers' content adheres to the guidelines they have set in place.
This can include features such as ensuring the language in the copy matches the company's own designated terminology, or making sure that a piece of content covers all the required topic points, or even that a press release has quotes that are not out of scope with the project mission itself.
In short, the service promises "consistently on-brand content at scale," Johnston said. "It's not taking away my creativity. But it is doing a great job of figuring out how to create content for me at a faster pace, [content] that actually sounds like what I want it to sound like."
For our conversation, we first delved into how the company was started, its value proposition ("what is it used for?") and the role AI plays in the company's offering. We also delve a bit into the technology stack Writer deploys to offer these services, as well as what material Writer may require from its customers to make the service work.
For the second part of our conversation, we turn our attention to how other companies (that are not search giants) can get their message across in the land of large language models, and maybe even find a few new sources of AI-generated value along the way. And, for those public-facing businesses dealing with Google and Bing, we chat about how they should refine their search engine optimization (SEO) strategies to be best represented in these large models.
One point to consider: While AI can generate a lot of pretty convincing text, you still need a human in the loop to oversee the results, Johnston advised.
"We are augmenting content teams copywriters to do what they do best, just even better. So we're scaling the mundane parts of the process that you may not love. We are helping you get a first draft on paper when you've got writer's block," Johnston said. "But at the end of the day, our belief is there needs to be a great writer in the driver's seat. [You] should never just be fully reliant on AI to produce things that you're going to immediately take to market."
The story goes something like this:
There's this marketing manager who is trying to time a launch. She asks the developer team when the service will be ready. The dev team says maybe a few months. Let's say three months from now in April. The marketing manager begins prepping for the release.
The dev team releases the services the following week.
It's not an uncommon occurrence.
Edith Harbaugh is the co-founder and CEO of LaunchDarkly, a company she launched in 2014 with John Kodumal to solve these problems with software releases that affect organizations worldwide. Today, LaunchDarkly has 4,000 customers and $100 million in annual recurring revenue.
We interviewed Harbaugh for our Tech Founder Odyssey series on The New Stack Makers about her journey and LaunchDarkly's work. The interview starts with this question about the timing of dev releases and the relationship between developers and other constituencies, particularly the marketing organization.
LaunchDarkly is the number one feature management company, Harbaugh said; its mission is to provide services to launch software in a measured, controlled fashion. Harbaugh and Kodumal, the company's CTO, founded the company on the premise that developing and releasing software is arduous.
"You wonder whether you're building the right thing," Harbaugh said, who has worked as both an engineer and a product manager. "Once you get it out to the market, it often is not quite right. And then you just run this huge risk of how do you fix things on the fly."
Feature flagging was a technique that a lot of software companies used. Harbaugh worked at TripIt, a travel service, where the team used feature flags, as did companies such as Atlassian, where Kodumal had developed software.
"So the kernel of LaunchDarkly, when we started in 2014, was to make this technique of feature flagging into a movement called feature management, to allow everybody to build better software faster, in a safer way."
LaunchDarkly lets companies release features at whatever granularity an organization wants, allowing a developer to push a release into production in different pieces at different times, Harbaugh said. So a marketing organization can turn a feature on even after the developer team has released it into production.
"So, for example, if, we were running a release, and we wanted somebody from The New Stack to see it first, the marketing person could turn it on just for you."
Harbaugh describes herself as a huge geek. But she also gets it, in a rare way, for geeks and non-geeks alike. She and Kodumal took a concept used effectively by developers and transformed it into a service that provides feature management for a broader customer base, like a marketer pushing out a granular East Coast launch that was pre-programmed with feature flags the previous day from the company office in San Francisco.
The idea is novel, but like many intelligent, technical founders, Harbaugh's journey reflects her place today. She's a leader in the space, and a fun person to talk to, so we hope you enjoy this latest episode in our tech founder series from The New Stack Makers.
By now, almost everyone agrees that platform engineering, in which an organization builds an internal development platform to empower coders and speed application releases, is probably a good idea. So, for this latest edition of The New Stack podcast, we spoke with one of the pioneers in this space, Zohar Einy, CEO of Port, to see how platform engineering could work in your organization. TNS Editor Joab Jackson hosted this conversation.
Port offers what it claims is the world's first low code platform for developers.
With Port, an organization can build a software catalogue of approved tools, import its own data model, and set up workflows. Developers can consume all the resources they need through a self-service catalogue, without needing to know how to set up a complex system like Kubernetes. The DevOps and platform teams themselves maintain the platform.
Application owners aren't the only potential users of a self-service catalogue, Einy points out in our conversation. DevOps and system administration teams can also use the platform. A DevOps team can set up automations "to make sure that [developers are] using the platform with the right mindset that fits with their organizational standards in terms of compliance, security, and performance aspects."
Even machines themselves could benefit from a self-service platform, for those who are looking to automate deployments as much as possible.
Einy offered an example: a CI/CD process could create a build process on its own. If it needs to check the maturity level of some tool, it can do so through an API call. If the tool is not adequately certified, the developer is notified; but if all the tools are sufficiently mature, the automated process can finish the build without further developer intervention.
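A rough sketch of what such a gate might look like in CI, assuming a hypothetical catalogue endpoint and response shape (Port's real API may differ):

```python
import sys
import requests

# Hypothetical self-service catalogue endpoint; illustrative only,
# not Port's documented API.
CATALOG_URL = "https://catalog.example.com/api/tools"

def is_mature(tool: str, required: str = "certified") -> bool:
    """Ask the catalogue for a tool's maturity level via an API call."""
    resp = requests.get(f"{CATALOG_URL}/{tool}", timeout=10)
    resp.raise_for_status()
    return resp.json().get("maturity") == required

# In CI: finish the build only if every dependency passes the maturity check.
tools = ["kubernetes", "postgres"]
if all(is_mature(t) for t in tools):
    print("all tools certified; finishing build")
else:
    sys.exit("immature tool detected; notifying developer")  # fail the pipeline
```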
Another process that could be automated is the termination of permissions once their deadline has passed; think of an early-warning system for expired digital certificates. "So it's a big driver both for cost reduction and security best practices," Einy said.
But what about developer choice? Won't developers feel frustrated when barred from using the tools they are most fond of?
But this freedom to use any tool available is what led to the current state of overcomplexity in full-stack development, Einy responded. It is why the role of "full-stack developer" seems an impossible one, given all the possible permutations at each layer of the stack.
Like the artist who finds inspiration in a limited palette, the developer should be able to find everything they need in a well-curated platform.
"In the past, when we talked about 'you-build-it-you-own-it', we thought that the developer needs to know everything about anything, and they have the full ownership to choose anything that they want. And they got sick of it, right, because they needed to know too much," Einy said. "So I think we are getting into a transition where developers are OK with getting what they need with a click of a button because they have so much work on their own."
In this conversation, we also discussed measuring success, the role of access control in DevOps, and the open source Backstage platform and its recent inclusion of paid plug-ins. Give it a listen!
In this latest episode of The New Stack Makers podcast, we delve more deeply into the emerging practice of platform engineering. The guests for this show are Aeris Stewart, community manager at platform orchestration provider Humanitec, and Michael Galloway, an engineering leader for infrastructure software provider HashiCorp. TNS Features Editor Heather Joslyn hosted this conversation.
Although the term has been around for several years, platform engineering caught the industry's attention in a big way last September, when Humanitec published a report that identified how widespread the practice was quickly becoming, citing its use by Nike, Starbucks, GitHub and others.
Right after the report was released, Stewart provided an analysis for TNS arguing that platform engineering solved the many issues that another practice, DevOps, was struggling with. "Developers don’t want to do operations anymore, and that’s a bad sign for DevOps," Stewart wrote. The post stirred a great deal of conversation around the success of DevOps.
Platform engineering is "a discipline of designing and building tool chains and workflows that enable developer self service," Stewart explained. The purpose is to give the developers in your organization a set of standard tools that will allow them to do their job — write and fix apps — as quickly as possible. The platform provides the tools and services "that free up engineering time by reducing manual toil and cognitive load," Galloway added.
But platform engineering also has an advantage for the business itself, Galloway elaborated. With an internal developer platform in place, a business can scale up with "reliability, cost efficiency and security," Galloway said.
Before HashiCorp, Galloway was an engineer at Netflix, and there he saw the benefits of platform engineering for both the dev and the business itself. "All teams were enabled to own the entire lifecycle from design to operation. This is really central to how Netflix was able to scale," Galloway said. A platform engineering team created a set of services that made it possible for Netflix engineers to deliver code "without needing to be continuous delivery experts."
The conversation also touched on the challenges of implementing platform engineering, and what metrics you should use to quantify its success.
And because platform engineering is a new discipline, we also discussed education and community. Humanitec's debut PlatformCon drew over 6,000 attendees last June (and PlatformCon 2023 has just been scheduled for June). There is also a platform engineering Slack channel, which has drawn over 8,000 participants thus far.
"I think the community is playing a really big role right now, especially as a lot of organizations' awareness of platform engineering is just starting," Stewart said. "There's a lot of knowledge that can be gained by building a platform that you don't necessarily want to learn the hard way."
Platform engineering “is the art of designing and binding all of the different tech and tools that you have inside of an organization into a golden path that enables self service for developers and reduces cognitive load,” said Kaspar Von Grünberg, founder and CEO of Humanitec, in this episode of The New Stack Makers podcast.
This structure is important for individual contributors, Grünberg said, as well as backend engineers: "If you look at the operations teams, it reduces their burden to do repetitive things. And so platform engineers build and design internal developer platforms, and help and serve users."
This conversation, hosted by Heather Joslyn, TNS features editor, dove into platform engineering: what it is, how it works, the problems it is intended to solve, and how to get started in building a platform engineering operation in your organization. It also debunks some key fallacies around the concept.
This episode was sponsored by Humanitec.
The notion of "you build it, you run it" — first coined by Werner Vogels, chief technology officer of Amazon, in a 2006 interview — established that developers should "own" their applications throughout their entire lifecycle. But, Grünberg said, that may not be realistic in an age of rapidly proliferating microservices and multiple, distributed deployment environments.
“The scale that we're operating today is just totally different,” he said. “The applications are much more complex.” End-to-end ownership, he added, is “a noble dream, but unfair towards the individual contributor. We're asking developers to do so much at once. And then we're always complaining that the output isn't there or not delivering fast enough. But we're not making it easy for them to deliver.”
Creating a "golden path" — through the creation by platform teams of internal developer platforms (IDPs) — can not only free developers from unnecessary cognitive load, Grünberg said, but also help make their code more secure and standardized.
For Ops engineers, he said, the adoption of platform engineering can also help free them from doing the same tasks over and over.
"If you want to know whether it's a good idea to look at platform engineering, I recommend you go to your service desk and look at the tickets that you're receiving," Grünberg said. "And if you have things like, 'Hey, can you debug that deployment?' and 'Can you spin up this environment?', all these repetitive requests, that's probably a good time to take a step back and ask yourself, 'Should the operations people actually spend time doing these manual things?'"
For organizations that are interested in adopting platform engineering, the Humanitec CEO tackled some of the biggest misconceptions about the practice. Chief among them: failing to treat the platform as a product, in the same way a company would begin creating any product, by starting with research into customer needs.
“If you think about how we would develop a software feature, we wouldn't be sitting in a room and taking some assumptions and then building something,” he said. “We would go out to the user, and then actually interview them and say, ‘Hey, what's your problem? What's the most pressing problem?’”
Other fallacies embraced by platform engineering newbies, he said, are "visualization" — the belief that all devs need is another snazzy new dashboard or portal to look at — and believing the platform team has to go all-in right from the start, scaling up a big effort immediately. Such an effort, he said, is "doomed to fail."
Instead, Grünberg said, "I'm always advocating for starting really small. Come up with the lowest common tech denominator. Is that containerization with EKS? Perfect, then focus on that."
And don't forget to give special attention to those early adopters, so they can become evangelists for the product: "Make them fans, prioritize the right way, and then show that to other teams as a, 'Hey, you want to join in? OK, what's the next cool thing we could build?'"
Check out the entire episode for much more detail about platform engineering and how to get started with it.
Feature flags — the on/off toggles, written in conditional statements, that allow organizations greater control over the user experience once code has been deployed — are proliferating and growing more complex, and demand robust feature management, said Karishma Irani, head of product at LaunchDarkly, in this episode of The New Stack Makers.
In a November survey by LaunchDarkly, which queried more than 1,000 DevOps professionals, 69% of participants said that feature flags are “must-have, mission-critical and/or high priority” for their organizations.
“Feature management, we believe, is a modern practice that's becoming more and more common with companies that want to deploy more frequently, innovate faster, and just keep a healthy engineering team,” Irani said.
The idea of feature management, Irani said, is to “maximize value while minimizing risk.”
LaunchDarkly uses its own software, she said, and eating its own dog food, as the saying goes, has paid off in gaining insights into user needs.
As part of LaunchDarkly’s virtual conference Trajectory in November, Irani joined Heather Joslyn, features editor of The New Stack, for a wide-ranging conversation about the latest developments in feature management.
This episode of Makers was sponsored by LaunchDarkly.
As an example of the benefits of having first-hand knowledge of how their company's products are used, Irani pointed to an internal project in mid-2022.
When the company migrated from MongoDB to CockroachDB, it used new capabilities in its Feature Workflows product, which allow users to define a workflow that can schedule the gradual release of a feature flag for a future date and time, and automate approval requests.
“All of these async processes around approvals schedules, they're critical to releasing software, but they do slow you down and add more potential for manual error or human error,” Irani said. “And so our goal with Feature Workflows was to essentially automate the entire process of a feature release.”
This past June, the company also revised its Experimentation offering, she said. Led by James Frost, LaunchDarkly's head of experimentation, the team did "a complete overhaul of our stats engine" and "enhanced the integration path of our customers' existing data sets and metrics," Irani said. "They redesigned our UX and codified model and experimentation best practices into the product itself."
For instance, a new metric import API helps prevent the problem of multiple teams or users within a company using different tools for A/B and other experiments. It “significantly cuts down on manual duplicate work when importing metrics for experimentation,” said Irani. “So you can get set up faster.”
Another addition to the Experimentation product is a sample ratio mismatch test, she said, so “you can be confident that all of your experiments are correctly allocating traffic to each variant.”
These innovations, along with new capabilities in the company's Core Flagging Platform, are in general availability. On the horizon, and now available through LaunchDarkly's early access program, is Accelerate, which lets users track and visualize key engineering metrics, such as deployment frequency, release frequency, lead time for code changes, and flag coverage.
"I'm sure you've caught on already," Irani said, "but a few of these are DORA metrics, which obviously are extremely critical to our users."
Check out the entire episode for more details on what’s new from LaunchDarkly and the problems that innovators in the feature management space still need to solve.
In this latest podcast from The New Stack, we interview Manish Devgan, chief product officer for Hazelcast, which offers a real-time stream processing engine. This interview was recorded at KubeCon+CloudNativeCon, held last October in Detroit.
"'Real time' means different things to different people, but it's really a business term," Devgan explained. In the business world, time is money, and the more quickly you can make a decision, using the right data, the more quickly one can take action.
Although we have many "batch-processing" systems, the data itself rarely comes in batches, Devgan said. "A lot of times I hear from customers that are using a batch system, because those are the things which were available at that time. But data is created in real time: sensors, your machines, or even customer data, right when customers are transacting with you."
A real-time data processing engine can analyze data as it comes in from the source. This is different from traditional approaches that store the data first and analyze it later, the way a bank traditionally handles loan applications.
With a real time data processing engine in place, a bank can offer a loan to a customer using an automated teller machine (ATM) in real time, Devgan suggested. "As the data comes in, you can actually take action based on context of the data," he argued.
Such a loan app may combine real-time data from the customer with historical data stored in a traditional database. Hazelcast can combine historical data with real-time data to make workloads like this possible.
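Generically, the pattern looks like the sketch below: score an in-flight event against stored history the moment it arrives, rather than in a later batch job. This is a toy illustration with made-up names and thresholds, not Hazelcast's actual API (its stream pipelines are typically defined in Java):

```python
# Toy real-time decision: enrich a live ATM event with historical data.
HISTORY = {  # stands in for a historical store of customer records
    "cust-42": {"avg_balance": 5200, "missed_payments": 0},
}

def on_atm_event(event: dict) -> None:
    """Decide on a loan offer while the transaction is still in flight."""
    profile = HISTORY.get(event["customer_id"], {})
    if event["amount"] >= 200 and profile.get("missed_payments", 1) == 0:
        print(f"offer pre-approved loan to {event['customer_id']}")
    else:
        print("no offer")

on_atm_event({"customer_id": "cust-42", "amount": 400})  # acts on arrival
```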
In this interview, we also debated the merits of Kafka, the benefits of using a managed service rather than running an application in house, Hazelcast's users, and features in the latest release of the Hazelcast platform.
Back in April, Kris Nóva, now principal engineer at GitHub, started creating a server on Mastodon as a side project in her basement lab.
Then in late October, Elon Musk bought Twitter for an eye-watering $44 billion, and began cutting thousands of jobs at the social media giant and making changes that alienated longtime users.
And over the next few weeks, usage of Nóva’s hobby site, Hachyderm.io, exploded.
“The server started very small,” she said on this episode of The New Stack Makers podcast. “And I think like, one of my friends turned into two of my friends turned into 10 of my friends turned into 20 colleagues, and it just so happens, a lot of them were big names in the tech industry. And now all of a sudden, I have 30,000 people I have to babysit.”
Though the rate at which new users are joining Hachyderm has slowed down in recent days, Nóva said, it stood at more than 38,000 users as of Dec. 20.
Hachyderm.io is still run by a handful of volunteers, who also handle content moderation. Nóva is now seeking nonprofit status for it with the U.S. Internal Revenue Service, with intentions of building a new organization around Hachyderm.
This episode of Makers, hosted by Heather Joslyn, TNS features editor, recounts Hachyderm’s origins and the challenges involved in scaling it as Twitter users from the tech community gravitated to it.
Nóva and Joslyn were joined by Gabe Monroy, chief product officer at DigitalOcean, which has helped Hachyderm cope with the technical demands of its growth spurt.
Suddenly having a social media network to “babysit” brings numerous challenges, including the technical issues involved in a rapid scale up. Monroy and Nóva worked on Kubernetes projects when both were employed at Microsoft, “so we’re all about that horizontal distribution life.” But the Mastodon application’s structure proved confounding.
“Here I am operating a Ruby on Rails monolith that's designed to be vertically scaled on a single piece of hardware,” Nóva said. “And we're trying to break that apart and run that horizontally across the rack behind me. So we got into a lot of trouble very early on by just taking the service itself and starting to decompose it into microservices.”
Storage also rapidly became an issue. “We had some non-enterprise but consumer-grade SSDs. And we were doing on the order of millions of reads and writes per day, just keeping the Postgres database online. And that was causing cascading failures and cascading outages across our distributed footprint, just because our Postgres service couldn't keep up.”
DigitalOcean helped with the storage issues; the site now uses a data center in Germany, whose servers DigitalOcean manages. (Previously, its servers had been living in Nóva’s basement lab.)
Monroy, longtime friends with Nóva, was an early Hachyderm user and reached out when he noticed problems on the site, such as when he had difficulty posting videos and noticed other people complaining about similar problems.
“This is a ‘success failure’ in the making here, the scale of this is sort of overwhelming,” Monroy said. “So I just texted Nóva, ‘Hey, what's going on? Anything I could do to help?’
“In the community, we like to talk about the concept of HugOps, right? When people are having issues on this stuff, you reach out, try and help. You give a hug. And so, that was all I did. Nóva is very crisp and clear: This is what I got going on. These are the issues. These are the areas where you could help.”
One challenge in particular has nudged Nóva to seek nonprofit status: operating costs.
“Right now, I'm able to just kind of like eat the cost myself,” she said. “I operate a Twitch stream, and we're taking the proceeds of that and putting it towards operating service.” But that, she acknowledges, won’t be sustainable as Hachyderm grows.
“The whole goal of it, as far as I'm concerned, is to keep it as sustainable as possible,” Nóva said. “So that we're not having to offset the operating costs with ads or marketing or product marketing. We can just try to keep it as neutral and, frankly, boring as possible — the NPR of social media, if you could imagine such a thing.”
Check out the full episode for more details on how Hachyderm is scaling and plans for its future, and Nóva and Monroy’s thoughts about the status of Twitter.
Feedback? Find me at @hajoslyn on Hachyderm.io.
During the pandemic, many organizations sped up their move to the cloud — without fully understanding the costs, both human and financial, they would pay for the convenience and scalability of a digital transformation.
"They really didn't have a baseline," said Mekka Williams, principal engineer at Spot by NetApp, in this episode of The New Stack Makers podcast. "And so those first cloud bills, I'm sure, were shocking, because you don't get a cloud bill when you run in your on-premises environment, or even your private cloud, where you've already paid the cost for the infrastructure that you're using."
What’s especially worrisome is that many of those costs are simply wasted, Williams said. “Most of the containerized applications running in Kubernetes clusters are running underutilized,” she said. “And anything that's underutilized in the cloud equates to waste. And if we want to be really lean and clean and use resources in a very efficient manner, we have to have really good cloud strategy in order to do that.”
This episode of The New Stack Makers, hosted by Heather Joslyn, TNS features editor, focused on CloudOps, which in this case stands for “cloud operations.” (It can also stand for “cloud optimization,” but more about that later.)
The conversation was sponsored by Spot by NetApp.
Many organizations that moved quickly to the cloud during the dog days of the pandemic have begun to revisit the decisions they made and update their strategies, Williams said.
“We see some organizations that are trying to modernize their applications further, to make better use of the services that are available in the cloud,” she said. “The cloud is getting more complex as they grow and mature in their journey.
"And so they're looking for ways to simplify their operations and, as always, keep their costs down. Keep things simple for their DevOps and SREs, so as not to incur additional technical debt, but still make the best use of their cloud, wherever they are."
Automation holds the key to CloudOps — both definitions — according to Williams. For starters, it makes teams more efficient.
"The fewer tasks your workforce has to perform manually, the more time they have to spend focused on business logic and being innovative," Williams said. "Automation also helps you with repeatability. And it's less error-prone, and it helps you standardize. Really good automation simplifies your environment greatly."
Automating repetitive tasks can also help prevent your site reliability engineers (SREs) from burnout, she said.
Practicing "good data hygiene," Williams said, also helps contain costs and reduce toil: "Making sure you're using the right tier of data, making sure you're not over-provisioned on the type of storage you need. You don't need to pay top dollar for high-performing storage if it's just backup data that doesn't get accessed that often."
Such practices are “good to know on-premises, but these are imperative to know when you're in the cloud,” she said, in order to reduce waste.
During this episode, Williams pointed to solutions in the Spot by Netapp portfolio that use automation to help make the most of cloud infrastructure, such as its flagship product, Elastigroup, which takes advantage of excess capacity to scale workloads.
In June, Spot by NetApp acquired Instaclustr, a solution for managing open source database and streaming technologies. The company recognizes the growing importance of open source for enterprises. “We're paying attention to trends for cloud applications,” Williams said, “and we're growing the portfolio to address the needs that are top of mind for those customers.”
Check out the entire episode to learn more about CloudOps.
Redis, best known as a data cache or real-time data platform, is evolving into much more, Tim Hall, chief of product at the company, told The New Stack in a recent TNS Makers podcast.
Redis is an in-memory, or memory-first, database: the data lands in memory, and customers use it for both caching and persistence. These days, the company supports a number of flexible data models, and one of the brand promises of Redis is that developers can store data in the form they're working with. As opposed to a SQL database, where you might have to turn your data structures into columns and tables, you can store the data structures you're working with directly in Redis, Hall said.
“About 40% of our customers today are using us as a primary database technology,” he said. “That may surprise some people if you're sort of a classic Redis user and you knew us from in-memory caching, you probably didn't realize we added a variety of mechanisms for persistence over the years.”
Meanwhile, Redis does store the data on disk, behind the scenes, while keeping a copy in memory. If there's any sort of failure, Redis can recover the data off of disk, replay it into memory, and get you back up and running. That mechanism has been around for about half a decade now.
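For illustration, here is a minimal redis-py sketch of both halves of that promise: storing a native data structure directly (no columns or tables) and enabling append-only-file persistence so writes can be replayed after a failure. The key names and values are made up, and the snippet assumes a local Redis server is running:

```python
import redis

# Assumes a redis-server running locally on the default port.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

# Store a native hash directly; no mapping into columns and tables.
r.hset("user:1001", mapping={"name": "Ada", "plan": "premium"})
print(r.hgetall("user:1001"))  # {'name': 'Ada', 'plan': 'premium'}

# Persistence: with the append-only file enabled, Redis logs writes to disk
# and can replay them into memory after a failure.
r.config_set("appendonly", "yes")
```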
Yet, Redis is playing what Hall called the ‘long game', particularly in terms of continuing to reach out to developers and showing them what the latest capabilities are.
"If you look at the top 10 databases on the planet, they've all moved into the multimodel category. And Redis is no different from that perspective," Hall said. "So if you look at Oracle, it was traditionally a relational database; Mongo is traditionally a JSON document store only; and obviously Redis is a key-value store. We've all moved down the field now. Now, why would we do that? We're all looking to simplify the developer's world, right?"
Yet, each vendor is really trying to leverage their core differentiation and expand out from there. And the good news for Redis is speed is its core differentiation.
"Why would you want a slow data platform? You don't," Hall said. "So the more that we can offer those extended capabilities for working with things like JSON, or t-digest, a probabilistic data structure we just launched alongside our longstanding Bloom filter support, the more we expand our footprint. We're saying: if you need speed, and reducing latency and having high interactivity is your goal, Redis should be your starting point. If you want some esoteric edge-case functionality where you need to manipulate JSON in some very strange way, you probably should go with Mongo; I probably won't support that for a long time. But if you're just working with the basic data structures, and you need to be able to query and update your JSON document, those straightforward use cases we support very, very well, and we support them at speed and scale."
As a Redis customer, Alain Russell, CEO at Blackpepper, a digital e-commerce agency in Auckland, New Zealand, said his firm has undergone the same transition.
"We started off using Redis as a cache; that helped us speed up traditional data that was slower than we wanted it," he said. "And then we went down a cloud path a couple of years ago. Part of that migration included us becoming, you know, what's deemed 'cloud native.' And we started using all of these different data stores and data structures, and dealing with all of them is actually complicated. You know, and from a developer perspective, it can be a bit painful."
So Blackpepper started looking at how to make things simpler while also keeping its platform very fast, and it looked at the Redis Stack. "And honestly, it filled all of our needs in one platform. And we're kind of in this path at the moment; we were using the basics of it. And we're very early on in our journey, right? We're still learning how things work and how to use it properly. But we also have a big list of things that we're using other data stores for, traditional data, and working out, 'okay, this will be something that we will migrate,' you know, because we use persistence heavily now in Redis."
Twenty-year-old Blackpepper works with predominantly traditional retailers and helps them in their omni-channel journey.
Hall said there are three modes of access to the Redis technology: the Redis open source project; the Redis Stack, which the company recommends developers start with today; and Redis Enterprise Edition, which is available as software or in the cloud.
“It's the most popular NoSQL database on the planet six years running,” Hall said. “And people love it because of its simplicity.”
Meanwhile, it takes effort to maintain both the commercial product and the open source effort. Hall, who has worked at Hortonworks and InfluxData, said, "Not every open source company is the same in terms of how you make decisions about what lands in your commercial offering and what lands in open source, and where the contributions come from and who's involved."
For instance, "if there was something that somebody wanted to contribute that was going to go against our commercial interest, we probably would not merge that," Hall said.
Redis was run by project founder Salvatore Sanfilippo, for many, many years, and he was the sole arbiter of what landed and what did not land in Redis itself. Then, over the last couple of years, Redis created a core steering committee. It's made up of one individual from AWS, one individual from Alibaba, and three Redis employees who look after the contributions that are coming in from the Redis open source community members who want to contribute those things.
"And then we reconcile what we want from a commercial interest perspective, either upstream, or things that, frankly, may have been commoditized and that we want to push downstream into the open source offering," Hall said. "And so the thing that you're asking about is sort of my core existential challenge all the time: figuring out where we're going from a commercial perspective. What do we want to land there first? And how can we create a conveyor belt of commercial opportunity that keeps us in business as a software company, creating differentiation against potential competitors that show up? And then over time, making sure that those things that do become commoditized, or maybe are not as differentiating anymore, I want to release those to the open source community. But this upstream/downstream kind of challenge is something that we're constantly working through."
Blackpepper was an open source Redis user initially, but they started a journey where they used Memcached to speed up data. Then they migrated to Redis when they moved to the AWS cloud, Russell said.
The Redis TNS Makers podcast goes on to look at the use of AI/ML in the platform, the acquisition of RESP.app, the importance of JSON and RediSearch, and where Redis is headed in the future.
Let’s say you’re a passenger on a cruise ship. Floating in the middle of the ocean, far from reliable Wi-Fi, you wear a device that lets you into your room, that discreetly tracks your move from the bar to the dinner table to the pool and delivers your drink order wherever you are. You can buy sunscreen or toothpaste or souvenirs in the ship’s stores without touching anything.
If you’re a Carnival Cruise Lines passenger, this is reality right now, in part because of the company’s partnership with Couchbase, according to Mark Gamble, product and solutions marketing director, Couchbase.
Couchbase provides cloud native NoSQL database technology that's used to power applications for customers including Carnival, as well as Amadeus, Comcast, LinkedIn, and Tesco.
In Carnival’s case, Gamble said, “they run an edge data center on their ships to power their Ocean Medallion application, which they are super proud of. They use it a lot in their ads, because it provides a personalized service, which is a differentiator for them to their customers.”
In this episode of The New Stack Makers, Gamble spoke to Heather Joslyn, features editor of TNS, about edge computing, 5G, and Couchbase Capella, its Database as a Service (DBaaS) offering for enterprises.
This episode of Makers was sponsored by Couchbase.
The goal of edge computing, Gamble told our podcast audience, is to bring data and compute closer to the applications that consume them. This speeds up data processing, he said, "because data doesn't have to travel all the way to the cloud and back." But it also has other benefits.
“This serves to make applications more reliable, because local data processing sort of removes internet slowness and outages from the equation,” he said.
The innovation of 5G networks has also had a big impact on reducing latency and increasing uptime, Gamble said.
"To compare with 4G, things like the average round-trip data travel time between the device and the cell tower is like 15 milliseconds. And with 5G, that latency drops to like two milliseconds. And 5G can support, they say, a million devices within a third of a mile radius, way more than what's possible with 4G."
But 5G, Gamble said, "really requires edge computing to realize its full potential." Increasingly, he said, Couchbase hears interest from its customers in building "offline-first" applications, which can run even in Wi-Fi dead zones.
The use cases, he said, are everywhere: "When I pass a fast food restaurant, it's starting to become more common, where you'll see that, instead of just a box you're talking to, there's a person holding a tablet, and they walk down the line, and they're taking orders. And as they come closer to the restaurant, it syncs up with the kitchen. They find that just a better, more efficient way to serve customers. And so it becomes a competitive differentiator for them."
As part of its Capella product, Couchbase recently announced Capella App Services, a new capability for mobile developers: a fully managed backend designed for mobile, Internet of Things (IoT) and edge applications.
“Developers use it to access and sync data between the Database as a Service and their edge devices, as well as it handles authenticating and managing mobile and edge app users,” he said.
Used in conjunction with Couchbase Lite, a lightweight, embedded NoSQL database used with mobile and IoT devices, Capella App Services synchronizes the data between backend and edge devices.
Even for workers in remote areas, "eventually, you have to make sure that data updates are shared with the rest of the ecosystem," Gamble said. "And that's what App Services is meant to do, as connectivity allows — so during network disruptions in areas with no internet, apps will still continue to operate."
Check out the rest of the conversation to learn more about edge computing and the challenges Gamble thinks still need to be addressed in that space.
Wayfair describes itself as "the destination for all things home: helping everyone, anywhere create their feeling of home." It provides an online platform for acquiring home furniture, outdoor decor and other furnishings. It also supports its suppliers so they can use the platform to sell their home goods, explained Natali Vlatko, global lead of the open source program office (OSPO) and senior software engineering manager for Wayfair, who was the featured guest in Detroit during KubeCon + CloudNativeCon North America 2022.
"It takes a lot of technical work behind the scenes to kind of get that going," Vlatko said. This is especially true as Wayfair scales its operations worldwide. The infrastructure must be highly distributed, relying on containerization, microservices, Kubernetes and, especially, open source to get the job done.
“We have technologists throughout the world, in North America and throughout Europe as well,” Vlatko said. “And we want to make sure that we are utilizing cloud native and open source, not just as technologies that fuel our business, but also as the ways that are great for us to work in now.”
Open source has served as a "great avenue" for creating and offering technical services, and to accomplish that, Vlatko amassed the requisite talent, she said: a small team of engineers focused on platform work, advocacy, community management and, internally, license compliance.
About five years ago, when Vlatko joined Wayfair, the company had yet to go "full tilt into going all cloud native," she said. Wayfair had a hybrid mix of on-premises and cloud infrastructure. After decoupling from a monolith into a microservices architecture, "that journey really began where we understood the really great benefits of microservices and got to a point where we thought, 'okay, this hybrid model for us actually would benefit our microservices being fully in the cloud,'" Vlatko said. In late 2020, Wayfair made the decision to "get out of the data centers" and shift operations to the cloud, a move completed in October, Vlatko said.
The company culture is such that engineers have room to experiment without major fear of failure, doing a lot of development work in a sandbox environment. "We've been able to create sandbox environments that are close to our production environments, so that experimentation can occur. Folks can learn as they go without actually fearing failure or fearing a mistake," Vlatko said. "So I think experimentation is a really important aspect of our own learning and growth for cloud native. Also, coming to great events like KubeCon + CloudNativeCon and other events [has been helpful]. We're hearing from other companies who've done the same journey and process and are learning from their use cases."
In the rush to create, provision and manage Kubernetes, proper resource provisioning is often left out. According to StormForge, a company paying, for example, $1 million a month for cloud computing resources is likely wasting $6 million a year on Kubernetes resources that sit unused. The reasons for this are manifold and can vary: DevOps teams may estimate too conservatively or too aggressively, or simply overspend on resource provisioning. In this podcast with StormForge's Yasmin Rajabi, vice president of product management, and Patrick Bergstrom, CTO, we look at how to properly provision Kubernetes resources and the associated challenges. The podcast was recorded live in Detroit during KubeCon + CloudNativeCon North America 2022.
Almost ironically, the most commonly used Kubernetes settings can complicate the ability to optimize resources for applications. The processes typically involve Kubernetes resource requests and limits, and predicting how those resources might impact quality of service for pods. Developers deploying an application on Kubernetes often need to set CPU requests, memory requests and other resource limits. "They are usually like, 'I don't know — whatever was there before or whatever the default is,'" Rajabi said. "They are in the dark."
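For context, these are the settings in question. The sketch below uses the official Kubernetes Python client to declare them explicitly; the numbers are deliberately arbitrary guesses, which is exactly the problem Rajabi describes:

```python
from kubernetes import client

# Requests are what the scheduler reserves for the pod; limits are the
# throttling/OOM ceiling. These values are illustrative guesses only.
resources = client.V1ResourceRequirements(
    requests={"cpu": "250m", "memory": "256Mi"},
    limits={"cpu": "500m", "memory": "512Mi"},
)

container = client.V1Container(
    name="web",              # hypothetical workload
    image="example/web:1.0",
    resources=resources,
)
print(container.resources.requests)  # {'cpu': '250m', 'memory': '256Mi'}
```

Multiply a guess like this across thousands of workloads and the over-provisioning compounds, which is the waste StormForge aims to eliminate.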
Sometimes, developers might use their favorite observability tool and say, "we look where the max is, and then take a guess," Rajabi said. "The challenge, if you start from there, when you start to scale that out — especially for organizations that are using horizontal scaling with Kubernetes — is that you're taking that problem and you're just amplifying it everywhere. And so, when you've hit that complexity at scale, taking a second to look back and say, 'how do we fix this?', you don't want to just arbitrarily go reduce resources, because you have to look at the trade-off of how that impacts your reliability."
The process then becomes very hit or miss. “That's where it becomes really complex, when there are so many settings across all those environments, all those namespaces,” Rajabi said. “It's almost a problem that can only be solved by machine learning, which makes it very interesting.”
But before organizations learn the hard way about not automating the optimization of Kubernetes deployments and management, many resources — and costs — go to waste. "It's one of those things that becomes a bigger and bigger challenge, the more you grow as an organization," Bergstrom said. Many StormForge customers are deploying into thousands of namespaces and thousands of workloads. "You are suddenly trying to manage each workload individually to make sure it has the resources and the memory that it needs," Bergstrom said. "It becomes a bigger and bigger challenge."
The process should actually be pain-free when ML is properly implemented. Through StormForge’s partnership with Datadog, it is possible to apply ML to collected historical data, Bergstrom explained. “Then, within just hours of us deploying our algorithm into your environment, we have machine learning that's used two to three weeks' worth of data to train that can then automatically set the correct resources for your application. This is because we know what the application is actually using,” Bergstrom said. “We can predict the patterns and we know what it needs in order to be successful.”
Feature management isn’t a new idea but lately it’s a trend that’s picked up speed. Analysts like Forrester and Gartner have cited adoption of the practice as being, respectively, “hot” and “the dominant approach to experimentation in software engineering.”
A study released in November found that 60% of 1,000 software and IT professionals surveyed had started using feature flags only in the past year. The report was sponsored by LaunchDarkly, the feature management platform company, and conducted by Wakefield Research.
At the heart of feature management are feature flags, which give organizations the ability to turn features on and off without having to redeploy an entire app. Feature flags allow organizations to test new features and to control things like access to premium versions of a customer-facing service.
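As a minimal sketch of the pattern (not any particular vendor's SDK; the flag names, segments and percentages here are illustrative):

```python
# In production these rules would come from a feature management service.
FLAGS = {
    "new-checkout": {"enabled": True, "rollout_percent": 25},
    "premium-dashboard": {"enabled": True, "segment": "premium"},
}

def flag_enabled(name: str, user: dict) -> bool:
    flag = FLAGS.get(name)
    if not flag or not flag["enabled"]:
        return False
    if "segment" in flag and user.get("plan") != flag["segment"]:
        return False  # entitlement-style gating: only users on the right plan
    if "rollout_percent" in flag:
        # A real system buckets users with a stable hash of the user key;
        # Python's built-in hash() just keeps the sketch short.
        return hash(user["key"]) % 100 < flag["rollout_percent"]
    return True

# The new feature ships dark; flipping "enabled" to False turns it off
# for everyone without redeploying the app.
user = {"key": "user-123", "plan": "free"}
path = "new" if flag_enabled("new-checkout", user) else "legacy"
print(f"render {path} checkout flow")
```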
An overall feature management practice that includes feature flags allows organizations “to release progressively any new feature to any segment of users, any environment, any cohort of customers in a controlled manner that really reduces the risk of each release,” said Ravi Tharisayi, senior director of product marketing at LaunchDarkly, in this episode of The New Stack Makers podcast.
Tharisayi talked to The New Stack’s features editor, Heather Joslyn, about the future of feature management, on the eve of the company’s latest Trajectory user conference. This episode of Makers was sponsored by LaunchDarkly.
The participants in the new survey worked at companies of at least 200 employees, and nearly all of those who use feature flags — 98% — said they believe the flags save their organizations money and demonstrate a return on investment.
Furthermore, 70% said that their company views feature management as either a mission-critical or a high-priority investment.
Fielding the annual survey, Tharisayi said, has offered a window into how organizations are using feature flags. Fifty-five percent of customers in the 2022 survey said they use feature flags as long-term operational controls — for API rate limiting, for instance, to prioritize certain API calls in high-traffic situations.
The second most common use, the survey found — cited by 47% of users — was for entitlements, “managing access to different types of plans, premium plans versus other plans, for example,” Tharisayi said.
“This is really a powerful capability because of this ability to allow product managers or other personas to manage who has access to certain features to certain plans, without having to have developers be involved,” he said. “Previously, that required a lot of developer involvement.”
LaunchDarkly, Tharisayi said, has been investing in and improving its platform’s experimentation and measurement capabilities: “At the core of that is this notion that experimentation can be a lot more successful when it's tightly integrated to the developer workflow.”
As an example, he pointed to CCP Games, makers of the gaming platform EVE Online, which serves millions of players.
“They were recently thinking through how to evolve their recommendation engine, because they wanted this engine to recommend actions for their gamers that will hopefully increase their ultimate North Star metric,” its tracking of how much time gamers spend with their games.
By using LaunchDarkly’s platform, CCP was able to run A/B tests and increase gamers’ session lengths and engagement. ”So that's the kind of capability that we think is going to be an increasing priority,” Tharisayi said.
As feature management matures and standardizes, Tharisayi pointed to the adoption of DevOps as both a model and a cautionary tale.
“When it comes to cultural shifts, like DevOps or feature management, that require teams to work in a different way, oftentimes there can be early success with a small team,” Tharisayi said. “But then there can be some cultural and process barriers as you're trying to standardize to the team level and multi-team level, before figuring out the kinks in deploying it at an organization-wide level.”
He added: “That's one of the trends that we observed a little bit in this survey, is that there are some cultural elements to getting success at scale, with something like feature management, and the opportunity as an industry to support organizations as they're making that quest to standardize a practice like this, like any other cultural practice.”
Check out the full episode for more on the survey and on what’s next for feature management.
DETROIT — Rob Skillington’s grandfather was a civil engineer, working in an industry that, in over a century, developed processes and know-how that enabled the creation of buildings, bridges and roads.
“A lot of those processes matured to a point where they could reliably build these things,” said Skillington, co-founder and chief technology officer at Chronosphere, an observability platform. “And I think about observability as that same maturity of engineering practice. When it comes to building software that actually is useful in the world, it is this process that helps you actually achieve the deployment and operation of these large scale systems that we use every day.”
Skillington spoke about the evolution of observability, and his company’s recent donation of an open source project to Prometheus, in this episode of The New Stack Makers podcast. Heather Joslyn, features editor of TNS, hosted the conversation.
This On the Road edition of The New Stack Makers was recorded at KubeCon + CloudNativeCon North America, in the Motor City. The episode was sponsored by Chronosphere.
Helping observability practices grow as mature and reliable as civil engineering rules that help build sturdy skyscrapers is a tough task, Skillington suggested.
In the cloud era, he said, “you have to really prepare the software for a whole set of runtime environments. And so the challenges around that is really about making it consistent, well understood and robust.”
At KubeCon in late October, Chronosphere and PromLabs (founded by Julius Volz, creator of Prometheus) announced that they had donated their open source project PromLens to the Prometheus project, the open source monitoring and alerting toolkit.
The donation is a way of placing a bet on a tool that integrates well with Kubernetes. “There's this real yearning for essentially a standard that can be built upon by everyone in the industry, when it comes to these core primitives, essentially,” Skillington said. “And Prometheus is one of those primitives. We want to continue to solidify that as a primitive that stands the test of time.”
“We can't build a self-driving car if we're always building a different car,” he added.
PromLens builds Prometheus queries in a sort of integrated development environment (IDE), Skillington said. It also makes it easier for more people in an organization to create queries and understand the meaning and seriousness of alerts.
The PromLens tool breaks queries into a visual format, and allows users to edit them through a UI. “Basically, it's kind of like a What You See Is What You Get editor, or WYSIWYG editor, for Prometheus queries,” Skillington said.
“Some of our customers have tens of thousands of these alerts to find in PromQL, which is the query language for Prometheus,” he noted. “Having a tool like an integrated development environment — where you can really understand these complex queries and iterate faster on, setting these up and getting back to your day job — is incredibly important.”
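For readers who haven't written PromQL, here is roughly what issuing such a query looks like against Prometheus' standard HTTP query API, sketched in Python; the server address and metric and label names are illustrative.

```python
import requests

# PromQL: per-status request rate over the last five minutes.
# /api/v1/query is Prometheus' standard instant-query endpoint.
PROM_URL = "http://localhost:9090/api/v1/query"
query = 'sum(rate(http_requests_total{job="api"}[5m])) by (status)'

resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])
```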
Check out the full episode for more on PromLens and the current state of observability.
In this latest podcast from The New Stack, we spoke with Ricardo Torres, who is the chief engineer of open source and cloud native for aerospace giant Boeing. Torres also joined the Cloud Native Computing Foundation in May to serve as a board member. In this interview, recorded at KubeCon+CloudNativeCon last month, Torres speaks about Boeing's use of open source software, as well as its adoption of cloud native technologies.
While we may think of Boeing as an airplane manufacturer, it would be more accurate to think of the company as a large-scale system integrator, one that uses a lot of software. So, like other large-scale companies, Boeing sees a distinct advantage in maintaining good relations with the open source community.
"Being able to leverage the best technologists out there in the rest of the world is of great value to us strategically," Torres said. This strategy allows Boeing to "differentiate on what we do as our core business rather than having to reinvent the wheel all the time on all of the technology."
Like many other large companies, Boeing has created an open source office to better work with the open source community. Although Boeing is primarily a consumer of open source software, it still wants to work with the community. "We want to make sure that we have a strategy around how we contribute back to the open source community, and then leverage those learnings for inner sourcing," he said.
Boeing also manages how it uses open source internally, keeping tight controls on the supply chain of open source software it uses. "As part of the software engineering organization, we partner with our internal IT organization to look at our internet traffic and assure nobody's going out and downloading directly from an untrusted repository or registry. Instead, we host approved sources internally."
It's not surprising that Boeing, which deals with a lot of government agencies, embraces the practice of using software bills of material (SBOMs), which provide a full listing of what components are being used in a software system. In fact, the company has been working to extend the comprehensiveness of SBOMs, according to Torres.
" I think one of the interesting things now is the automation," he said of SBOMs. "And so we're always looking to beef up the heuristics because a lot of the tools are relatively naïve, and that they trust that the dependencies that are specified are actually representative of everything that's delivered. And that's not good enough for a company like Boeing. We have to be absolutely certain that what's there is exactly what did we expected to be there."
While Boeing builds many systems that reside in private data centers, the company is increasingly relying on the cloud as well. Earlier this year, Boeing signed agreements with the three largest cloud service providers (CSPs): Amazon Web Services, Microsoft Azure and Google Cloud Platform.
"A lot of our cloud presence is about our development environments. And so, you know, we have cloud-based software factories that are using a number of CNCF and CNCF-adjacent technologies to enable our developers to move fast," Torres said.
DETROIT — Developer relations, or DevRel to its friends, is not only a coveted career path but also essential to helping developers learn and adopt new technologies.
That guidance is a matter of survival for many organizations. The cloud native era demands new skills and new ways of thinking about developers and engineers’ day-to-day jobs. At Dell Technologies, it meant responding to the challenges faced by its existing customer base, which is “very Ops centric — server admins, system admins,” according to Brad Maltz, of Dell.
With the rise of the DevOps movement, “what we realized is our end users have been trying to figure out how to become infrastructure developers,” said Maltz, the company’s senior director of DevOps portfolio and DevRel. “They've been trying to figure out how to use infrastructure as code, Kubernetes, cloud, all those things.”
“And what that means is we need to be able to speak to them where they want to go, when they want to become those developers. That’s led us to build out a developer relations program ... and in doing that, we need to grow out the community, and really help our end users get to where they want to.”
In this episode of The New Stack’s Makers podcast, Maltz spoke to Heather Joslyn, TNS features editor, about how Dell has, since August, been busy creating a DevRel team to aid its enterprise customers seeking to adopt DevOps as a way of doing business.
This On the Road edition of Makers, recorded at KubeCon + CloudNativeCon North America in the Motor City, was sponsored by Dell Technologies.
Maltz, an eight-year veteran of Dell, has moved quickly in assembling his team, with three hires made by late October and a fourth planned before year’s end. That’s lightning fast, especially for a large, established company like Dell, which was founded in 1984.
“There's two ways of building a DevOps team,” he said. “One way is to actually kind of go and try to homegrow people on the inside and get them more presence in the community. That's the slower road.
“But we decided we have to go and find industry influencers that believe in our cause, that believe in the problem space that we live in. And that's really how we started this: we went out to find some very, very strong top talent in the industry and bring them on board.”
In addition to spreading the DevOps solutions gospel at conferences like KubeCon, Maltz’s vision for the team is currently focused on social media and building out a website, developer.dell.com, which will serve as the landing page for the company’s DevRel knowledge, including links to community, training, how-to videos and an API marketplace.
In building the team, the company made an unorthodox choice. “We decided to put Dev Rel into product management on the product side, not marketing,” Maltz said. “The reason we did that was we want the DevRel folks to really focus on community contributions, education, all that stuff.
“But while they're doing that, their job is to bring the data back from those discussions they're having in the field back to product management, to enable our tooling to be able to satisfy some of those problems that they're bringing back so we can start going full circle.”
The roles that Dell’s DevRel team is focusing on in the DevOps culture are site reliability engineers (SREs) and platform engineers. These not only align with its traditional audience of Ops engineers, but reflect a reality Dell is seeing in the wider tech world.
“The reality is, application developers don't want to shift left. They don't want to operate; they want somebody else to take it, and they want to keep developing,” Maltz said. “Where DevOps has transitioned for us is, how do we help those people that are kind of that operator turning into infrastructure developer fit into that DevOps culture?”
The rise of platform engineering, he suggested, is a reaction to the endless choices of tools available to developers these days.
“The notion is developers in the wild are able to use any tool on any cloud with any language, and they can do whatever they want. That's hard to support,” he said.
“That's where DevOps got introduced, and was to basically say, Hey, we're gonna put you into a little bit of a box, just enough of a box that we can start to gain control and get ahead of the game. The platform engineering team, in this case, they're the ones in charge of that box.”
But all of that, Maltz said, doesn’t mean that “shift left” — giving devs greater responsibility for their applications — is dead. It simply means most organizations aren’t ready for it yet: “That will take a few more years of maturity within these DevOps operating models, and other things that are coming down the road.”
Check out the full episode for more from Maltz, including new solutions from Dell aimed at platform engineers and SREs and collaborations with Red Hat OpenShift.
Cloud giant Amazon Web Services manages the largest number of Kubernetes clusters in the world, according to the company. In this podcast recording, AWS Senior Engineer Jay Pipes discusses AWS' use of Kubernetes, as well as the company's contribution to the Kubernetes code base. The interview was recorded at KubeCon North America last month.
Kubernetes is an open source container orchestration platform. AWS is one of the largest providers of cloud services. In 2021, the company generated $61.1 billion in revenue, worldwide. AWS provides a commercial Kubernetes service, called the Amazon Elastic Kubernetes Service (EKS). It simplifies the Kubernetes experience by adding a control plane and worker nodes.
In addition to providing a commercial Kubernetes service, AWS supports the development of Kubernetes, by dedicating engineers to the work on the open source project.
"It's a responsibility of all of the engineers in the service team to be aware of what's going on and the upstream community to be contributing to that upstream community, and making it succeed," Pipes said. "If the upstream open source projects upon which we depend are suffering or not doing well, then our service is not going to do well. And by the same token, if we can help that upstream project or project to be successful, that means our service is going to be more successful."
In addition to EKS, AWS has a number of other tools to help Kubernetes users. One is Karpenter, an open source, flexible, high-performance Kubernetes cluster autoscaler built with AWS. Karpenter provides more fine-grained scaling capabilities than Kubernetes' built-in Cluster Autoscaler, Pipes said. Instead of working through the Cluster Autoscaler, Karpenter provisions capacity directly with AWS' own Fleet API, which offers superior scheduling capabilities.
Another tool for Kubernetes users is cdk8s, which is an open-source software development framework for defining Kubernetes applications and reusable abstractions using familiar programming languages and rich object-oriented APIs. It is similar to the AWS Cloud Development Kit (CDK), which helps users deploy applications using AWS CloudFormation, but instead of the output being a CloudFormation template, the output is a YAML manifest that can be understood by Kubernetes.
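As a loose, framework-free illustration of the concept cdk8s automates (plain Python and PyYAML stand in here for cdk8s' typed constructs and its synth step; the names and image are illustrative):

```python
# Describe a Kubernetes object with ordinary Python, then emit YAML.
# cdk8s does this with real constructs and a `cdk8s synth` command;
# this sketch only shows why code beats hand-copied YAML.
import yaml

def deployment(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Functions, loops and review tooling replace copy-pasted manifests.
print(yaml.safe_dump(deployment("web", "registry.example.com/web:1.0")))
```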
In addition to providing open source development help to Kubernetes, AWS has offered to help defray the considerable expense of hosting the Kubernetes development and deployment process. Currently, the Kubernetes upstream build process is hosted on Google Cloud Platform, with the artifact registry hosted in Google's container registry, totaling about 1.5TB worth of storage. Traffic to AWS alone was generating $90,000 to $100,000 a month in egress costs, just to get Kubernetes artifacts onto AWS-hosted infrastructure, Pipes said.
AWS has been working on a mirror of the Kubernetes assets that would reside on the company's own cloud servers, thereby eliminating the Google egress costs typically borne by the Cloud Native Computing Foundation.
"By doing that we completely eliminate the egress costs out of Google data centers and into AWS data centers," Pipes said.
LOS ANGELES — Kubernetes, the open source container orchestrator, may have a big footprint in the cloud native world, but some organizations are doing just fine without it. Take, for example, SeatGeek, which runs a mobile application that serves as a primary and secondary market for event tickets.
For cloud infrastructure, the 12-year-old company’s workloads — which include non-containerized applications — have largely run on Amazon Web Services. A few years ago, it turned to HashiCorp’s Nomad, a scheduler built for running apps whether they’re containerized or not.
“In the beginning, we had a platform that an engineer would deploy something to, but it was very constrained. We could only give them a certain number of options that they could use, a very static experience,” said Jose Diaz-Gonzalez, a staff engineer at SeatGeek, in this episode of The New Stack Makers podcast.
“To scale an application required manual toil on the platform team side, and then they can do some work. And so for us, we wanted to expose more of the platform to engineers and allow them to have more control over what it is that they were shipping, how that runtime environment was executed, and how they scale their applications.”
This On the Road episode of Makers, recorded here during HashiConf, HashiCorp’s annual user conference, featured a case study of SeatGeek’s adoption of Nomad and the HashiCorp Cloud Platform. The conversation was hosted by Heather Joslyn, features editor of TNS.
This episode was sponsored by HashiCorp.
SeatGeek essentially runs the back office for ticket sales for its partners, including Broadway productions and NFL teams like the Dallas Cowboys, providing them with “something like a software as a service,” said Diaz-Gonzalez.
“All of those installations, they're single tenant, but they run roughly the same way for every single customer. And then on the consumer side we run a ton of different services and microservices and that sort of thing.”
Though the workloads run in different languages or on different frameworks, he said, they are essentially homogeneous in their deployment patterns; SeatGeek deploys to Windows and Linux containers on the enterprise side, and to Linux on the consumer, and deploys to both the U.S. and European Union regions.
It began using Nomad to give developers more control over their applications; previously, the deployment experience had been very constrained, resulting in what Diaz-Gonzalez called “a very static experience” that required manual toil from the platform team whenever an application needed to scale.
Now, he said, SeatGeek uses Nomad “to provide basically the entire orchestration layer for our deployments.”
Forgoing Kubernetes (K8s) does have its drawbacks. The cloud native ecosystem is largely built around products meant to run with K8s rather than Nomad.
The ecosystem built around HashiCorp’s product is “a much smaller community. If we need support, we lean heavily on HashiCorp Enterprise, and they're willing, on the support team, to answer questions. But if we need support on making some particular change, or using some certain feature, we might be one of the few people starting to use that feature.”
“That said, it's much easier for us to manage and support Nomad and its integration with the rest of our platform, because it's so simple to run.”
To learn more about SeatGeek’s cloud journey and the challenges it faced — such as dealing with security and policy — check out the full episode.
The OpenTelemetry project offers vendor-neutral integration points that help organizations obtain the raw materials — the "telemetry" — that fuel modern observability tools, with minimal effort at integration time. But what does OpenTelemetry mean for those who use their favorite observability tools but don’t exactly understand how it can help them? How might OpenTelemetry be relevant to folks who are new to Kubernetes (the majority of KubeCon attendees in recent years) and those who are just getting started with observability?
Austin Parker, head of developer relations at Lightstep, and Morgan McLean, director of product management at Splunk, discuss in this podcast from KubeCon + CloudNativeCon 2022 how the OpenTelemetry project has created demo services to help cloud native community members better understand cloud native development practices and test out OpenTelemetry, as well as Kubernetes, observability software and more.
At this juncture in DevOps history, there has been considerable hype around observability for developers and operations teams. More recently, much attention has been given to combining the different observability solutions in use through a single interface, and to that end, OpenTelemetry has emerged as a key standard.
DevOps teams today need OpenTelemetry since they typically work with a lot of different data sources for observability processes, Parker said. “If you want observability, you need to transform and send that data out to any number of open source or commercial solutions, and you need a lingua franca to be consistent. Every time I have a host, or an IP address, or any kind of metadata, consistency is key, and that's what OpenTelemetry provides.”
Additionally, as a developer or an operator, OpenTelemetry serves to instrument your system for observability, McLean said. “OpenTelemetry does that through the power of the community working together to define those standards and to provide the components needed to extract that data among hundreds of thousands of different combinations of software and hardware and infrastructure that people are using,” McLean said.
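In Python, for instance, wiring up OpenTelemetry's SDK and emitting a trace span takes only a few lines; the service and span names below are illustrative, and the console exporter stands in for whatever backend a team actually uses.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer; swapping ConsoleSpanExporter for an OTLP exporter
# is what lets the same instrumentation feed any backend.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # illustrative service name

with tracer.start_as_current_span("charge-card") as span:
    span.set_attribute("order.id", "A-1001")  # hypothetical attribute
    # ... business logic runs inside the span ...
```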
Observability and OpenTelemetry, while conceptually straightforward, do involve a learning curve. To that end, the OpenTelemetry project has released a demo to help. It is intended both to help users better understand cloud native development practices and to let them test out OpenTelemetry, as well as Kubernetes, observability software and more, the project’s creators say.
The OpenTelemetry Demo v1.0 general release is available on GitHub and on the OpenTelemetry site. The demo helps with learning how to add instrumentation to an application to gather metrics, logs and traces for observability. There is detailed instruction for open source projects like Prometheus for Kubernetes monitoring and Jaeger for distributed tracing, and the demo shows how to get acquainted with tools such as Grafana to create dashboards. It also extends to scenarios in which failures are created and OpenTelemetry data is used for troubleshooting and remediation. The demo was designed for beginner- and intermediate-level users, and can be set up to run on Docker or Kubernetes in about five minutes.
“The demo is a great way for people to get started,” Parker said. “We've also seen a lot of great uptake from our commercial partners as well who have said ‘we'll use this to demo our platform.’”
DETROIT — Even in the midst of hand-wringing at KubeCon + CloudNativeCon North America about how the global economy will make it tough for startups to gain support in the near future, the news about a couple of young WebAssembly-centric companies was bright.
Cosmonic announced that it had raised $8.5 million in a seed round led by Vertex Ventures. And Fermyon Technologies unveiled both funding and product news: a $20 million Series A led by Insight Partners (which also owns The New Stack) and the launch of Fermyon Cloud, a hosted platform for running WebAssembly (Wasm) microservices. Both Cosmonic and Fermyon were founded in 2021.
“A lot of people think that Wasm is this maybe up and coming thing, or it's just totally new thing that's out there in the future,” noted Bailey Hayes, a director at Cosmonic, in this episode of The New Stack Makers podcast.
But the future is already here, she said: “It's one of technology's best kept secrets, because you're using it today, all over. And many of the applications that we use day-to-day — Zoom, Google Meet, Prime Video, I mean, it really is everywhere. The thing that's going to change for developers is that this will be their compilation target in their build file.”
In this On the Road episode of Makers, recorded at KubeCon here in the Motor City, Hayes and Kate Goldenring, a software engineer at Fermyon, spoke to Heather Joslyn, TNS’ features editor, about the state of WebAssembly. This episode was sponsored by the Cloud Native Computing Foundation (CNCF).
WebAssembly, the roughly five-year-old binary instruction format for a stack-based virtual machine, is designed to execute binary code on the web, letting developers bring the performance of languages like C, C++ and Rust to web development.
At Wasm Day, a co-located event that preceded KubeCon, support for a number of other languages — including Java, .Net, Python and PHP — was announced. At the same event, Docker also revealed that it has added Wasm as a runtime that developers can target; that feature is now in beta.
Such steps move WebAssembly closer to fulfilling its promise to devs that they can “build once, run anywhere.”
“With Wasm, developers shouldn't need to know necessarily that it's their compilation target,” said Hayes. But, she added, “what you do know is that you're now able to move that Wasm module anywhere in any cloud. The same one that you built on your desktop that might be on Windows can go and run on an ARM Linux server.”
Goldenring pointed to the findings of the CNCF’s “mini survey” of WebAssembly users, released at Wasm Day, as evidence that the technology’s use cases are proliferating quickly.
“Even though WebAssembly was made for the web, the number one response — it was a little over 60% — said serverless,” she noted. “And then it said the edge, and then it said web development, and then it said IoT, and the use cases just keep going. And that's because it is this incredibly powerful, portable target that you can put in all these different use cases. It's secure, it has instant startup time.”
The podcast guests talked about recent efforts to make it easier to use Wasm, share code and reuse it, including the development of the component model, which proponents hope will simplify how WebAssembly works outside the browser. Goldenring and Hayes discussed efforts now under construction, including “worlds” files and Warg, a package registry for WebAssembly. (Hayes co-presented at Wasm Day on the work being done on WebAssembly package management, including Warg.)
A world file, Hayes said, is a way of defining your environment. “One way to think of it is like .profile, but for Wasm, for a component. And so it tells me what types of capabilities I need for my Wasm module to run successfully, and the runtime can read that and give me the right stuff.”
And as for Warg, Hayes said: “It's really a protocol and a set of APIs, so that we can slot it into existing ecosystems. A lot of people think of it as us trying to pave over existing technologies. And that's really not the case. The purpose of Warg is to be able to slot right in, so that you continue working in your current developer environment and experience and using the packages that you're used to. But get all of the advantages of the component model, which is this new specification we've been working on" at the W3C's WebAssembly Working Group.
Goldenring added another finding from the CNCF survey: “Around 30% of people wanted better code reuse. That's a sign of a more mature ecosystem. So having something like Warg is going to help everyone who's involved in the server side of the WebAssembly space.”
Listen to the full conversation to learn more about WebAssembly and how these two companies are tackling its challenges for developers.
Organizations are now, almost by default, becoming multicloud operations. No cloud service offers the full breadth of what an enterprise may need, and enterprises often find themselves using more than one service, sometimes inadvertently.
HashiCorp is one company preparing enterprises for the challenges of managing more than a single cloud through a coherent set of software tools. To learn more, we spoke with Megan Laflamme, HashiCorp director of product marketing, at the HashiConf user conference for this latest episode of The New Stack Makers podcast. We talked about zero trust computing, the importance of identity and the general availability of HashiCorp Boundary, the company’s secure remote access tool.
"In the cloud operating model, the [security] perimeter is no longer static, and you move to a much more dynamic infrastructure environment," she explained.
The HashiCorp Cloud Platform (HCP) is a fully-managed platform offering HashiCorp software including Consul, Vault, and other services, all connected through HashiCorp Virtual Networks (HVN). Through a web portal or by Terraform, HCP can manage log-ins, access control, and billing across multiple cloud assets.
The HashiCorp Cloud Platform now offers the ability to do single sign-on, reducing a lot of the headache of signing into multiple applications and services.
Boundary is the client that enables this “secure remote access” and is now generally available to users of the platform. It is a remote access client that manages fine-grained authorizations through trusted identities, handling session connection and establishment as well as credential issuance and revocation.
"With Boundary, we enable a much more streamlined workflow for permitting access to critical infrastructure where we have integrations with cloud providers or service registries," Laflamme said.
HCP Boundary is a fully managed version of HashiCorp Boundary that runs on the HashiCorp Cloud. With Boundary, the user signs on once, and everything else is handled beneath the floorboards, so to speak. Identities for applications, networks, and people are handled through HashiCorp Vault and HashiCorp Consul. Every action is authorized and documented.
Boundary authenticates and authorizes users, by drawing on existing identity providers (IDPs) such as Okta, Azure Active Directory, and GitHub. Consul authenticates and authorizes access between applications and services. This way, networks aren’t exposed, and there is no need to issue and distribute credentials. Dynamic credential injection for user sessions is done with HashiCorp Vault, which injects single-use credentials for passwordless authentication to the remote host.
With zero trust security, users are authenticated at the service level, rather than through a centralized firewall, which becomes increasingly infeasible in multicloud designs.
In the industry, there is a shift “from high trust IP based authorization in the more static data centers and infrastructure, to the cloud, to a low trust model where everything is predicated on identity,” Laflamme explained.
This approach does require users to sign on to each individual service, in some form, which can be a headache to those (i.e. developers and system engineers) who sign on to a lot of apps in their daily routine.
DETROIT — Modern software projects’ emphasis on agility and building community has caused a lot of security best practices, developed in the early days of the Linux kernel, to fall by the wayside, according to Aeva Black, an open source veteran of 25 years.
“And now we're playing catch up,” said Black, an open source hacker in Microsoft Azure’s Office of the CTO. “A lot of less-than-ideal practices have taken root in the past five years. We're trying to help educate everybody now.”
Chris Short, senior developer advocate with Amazon Web Services (AWS), challenged the notion of “shifting left” and giving developers greater responsibility for security. “If security is everybody's job, it's nobody's job,” said Short, founder of the DevOps-ish newsletter.
“We've gone through this evolution: just develop secure code, and you'll be fine,” he said. “There's no such thing as secure code. There are errors in the underlying languages sometimes …. There's no such thing as secure software. So you have to mitigate and then be ready to defend against coming vulnerabilities.”
Black and Short talked about the state of the software supply chain’s security in an On the Road episode of The New Stack Makers podcast.
Their conversation with Heather Joslyn, features editor of TNS, was recorded at KubeCon + CloudNativeCon North America here in the Motor City.
This podcast episode was sponsored by AWS.
For our podcast guests, “trust, but verify” is a slogan more organizations need to live by.
A lot of the security problems that plague the software supply chain, Black said, are companies — especially smaller organizations — “just pulling software directly from upstream. They trust a build someone's published, they don't verify, they don't check the hash, they don't check a signature, they just download a Docker image or binary from somewhere and run it in production.”
That practice, Black said, “exposes them to anything that's changed upstream. If upstream has a bug or a network error in that repository, then they can't update as well.” Organizations, they said, should maintain an internal staging environment where they can verify code retrieved from upstream before pushing it to production — or rebuild it, in case a vulnerability is found, and push it back upstream.
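The verification step Black describes can be as simple as comparing a published digest before anything ships. A minimal sketch, assuming the expected SHA-256 comes from a trusted, signed source (the path and digest here are placeholders):

```python
import hashlib
import sys

def verify_artifact(path: str, expected_sha256: str) -> None:
    """Refuse to proceed unless the downloaded artifact matches its digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        sys.exit(f"checksum mismatch for {path}: refusing to deploy")

# The expected value would come from a signed release manifest,
# not from the same server the artifact was downloaded from.
verify_artifact(
    "app-image.tar",  # placeholder artifact
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
)
```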
That build environment should also be firewalled, Short added: “Create those safeguards of, ‘Oh, you want to pull a package from not an approved source or not a trusted source? Sorry, not gonna happen.’”
Being able to rebuild code that has vulnerabilities to make it more secure — or even being able to identify what’s wrong, and quickly — are skills that not enough developers have, the podcast guests noted.
More automation is part of the solution, Short said. But, he added, by itself it's not enough. “Continuous learning is what we do here as a job," he said. "If you're kind of like, this is my skill set, this is my toolbox and I'm not willing to grow past that, you’re setting yourself up for failure, right? So you have to be able to say, almost at a moment's notice, ‘I need to change something across my entire environment. How do I do that?’”
As both Black and Short said during our conversation, there’s no such thing as perfectly secure code. And even such highly touted tools as software bills of materials, or SBOMs, fall short of giving teams all the information they need to determine code’s safety.
“Many projects have dependencies 10, 20, 30 layers deep,” Black said. “And so if your SBOM only goes one or two layers, you just don't have enough information to know if there's a vulnerability five or 10 layers down.”
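To make the depth problem concrete, here is a hedged sketch that walks the relationship graph of an SPDX-format JSON SBOM and reports how deep it actually records dependencies; the field names follow the SPDX 2.x JSON spec, but any real SBOM's layout should be checked.

```python
import json
from collections import defaultdict, deque

def dependency_depth(sbom_path: str) -> int:
    """Breadth-first walk of an SPDX JSON SBOM's relationships,
    returning the deepest dependency level the document records."""
    with open(sbom_path) as f:
        doc = json.load(f)

    edges = defaultdict(list)
    for rel in doc.get("relationships", []):
        # DESCRIBES links the document to top-level packages;
        # DEPENDS_ON links packages to their dependencies.
        if rel.get("relationshipType") in ("DESCRIBES", "DEPENDS_ON"):
            edges[rel["spdxElementId"]].append(rel["relatedSpdxElement"])

    root = doc.get("SPDXID", "SPDXRef-DOCUMENT")
    depth, queue, seen = 0, deque([(root, 0)]), {root}
    while queue:
        node, level = queue.popleft()
        depth = max(depth, level)
        for child in edges[node]:
            if child not in seen:
                seen.add(child)
                queue.append((child, level + 1))
    return depth

# A one- or two-layer SBOM of a project Black describes would return
# 2 or 3 here, while the real dependency tree runs far deeper.
```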
Short brought up another issue with SBOMs: “There's nothing you can act on. The biggest thing for Ops teams or security teams is actionable information.”
While Short applauded recent efforts to improve user education, he said he’s pessimistic about the state of cybersecurity: “There’s not a lot right now that's getting people actionable data. It's a lot of noise still, and we need to refine these systems well enough to know that, like, just because I have Bash doesn't necessarily mean I have every vulnerability in Bash.”
One project aimed at addressing the situation is GitBOM, a new open source initiative. “Fundamentally, I think it’s the best bet we have to provide really high fidelity signal to defense teams,” said Black, who has worked on the project and produced a white paper on it this past January.
GitBOM — the name will likely be changed, Black said — takes the underlying technology that Git relies on, using a hash table to track changes in a project's code over time, and reapplies it to track the supply chain of software. The technology is used to build a hash table connecting all of the dependencies in a project, building what GitBOM’s creators call an artifact dependency graph.
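The core trick is the same content-addressed hashing Git uses for files. A simplified sketch follows; the exact GitBOM document format is an assumption here, reduced to the essential idea.

```python
import hashlib

def gitoid(data: bytes) -> str:
    # Git-style object ID: the content is hashed together with a
    # "blob <length>\0" header, exactly as Git hashes tracked files.
    header = f"blob {len(data)}\0".encode()
    return hashlib.sha1(header + data).hexdigest()

def dependency_graph_id(input_blobs: list[bytes]) -> str:
    # GitBOM-style idea, simplified: list the IDs of every input that
    # went into an artifact, sorted, and hash that list the same way.
    # The resulting ID names this exact set of dependencies.
    ids = sorted(gitoid(b) for b in input_blobs)
    document = "".join(f"blob {i}\n" for i in ids).encode()
    return gitoid(document)

lib = b"compiled library bytes"      # illustrative inputs
src = b"int main() { return 0; }"
print(dependency_graph_id([lib, src]))
```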
“We have a team working on a couple of proof-of-concepts right now,” Black said. “And the main effect I'm hoping to achieve from this is a small change in every language and compiler … then we can get traceability across the whole supply chain.”
In the meantime, Short said, there’s plenty of room for broader adoption of the best practices that currently exist. “Security vendors, I feel, need to do a better job of moving teams in the right direction as far as action,” he said.
At DevOps Chicago this fall, Short said, he ran an open space session in which he asked participants for their pain points related to working with containers.
“And the whole room admitted to not using least privilege, not using policy engines that are available in the Kubernetes space,” he said. “So there's a lot of complexity that we’ve got to help people understand the need for it, and how to implement it.”
Listen to the whole podcast to learn more about the state of software supply chain security.
Ukraine has a bright future. It will soon be time to rebuild. But rebuilding requires more than the resources needed to construct a hydroelectric plant or a hospital. It involves software and an understanding of how to use it.
Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF), and Dima Zakhalyavko, board member at Razom for Ukraine, came to KubeCon in Detroit to discuss the push to provide training materials for Ukraine as they rebuild from the destruction caused by Russia's invasion.
Razom, a nonprofit, amplifies the voices of Ukrainians in the United States and helps with humanitarian efforts and IT training. Razom formed before Russia's 2014 invasion of the Crimean peninsula of Ukraine, Zakhalyavko said. Since the full-scale invasion earlier this year, Razom has had an understandable increase in donations and volunteers helping in their efforts.
Razom provides individual first aid kits, tourniquets and medic supplies for soldiers, but also IT training: materials to train the next generation of IT workers, translated into Ukrainian.
The Linux Foundation and the Cloud Native Computing Foundation (CNCF) are partnering with Razom for Ukraine on its Project Veteranius to provide access to technology education for Ukrainian veterans, their families, and Ukrainians in need.
"We've realized that basically, we can benefit from the Linux Foundation training portfolio, including the most popular courses like the intro to Linux, or intro to Kubernetes, that can be pretty much easily translated to Ukrainian," Dvoretskyi said. "And in this way, we'll be able to offer the educational materials in their native language."
Ukraine has a pretty bright future.
"We just need to get through these difficult times," Dvoretskyi said. "But in the future, it's clear the tech industry in Ukraine is growing. Yeah. And people are needed for that."
Every effort matters, Dvoretskyi said.
"A strong, democratic Ukraine – that's essentially the vision – a European country, a truly European country, that is whole in terms of territorial integrity," Zakhalyavko said. "The future is in technology. And if we can help enable that – in any case, I think that's a win for Ukraine and the world. Technology can make the world a better place."
Redis is not just a cache. It is used in the broader cloud native ecosystem, fits into many service-oriented architectures, and simplifies the deployment and development of modern applications, according to Madelyn Olson, a principal engineer at AWS, in an interview on The New Stack Makers at KubeCon North America in Detroit.
Olson said that people have a primary backend database or some other workflow that takes a long time to run. They store the intermediate results in Redis, which provides lower latency and higher throughput.
"But there are plenty of other ways you can use Redis," Olson said. "One common way is what I like to call it a data projection API. So you basically take a bunch of different sources of data, maybe a Postgres database, or some other type of Cassandra database, and you project that data into Redis. And then you just pull from the Redis instance. This is a really great, great use case for low latency applications."
Redis creator Salvatore Sanfilippo's approach provides a lesson in how to contribute to open source, which Olson recounted in our interview.
Olson said Sanfilippo was the only maintainer with write permissions for the project, which meant contributors had to engage quite a bit to get a response from him. So Olson did what open source contributors do when they want to get noticed: she "chopped wood and carried water," a term that in open source means taking care of the unglamorous tasks that need attention. That helped Sanfilippo scale himself a bit and helped Olson get involved in the project.
It can be daunting to get into open source development work, Olson said. A new contributor faces people with far more experience and may be afraid to open issues. But if a contributor has a use case and helps with documentation or a bug, most open source maintainers are willing to help.
"One big problem throughout open source is, they're usually resource constrained, right?," Olson said. "Open source is oftentimes a lot of volunteers. So they're usually very willing to get more people to help with the project."
What's it like now working at AWS on open source projects?
Things have changed a lot since Olson joined AWS in 2015, she said. APIs were proprietary back in those days. Today, it's almost the opposite of how it used to be.
Keeping something internal now requires approval, Olson said, and internal differentiation is not the goal. Open source Redis is what matters most, with AWS on top as the managed service.
LOS ANGELES — When you’re deploying a business-critical application to the cloud, it’s nice to not need the “war room” you’ve assembled to troubleshoot Day 1 problems.
When BOK Financial, a financial services company that’s been moving apps to the cloud over the last three years, was launching its largest application on the cloud, its engineers supported it with a “war room type situation, monitoring everything” according to BOK’s Andrew Rau.
“After the first day, the system just scaled like it was supposed to … and they're like, ‘OK, I guess we don't need this anymore.’”
In this On the Road episode of The New Stack’s Makers podcast, Rau, BOK’s vice president and manager, cloud services, offered a case study about his organization’s cloud journey over the past few years, and the role HashiCorp’s Vault and Cloud Platform played in it.
Rau spoke to Heather Joslyn, features editor of The New Stack, about the challenges of moving a very traditional organization in a highly regulated industry to the cloud while maintaining tight security and resilience.
This episode of Makers, recorded in October at HashiConf in Los Angeles, was sponsored by HashiCorp.
In late 2019, Rau said, BOK Financial deployed one small application to the cloud, an initial step on its digital transformation journey. It’s been building out its cloud infrastructure ever since, and soon ran into the limits of each cloud provider’s native tooling.
“Where we struggled was we didn't want to deploy and manage our clouds in different ways,” he said. “We didn't want our cloud engineers to know just one cloud provider, and their technology and their tech stack. So that's when we really started looking at how else can we do this. And that's when Terraform was a great option for us.”
In 2020, BOK Financial began using HashiCorp’s open source Terraform to automate the creation of cloud infrastructure. “We made a conscious effort to really focus on automation,” Rau said. “We didn't want to do things manually, which is really that traditional data center, how we've done things for decades.”
In tandem with adopting Terraform, BOK Financial’s teams began using GitOps processes for CI/CD. But doing “everything as code,” as Rau put it, “required a lot of upskilling for some of our staff, because they've never done version control or automation capabilities. So in addition to learning Terraform, and these other cloud concepts, they had to learn all of that.”
The challenge, though, has been worth it: “It's really empowered us to move a lot faster, and give our application teams the ability to deploy at their pace, versus waiting on other teams.”
It took about a year, Rau said, to get BOK Financial’s developers comfortable using Terraform, largely because many were new to version control procedures and strategies.
Because the company works in a highly regulated industry, handling customers’ financial data, security is of utmost importance.
“We had user credentials for our clouds, and we had them separated out based on the type of deployment that [developers] were doing,” said Rau.
“But it wasn't easy for us to rotate those credentials on a frequent basis. And so we really felt the need that we want to make these short, limited tokens, no more than an hour for that deployment. And so that's where we looked at Vault.”
HashiCorp’s secrets storage and management tool proved an easy add-on with Terraform. “That's really given us the ability to have effectively no credentials — long-lived credentials — out there,” Rau said. “And secure our environment even more.” And because BOK’s teams don’t want to manage Vault and its complexities themselves, it has opted for HashiCorp Cloud Platform to manage it.
For other organizations on a cloud native journey, Rau recommended taking time to do things right. “We went back to rework some things periodically, because we learned something too late,” he said.
Also, he advised, keep stakeholders in the loop: “You need to stay in front of the communication with business partners, IT leaders, that it's going to take longer to set this up. But once you do, it's incredible.”
Check out the podcast to learn more about BOK Financial's cloud native transformation.
DETROIT — Are we still shifting left? Is it realistic to expect developers to take on the burdens of security and infrastructure provisioning, as well as writing their applications? Is platform engineering the answer to saving the DevOps dream?
Bottom line: Do Devs and Ops really talk to each other — or just passive-aggressively swap Jira tickets?
These are some of the topics explored by a panel, “Devs and Ops People: It’s Time for Some Kubernetes Couples Therapy,” convened by The New Stack at KubeCon + CloudNativeCon North America, here in the Motor City, on Thursday.
Panelists included Saad Malik, chief technology officer and co-founder of Spectro Cloud; Viktor Farcic, developer advocate at Upbound; Liz Rice, chief open source officer at Isovalent; and Aeris Stewart, community manager at Humanitec.
The latest TNS pancake breakfast was hosted by Alex Williams, The New Stack’s founder and publisher, with Heather Joslyn, TNS features editor, fielding questions from the audience. The event was sponsored by Spectro Cloud.
A big pain point in the DevOps structure — the marriage of frontend and backend in cross-functional teams — is that all devs aren’t necessarily willing or able to take on all the additional responsibilities demanded of them.
A lot of organizations have “copy-pasted this one size fits all approach to DevOps,” said Stewart.
“If you look at the tooling landscape, it is rapidly growing not just in terms of the volume of tools, but also the complexity of the tools themselves,” they said. “And developers are in parallel expected to take over an increasing amount of the software delivery process. And all of this, together, is too much cognitive load for them.”
This situation also has an impact on operations engineers, who must help alleviate developers’ burdens. “It’s causing a lot of inefficiencies of these organizations,” they added, “and a lot of the same inefficiencies that DevOps was supposed to get rid of.”
Platform engineering — in which operations engineers provide devs with an internal developer platform that abstracts away some of the complexity — is “a sign of hope,” Stewart said, for organizations for whom DevOps is proving tough to implement.
The concept behind DevOps is “about making teams self-sufficient, so they have full control of their application, right from the idea until it is running in production,” said Farcic.
But, he added, “you cannot expect them to have 17 years of experience in Kubernetes, and AWS and whatnot. And that's where platforms come in. That's how other teams, who have certain expertise, provide services so that those … developers and operators can actually do the work that they're supposed to do, just as operators today are using services from AWS to do their work. So what AWS for Ops is to Ops, to me, that's what internal developer platforms are to application developers.”
Platform engineering has been a hot topic in DevOps circles (and at KubeCon) but the definition remains a bit fuzzy, the panelists acknowledged. (“In a lot of organizations, ‘platform engineering’ is just a fancy new way of saying ‘Ops,’” said Rice.)
The audience served up questions to the panel about the limits of the DevOps model and how platform engineering fits into that discussion. One audience member asked about balancing the need to provide a consistent platform to an organization’s developers while also allowing devs to customize and innovate.
Malik said that both consistency and innovation are possible in a platform engineering structure. “An organization will decide where they want to be able to provide that abstraction,” he said, adding, “When they think about where they want to be as a whole, they could think about: Hey, when we provide our platform, we're going to be providing everything from security to CI/CD, from GitHub, from repository management. This is what you will get if you use our IDP or platform itself.”
But “there are going to be unique use cases,” Malik added, such as developers who are building a new blockchain technology or running WebAssembly.
“I think it's okay to give those development teams the ability to run their own platform, as long as you tell them these are the areas that you have to be responsible for,” he said. “You're responsible for your own security, your own backup, your own retention capabilities.”
One audience member mentioned “Team Topologies,” a 2019 engineering management book by Manuel Pais and Matthew Skelton, and asked the panel if platform engineering is related to DevOps in that it’s more of an approach to engineering management than a destination.
“Platform engineering is in the budding stage of its evolution,” said Stewart. “And right now, it's really focused on addressing the problems that organizations ran into when they were implementing DevOps.
They added, “I think as we see the community come together more and get more best practices about how to develop platform, you will see it become more than just a different approach to DevOps and become something more distinct. But I don't think it's there quite yet.”
Check out the full panel discussion to hear more from our DevOps “counseling session.”
Terraform is HashiCorp’s flagship software. The open source tool provides a way to define IT resources — such as monitoring software or cloud services — in human-readable configuration files. These files, which serve as blueprints, can then be used to automatically provision the systems themselves. Kubernetes deployments, for instance, can be streamlined through Terraform.
"Terraform basically translates what your configuration was codified in by your configuration, and provisions it to that desired end state," explained Meghan Liese, [sponsor_inline_mention slug="hashicorp" ]HashiCorp[/sponsor_inline_mention] vice president of product and partner marketing in this podcast and video recording, recorded at the company's user conference, HashiConf 2022, held this month in Los Angeles.
For this interview, Liese discusses the latest enhancements to Terraform, and Terraform Cloud, a managed service offering that is part of the HashiCorp Cloud Platform.
Typically, DevOps teams or system administrators use Terraform to provision infrastructure, but there is also growing interest in allowing developers to do it themselves, in a self-service fashion, Liese explained. Multicloud skills are in short supply, concluded the 2022 HashiCorp State of Cloud Strategy Survey, so making the provisioning process easier could help more developers, the company reckons.
A Terraform self-service model, which was introduced earlier this year, could “cut down on the training an organization would need to do to get developers up to speed on using the infrastructure-as-code software,” Liese said.
In this “no code” setup, developers can pick from a catalog of no-code-ready modules, which can be deployed directly to workspaces. No need to learn the HCL configuration language. And the administrators will no longer have to answer the same “how-do-I-do-this-in-HCL?” queries.
The new console interface aims to greatly expand the use of Terraform. The company has been offering self-service options for a while, by way of an architecture that allows for modules to be reused through the private registry for Terraform Cloud and Terraform Enterprise.
The recent release of Terraform 1.3 came with the promise to greatly reduce the amount of code HCL jockeys must manage, through improvements to the “moved” block. The “moved” block has actually been available since Terraform 1.1, but some kinks were worked out for this latest release. What “moved” provides is the ability to refactor resources within a Terraform configuration, shifting large blocks of code off into separate modules, where they can be discovered through a public or private registry.
With the known state of a system captured in Terraform, it is a short step to check that the actual running system is identical to the desired state captured in HCL. Many times “drift” can occur, as administrators, or even the apps themselves, make changes to the system. Especially in regulated environments, such as hospitals, it is essential that a system be in a correct state.
Earlier this year, HashiCorp added Drift Detection to Terraform Cloud to continuously check infrastructure state, detect changes, and provide alerts and remediation options. Now another update, continuous validation, expands these checks to include user assertions, or post-conditions, as well.
One post-condition may be something like ensuring that certificates haven’t expired. If they do, the software can offer an alert to the admin to update the certs. Another condition might be to check for new container images, which may have been updated as a response to a security patch.
GumGum is a company whose platform serves up online ads related to the context in which potential customers are already shopping or searching. (For instance: it will send ads for Zurich restaurants to someone who’s booked travel to Switzerland.) To handle that granular targeting, it relies on its proprietary machine learning platform, Verity.
“For all of our publishers, we send a list of URLs to Verity,” according to Keith Sader, GumGum’s director of engineering. “Verity goes in and basically categorizes those URLs as different [IAB] categories. So the IAB has tons of taxonomies, based on autos, based upon clothing, based upon entertainment. And then that's how we do our targeting.”
Verity’s targeting data is stored in DynamoDB, but the rest of GumGum’s data is stored in managed MySQL and its daily tracking data is stored in ScyllaDB, a database designed for data-intensive applications. Scylla, Sader said, helps his company avoid serving audiences the same ads over and over again, by keeping track of which ads customers have already seen.
“That’s where Scylla comes into the picture for us,” he said. “Scylla is our rate limiter on ad serving.”
In this episode of The New Stack’s Makers podcast, Sader and Dor Laor, CEO and co-founder of Scylla, told how GumGum has used ScyllaDB to shift more IT resources to its core business and keep it from repeating ads to audiences that have already seen them, no matter where they travel.
This case study episode of Makers, hosted by Heather Joslyn, TNS features editor, was sponsored by ScyllaDB.
Before adding ScyllaDB to its stack, Sader said, “We had a Cassandra-based system that some very smart people put in. But Cassandra relies upon you to have an engineering staff to support it.
“That’s great. But like many types of systems, managing Cassandra databases is not really what our business makes money at.”
GumGum was hosting its Cassandra database, installed on Amazon Web Services, by itself — and the drain on resources brought the company’s teams to a crossroads, Sader said. “Where do we spend our limited funds? Do we spend it on Cassandra maintenance? Or do we hire someone to do it for us? And that’s really what determined the switch away from a sort of self-installed, self-managed Cassandra to another provider.”
A core issue for GumGum, Sader said, was making sure that it wasn’t over-serving consumers, even as they moved around the globe. “If you see an ad in one place, we need to make sure, if you fly across the country, you don’t see it again,” he said.
That’s an issue Cassandra solved for his company, he said. Because ScyllaDB is a drop-in replacement for Apache Cassandra, it also helped prevent over-serving in all regions of the globe — thus preventing GumGum from losing money.
In addition to managing its database for GumGum and other customers, Laor said that an advantage ScyllaDB brings is an “always on” guarantee.
“We have a big legacy of infrastructure that's supposed to be resilient,” he said. “For example, every implementation of ours has configurable consistency, so you can have multiple replicas.”
Laor added, “Many, many times organizations have multiple data centers. Sometimes it's for disaster recovery, sometimes it's also to shorten the latency and be closer to the client.” Replica databases located in data centers that are geographically distributed, he said, protect against failure in any one data center.
Bringing ScyllaDB to GumGum was not without challenges, both Sader and Laor said. When ScyllaDB is added to an organization’s stack, Laor said, it likes to start with as small a deployment as possible.
“But in the GumGum case, all of these clients were new processes,” Laor said. “So hundreds or thousands of processes, all trying to connect to the database, it's really a connection storm.”
Scylla’s team created a private version of its database to work on the problem and eventually solved it: “We had to massage the algorithm and make sure that all of the [open source] code committers upstream are summing it up.”
It ultimately designed an admission control mechanism that measures the number of parallel requests the distributed database is handling, and slows down requests that arrive for the first time from a new process. “We tried to have the complexity on our end,” Laor said.
GumGum has seen the results of handing off that complexity and toil to a managed database. “We have pretty much reduced our entire operations effort with Scylla, to almost nothing,” Sader said.
He added, “We're coming into our busy point of the year, ads really get picked up in Q4. So we reach out so we go, ‘Hey, we need more nodes in these regions, can you make that happen for us?’ They go, ‘Yep.’ Give us the things, we pay the money. And it happens.”
In 2021, Sader said, “we increased our volume by probably 75% plus 50%, over our standard. The toughest thing to do in this industry is make things look easy. And Scylla helped us make ad serving look easy.”
Check out the podcast to get more detail about GumGum’s move to a managed database.
Wix is a cloud-based development platform for making HTML5 websites and mobile sites with drag-and-drop tools. It is suited to the beginning user or the advanced developer, said Hila Fish, senior DevOps engineer for Wix, in an interview for The New Stack Makers at HashiCorp’s HashiConf Global conference in Los Angeles earlier this month.
Our questions for Fish focused on Terraform, the open source infrastructure-as-code software tool.
Fish started using Terraform in an ad-hoc manner back in 2018. Over time she has learned how to use it for scaling operations.
“If you want to scale your infrastructure, you need to use Terraform in a way that will allow you to do that,” Fish said.
Terraform can be used ad-hoc to create a machine as a resource, but scale comes with enabling infrastructure that allows the engineers to develop templates that get reused across many servers.
“You need to use it in a way that will allow you to scale up as much as you can,” Fish said.
Fish said best practices start with how the Terraform code base is structured.
Much of it comes down to the teams and how Terraform gets implemented. Engineers each have their own way of working, so standard practices can help. In onboarding new teams, a structured code base can be beneficial: new teams onboard and use modules already in the code base.
And what are some of the pitfalls of using Terraform?
We get to that in the recording, along with more about integrations, why Wix is still on version 0.13, and some new capabilities for developers using Terraform.
Users have historically needed to learn HashiCorp configuration language (HCL) to use Terraform. At Wix, Fish said, the company is implementing Terraform on the backend with a UI that developers can use without needing to learn HCL.
DUBLIN — The mission of Linux Foundation Energy — a collaborative, international effort by power companies to help move the world away from fossil fuels — has never seemed more urgent.
In addition to the increased frequency and ferocity of extreme weather events like hurricanes and heat waves, the war between Russia and Ukraine has oil-dependent countries looking ahead to a winter of likely energy shortages.
“I think we need to go faster,” said Benoît Jeanson, an enterprise architect at RTE, the French electricity transmission system operator. He added, “What we are doing with the Linux Foundation Energy is really something that will help for the future, and we need to go faster and faster.”
For this On the Road episode of The New Stack’s Makers podcast, recorded at Open Source Summit Europe here, we were joined by two guests who work in the power industry and whose organizations are part of LF Energy.
In addition to Jeanson, this episode featured Jonas van den Bogaard, a solution architect and open source ambassador at Alliander, an energy network company that provides energy transport and distribution to a large part of the Netherlands. Van den Bogaard also serves on the technical advisory council of LF Energy.
Heather Joslyn, features editor of TNS, hosted this conversation.
LF Energy, started in 2018, now includes 59 member organizations, including cloud providers Google and Microsoft, enterprises like General Electric, and research institutions like Stanford University. It currently hosts 18 open source projects; the podcast guests encouraged listeners to check them out and contribute to them.
Among them: OpenSTEF, automated machine learning pipelines to deliver accurate forecasts of the load on the energy grid 48 hours ahead of time. “It gives us the opportunity to take action in time to prevent the maximum grid capacity [from being] reached,” said van den Bogaard.
“That’s going to prevent blackouts and that sort of thing. And also, another side: it makes us able to add renewable energies to the grid.”
Jeanson said that the open source projects aim to cover “every level of the stack. We also have tools that we want to develop at the substation level, in the field.” Among them: OperatorFabric. Written in Java and based on the Spring framework, OperatorFabric is a modular, extensible platform for systems operators, including several features aimed at helping utility operators.
It helps operators coordinate the many tasks and alerts they need to keep track of by aggregating notifications from several applications into a single screen.
“Energy is of importance for everyone,” said van den Bogaard. “And especially moving to more cleaner and renewable energy is key for us all. We have great minds all around the world. And I really believe that we can achieve that. The best way to do that is to combine the efforts of all those great minds. Open source can be a great enabler of that.”
But persuading decision-makers in the power industry to participate in building the next generation of open source solutions can be a challenge, van den Bogaard acknowledged.
“You see that the energy domain has been there for a long time, and has been quite stable, up to like 10 years ago,” he said. In such a tradition-bound culture, change is hard. In the cloud era, he added, a lot of organizations “need to digitalize and focus more on it, and those capabilities are new. And also, open source, for that matter, is also a very new concept.”
One obstacle in the energy industry taking more advantage of open source tools, Jeanson noted, is security: “Some organizations still see open source to be a potential risk.” Getting them on board, he said, requires education and training.
He added, “Vendors need to understand that open source is an opportunity that they should not be afraid of. That we want to do business with them based on open source. We just need to accelerate the momentum.”
Check out the whole episode to learn more about LF Energy’s work.
It's that time of the year again, when cloud native enthusiasts and professionals assemble to discuss all things Kubernetes. KubeCon+CloudNativeCon North America 2022 is being held later this month in Detroit, October 24-28.
In this latest edition of The New Stack Makers podcast, we spoke with Priyanka Sharma, general manager of the Cloud Native Computing Foundation — which organizes KubeCon — and CERN computer engineer and KubeCon co-chair Ricardo Rocha. For this show, we discussed what we can expect from the upcoming event.
This year, there will be a focus on Kubernetes in the enterprise, Sharma said. "We are reaching a point where Kubernetes is becoming the de facto standard when it comes to container orchestration. And there's a reason for it. It's not just about Kubernetes. Kubernetes spawned the cloud native ecosystem and the heart of the cloud native movement is building fast, resiliently observable software that meets customer needs. So ultimately, it's making you a better provider to your customers, no matter what kind of business you are."
Of this year's topics, security will be a big theme, Rocha said. Technologies such as Falco and Cilium will be discussed. Linux kernel add-on eBPF is popping up in a lot of topics, especially around networking. Observability and hybrid deployments also weigh heavily on the agenda. "The number of solutions [around Hybrid] are quite large, so it's interesting to see what people come up with," he said.
In addition to KubeCon itself, there are a number of co-located events this year, held during or before the conference itself. Some are hosted by CNCF, while others are hosted by other companies such as Canonical. They include Network Application Day, BackstageCon, CloudNative eBPF Day, CloudNativeSecurityCon, CloudNative WASM Day, Data-on-Kubernetes Day, EnvoyCon, gRPCConf, KNativeCon, Spinnaker Summit, Open Observability Day, Cloud Native Telco Day, Operator Day and The Continuous Delivery Summit, among others.
What's amazing is not only the number of co-located events, but the high quality of talks being held there.
"Co-located events are a great way to know what's exciting to folks in the ecosystem right now," Sharma said. "Cloud native has really become the scaffolding of future progress. People want to build on cloud native, but have their own focus areas."
WebAssembly (WASM) is a great example of this. "In the beginning, you wouldn't have thought of WebAssembly as part of the cloud native narrative, but here we are," Sharma said. "The same thinking from professionals who conceptualized cloud native in the beginning are now taking it a step further."
"There's a lot of value in co-located events, because you get a group of people for a longer period in the same room, focusing on one topic," Rocha said.
Other topics discussed in the podcast include the choice of Detroit as a conference hub, the fun activities that CNCF have planned in between the technical sessions, surprises at the keynotes, and so much more! Give it a listen.
Armon Dadgar and Mitchell Hashimoto are long-time open source practitioners, and that practitioner focus has been core to their approach since they started HashiCorp about 10 years ago. Today, HashiCorp is a publicly traded company.
Before they started HashiCorp, Dadgar and Hashimoto were students at the University of Washington. Through college and afterward, they cut their teeth on open source and learning how to build software in open source.
HashiCorp's business is an outgrowth of the two as practitioners in open source communities, said Dadgar, co-founder and CTO of HashiCorp, in an interview at the HashiConf conference in Los Angeles earlier this month.
Both of them wanted to recreate the asynchronous collaboration that they loved so much about the open source projects they worked on as practitioners, Dadgar said. They knew that they did not want bureaucracy or a hard-to-follow roadmap.
Dadgar cited Terraform as an example of their approach. Terraform is HashiCorp's open source infrastructure-as-code software tool, and it reflects the company's model of controlling its core while providing a good user experience. That experience goes beyond community development and into the application architecture itself.
"If you're a weekend warrior, and you want to contribute something, you're not gonna go read this massively complicated codebase to understand how it works, just to do an integration," Dadgar said." So instead, we built a very specific integration surface area for Terraform."
The integration is about 200 lines of code, Dadgar said. They call the integration their core plus plugin model, with a prescriptive scaffold, examples of how to integrate, and the SDK. Their "golden path" to integration is how the company has developed a program that today has about 2,500 providers.
The HashiCorp open source model relies on its core and plugin model. On Twitter, one person asked why HashiCorp isn't a proprietary company. Dadgar referred to HashiCorp's open source approach when asked that question in our interview.
"Oh, that's an interesting question," Dadgar said. "You know, I think it'd be a much harder, company to scale. And what I mean by that is, if you take a look at like a Terraform community or Vault – there's thousands of contributors. And that's what solves the integration problem. Right? And so if you said, we were proprietary, hey, how many engineers would it take to build 2000 TerraForm integrations? It'd be a whole lot more people that we have today. And so I think fundamentally, what open source helps you solve is the fact that, you know, modern infrastructure has this really wide surface area of integration. And I don't think you can solve that as a proprietary business."
"I don't think we'd be able to have nearly the breadth of integration. We could maybe cover the core cloud providers. But you'd have 50 Terraform providers, not 2500 Terraform providers."
DUBLIN — Europe's open source contributors, according to The Linux Foundation's first-ever survey of them released in September, are driven more by idealism than their American counterparts. The data showed that social reasons for contributing to open source projects were more often cited by Europeans than by Americans, who were more likely to say they participate in open source for professional advancement.
A big part of Gabriele (Gab) Columbro's mission as the general manager of the new Linux Foundation Europe, will be to marry Europe's "romantic" view of open source to greater commercial opportunities, Columbro told The New Stack's Makers podcast.
The On the Road episode of Makers, recorded in Dublin at Open Source Summit Europe, was hosted by Heather Joslyn, TNS's features editor.
Columbro, a native of Italy who also heads FINOS, the fintech open source foundation, recalled his own roots as an individual contributor to the Apache project, and cited what he called "a very grassroots, passion, romantic aspect of open source" in Europe.
By contrast, he noted, "there is definitely a much stronger commercial ecosystem in the United States. But the reality is that those two, you know, natures of open source are not alternatives."
Columbro said he sees advantages in both the idealistic and the practical aspects of open source, along with the notion in the European Union and other countries in the region that the Internet and the software that supports it have value as shared resources.
"I'm really all about marrying sort of these three natures of open source: the individual-slash-romantic nature, the commercial dynamics, and the public sector sort of collective value," he said.
Europe sits thousands of miles away from the headquarters of the FAANG tech behemoths — Facebook, Apple, Amazon, Netflix and Google. (Columbro, in fact, is still based in Silicon Valley, though he says he plans to return to Europe at some point.)
For individual developers, he said, Linux Foundation Europe will help give regional projects increased visibility and greater access to potential contributors. Contributing a project to Linux Foundation Europe, he said, is "a powerful way to potentially supercharge your project."
He added, "I think any developer should consider this as a potential springboard platform for the technology, not just to be visible in Europe, but then hopefully, beyond."
The European organization's first major project, the OpenWallet Foundation, will aim to help create a template for developers to build digital wallets. "I find it very aligned with the vision of the Linux Foundation, which is about not only creating successful open source projects but defining new markets and new commercial ecosystems around these open source projects," he said.
It's also, Columbro added, "very much aligned with the sort of vision of Europe of creating a digital commons, based on open source whereby they can achieve a sort of digital independence."
As geopolitical and economic turmoil roils several nations in Europe, Columbro suggested that open source could see a boom if the region's companies start cutting costs.
He places his hopes on open source collaboration to help reconcile some differences. "Certainly I do believe that open source has the potential to bring parties together," Columbro said.
Also, he noted, "generally we see open source and investment in open source to be counter-cyclical with the trends of investments in proprietary software. ... in other words, when there is more pressure, and when there is more pressure to reduce costs, or to, you know, reduce the workforce.
“That’s when people are forced to look more seriously about ways to actually collaborate while still maintaining throughput and efficiency. And I think open source is the prime way to do so.”
Listen to this On the Road episode of Makers to learn more about Linux Foundation Europe.
Brian Douglas was “the Beyoncé of GitHub.” He jokingly crowned himself with that title during his years at that company, where he advocated for open source and a more inclusive community supporting it. His work there eventually led to his new startup, Open Sauced.
Like the Queen Bey, Douglas’ mission is to empower a community. In his case, he’s seeking to support the open source community. With his former employer, GitHub, serving 4 million developers worldwide, the potential size of that audience is huge.
In this episode of The Tech Founder Odyssey podcast, he shared why empowerment and breaking down barriers to make anyone “awesome” in open source was the motivation behind his startup journey.
Beyoncé “has a superfan group, the Beyhive, that will go to bat for her,” Douglas pointed out. “So if Beyoncé makes a country song, the Beyhive is there supporting her country song. If she starts doing the house music, which is her latest album, [they] are there to the point where, like, you cannot say bad stuff about her. So what I’m focused on is having a strong community and having strong ties.”
Open Sauced, which launched in June, seeks to build an open source intelligence platform to help companies stay competitive. Its aim is to give more potential open source contributors the information they need to get started with projects, and to help maintain those projects over time.
The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack.
Douglas’ introduction to tech started as a kid “cutting his teeth” on a Packard Bell and a shared computer at the community center inside his apartment complex, where he grew up outside of Tampa, Florida.
“I don't know what computer was in there, but it ran DOS,” he said. “And I got to play, like, Wolfenstein and eventually Duke Nukem and stuff like that. So that was my first sort of like, touch of a computer and I actually knew what I was doing.”
Douglas has an MBA in finance, but the 2008 recession left only sales jobs available. Still, he always knew he wanted to “build stuff.”
“I've always been like a copy and paste [person] and loved playing DOS games,” he told The New Stack. “I eventually [created] a pretty nice MySpace profile. Then someone told me, ‘Hey, you know, you could actually build apps now.’
“And post Web 2.0. people have frameworks and rails and Django. You just have to run a couple scripts, and you've got a web page live and put that in Heroku, or another server, and you're good. And that opened the world.”
Open Sauced began as a side project when he was director of developer advocacy at GitHub; he started working on the project full time in June, after about two years of tinkering with it.
Douglas didn’t grow up with money, he said, so moving from life as an employee to the risky life of a CEO seeking funding prompted him to create his own comprehensive strategy. This included content creation (including a podcast, The Secret Sauce), other marketing, and shipping frontend code.
GitHub was very supportive of him spinning off Open Sauced as an independent startup, with colleagues assisting in refining his pitches to venture capital investors to raise funds.
“At GitHub, they have inside of their employment contract a moonlight clause,” Douglas said. Which means, he noted, because the company is powered by open source, “basically, whatever you work on, as long as you're not competing directly against GitHub, rebuilding it from the ground up, feel free to do whatever you need to do to moonlight.”
Open Sauced will also continue Douglas’ efforts to increase representation of Blacks in tech and open pathways to level up their skills, similar to his work at GitHub with the Employee Resource Group (ERG) the Blacktocats.
“The focus there was to make sure that people had a home, like a community of belonging,” he said. “If you're a Black employee at GitHub, you have a space, and it was very helpful with things like 2020, during George Floyd. It was the community [in which] we all supported each other during that situation.”
Douglas’ mission to banish imposter syndrome and champion anyone interested in open source makes him sound more like an open source “whisperer” than a Beyoncé. Whatever the title, his iconic pizza brand — the company’s web address is “opensauced.pizza” — was his version, he said, of creating album cover art before forming the band.
His podcast’s tagline urges listeners to “stay saucy.” His plan for doing that at Open Sauced is to encourage new open source contributors.
“It's nice to know that projects can now opt in … but as a first-time contributor, where do I start? We can show you, ‘Hey, this project had five contributions, they're doing a great job. Why don't you start here?’”
Amazon Web Services would not be what it is today without open source.
"I think it starts with sustainability," said David Nalley, head of open source and marketing at AWS in an interview at the Open Source Summit in Dublin for The New Stack Makers. "And this really goes back to the origin of Amazon Web Services. AWS would not be what it is today without open source."
Long-term support for open source is one of three pillars of the organization's open source strategy. AWS builds and innovates on top of open source and will maintain that approach for its innovation, customers, and the larger digital economy.
"And that means that there's a long history of us benefiting from open source and investing in open source," Nalley said. "But ultimately, we're here for the long haul. We're going to continue making investments. We're going to increase our investments in open source."
Customers' interest in open source is the second pillar of the AWS open source strategy.
"We feel like we have to make investments on behalf of our customers," Nalley said. "But the reality is our customers are choosing open source to run their workloads on."
The third pillar focuses on advocating for open source in the larger digital economy.
Notable is how much AWS's presence in the market played a part in Paul Vixie's decision to join the company. Vixie, an Internet pioneer, is now vice president of security and an AWS distinguished engineer who was also interviewed for the New Stack Makers podcast at the Open Source Summit.
Nalley himself is a recognizable figure in the community: he is the president of the Apache Software Foundation, one of the world's most essential open source foundations.
The importance of its three-pillar strategy shows in many of the projects that AWS supports. AWS recently committed $10 million to the Open Source Security Foundation (OpenSSF), the Linux Foundation project focused on securing the open source software supply chain.
AWS is a significant supporter of the Rust Foundation, which supports the Rust programming language and ecosystem. It puts a particular focus on maintainers that govern the project.
Last month, Meta unveiled the PyTorch Foundation, which the Linux Foundation will manage. AWS is on the governing board.
Paul Vixie grew up in San Francisco. He dropped out of high school in 1980. He worked on the first Internet gateways at DEC and, from there, started the Internet Software Consortium (ISC), establishing Internet protocols, particularly the Domain Name System (DNS).
Today, Vixie is one of the few dozen people in the technology world with the title "distinguished engineer." He works at Amazon Web Services as vice president of security, where he believes he can help make people as safe on the Internet as they were before it emerged.
"I am worried about how much less safe we all are in the Internet era than we were before," Vixie said in an interview at the Open Source Summit in Dublin earlier this month for The New Stack Makers podcast. "And everything is connected, and very little is understood. And so, my mission for the last 20 years has been to restore human safety to pre-internet levels. And doing that at scale is quite the challenge. It'll take me a lifetime."
So why join AWS? He spent decades establishing the ISC. He started a company called Farsight, which came out of ISC. He sold Farsight in November of last year when conversations began with AWS.
Vixie thought about his mission to better restore human safety to pre-internet levels when AWS asked a question that changed the conversation and led him to his new role.
"They asked me, what is now in retrospect, an obvious question, 'AWS hosts, probably the largest share of the digital economy that you're trying to protect," Vixie said. "Don't you think you can complete your mission by working to help secure AWS?' "The answer is yes. In fact, I feel like I'm going to get more traction now that I can focus on strategy and technology and not also operate a company on the side. And so it was a very good win for me, and I hope for them."
Interviewing Vixie is such an honor. It's people like Paul who made so much possible for anyone who uses the Internet. Just think of that for a minute -- anyone who uses the Internet has people like Paul to thank.
Thanks, Paul -- you are a hero to many. Here's to your next run at AWS.
Ryan Dahl is the co-founder and creator of Deno, a runtime for JavaScript, TypeScript, and WebAssembly based on the V8 JavaScript engine and the Rust programming language. He is also the creator of Node.js.
We interviewed Dahl for The New Stack Technical Founder Odyssey series.
"Yeah, so we have a JavaScript runtime," Dahl said. "It's pretty similar in, in essence, to Node. It executes some JavaScript, but it's much more modern. "
The Deno project started four years ago, Dahl said. He recounted how writing code helped him rethink how he developed Node. Dahl wrote a demo of a modern, server-side JavaScript runtime. He didn't think it would go anywhere, but sure enough, it did. People got pretty interested in it.
Deno has "many, many" components, which serve as its foundation. It's written in Rust and C++ with a different type of event loop library. Deno has non-blocking IO as does Node.
Dahl has built his work on the use of asynchronous technologies. The belief system carries over into how he manages the company. Dahl is an asynchronous guy and runs his company in such a fashion.
As an engineer, Dahl learned that he does not like to be interrupted by meetings. The work should be as asynchronous as possible to avoid interruptions.
Deno, the company, started during the pandemic, Dahl said. Everyone is remote. They pair program a lot and focus on short, productive conversations. That's an excellent way to socialize and look deeper into problems.
How is it for Dahl to go from programming to CEO?
"I'd say it's relatively challenging," Dahl said. "I like programming a lot. Ideally, I would spend most of my time in an editor solving programming problems. That's not really what the job of being a CEO is."
Dahl said there's a lot more communication as the CEO operates on a larger scale. Engineering teams need management to ensure they work together effectively, deliver features and solve problems for developers.
Overall, Dahl takes it one day at a time. He has no fundamental theory of management. He's just trying to solve problems as they come.
"I mean, my claim to fame is like bringing asynchronous sockets to the mainstream with nonblocking IO and stuff. So, you know, asynchronous is deeply embedded and what I'm thinking about. When it comes to company organization, asynchronous means that we have rotating meeting schedules to adapt to people in different time zones. We do a lot of meeting recordings. So if you can't make it for whatever reason, you're not in the right time zone, you're, you know, you're, picking up your kids, whatever. You can go back and watch the recording. So we basically record every every meeting, we try to keep the meeting short. I think that's important because nobody wants to watch hours and hours of videos. And we use, we use chats a lot. And chat and email are forms of asynchronous communication where you don't need to kind of meet with people one on one. And yeah, I guess I guess the other aspect of that is just keeping meetings to a minimum. Like there's there's a few situations where you really need to get everybody in the room. I mean, there are certainly times when you need to do that. But I tried to avoid that as much as possible, because I think that really disrupts the flow of a lot of people working."
The whole world uses open source, but as we’ve learned from the Log4j debacle, “free” software isn’t really free. Organizations and their customers pay for it when projects aren’t frequently updated and maintained.
How can we support open source project maintainers — and how can we decide which projects are worth the time and effort to maintain?
“A lot of people pick up open source projects, and use them in their products and in their companies without really thinking about whether or not that project is likely to be successful over the long term,” Dawn Foster, director of open source community strategy at VMware’s open source program office (OSPO), told The New Stack’s audience during this On the Road edition of The New Stack’s Makers podcast.
In this conversation recorded at Open Source Summit Europe in Dublin, Ireland, Foster elaborated on the human cost of keeping open source software maintained, improved and secure — and how such projects can be sustained over the long term.
The conversation, sponsored by Amazon Web Services, was hosted by Heather Joslyn, features editor at The New Stack.
One of the first ways to evaluate the health of an open source project, Foster said, is the “lottery factor”: “It's basically if one of your key maintainers for a project won the lottery, retired on a beach tomorrow, could the project continue to be successful?”
“And if you have enough maintainers and you have the work spread out over enough people, then yes. But if you're a single maintainer project and that maintainer retires, there might not be anybody left to pick it up.”
Foster is on the governing board for a project called Community Health Analytics Open Source Software — CHAOSS, to its friends — that aims to provide some reliable metrics to judge the health of an open source initiative.
The metrics CHAOSS is developing, she said, “help you understand where your project is healthy and where it isn't, so that you can decide what changes you need to make within your project to make it better.”
CHAOSS uses tooling like Augur and GrimoireLab to help get notifications and analytics on project health. And it’s friendly to newcomers, Foster said.
“We spend...a lot of time just defining metrics, which means working in a Google Doc and thinking about all of the different ways you might possibly measure something — something like, are you getting a diverse set of contributors into your project from different organizations, for example.”
It’s important to pay open source maintainers in order to help sustain projects, she said. “The people that are being paid to do it are going to have a lot more time to devote to these open source projects. So they're going to tend to be a little bit more reliable, just because they're going to have a certain amount of time that's devoted to contributing to these projects.”
Not only does paying people help keep vital projects going, but it also helps increase the diversity of contributors, “because by paying people salaries to do this work in open source, you get people who wouldn't naturally have time to do that.
“So in a lot of cases, this is women who have extra childcare responsibilities. This is people from underrepresented backgrounds who have other commitments outside of work,” Foster said. “But by allowing them to do that within their work time, you not only get healthier, longer sustaining open source projects, you get more diverse contributions.”
The community can also help bring in new contributors by providing solid documentation and easy onboarding for newcomers, she said. “If people don't know how to build your software, or how to get a development environment up and running, they're not going to be able to contribute to the project.”
And showing people how to contribute properly can help alleviate the issue of burnout for project maintainers, Foster said: “Any random person can file issues and bug maintainers all day, in ways that are not productive. And, you know, we end up with maintainer burnout...because we just don't have enough maintainers.”
“Getting new people into these projects and participating in ways that are eventually reducing the load on these horribly overworked maintainers is a good thing.”
Listen or watch this episode to learn more about maintaining open source sustainability.
In the early 2000s, Charity Majors was a homeschooled kid who’d gotten a scholarship to study classical piano performance at the University of Idaho.
“I realized, over the course of that first year, that music majors tended to still be hanging around the music department in their 30s and 40s,” she said. “And nobody really had very much money, and they were all doing it for the love of the game. And I was just like, I don't want to be poor for the rest of my life.”
Fortunately, she said, it was pretty easy at that time to jump into the much more lucrative tech world. “It was buzzing, they were willing to take anyone who knew what Unix was,” she said of her first tech job, running computer systems for the university.
Eventually, she dropped out of college, she said, “made my way to Silicon Valley, and I’ve been here ever since.”
Majors, co-founder and chief technology officer of the six-year-old Honeycomb.io, an observability platform company, told her story for The New Stack’s podcast series, The Tech Founder Odyssey, which spotlights the personal journeys of some of the most interesting technical startup creators in the cloud native industry.
It’s been a busy year for her and the company she co-founded with Christine Yen, a colleague from Parse, a mobile application development company that was bought by Facebook. In May, O’Reilly published “Observability Engineering,” which Majors co-wrote with George Miranda and Liz Fong-Jones. In June, Gartner named Honeycomb.io as a Leader in the Magic Quadrant for Application Performance Monitoring and Observability.
Thus far Honeycomb.io, now employing about 200 people, has raised just under $97 million, including a $50 million Series C funding round it closed in October, led by Insight Partners (which owns The New Stack).
This Tech Founder Odyssey conversation was co-hosted by Colleen Coll and Heather Joslyn of TNS.
Honeycomb.io grew from efforts at Parse to solve a stubborn observability problem: systems crashed frequently, and rarely for the same reasons each time. “We invested a lot in the last generation of monitoring technology, we had all these dashboards, we have all these graphs,” Majors said. “But in order to figure out what's going on, you kind of had to know in advance what was going to break.”
Once Parse was acquired by Facebook, Majors, Yen and their teams began piping data into a Facebook tool called Scuba, which “was aggressively hostile to users,” she recalled.
But, “it did one thing really well, which is let you slice and dice in real time on dimensions that have very high cardinality,” meaning those that contain lots of unique terms. This set it apart from the then-current monitoring technologies, which were built around assessing low cardinality dimensions.
Scuba allowed Majors’ organization to gain more control over its reliability problem. And it got her and Yen thinking about a platform tool that could analyze high-cardinality data about system health in real time. “Everything is a high cardinality dimension now,” Majors said. “And [with] the old generation of tools, you hit a wall really fast and really hard.”
And so, Honeycomb.io was created to build that platform. “My entire career has been rage-driven development,” she said. “Like: sounds cool, I'm gonna go play with that. This isn't working — I'm gonna go fix it from anger.”
Yen now holds the CEO role at Honeycomb.io, but Majors wound up with the job for roughly the first half of the company’s life.
Did Majors like being the boss? “Hated it,” she said. “Constitutionally what you want in a CEO is someone who is reliable, predictable, dependable, someone who doesn't mind showing up every Tuesday at 10:30 to talk to the same people.
“I am not structured. I really chafe against that stuff.”
However, she acknowledged, she may have been the right leader in the startup’s beginning: “It was a state of chaos, like we didn't think we were going to survive. And that's where I thrive.”
Fortunately, in Honeycomb.io’s early days, raising money wasn’t a huge challenge, due to its founders’ background at Facebook. “There were people who were coming to us, like, do you want $2 million for a seed thing? Which is good, because I've seen the slides that we put together, and they are laughable. If I had seen those slides as an investor, I would have run the other way.”
The “pedigree” conferred on her by investors due to her association with Facebook didn’t sit comfortably with her. “I really hated it,” she said. “Because I did not learn to be a better engineer at Facebook. And part of me kind of wanted to just reject it. But I also felt this like responsibility on behalf of all dropouts, and queer women everywhere, to take the money and do something with it. So that worked out.”
Majors, a frequent speaker at tech conferences, has established herself as a thought leader in not only observability but also engineering management. For other women, people of color, or people in the tech field with an unconventional story, she advised “investing a little bit in your public speaking skills, and making yourself a bit of a profile. Being externally known for what you do is really helpful because it counterbalances the default assumptions that you're not technical or that you're not as good.”
She added, “if someone can Google your name plus a technology, and something comes up, you're assumed to be an expert. And I think that that really works to people's advantage.“
Majors had a lot more to say about how her outsider perspective has shaped the way she approaches hiring, leadership and scaling up her organization. Check out this latest episode of the Tech Founder Odyssey.
Idit Levine’s tech journey originated in an unexpected place: a basketball court. As a seventh grader in Israel, she played in hoops tournaments that definitely sparked her competitive side.
“I was basically going to compete with all my international friends for two minutes without parents, without anything,” Levine said. “I think it made me who I am today. It’s really giving you a lot of confidence to teach you how to handle situations … stay calm and still focus.”
Developing that calm and focus proved an asset during Levine’s subsequent career in professional basketball in Israel, and when she later started her own company. In this episode of The Tech Founder Odyssey podcast series, Levine, founder and CEO of Solo.io, an application networking company with a $1 billion valuation, shared her startup story.
The conversation was co-hosted by Colleen Coll and Heather Joslyn of The New Stack.
After finishing school and service in the Israeli Army, Levine was still unsure of what she wanted to do. She noticed her brother and sister’s fascination with computers. Soon enough, she recalled, “I picked up a book to teach myself how to program.”
It was only a matter of time before she found her true love: the cloud native ecosystem. “It's so dynamic, there's always something new coming. So it's not boring, right? You can assess it, and it's very innovative.”
Moving from one startup company to the next, then on to bigger companies including Dell EMC where she was chief technology officer of the cloud management division, Levine was happy seeking experiences that challenged her technically. “And at one point, I said to myself, maybe I should stop looking and create one.”
Winning support for Solo.io demanded that the former hoops player acquire an unfamiliar skill: how to pitch. Levine’s company started in her current home of Boston, and she found raising money in that environment more of a challenge than it would be in, say, Silicon Valley.
It was difficult to get an introduction without a connection, she said: “I didn't understand what pitches even were but I learned how … to tell the story. That helped out a lot.”
Founding Solo.io was not about coming up with an idea to solve a problem at first. “The main thing at Solo.io, and I think this is the biggest point, is that it's a place for amazing technologists to deal with technology and, being on the top of innovation, figure out how to change the world, honestly,” said Levine.
Even when the focus is software, she believes it’s eventually always about people. “You need to understand what's driving them and make sure that they're there, they are happy. And this is true in your own company. But this is also [true] in the ecosystem in general.”
Levine credits the company’s success to its ability to establish amazing relationships with customers – Solo.io has a renewal rate of 98.9% – using a customer engagement model similar to that of an open source community. “We’re working together to build the product.”
Throughout her journey, she has carried the idea of a team: in her early beginnings in basketball, in how she established a “no politics” office culture, and even in the way she involves her family with Solo.io.
As for the ever-elusive work/life balance, Levine called herself a workaholic, but suggested that her journey has prepared her for it: “I trained really well. Chaos is a part of my personal life.”
She elaborated, “I think that one way to do this is to basically bring the company to [my] personal life. My family was really involved from the beginning and my daughter chose the logos. They’re all very knowledgeable and part of it.”
Aerospike Founder Srini Srinivasan had just finished his Ph.D. at the University of Wisconsin when he joined IBM and worked under Don Haderle, the creator of DB2, the first commercial relational database management system.
Haderle became a major influencer on Srinivasan when he started Aerospike, a real-time data platform. To this day, Haderle is an advisor to Aerospike.
"He was the first one I went back to for advice as to how to succeed," Srinivasan said in the most recent episode of The New Stack Maker series, "The Tech Founder Odyssey."
A young, ambitious engineer, Srinivasan left IBM to join a startup. Impatient with the pace he considered slow, Srinivasan met with Haderle, who told him to go, challenge himself, and try new things that might be uncomfortable.
Today, Srinivasan seeks a balance between research and product development, similar to the approach at IBM that he learned -- the balance between what is very hard and what's impossible.
Technical startup founders face complex technical problems all the time. Srinivasan talked about the inspiration to solve those problems, but what does inspiration really mean?
Inspiration is a complex topic to parse. It can be thought of as almost trivial or superficial to discuss. Srinivasan said inspiration becomes relevant when it is part of the work and how one honestly faces that work. Inspiration is honesty.
"Because once one is honest, you're able to get the trust of the people you're working with," Srinivasan said. "So honesty leads to trust. Once you have trust, I think there can be a collaboration because now people don't have to worry about watching their back. You can make mistakes, and then you know that it's a trusted group of people. And they will, you know, watch your back. And then, with a team like that, you can now set goals that seem impossible. But with the combination of honesty and trust and collaboration, you can lead the team to essentially solve those hard problems. And in some cases, you have to be honest enough to realize that you don't have all the skills required to solve the problem, and you should be willing to go out and get somebody new to help you with that."
Srinivasan uses the principles of honesty in Aerospike's software development. How does that manifest in the work Aerospike does? It leads to all kinds of insights about Unix, Linux, systems technologies, and everything built on top of the infrastructure. And that's the work Srinivasan enjoys so much – building foundational technology that may take years to build but over time, establishes the work that's important, scalable, and has great performance.
Ask a developer about how they got into programming, and you learn so much about them.
In this week's episode of The New Stack Makers, Chainguard founder Dan Lorenc said he got into programming halfway through college while studying mechanical engineering.
"I got into programming because we had to do simulations and stuff in MATLAB," Lorenc said. And then I switched over to Python because it was similar. And we didn't need those licenses or whatever that we needed. And then I was like, Oh, this is much faster than you know, ordering parts and going to the machine shop and reserving time, so I got into it that way."
It was three or four years ago that Lorenc got into the field of open source security.
"Open source security and supply chain security weren't buzzwords back then," Lorenc said. "Nobody was talking about it. And I kind of got paranoid about it."
Lorenc worked on the Minikube open source project at Google where he first saw how insecure it could be to work on open source projects. In the interview, he talks about the threats he saw in that work.
It was so odd for Lorenc: the state of the art for open source security was not state of the art at all. It was the stone age.
Lorenc said it felt weird to him that he could build the first release of Minikube without anyone raising questions about security.
"But I mean, this is like a 200 megabyte Go binary that people were just running as root on their laptops across the Kubernetes community," Lorenc said. "And nobody had any idea what I put in there if it matched the source on GitHub or anything. So that was pretty terrifying. And that got me paranoid about the space and kind of went down this long rabbit hole that eventually resulted in starting Chainguard.
Today, the world is burning down, and that's good for a security startup like Chainguard.
"Yeah, we've got a mess of an industry to tackle here," Lorenc said. "If you've been following the news at all, it might seem like the software industry is burning on fire or falling down or anything because of all of these security problems. It's bad news for a lot of folks, but it's good news if you're in the security space."
Good news, yes, but how does it fit into a larger story?
"Right now, one of our big focuses is figuring out how do we explain where we fit into the bigger landscape," Lorenc. said. "Because the security market is massive and confusing and full of vendors, putting buzzwords on their websites, like zero trust and stuff like that. And it's pretty easy to get lost in that mess. And so figuring out how we position ourselves, how we handle the branding, the marketing, and making it clear to prospective customers and community members, everything exactly what it is we do and what threats our products mitigate, to make sure we're being accurate there. And conveying that to our customers. That's my big focus right now."
In the early 1990s, many kids got into programming video games. Tina Huang enjoyed developing her GeoCities site but not making games. Huang loved automating her website.
"It is not a lie to say that what got me excited about coding was automation," said Huang, co-founder of Transposit, in this week's episode of The New Stack Makers as part of our Tech Founder Series. "Now, you're probably going to think to yourself: 'what middle school kid likes automation?' "
Huang loved the idea of automating mundane tasks with a bit of code, so she did not have to hand type – just like the Jetsons and Rosie the Robot -- the robot people want. There to fold your laundry but not take the joy away from what people like to do.
Huang is like many of the founders we interview. Her job can be what she wants it to be. But Huang also has to take care of everything that needs to get done. All the work comes down to what the Transposit site says on the home page: Bring calm to the chaos. Through connected workflows, give TechOps and SREs visibility, context, and actionability across people, processes, and APIs.
The statements reflect on her own experience in using automation to provide high-quality information.
"I've always been swimming upstream against the tide when I worked at companies like Google and Twitter, where, you know, the tagline for Google News back then was "News by Robots," Huang said. "The ideal in their mind was how do you get robots to do all the news reporting. And that is funny because now I think we have a different opinion. But at the time, it was popular to think news by robots would be more factual, more Democratic."
Huang worked on a project at Google exploring how to use algorithms to curate the first pass of curation for human editors to go in and then add that human touch to the news. The work reflected her love for long-form journalism and that human touch to information.
Transposit offers a similar next level of integration. Any RSS fans out there? Huang has a love/hate relationship with RSS. She loves it for what it can feed, but if the feed is not filtered, it becomes overwhelming. Getting inundated with information happens when multiple integrations start to layer in from Slack, for example, and other sources.
“And suddenly, you're inundated with information because it was information designed for consumption by machines, not at the human scale,” Huang said. “You need that next layer of curation on top of it. Like, how do you allow people to annotate that information?”
Providing a choice in subscriptions can help. But at what level? That's one of the areas Huang hopes to tackle with Transposit.
Welcome to the first in our series on The New Stack Makers about technical founders, those engineers who have moved from engineering jobs to running a company of their own. What we want to know is what that's like for the founder. How is it to be an engineer turned entrepreneur?
We like to ask technologists about their first computer or when they started programming. We always find a connection to what the engineer does today. It's these kinds of questions you will hear us ask in the series to get more insight into everything that happens when the engineer is responsible for the entire organization. We've listened to feedback about what people want from this series. Here is one of the replies we received to my tweet asking for input:
If they have kids, how much work is taken on by their SO? Lots of technical founders are only able to do what they do because their partner is lifting a lot in the background — they hardly ever get the credits tho
— Anaïs Urlichs ☀️ (@urlichsanais) August 4, 2022
I host the first four interviews. The New Stack's Colleen Coll and Heather Joslyn will co-host the following shows we run in the series.
We interviewed Cycle.io Founder Jake Warner for the first episode in the series about how he went from downloading a virus on an inherited Windows 95 machine as a 10-year-old to leading a startup.
"You know, I had to apologize to my Dad for needing to do a full reinstall on the family computer," Warner said. "But it was the fact that someone through just the use of a file could cause that much damage that started making me wonder, wow, there's a lot more to this than I thought."
Warner was never much of a gamer. He preferred the chat rooms and the conversation to the game itself: Starcraft was something he liked to talk about more than play. In those chat rooms he befriended a group of kids who played games over Starcraft's hosted network, and who were learning about firewalls so they could attack one another virtually, between chat rooms.
"And because of that, that got me interested in all kinds of firewalls and security things, which led to getting into programming," Warner said. "And so, I guess, to get back to your question, it started with a game, but very quickly went to a lot more than that."
And now Warner is leading Cycle, which he and his colleagues have built from the ground up. For a long time, they marketed Cycle as a container orchestrator. Now they call Cycle a platform for building platforms, fittingly reminiscent of the kid who loved the community around the game more than the game itself.
There is one orchestrator that enterprise engineers know well, and that's Kubernetes. Warner and his team realized that Cycle is different from a container orchestrator. So how do they change the message?
Knowing what to do is the challenge of any founder. And that's a big aspect of what we will explore in our series on technical founders. We hope you enjoy the interviews. Please provide feedback and your questions. They are always invaluable and serve as a way to draw thoughtful perspectives from the founders we interview.
Web Application Firewalls (WAF) first emerged in the late 1990s as Web server attacks became more common. Today, in the context of cloud native technologies, there’s an ongoing rethinking of how a WAF should be applied.
No longer is it solely static applications sitting behind a WAF, said Ratan Tipirneni, president and CEO of Tigera, in this episode of The New Stack Makers.
“With cloud native applications and a microservices distributed architecture, you have to assume that something inside your cluster has been compromised,” Tipirneni said. “So just sitting behind a WAF doesn't give you adequate protection; you have to assume that every single microservice container is almost open to the Internet, metaphorically speaking.”
So then the question is how do you apply WAF controls?
Today’s WAF has to be workload-centric, Tipirneni said. In his view, every workload has to have its own WAF. When a container launches, the WAF control is automatically spun up.
That way, even if something inside a cluster is compromised or exposes some of the services to the Internet, it doesn't matter, because the workload is protected, Tipirneni said.
So how do you apply this level of security? You have to think in terms of a workload-centric WAF.
The Scenario
The vulnerabilities are so numerous now, and cloud native applications have such large attack surfaces, that there is no way to mitigate them all using traditional means, Tipirneni said.
“It's no longer sufficient to throw out a report that tells you about all the vulnerabilities in your system,” Tipirneni said. “Because that report is not actionable. People operating the services are discovering that the amount of time and effort it takes to remediate all these vulnerabilities is incredible, right? So they're looking for some level of prioritization in terms of where to start.”
And the onus is on the user to mitigate the problem, Tipirneni said. Those customers have to think about the blast radius of the vulnerability and its context in the system. The second part: how to manage the attack surface.
In this world of cloud native applications, customers are discovering very quickly that trying to protect every single thing, when everything has access to everything else, is an almost impossible task, Tipirneni said.
What’s needed is a way for users to control how microservices talk to each other, with permissions set for intercommunication. In some cases, specific microservices should not be talking to each other at all.
“So that is a highly leveraged activity and security control that can stop many of these attacks,” Tipirneni said.
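As a rough sketch of that control, consider a default-deny allowlist of service-to-service calls. The service names below are hypothetical, and in a real Kubernetes cluster this intent would typically be enforced by network policies rather than application code:

```typescript
// Hypothetical default-deny policy: only the caller->callee pairs listed
// here may communicate; every other combination is rejected.
const allowedCalls = new Set<string>([
  "checkout->payments",
  "payments->ledger",
]);

function isCallAllowed(caller: string, callee: string): boolean {
  return allowedCalls.has(`${caller}->${callee}`);
}

console.log(isCallAllowed("checkout", "payments")); // true
console.log(isCallAllowed("checkout", "ledger"));   // false: never permitted
```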
Even after all of that, the user still has to assume that attacks will happen, mainly because there's always the threat of an insider attack.
And in that situation, the search is for patterns of anomalous behavior at the process level, the file system level or the system call level, establishing a baseline of standard behavior against which deviations can be identified, Tipirneni said. Then it’s a matter of trying to tease out signals that are indicators of either an attack or a compromise.
“Maybe a simpler use case of that is to constantly be able to monitor at run time for hashes or files or binaries that are known to be bad,” Tipirneni said.
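A minimal sketch of that simpler use case, assuming a blocklist of SHA-256 digests; the path and the placeholder digest are illustrative only:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hypothetical blocklist of SHA-256 digests of known-bad binaries.
// The entry below is a placeholder (it is the digest of an empty file).
const knownBadHashes = new Set<string>([
  "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
]);

function isKnownBad(path: string): boolean {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  return knownBadHashes.has(digest);
}

if (isKnownBad("/usr/local/bin/suspect")) {
  console.warn("Binary matches a known-bad hash; raising an alert.");
}
```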
The real challenge for companies is setting up the architecture to make microservices secure. There are a number of vectors the market may take. In the recording, Tipirneni talks about the evolution of WAF, the importance of observability and better ways to establish context with the services a company has deployed and the overall systems that companies have architected.
“There is no single silver bullet,” Tipirneni said. “You have to be able to do multiple things to keep your application safe inside cloud native architectures.”
Passage adds device-native biometric authentication to websites, allowing passwordless security on devices with or without Touch ID.
In this episode of The New Stack Makers, Passage Co-Founders Cole Hecht and Anna Pobletts talk about how the service works for developers to offer users its biometric service.
Hecht and Pobletts have worked in product security for many years and the recurring problem is always password-based security. But there really is no great solution, Pobletts said. Multi-factor authentication adds security but the user experience is lacking. Magic links, adaptive MFA, and other techniques add a bit of improvement but are not a great balance of user experience and security.
“Whereas biometrics is the only option we've ever seen that gives you both great security and great user experience right out of the box,” Pobletts said.
The goal for Hecht and Pobletts was to offer developers what is challenging to implement themselves: a passwordless service with a high security level and a great user experience.
Passage is built on WebAuthn, a Web protocol that allows a developer to connect Web sites with browsers and various devices through the authenticators on those devices, Pobletts said.
“So that could be anything right now,” Pobletts said. “It's things like fingerprint readers and face identification. But in the future, it could be voice identification, or it could be, you know, your presence and things like that; it could be all sorts of stuff in the future. But ultimately, your device is generating a cryptographic key pair and storing the private key in the TPM of your device. The cool thing about this protocol is that your biometric data never leaves your device, which is a huge win for privacy. Not Passage, not your browser: no one ever actually sees your fingerprint data in any way.”
It’s cryptographically secure under the hood with Passage as the platform on top, Pobletts said.
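For the curious, WebAuthn registration boils down to a single browser call. This is a minimal, illustrative sketch, not Passage's own API; in a real flow the challenge and user details come from the server:

```typescript
// Minimal browser-side sketch of WebAuthn registration. All values are
// illustrative; a production flow fetches the challenge and user info
// from the server rather than generating them locally.
const credential = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // server-issued in practice
    rp: { name: "Example App" },
    user: {
      id: new TextEncoder().encode("user-123"), // opaque server-side user handle
      name: "user@example.com",
      displayName: "Example User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    authenticatorSelection: { userVerification: "required" }, // e.g. Touch ID
  },
});
// Only the public key and credential ID go to the server; the private key
// stays in the device's secure hardware, and biometrics never leave it.
console.log(credential?.id);
```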
WebAuthn is designed for single devices, Pobletts said: one fingerprint is authenticated to one device, for example. But that does not work well on the Internet, where a user may have a phone, a tablet, and a computer. Passage coordinates and orchestrates between different devices to give an easy experience.
“So in my case, I have an iPhone, I do face ID,” said Hecht showing the service. “And then I'm going to be signed in on both devices automatically. So that's a great way to kind of give every user access to the site no matter what device they're on.”
With Passage, the biometric is added to any device a user adds, Hecht said. Passage handles the multidevice orchestration.
Use cases?
“FinTech people like the security properties of it, and they kind of like that cool, shiny user experience that they want to deliver to their end users,” Hecht said. Then there is any website or business that cares about conversions: people who want signups and measure success by the number of users registering and creating accounts. “Passage has a really nice story for that because we cut out so much friction around those conversion points.”
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Webb Brown, CEO and co-founder of KubeCost, talked with The New Stack about opening up the black box on how much Kubernetes is really costing.
Whether we’re talking about cloud costs in general or the costs specifically associated with Kubernetes, the problem teams complain about is lack of visibility. This is a cliche complaint about AWS, but it gets even more complicated once Kubernetes enters the picture. “Now everything’s distributed, everything’s shared,” Brown said. “It becomes much harder to understand and break down these costs. And things just tend to be way more dynamic.” The ability of pods to spin up and down is a key advantage of Kubernetes and brings resilience, but it also makes it harder to understand how much it costs to run a specific feature.
And costs aren’t just about money, either. Even with unlimited money, cost information can reveal important signals about performance, reliability or availability. “Our founding team was at Google working on infrastructure monitoring. We view costs as a really important part of this equation, but only one part of the equation; you’re really looking at the relationship between performance and cost,” Brown said. “Even with an unlimited budget, you would still care about resourcing and configuration, because it can really impact reliability and availability of your services.”
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Amanda Brock, CEO and founder of OpenUK, talked with The New Stack about revenue models for open source and how those fit into building a sustainable project.
Funding an open source project has to be part of the sustainability question: open source requires humans to contribute, and those humans have bills to pay and risk burnout if the open source project is a side gig after their full-time job. Those aren’t the only expenses a project might accrue, either; there might be cloud costs, for example. Brock says there are essentially eight categories of funding models for open source, of which only two or three have proven successful: support, subscription and open core.
So how do we define open core, exactly? “You get different kinds of open core businesses, one that is driven very much by the needs of the company, and one that is driven by the needs of the open source project and community,” Brock said. In other words, sometimes the project exists to drive revenue, and sometimes the revenue exists to support the project: a subtle distinction, but it’s easy to see how one orientation or the other could change a company’s relationship with open source.
Are both types really open source? For Brock, it all comes down to community. “It’s the companies that have proper community that are really open source to me,” she said. “That’s where you’ve got a proper project with a real community, the community is not entirely based off of your employees.”
AUSTIN, TEX. — In one of the most compelling keynote addresses at The Linux Foundation’s Open Source Summit North America, held here in June, Aeva Black, a veteran of the open source community, said that a friend of theirs recently commented, “I feel like all the trans women I know on Twitter are software developers.”
There’s a reason for that, Black said. It’s called “survivor bias”: The transgender software developers the friend knows on Twitter are only a small sample of the trans kids who survived into adulthood, or didn’t get pushed out of mainstream society.
“It's a pretty common trope, at least on the internet: trans women are all software developers, we all have high-paying jobs, we're on TikTok or on Twitter. And that's really a sampling bias; the transgender people who have the privilege to be loud,” said Black, in this On the Road episode of The New Stack Makers podcast.
Black, whose keynote alerted the conference attendees about how the rights of transgender individuals are under attack around the United States, and the role tech can play, currently works in Microsoft Azure's Office of the Chief Technology Officer and holds seats on the boards of the Open Source Initiative and on the OpenSSF's Technical Advisory Council. In this episode of Makers, they unpacked the keynote’s themes with Heather Joslyn, TNS features editor.
Pew Research Center data released in June reports that 5% of Americans under 30 identify as transgender or nonbinary, roughly the same percentage that have red hair.
The Pew study, and the latest "Stack Overflow Developer Survey," reveal that younger people are more likely than their elders to claim a transgender or nonbinary identity. Failure to accept these people, Black said, could have an impact on open source work, and tech work more generally.
“If you're managing a project, and you want to attract younger developers who could then pick it up and carry on the work over time, you need to make sure that you're welcoming of all younger developers,” they said.
Codes of Conduct, must-haves for meetups, conferences and open source projects over the past few years, are too often thought of as tools for punishment, Black said in their keynote. For Makers, they advocated for thinking of those codes as tools for community stewardship.
As a former member of the Kubernetes Code of Conduct committee, Black pointed out that “80% of what we did … while I served wasn't punishing people. It was stepping in when there was conflict, when people, you know, stepped on someone else's toes, accidentally offended somebody. Like, ‘OK, hang on, let's sort this out.’ So it was much more stewardship, incident response, mediation.”
LGBT people are currently the targets of new legislation in several U.S. states. The tech world and its community leaders should protect community members who may be vulnerable in this new political climate, Black said.
“The culture of a community is determined by the worst behavior its leaders tolerate. We have to understand, and it's often difficult to do so, how our actions impact those who have less privilege than us, the most marginalized in our community,” they said.
For example, “When thinking of where to host a conference, think about the people in one's community, even those who may be new contributors. Will they be safe in that location?”
Listen to the episode to hear more of The New Stack’s conversation with Black.
AUSTIN, TEX. — What’s the future of WebAssembly — Wasm, to its friends — the binary instruction format for a stack-based virtual machine that allows developers to build in their favorite programming language and run their code anywhere?
For Matt Butcher, CEO and founder of Fermyon Technologies, the future of Wasm lies in running it outside of the browser and inside of everything, from proxy servers to video games.
And, he added, “the really exciting part is being able to run it in the cloud, as well as a cloud service alongside like virtual machines and containers.”
For this On the Road episode of The New Stack Makers podcast, Butcher was interviewed by Heather Joslyn, features editor of TNS.
With key programming languages like Ruby, Python and C# adding support for WebAssembly’s new capabilities, Wasm is gaining critical mass, Butcher said.
“What we're talking about now is the realization of the potential that's been around in WebAssembly for a long time. But as people get excited, and open source projects start to adopt it, then what we're seeing now is like the beginning of the tidal wave.”
But before widespread adoption can happen, Butcher said, there’s still work to be done in preparing the environment for the next wave of Wasm: cloud computing.
Along with other members of the Bytecode Alliance, such as Cosmonic, Fastly and Intel, Fermyon is working to improve the developer experience and environment this year. The next step, he added, is to “start to build this first wave of applications that really highlight where it can happen for us.”
The rise of Wasm represents a new era in cloud native technology, Butcher noted. “We love containers. Many of us have been involved in the Kubernetes ecosystem for years and years. I built Helm originally; that's still, in a way, my baby.
“But also we're excited because now we're finding solutions to some problems that we didn't see get solved in the container ecosystem. And that's why we talk about it as sort of like the next wave.”
Fermyon introduced its “frictionless” WebAssembly platform in June here at The Linux Foundation’s Open Source Summit North America. The platform, built on technologies including HashiCorp’s Nomad and Consul, enables the writing of microservices and web applications. Fermyon’s open source tool, Spin, helps developers push apps from their local dev environments into their Fermyon platform.
One aspect of Wasm’s future that Butcher highlighted in our Makers discussion is how it can be scalable while also remaining lightweight in terms of the cloud resources it consumes.
“Along with creating this great developer experience in a secure platform, we're also going to help people save money on their cloud costs, because cloud costs have just kind of ballooned out of control,” he said.
“If we can be really mindful of the resources we use, and help the developer understand what it means to write code that can be nimble, and can be light on resource usage. The real objective is to make it so when they write code, it just happens to have those characteristics.”
For those interested in taking WebAssembly for a spin, Fermyon has created an online game called Finicky Whiskers, intended to show how microservices can be reimagined with Wasm.
VALENCIA, Spain — WebAssembly (Wasm) is among the hotter topics under the CNCF project umbrella. In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022, Liam Randall, CEO and co-founder of Cosmonic, and Colin Murphy, senior software engineer at Adobe, discuss why Wasm’s future looks bright.
A quintessential feature of Wasm is that it functions on a CPU level, not unlike Java or Flash. This means, Randall said, that Wasm “can run anywhere.” “Everybody can start using Wasm, which functionally works like a tiny CPU. You can even put WebAssembly inside other applications.”
The fact that Wasm has a binary format (the .wasm file) and can run at a CPU level, as C or C++ does, means it is highly portable. “WebAssembly really is exciting because it gives us two fundamental things that are truly amazing: One is portability across a diverse set of CPUs and architectures, and even portability into other places, like into a web browser,” said Randall. “It also gives us a security model that's portable, and works the same across all of those different landscape settings.”
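As a small illustration of that portability, the same compiled module can be loaded with the standard WebAssembly JavaScript API in a browser or in Node.js; the module name and its `add` export below are hypothetical:

```typescript
// Fetch and instantiate a compiled module; the same add.wasm binary could
// equally run in Node.js or a standalone Wasm runtime.
const bytes = await fetch("/add.wasm").then((r) => r.arrayBuffer());
const { instance } = await WebAssembly.instantiate(bytes);

// Call an exported function as if it were ordinary JavaScript.
const add = instance.exports.add as (a: number, b: number) => number;
console.log(add(2, 3)); // 5
```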
This portability makes Wasm an excellent candidate for edge applications. Its inference capabilities for machine learning (ML) at the edge are particularly promising for workloads distributed across many different environments, Murphy said. Wasm is also particularly apt for collaborative ML-at-the-edge and other applications. “Collaborative experiences are what WebAssembly is really perfectly in position for," he continued.
In many ways, the name “WebAssembly” is not intuitively reflective of its meaning. “WebAssembly is neither web nor assembly — so, it's a somewhat awkwardly named technology, but a technology that is worth looking into,” Randall said. “There are incredible opportunities for your internal teams to transform the way they do business to save costs and be more secure by adopting this new standard.”
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Julia Ferraioli, open source technical leader at Cisco’s open source programs office, spoke with The New Stack about some alternative ways to define what is and is not ‘open source.’
When someone says, well, that’s ‘technically’ open source, it’s usually to be snarky about a project that meets the legal criteria to be open source but doesn’t follow the spirit of open source. Ferraioli doesn’t think that the ‘classic’ open source project, like Kubernetes or Linux, is the only valid model for open source. She gives the example of a research project: the code might be open sourced specifically so that others can see it and reproduce the results themselves. However, for the research to remain valid, the project can’t accept any contributions.
“It’s no less open source than others,” Ferraioli said about the hypothetical research project. “If you break things down by purpose, it’s not always that you’re trying to build the robust community.” The social model of open source, Ferraioli says, is about understanding the different use cases for open source, as well as providing a framework for determining what appropriate success metrics could be depending on what the project’s motivations are. And if you’re just doing a project with friends for laughs, well, quantifying fun isn’t going to be easy.
AUSTIN, TEX. — How safe is the open source software that virtually every organization uses? You might not want to know, according to the results of a survey released by The Linux Foundation and Snyk, a cloud native cybersecurity company, at the foundation’s annual Open Source Summit North America, held here in June.
Forty-one percent of the more than 500 organizations surveyed don’t have high confidence in the security of the open source software they use, according to the research. Only half of participating companies said they have a security policy that addresses open source.
Furthermore, it takes more than double the number of days — 98 — to fix a vulnerability compared to what was reported in the 2018 version of the survey.
The research was conducted at the request of the Open Source Security Foundation (OpenSSF), a project of The Linux Foundation. For this On the Road episode of The New Stack Makers, Steve Hendrick, vice president of research at The Linux Foundation, and Matt Jarvis, director of developer relations at Snyk, were interviewed by Heather Joslyn, features editor at TNS.
Despite the alarming statistics, Jarvis cautioned against treating all vulnerabilities as four-alarm fires.
“Having a kind of zero-vulnerability target is probably unrealistic, because not all vulnerabilities are treated equal,” Jarvis said. Some “vulnerabilities” may not necessarily be a risk in your particular environment. It’s best to focus on the most critical threats to your network, applications and data.
One bright spot in the new report: Nearly one in four respondents said they’re looking for resources to help them keep their open source software — and all that depends on it — safe. Perhaps even more relevant to vendors: 62% of survey participants said they are looking to use more intelligent security-focused tools.
“There's a lot from a process standpoint that they are responsible for,” said Hendrick. “But they were very quick to jump on the bandwagon and say, we want the vendor community to do a better job at providing us tools, that makes our life a lot easier. Because I think everybody recognizes that solving the security problem is going to require a lot more effort than we're putting into it today.”
Many organizations still seem confused about which dependencies of the open source software they use are direct and which are transitive (dependencies of dependencies), said Hendrick. One of the best ways to clarify things, he said, “is to get on the SBOM bandwagon.”
Understanding an open source tool’s software bill of materials, or SBOM, is “going to give you great understanding of the components, it's going to give you usability, it's going to give you trust, you're gonna be able to know that the components are nonfalsified,” Hendrick said.
“And so that's all absolutely key from the standpoint of being able to deal with the whole componentization issue that is going on everywhere today.”
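To make the idea concrete, here is a minimal sketch of reading component data out of a CycloneDX-style JSON SBOM; the file name is hypothetical, and the fields follow the published CycloneDX schema:

```typescript
import { readFileSync } from "node:fs";

// Shape of the CycloneDX fields we care about (a small subset of the schema).
type Component = { name: string; version?: string; purl?: string };

// bom.json is a hypothetical SBOM produced by any CycloneDX-capable tool.
const sbom = JSON.parse(readFileSync("bom.json", "utf8"));

// Enumerate every declared component: direct and transitive dependencies alike.
for (const c of (sbom.components ?? []) as Component[]) {
  console.log(`${c.name}@${c.version ?? "?"} ${c.purl ?? ""}`);
}
```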
Additional results from the research, in which core project maintainers discussed their best practices, will be released in the third quarter of 2022. Listen to the podcast to learn more about the report’s results and what Linux Foundation is doing to help upskill the IT workforce in cybersecurity.
AUSTIN, TEX. — Forty-one percent of organizations in a new survey said they expect to increase hiring for open source roles this year. But the study, released in June by the Linux Foundation and online learning platform edX during the foundation’s Open Source Summit North America, also found that 93% of employers surveyed said they struggle to find the talent to fill those roles.
At the Austin summit, The New Stack’s Makers podcast sat down with Hilary Carter, vice president for research at the Linux Foundation, who oversaw the study. She was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.
“I think it's a very good time to be an open source developer, I think they hold all the cards right now,” Carter said. “And the fact that demand outstrips supply is nothing short of favorable for open source developers, to carry a bit of a big stick and make more demands and advocate for their improved work environments, for increased pay.”
But even sought-after developers are feeling a bit anxious about keeping pace with the cloud native ecosystem’s constant growth and change. The open source jobs study found that roughly three out of four open source developers said they need more cybersecurity training, up from about two-thirds in 2021’s version of the report.
“Security is the problem of the day that I think the whole community is acutely aware of, and highly focused on, and we need the talent, we need the skills,” Carter said. “And we need the resources to come together to solve the challenge of creating more secure software supply chains.”
Carter also told the Makers audience about the role open source program offices, or OSPOs, can play in nurturing in-house open source talent, the impact a potential recession may have (or not have) on the tech job market, and new surveys in the works at Linux Foundation to essentially map the open source community outside of North America.
Its first study, of Europe’s open source communities, is slated to be released in September at Open Source Summit Europe, in Dublin. Linux Foundation Research is currently fielding its annual survey of OSPOs; you can participate here. It is also working with the Cloud Native Computing Foundation on its annual survey of cloud native adoption trends. You can participate in that survey here.
In this episode of The New Stack’s On the Road show at Open Source Summit in Austin, Matt Yonkovit, Head of Open Source at Percona, shared his thoughts on how economic uncertainty could affect the open source ecosystem.
Open source, of course, is free. So what role does the economy play in whether open source software is contributed to, downloaded and used in production? “Generally, open source is considered a bit recession proof,” Yonkovit said. But that doesn’t mean that things won’t change. Over the past several years, the number of open source companies has increased dramatically, and the amount of funding sloshing around in the ecosystem has been huge. That might change.
And if the funding situation does change? “I think the big differentiator for a lot of people in the open source space is going to be the communities,” Yonkovit said. When we talk about having ‘backing,’ it’s usually in reference to financial investors, but in open source the backing of a community is just as important. In the absence of deep pockets, a community of people who believe in the project can help it survive — and show that the idea is really solid.
If you look back at the history of open source, Yonkovit said, it’s about people having an idea that inspires other people to contribute to make it a reality. Sometimes those ideas aren’t commercially viable, even in the best of times — even if they do get widespread adoption. The only thing that’s changing now is that financial investors are going to be a bit more picky in making sure the projects they fund aren’t just inspirational ideas, but also are commercially viable.
AUSTIN, TEX. — Everyone uses open source software, and it’s become increasingly apparent that not nearly enough attention has been paid to the security of that software. In a survey released by The Linux Foundation and Snyk at the foundation’s Open Source Summit in Austin, Tex., this month, 41% of organizations said they aren’t confident in the security of the open source software they use.
At the Austin event, The New Stack’s Makers podcast sat down with Brian Behlendorf, general manager of Open Source Security Foundation (OpenSSF), to talk about a new plan to attack the problem from multiple angles. He was interviewed for this On the Road edition of Makers by Heather Joslyn, features editor at The New Stack.
Behlendorf, who has led OpenSSF since October and serves on the boards of the Electronic Frontier Foundation and Mozilla Foundation, cited the discovery of the Log4j vulnerabilities late in 2021, and other recent security “earthquakes,” as key turning points. “I think the software industry this year really woke up to not only the fact these earthquakes were happening,” he said, “and how it's getting more and more expensive to recover from them.”
The Open Source Security Mobilization Plan sprung from an open source security summit in May. According to the report published by OpenSSF and the Linux Foundation, it identifies 10 areas that will be targeted for attention.
The price tag for these initiatives over the initial two years is expected to total $150 million, Behlendorf told our Makers audience.
The plan was sparked by queries from the White House about the various initiatives underway to improve open source software security — what they would cost, and the time frame the solution-builders had in mind. “We couldn't really answer that without being able to say, well, what would it take if we were to invest?” Behlendorf said. “Because most of the time we sit there, we wait for folks to show up and hope for the best.”
The ultimate price tag, he said, was much lower than he expected it would be. Various member organizations within OpenSSF, he said, have pledged funding. “The 150 was really an estimate. And these plans are still being refined,” Behlendorf said. But by stating specific steps and their costs, he feels confident that interested parties will follow through when it comes time to make good on those pledges.
Listen to the podcast to get more details about the Open Source Security Mobilization Plan.
British telecommunications provider Vodafone, which owns and operates networks in over 20 countries, is on a journey to become a tech company focused on digital services. It plans to hire thousands of software engineers and developers who can help put the company on the cloud native track and open up its network through APIs.
In this episode of The New Stack Makers podcast, recorded at MongoDB World 2022 in New York City, Lloyd Woodroffe, global product manager at Vodafone, shares how the company is working with MongoDB on the development of a Telco as a Service (TaaS) platform to help its engineers increase their software development velocity and drive adoption of best-practice automation within DevSecOps pipelines. Alex Williams, founder of The New Stack, hosted this podcast.
Vodafone has built a backbone to keep the business resilient and scalable. But one thing it is looking to do now is innovate and give its developers the freedom and flexibility to develop creatively. “The TaaS platform – which is the product we’re building – is essentially a developer-first framework that allows developers at Vodafone to build things that they think could help the business grow. But because we’re an enterprise, we need security and financial assurance, and TaaS is the framework that allows us to do it in a way that gives developers the tools they need but also the security we need,” said Woodroffe.
The idea of reuse as part of an inner sourcing model is key as Vodafone scales. The company’s key initiative, ‘one source,’ enables its developers to incorporate such a strategy. “We have a single repository across all our markets and teams where you can publish your code, and other teams from other countries can take that code, reuse it, and implement it into their applications,” said Woodroffe. “In terms of outsourcing to the community, our engineers want to start productizing APIs and build new, innovative applications, which we'll see in a bit,” he added.
“The TaaS developer platform that we’re building with MongoDB acts as our service registry for the platform. When you provision the tools for the developer, we register the organizations, the cost center and guardrails that we’ve set up from a security and finance perspective,” said Woodroffe. “Then we provision MongoDB for the developers to use as their database of choice.”
“What we'll see ultimately, as the developer has access to these tools [TaaS] and products more, is they'll be able to build new innovations that can be utilized through our network via API's,” Woodroffe said.
VALENCIA – The goal of DevOps was to break down silos between software development and operations. The side effect has been a blurring of the lines between dev and ops, for better or for worse: the role of the software developer keeps expanding, causing cognitive overload and burnout. This is why the developer tooling market has exploded, aiming to automate and assist developers right when and where they need to build, in whatever language they already know.
In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, Matty Stratton, staff developer advocate at Pulumi, talks about the now nearly universal practice of Infrastructure as Code and its impact on both dev and ops teams.
Earlier this May, Pulumi released updates that took the platform closer to becoming a truly polyglot way to enforce best cloud practices, including support for new languages.
These are significant updates because they dramatically expand the languages that are available in this low-code way of creating, deploying and managing infrastructure on any cloud.
"A lot of times, in Infrastructure-as-Code, we're using domain-specific language using a config file. We call it Infrastructure as Code and are not actually writing any code. So I like to think about Pulumi as Infrastructure as Software." For Stratton, that means writing Pulumi code using a general purpose programming language, like TypeScript, Python, Go, .NET languages, or now Java. "The great thing about that is, not only do you maybe already know this programming language, because that's the language you use to build your applications, but you're able to use all the things that a programming language has available to it, like conditionals, and loops, and packages, and testing tools, and an IDE [integrated development enviornment] and a whole ecosystem. So that makes it a lot more powerful, and gives us a lot of great abstractions we can use," he continued.
Pulumi now follows the low-code development trend where, Stratton says, "We're enabling people to solve a problem with just enough tech." But specifically in their common coding language, to limit the tool onboarding needed.
This is attractive not only to new customers but also as a way to expand Pulumi adoption across organizations without much adaptation of the way teams work; it just makes it easier to work together.
"I've been part of the DevOps community for a long time. And all that I want to see out of DevOps and all of this work is how do we collaborate better together? How do we be more cross functional?"
Proper tooling is perhaps the primary key to unlocking developer productivity. With the right tools and frameworks, developers can be productive in minutes versus having to toil over boilerplate code. And as data-hungry use cases such as AI and machine learning emerge, data tooling is becoming paramount.
This was evident at the recent MongoDB World conference in New York City where TNS Founder and Publisher Alex Williams recorded this episode of The New Stack Makers podcast featuring Peggy Rayzis, senior director of developer experience at Apollo GraphQL; Lee Robinson, vice president of developer experience at Vercel; Ian Massingham, vice president of developer relations and community at MongoDB; and Søren Bramer Schmidt, co-founder and CEO of Prisma, discussing how their companies’ offerings help unlock developer productivity.
Apollo GraphQL unlocks developers by helping them build supergraphs, Rayzis said. A supergraph is a unified network of a company's data, services and capabilities, accessible via a consistent and discoverable place that any developer can reach with a GraphQL query. GraphQL is a query language for communicating about data.
“And what's really great about the supergraph is even though it's unified, it's very modular and incrementally adoptable. So you don't have to rewrite all of your backend systems and APIs,” she said. “What's really great about the supergraph is you can connect your legacy infrastructure, like your relational databases, and connect that to a more modern stack, like MongoDB Atlas, for example, or even connect it to a mainframe, as we've seen with some of our customers. And it brings that together in one place that can evolve over time. And we found that it just makes developers so much more productive, helps them shave months off of their development time and create experiences that were impossible before.”
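To give a flavor of the idea, a client asks one supergraph endpoint for data even when the fields resolve against different backends; the endpoint URL and schema below are hypothetical:

```typescript
// One query, one endpoint: "user" might resolve to a legacy relational
// database while "orders" comes from MongoDB Atlas, but the client
// neither knows nor cares.
const query = `
  query UserWithOrders($id: ID!) {
    user(id: $id) {
      name
      orders { total placedAt }
    }
  }
`;

const res = await fetch("https://api.example.com/graphql", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ query, variables: { id: "42" } }),
});
console.log(await res.json());
```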
Meanwhile, Robinson touted the virtues of Next.js, Vercel’s popular React-based framework, which provides developers with the tools and the production defaults to make a fast web experience. The goal is to enable frontend developers to be able to move from an idea to a global application in seconds.
Robinson said he believes it’s important for a tool or framework to have good, strong defaults but also to be extensible, so that developers can make changes without necessarily ejecting fully out of the tool they’re using; they can customize without having to leave their framework, library or tool of choice.
“If you can provide that great experience for the 90% use case by default, but still allow maybe the extra 10%, the power developer who needs to modify something, without having to just rewrite from scratch, you can go pretty far,” he said.
When it comes to data tooling, MongoDB is trying to help developers manipulate and work with data in a more productive and effective way, Massingham said.
One of the ways MongoDB does this is through the provision of first-party drivers, he said. The company offers 12 different programming language drivers for MongoDB, covering everything from Rust to Java, JavaScript, Python, etc.
“So, as a developer, you’re importing a library into your environment,” Massingham said. “And then rather than having to construct convoluted SQL statements -- essentially learning another language to interact with the data in your database or data store -- you're going to manipulate data idiomatically, using objects or whatever other constructs are normal within the programming language that you're using. It just makes it way simpler for developers to interact with the data that's stored in MongoDB versus interacting with data in a relational database.”
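As a minimal sketch of that idiom with the official Node.js driver (the connection string, database and collection names are illustrative):

```typescript
import { MongoClient } from "mongodb";

// The query is an ordinary object in the host language, not a SQL string.
const client = new MongoClient("mongodb://localhost:27017");
await client.connect();

const orders = client.db("shop").collection("orders");
const recent = await orders
  .find({ status: "shipped", total: { $gt: 100 } }) // filter, expressed idiomatically
  .sort({ placedAt: -1 })
  .limit(10)
  .toArray();

console.log(recent);
await client.close();
```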
Bramer Schmidt said that while a truism in software engineering holds that code moves fast and data moves slow, we are now starting to see more innovation in the data tooling space.
“And Mongo is a great example of that,” he said. “Mongo is a database that is much nicer to use for developers, you can express more different data constructs, and Mongo can handle things under the hood.”
Moreover, Prisma also is innovating around the developer experience for working with data, making it easier for developers to build applications that rely on data and do that faster, Bramer Schmidt said.
“The way we do that in Prisma is we have the tooling introspect your database; it will go and assemble documents in MongoDB, and then generate a schema based on that, and then it will pull that information into your development environment, such that, when you write queries, you will get autocompletion, and the IDE will tell you if you're making a mistake,” he said. “You will have that confidence in your environment instead of having to look at the documentation, try to remember what fields are where or how to do things. So that is increasing the confidence of the developer, enabling them to move faster.”
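The resulting workflow, sketched with Prisma's TypeScript client; the `user` model and its fields are hypothetical, since in a real project they come from introspecting your own database:

```typescript
import { PrismaClient } from "@prisma/client";

// After `prisma db pull` introspects the database and `prisma generate`
// builds the client, every query below is fully typed: a misspelled
// field or model is an editor error, not a runtime surprise.
const prisma = new PrismaClient();

const admins = await prisma.user.findMany({
  where: { role: "ADMIN" },
  select: { id: true, email: true },
});
console.log(admins);
```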
"Developers aren't cryptographers. We can only do so much security training, and frankly, they shouldn't have to make hard choices about this encryption mode or that encryption mode. It should just, like, work," said Kenneth White, a security principal at MongoDB, explaining the need for MongoDB's new Queryable Encryption feature.
In this latest edition of The New Stack Makers podcast, we discuss MongoDB's new end-to-end client-side encryption, which allows an application to query an encrypted database and keep the queries in transit encrypted, an industry first, according to the company.
White discussed this technology in depth with TNS publisher Alex Williams, in a conversation recorded at MongoDB World, held last week in New York.
MongoDB has offered the ability to encrypt and decrypt documents since MongoDB 4.2, though this release is the first to allow an application to query the encrypted data. Developers with no expertise in encryption can write apps that use this capability on the client side, and the capability itself (available in preview mode for MongoDB 6.0) adds no noticeable overhead to application performance, so claims the company.
Data remains encrypted at all times, even in memory and in the CPU; the keys never leave the application and cannot be accessed by the server. Nor can the database or cloud service administrator look at the raw data.
For organizations, queryable encryption greatly expands the utility of using MongoDB for all sorts of sensitive and secret data. Customer service reps, for instance, could use the data to help customers with issues around sensitive data, such as social security numbers or credit card numbers.
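A rough sketch of what this looks like from the application's side, using the Node.js driver; the namespace, field names and inline demo key are illustrative only, and a real deployment configures a proper KMS provider and an encrypted-fields map:

```typescript
import { MongoClient } from "mongodb";

// With automatic encryption configured on the client, a query on an
// encrypted field reads like any other query: the driver encrypts `ssn`
// before it leaves the application, and the server never sees plaintext.
const client = new MongoClient("mongodb://localhost:27017", {
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders: { local: { key: Buffer.alloc(96) /* demo key only */ } },
    // encryptedFieldsMap: { "hr.employees": { ... } } // declares queryable fields
  },
});
await client.connect();

const employee = await client
  .db("hr")
  .collection("employees")
  .findOne({ ssn: "123-45-6789" }); // matched server-side against ciphertext
console.log(employee);
```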
In this podcast, White also spoke about the considerable engineering effort to make this technology possible — and make it easy to use for developers.
"In terms of how we got here, the biggest breakthroughs weren't cryptography, they were the engineering pieces, the things that make it so that you can scale to do key management, to do indexes that really have these kinds of capabilities in a practical way," Green said.
It was necessary to serve a user base that needs maximum scalability in their technologies. Many have "monster workloads," he notes.
"We've got some customers that have over 800 shards, meaning 800 different physical servers around the world for one system. I mean, that's massive," he said. "So it was a lot of the engineering over the last year and a half [has been] to sort of translate those math and algorithm techniques into something that's practical in the database."
For the past six years, WSO2 has been developing Ballerina, an open-source programming language that streamlines the writing of new services and APIs. It aims to simplify the process of being able to use, combine, and create network services and get highly distributed applications to work together toward a determined outcome.
In this episode of The New Stack Makers podcast, Eric Newcomer, chief technology officer of WSO2, discusses how the company created a new programming language from the ground up, and its plans for Ballerina to become a predominant cloud native language. Darryl Taft, news editor of The New Stack, hosted this podcast.
Founded on the idea that development involving integration was too hard, Ballerina was created for programming in highly distributed environments. “Cloud computing is an evolution of distributed computing, of integration. You're talking about microservices and APIs that need to talk to each other in the cloud,” said Newcomer. “And what Ballerina does is think about what functions outside of the program need to be talked to,” he added.
With Ballerina, developers can easily pick it up to create cloud applications. The language design is informed by TypeScript and JavaScript but adds some capabilities, Newcomer said. “Developers can create records and schemas for JSON payloads in and out to support the APIs for cloud, mobile or web apps, and it has concurrency for concurrent processing of multiple calls, transaction control, but in a very familiar syntax, like TypeScript or JavaScript.”
WSO2 is using Ballerina in the company’s low-code like offering, Choreo, which includes features such as the ability to create diagrams. “The long-time challenge in the industry is how do you represent your programming code in a graphical form. [Sanjiva Weerawarana, Founder of WSO2] has solved this problem by putting into the language syntax elements from which you can create diagrams. And he did it in such a way that you can edit the diagram and create code,” said Newcomer.
Engineering for the cloud requires a programming language that can help reengineer applications to achieve auto-scaling, resiliency, and independent agility, said Newcomer. WSO2 is continuing to push its work forward to tackle this challenge. “We're thinking Choreo is going to help us because it's leveraging the magic of Ballerina to help people get their job done faster. Once they see that, they'll see Ballerina and get the benefits of it,” Newcomer said.
VALENCIA – Open source code is part of at least 70% of enterprise stacks. Yet a lot of open source contributors are still unpaid volunteers. Even more than tech as a whole, the future of open source relies on the community. Unless you're among the top tier of funded open source projects, your sustainability relies on building a community – whether you want to or not – and cultivating project leadership to help recruit new maintainers – whether you want to hand over the reins or not.
That's where the Technical Advisory Group, or TAG, on Contributor Strategy comes in, acting as maintainer relations for the Cloud Native Computing Foundation. In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we talk to Dawn Foster, VMware's director of open source community strategy; Josh Berkus, Red Hat's Kubernetes community manager; Catherine Paganini, Buoyant's head of marketing and community; and Deepthi Sigireddi, a software engineer at PlanetScale. Foster and Berkus are the co-chairs of the Contributor Strategy TAG, while Paganini's company Buoyant created Linkerd and Sigireddi is a maintainer of Vitess, both CNCF graduated projects. Each brought their unique experience in both open source contribution and leadership to talk about the open source contributor experience, sustainability, governance, and guidance.
With 65% of KubeConEU attendees at a CNCF event for the first time, albeit still during a pandemic, the signal for the future of open source is uncertain. It could show a burst of interest from newcomers, or a dwindling interest in long-term contributions. CNCF executive director Priyanka Sharma even noted in her keynote that contributions to the foundation's biggest project, Kubernetes, have grown stagnant.
"I see it as a positive thing. I think it's always good to get some new blood into the community. And I think you know, the projects are working to do whatever they can to get new contributors," Foster said.
But it's not just about how many contributors but who. One thing that was glaringly apparent at the event was the lack of diversity, with the vast majority of the 7,000 KubeConEU participants being young, white men. This isn't surprising at all, as open source is still based on a lot of voluntary work which naturally excludes those most marginalized within the tech industry and society, which is why, according to GitHub's State of the Octoverse, it sees only about 4% women and nonbinary contributors, and only about 2% from the African continent.
If open source is such an integral part of tech's future, that future is built with more inequity than ever before.
"The barrier to entry to open source right now is having free time. And to do free work? Yes, and let's face it, women still do a lot of childcare, a lot of housework, much more than men do, and they have less free time." Sigireddi continued that there are other factors which discourage those widely underrepresented in tech from participating, including "not having role models, not seeing people who look like you, the communities tend to have in-jokes [and other] things that are cultural, which minorities may not be able to relate to." Most open source code, while usually forked globally, exists in English only.
One message throughout KubeConEU was that if a company relies on an open source project, it should pay some of its staff to contribute to and support that project, because its business may depend on it. This would, in turn, help bring open source diversity a bit closer to the tech industry's still abysmal overall statistics.
"I think from an ecosystem perspective, I think that companies paying people to do the work on open source makes a big difference," Foster said. "At VMware, we pay lots of people who work primarily on upstream open source projects. And I think that does help us get more diversity into the community, because then people can do it as part of their regular day jobs."
Encouraging contributors who are underrepresented in open source to speak up and be more representative of projects is another way to attract more diverse contributors. Berkus said the Contributor Strategy TAG had a meeting at KubeConEU with a group of primarily Italian women who have started an inclusiveness effort, beginning with things like speaker coaching and placement.
"It turns out that a lot of things that you need to do to have more diverse contributors are things you actually needed to do anyway, just to make things better for all new contributors," Berkus explained.
Indeed, welcoming new open source contributors – at all levels and in both technical and non-technical roles – is an important focus of the TAG. Paganini, along with colleague Jason Morgan, is co-author of the CNCF Landscape Guide, which acts as a welcome to the massive, overwhelming cloud native landscape. What she has found is that people will use the open source technology, but they will contribute to it because of the community.
"We see a lot of projects really focusing on code and docs, which of course is the basics, but people don't come for the technology per se. You can have the best technology, it's amazing, and people are super excited, but if the community isn't there, if they don't feel welcome," they won't stick around, Paganini said. "People want to be part of a tribe, right?"
Then, once you've successfully recruited and onboarded your community, you've got to work to not only retain but promote from within. All this and more is jam-packed into this lively discussion that cannot be missed!
VALENCIA, SPAIN — Managing the cloud virtual machines (VMs) your containers run on. Running data-intensive workloads. Scaling services in response to spikes in traffic, but doing so in a way that doesn’t jack up your organization’s cloud spend. Kubernetes (K8s) seems so easy at the beginning, but it brings challenges that ratchet up complexity as you go.
The cloud native ecosystem is filling up with tools aimed at making these challenges easier on developers, data scientists and Ops engineers. Increasingly, automation is the secret sauce helping teams and their companies work faster, safer and more productively.
In this special On the Road edition of The New Stack Makers podcast recorded at KubeCon + CloudNativeCon EU, we unpacked some of the ways automation helps simplify Kubernetes. We were joined by a trio of guests from Spot.io by NetApp: Jean-Yves “JY” Stephan, senior product manager for Ocean for Apache Spark, along with Gilad Shahar and Yarin Pinyan, product manager and product architect, respectively, for Spot.io.
Until recently, Stephan noted, Apache Spark, the open source, unified analytics engine for large-scale data processing, couldn’t be deployed on K8s. “So all these regular software engineers were getting the cool technology with Kubernetes, cloud native solutions,” he said. “And the big data engineers, they were stuck with technologies from 10 years ago.”
Spot.io, he said, lets Apache Spark run atop Kubernetes: “It’s a lot more developer friendly, it’s a lot more flexible and it can also be more cost effective.”
The company’s Ocean CD, expected to be generally available in August, is aimed at solving another Kubernetes problem, said Pinyan: canary deployments.
“Previously, if you were running normal VMs, without Kubernetes, it was pretty easy to do canary deployments, because you had to scale up a VM and then see if the new version worked fine on it, and then gradually scale the others,” he said. “In Kubernetes, it’s pretty complex, because you have to deal with many pods and deployments.”
In enterprises, where DevOps and SRE team members are likely serving multitudes of developers, automating as much toil as possible for devs is essential, said Shahar. For instance, Spot.io’s tools allow users to “break the configuration into parts,” he said, which can task developers with whatever percentage of responsibility for the config that is deemed best for their use case.
“We try to design our solutions in a way that will allow the DevOps [team] to set things once and basically provide pre-baked solutions for the developers,” he said. “Because the developer, at the end of the day, knows best what their application will require.”
Telecoms are not necessarily associated with adopting new-generation technologies. However, Deutsche Telekom has made considerable investments in cloud native environments, creating and supporting Kubernetes clusters to support its operations infrastructure.
In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, DevOps engineers Christopher Dziomba and Samy Nitsche of Deutsche Telekom discuss how one of Europe’s largest telecom providers made the shift to cloud native.
Deutsche Telekom obviously didn’t start from scratch. It had decades’ worth of telecom infrastructure and networks that all needed to be integrated into the new world of Kubernetes. This involved a lot of “discussion with the other teams,” Dziomba said. “We had to work together [with other departments] to see how we wanted to manage legacy integration and, especially, policy and process integration.”
As it turned out, many of the existing services Deutsche Telekom offered were conductive to integrating into the distributed Kubernetes infrastructure. “It was suited to be deployed on something like Kubernetes,” Dziomba said. “The decision was also made to build the Kubernetes platform by ourselves inside Deutsche Telekom and not to buy one. This really facilitated the move towards cloud native infrastructure.”
The shift also heavily involved the vendors that were “coming from the old route,” Nitsche said. “It's sometimes a challenge to make sure that the application is really also cloud native and to make sure it can use all the benefits Kubernetes offers.
OpenTelemetry is defined by its creators as a collection of tools, APIs and SDKs used to instrument, generate, collect and export telemetry data for observability. That data takes the form of metrics, logs and traces, and OpenTelemetry has emerged as a popular CNCF project. For this interview, we delve deeper into OpenTelemetry and its metrics support, which has just become generally available.
The specifications for the metrics protocol are designed to connect metrics to other signals and to provide a migration path from OpenCensus, enabling customers to move to OpenTelemetry while continuing to work with existing metrics-instrumentation protocols and standards, including, of course, Prometheus.
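As a concrete illustration of the now-GA metrics API, here is a minimal TypeScript sketch using the OpenTelemetry JavaScript API. It assumes an SDK with a metrics exporter (a Prometheus exporter, for example) has been registered elsewhere in the process; the meter and attribute names are our own examples.

import { metrics } from '@opentelemetry/api';

// Obtain a meter; assumes an OpenTelemetry SDK and metrics exporter
// (e.g. Prometheus) were registered elsewhere at process startup.
const meter = metrics.getMeter('checkout-service');

// A monotonically increasing counter for handled HTTP requests.
const requestCounter = meter.createCounter('http.server.requests', {
  description: 'Count of handled HTTP requests',
  unit: '1',
});

// Record one request; attributes let backends such as Prometheus
// slice the metric and connect it to other signals like traces.
requestCounter.add(1, { 'http.route': '/checkout', 'http.status_code': 200 });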
In this episode of The New Stack Makers podcast, recorded on the show floor of KubeCon + CloudNativeCon Europe 2022 in Valencia, Spain, Morgan McLean, director of product management at Splunk; Ted Young, director of developer education at LightStep; and Daniel Dyla, senior open source architect at Dynatrace, discussed how OpenTelemetry is evolving and the magic of observability in general for DevOps.
Nearly seven years after Google released Kubernetes, the open source container orchestrator, into an unsuspecting world, 5.6 million developers worldwide use it.
But that number, from the latest Cloud Native Computing Foundation (CNCF) annual survey, masks a lot of frustration. Kubernetes (K8s) can make life easier for the organization that adopts it — after it makes it a lot harder. And as it scales, it can create an unending cadence of triumph and challenge.
In other words: It’s complicated.
At KubeCon + CloudNativeCon EU in Valencia, Spain last week, a trio of experts — Saad Malik, chief technology officer and co-founder of Spectro Cloud; Bailey Hayes, principal software engineer at SingleStore; and Fabrizio Pandini, a staff engineer at VMware — joined Alex Williams, founder and publisher of The New Stack, and me for a livestream event.
The pandemic significantly accelerated the adoption of Kubernetes and cloud native environments as a way to accommodate the surge in remote workers and other infrastructure constraints. As the pandemic has worn on, however, organizations that already had cloud native infrastructure in place have retained their investments, having realized that cloud native is well worth the commitment. Meanwhile, Kubernetes adoption continues on an upward curve. And yet, needless to say, challenges remain. In this context, we look at the status of cloud native adoption, and of Kubernetes in particular, compared to a year ago.
In this episode of The New Stack Makers podcast, recorded on the floor of KubeCon + CloudNativeCon Europe 2022, we discussed these themes, along with the state of Kubernetes and the community, with James Laverack, staff solutions engineer at Jetstack and a member of the Kubernetes release team, and Christoph Blecker, site reliability engineer at Red Hat and a member of the Kubernetes steering committee.
Go was created at Google in 2007 to improve programming productivity in an era of multi-core networked machines and large codebases. Since then, engineering teams across Google, as well as across the industry, have adopted Go to build products and services at massive scale, including at the Cloud Native Computing Foundation, where over 75% of projects are written in the language.
In this episode of The New Stack Makers podcast, Steve Francia, head of product for the Go language at Google (and an alumnus of MongoDB and Docker and a Drupal board member), discusses the programming language, the new features in Go 1.18 and why Go continues on a path of accelerated adoption with developers. Darryl Taft, news editor of The New Stack, hosted this podcast.
In the State of Developer Ecosystem 2021 survey, Go ranked in the top five languages that developers planned to adopt, and it continues to be one of the fastest-growing languages. According to Francia, it was created to see whether a new systems programming language could be built that compiles quickly, with security as the top focus. With developers coming and going at Google, the simplicity and scalability of the language enabled many to contribute across several projects at any given time.
“The influence that separates Go from most languages is the experience of the creators behind it, who all came to build it with their collective experience,” Francia said. Today, “Go is influencing a lot of the mainstream languages. Elements of it can be found in a tool that formats everyone’s source code to be identical and more readable. Since then, a lot of languages have adopted that same practice,” said Francia. “And then there’s Rust. Go and Rust are on parallel tracks and we’re learning from each other. There’s also a new language called V that has recently been open sourced, which is the first major language inspired by Go,” Francia said.
The latest release of Go 1.18 was Google’s biggest yet. “It included four major features, each of which you could build a release around,” said Francia. In this release, “Generics is the biggest change of the Go language which has been in the works for 10 years,” Francia added. “Because we knew that generics have the potential to make a language more complicated, we spent a long time going through different proposals,” he said. Fuzzing, workspaces and performance were three other features released in this past version of Go.
“From improving our documentation and learning – which you can go to go.dev/learn/ to get the latest resources – we’re really focused on the broad view of the developer experience,” Francia said. “And in the future, we're seeing not our team so much as the community taking Go in new ways,” he added.
First released in 2016, the Svelte Web framework has steadily gained popularity as an alternative approach to building Web applications, one that prides itself on being more intuitive (and less verbose) than the current framework du jour, Facebook's React. You could say that it reaches back to the era before the web app — when desktop and server applications were compiled — to make the web app easier to develop and more enjoyable to use.
In this latest episode of The New Stack Makers podcast, we interview the creator of Svelte himself, Rich Harris. Harris started out not as a web developer, but as a journalist who created the framework to do immersive web journalism. So we were interested in that.
In addition to delving into history, we also discussed the current landscape of Web frameworks, the Web’s Document Object Model, the way React.js updates variables, the value of TypeScript and the importance of SvelteKit. We also chatted about why Vercel, where Harris now works maintaining Svelte, wants to make a home for the framework.
TNS Editor Joab Jackson hosted this conversation.
Below are a few excerpts from our conversation, edited for brevity and clarity.
So set the stage for us. What was the point that inspired you to create Svelte?
To fully tell the story, we need to go way back into the mists of time, back to when I started programming. My background is in journalism. And about a decade ago, I was working in a newsroom at a financial publication in London. I was very inspired by some of the interactive journalism that was being produced at places like the New York Times, but also the BBC and the Guardian and lots of other news organizations, where they were using Flash and increasingly JavaScript, to tell these data rich interactive stories that couldn't really be done any other way.
And to me, this felt like the future of journalism, it's something that was using the full power of the web platform as a storytelling medium in a way that just hadn't been done before. And I was very excited about all that, and I wanted a piece of it.
So I started learning JavaScript with the help of some friends, and discovered that it's really difficult. Particularly if you're doing things that have a lot of interactivity. If you're managing lots of state that can be updated in lots of different ways, you end up writing what is often referred to as spaghetti code.
And so I started building a toolkit, really, for myself. And this was a project called Ractive, short for interactive, something out of a Neal Stephenson book, in fact, and it actually got a little bit of traction. It was never huge, but you know, it was my first foray into open source, and it got used in a few different places.
And I maintained that for some years, and eventually, I left that company and joined the Guardian in the U.K. And we used Ractive to build interactive pieces of journalism there. I transferred to the U.S. to continue at the Guardian in New York, and we used Ractive quite heavily there as well. After a while, though, it became apparent that, you know, as with many frameworks of that era, it had certain flaws.
A lot of these frameworks were built for an era in which desktop computing was prevalent. And we were now firmly in this age of mobile-first web development. And these frameworks weren't really up to the task, primarily because they were just too big, too bulky and too slow.
And so in 2016, I started working on what was essentially a successor to that project. And we chose the name Svelte because it has all the right connotations: It's elegant, it's sophisticated. And the idea was to basically provide the same kind of development experience that people were used to, but change the way that translated into the experience end users have when they run it in the browser.
It did this by adopting techniques from the compiler world. The code that you write doesn't need to be the code that actually runs in the browser. Svelte was really one of the first frameworks to lean into the compiler paradigm. And as a result, we were able to do things with much less JavaScript, and in a way that was much more performant, which is very important if you're producing these kinds of interactive stories that typically involve a lot of data and a lot of animation.
Can you talk a bit about more about the compiler aspect? How does that work with a web application or web page?
So, you know, browsers run JavaScript. And nowadays, they can run WASM, too. But JavaScript is the language that you need to write stuff in if you want to have interactivity on a web page. That doesn't mean that you need to write JavaScript, though: If you can design a language that allows you to describe user interfaces in a more natural way, then the compiler can turn that intention into the code that actually runs. And so you get all the benefits of declarative programming, but without the drawbacks that historically have accompanied that.
There is this trade-off that historically existed: The developer wants to write this nice, state-driven declarative code, and the user doesn't want to have to wait for this bulky JavaScript framework to load over the wire, and then to do all of this extra work to translate your declarative intentions into what actually happens within the browser. And the compiler approach basically allows you to square that circle. It means that you get the best of both worlds: you're maximizing the developer experience without compromising on the user experience.
Stupid question: As a developer, if I'm writing JavaScript code, at least initially, how do I compile it?
So pretty much every web app has a build step. It is possible to write web applications that do not involve a build step, you can just write JavaScript, and you can write HTML, and you can import the JavaScript into the HTML and you've got a web app. But that approach, it really doesn't scale, much as some people will try and convince you otherwise.
At some point, you're going to have to have a build step so that you can use libraries that you've installed from NPM, so that you can use things like TypeScript to optimize your JavaScript. And so Svelte fits into your existing build step. If you have your components that are written in Svelte files, it's literally a .svelte extension, then during the build step, those components will get transformed into JavaScript files.
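For illustration (our example, not one from the conversation), a minimal component file might look like the sketch below; the script block is typed, which assumes the TypeScript preprocessor is enabled. The build step compiles this single file into a small JavaScript module that updates the DOM directly, with no framework runtime shipped to the browser.

<!-- Greeting.svelte: a hypothetical minimal component -->
<script lang="ts">
  export let name: string = 'world'; // a prop, declared as a plain variable
</script>

<h1>Hello {name}!</h1>

<style>
  h1 { color: rebeccapurple; } /* styles are scoped to this component */
</style>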
Svelte seemed to take off right around the time we heard complaints about Angular.js. Did the frustrations around Angular help the adoption of Svelte?
Svelte hasn't been a replacement for Angular because Angular is a full featured framework. It wants to own the entirety of your web application, whereas Svelte is really just a component framework.
So on the spectrum, you have things that are very focused on individual components like React and Vue.js and Svelte. And then at the other end of the spectrum, you have frameworks like Angular, and Ember. And historically, you had to do the work of taking your component framework and figuring out how to build the rest of the application unless you were using one of these full-featured frameworks.
Nowadays, that's less true, because we have things like Next.js and Remix. And we on the Svelte team are currently working on SvelteKit, which is the answer to that question of how do I actually build an app with this?
I would attribute the growth in popularity of Svelte to different forces. Essentially, what happened is it trundled along with a small but dedicated user base for a few years. And then in 2019, we released version three of the framework, which really rethought the authoring experience, the syntax that you use to write components and the APIs that are available.
Around that time, I gave a couple of conference talks around it. And that's when it really started to pick up steam. Now, of course, we're growing very rapidly. And we're consistently at the top of developer-happiness surveys. And so now a lot of people are aware of us, but we're still a very tiny framework compared to the big dogs like React and Vue.
You have said that part of the Svelte mission has been to make web development fun. What are some of Svelte's attributes that make it less aggravating for the developer?
The first thing is that you can write a lot less code. If you're using Svelte, then you can express the same concepts with typically about 40% less code. There's just a lot less ceremony, a lot less boilerplate.
We're not constrained by JavaScript. For example, to use state inside a component with React, you have to use hooks, and there's this slightly idiosyncratic way of declaring a local piece of state inside the component. With Svelte, you just declare a variable. And if you assign a new value to that variable, or if it's an object and you mutate that object, then the compiler interprets that as a sign that it needs to update the component.
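A tiny counter component makes the contrast concrete (again, our sketch rather than an example from the conversation): there is no hook and no setter function, just an assignment the compiler tracks.

<!-- Counter.svelte: assignment alone marks state as dirty -->
<script lang="ts">
  let count: number = 0;

  function increment(): void {
    count += 1; // a plain assignment; the compiler emits the DOM update
  }
</script>

<button on:click={increment}>
  Clicked {count} {count === 1 ? 'time' : 'times'}
</button>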
First released in 1995, the Java programming language has been a leading developer platform that has become a workhorse for hundreds of enterprise applications. With each new technology evolution, Java has successfully adapted to change. But even as a recent Java ecosystem study found that more than 70% of Java applications in production environments run inside a container, there continue to be hurdles the language must overcome to adapt to the cloud-native world.
In this episode of The New Stack Makers podcast, Simon Ritter, deputy CTO of Azul Systems, and Dalia Abo Sheasha, Java developer advocate at JetBrains, discuss some of the challenges the language is working to overcome and share some insight into the new features that developers are requesting. Darryl Taft, news editor of The New Stack, hosted this podcast.
The complexity of modern applications requires developers to master a growing array of skills, technologies, and concepts to develop in the cloud. And “what I've seen is that there is a gap in skills, and what it would take to get existing Java applications into the cloud,” said Abo Sheasha.
“What developers really want is to focus on the idea of developing the Java code,” said Ritter. “Having the ability to plug in to different cloud providers, but also the ability to integrate with things like your CI/CD tooling so that you've got continuous integration, continuous deployment built in,” he added.
Getting Java ready for the cloud is a “distributed responsibility across the people – from cloud providers to tooling providers,” said Ritter. “Everyone recognizes that the more folks we have on it, the more minds we have on it, the better outcome we're going to have for the developer’s language,” Abo Sheasha said.
Making developers more efficient and productive is coming into the fold with the introduction of JEPs, or JDK Enhancement Proposals, a lightweight approach to adding new features in the development of the Java platform itself. “But there are some bigger projects, like Project Amber, which is all about small changes to the language syntax of Java with the idea of making it more productive by taking some of the boilerplate code out,” Ritter said.
The journey to the next chapter of Java is multi-dimensional. While “most developers are focused on getting the job done, picking up skills for new things is a challenge because it takes time. Many still have the issue of using whichever Java version their company is stuck on,” said Ritter. “It's not because the developers don't want to do it; it’s that they need to convince management that it's worth investing in,” added Abo Sheasha.
Last week, Spain dropped its mandate for residents and visitors to wear masks to ward off further infections of the coronavirus. So, for this year's KubeCon + CloudNativeCon Europe conference, to be held May 16-20 in Valencia, Spain, the Cloud Native Computing Foundation dropped its own original mandate that attendees wear masks, a rule that had been in place for its other recent conferences.
This turned out to be the wrong decision, the CNCF admitted a week later. A lot of people who had already bought tickets were upset at this relaxing of the rules for the conference, which could put them in greater danger of contracting the disease.
So the CNCF put the mandate back in place, and offered refunds for those who felt Spain's own decision would put them in harm's way. CNCF will even send you a week's worth of N95 masks if you request them.
So, long story short: Bring a mask to KubeCon. And, as always, attendees must still show proof of vaccination, and temperature checks will be made as well.
Tricky business running a conference in this time, no?
In this latest episode of The New Stack Makers podcast, we take a look at what to expect from this year's KubeCon EU 2022. Our guests for this podcast are Priyanka Sharma, the executive director of CNCF, and Ricardo Rocha, who is a KubeCon co-chair and computer engineer at CERN. TNS Editor-in-chief Joab Jackson hosted this podcast.
We recorded this podcast prior to the discussion around masks, and at the time, Sharma said that the CNCF based the mask ruling on Spain's own country-wide mandates. "So we are being very cautious with the health requirements for the event," she said.
The conference team is also keeping an eye on Russia's aggressive moves in Ukraine, though it is unlikely that the chaos will reach all the way to Spain. Still, "this is why it's essential to always have the hybrid option ... [to] have the virtual elements sorted," Sharma said.
As the CNCF flagship conference, KubeCon brings together managers and users of a wide variety of cloud native technologies, including containerd, CoreDNS, Envoy, etcd, Fluentd, Harbor, Helm, Istio, Jaeger, Kubernetes, Linkerd, Open Policy Agent, Prometheus, Rook, Vitess, Argo, CRI-O, Crossplane, dapr, Dragonfly, Falco, Flagger, Flux, gRPC, KEDA, SPIFFE, SPIRE and Thanos, and many, many more. Most have been featured on TNS at one time or another.
In this podcast, we also discuss what to expect from the virtual sessions at the conference, what to do in Valencia and the current state of Kubernetes, and we get some unofficial picks from Sharma and Rocha as to what keynotes not to miss and what sessions to attend.
"The virtual option is great," Rocha said. "But I think the in-person conferences have have their own value. And there's a lot to be to be gained about meeting people directly and exchanging ideas and going to these events on the side of the conference as well."
Low code and no code are becoming increasingly popular in software development, particularly in enterprises looking to expand the number of people who can create applications for digital transformation efforts. While less than 25% of new apps developed in 2020 used low code or no code, Gartner predicts that by 2025, 70% will. Microsoft is one vendor that has been paving the way in this shift, reducing the burden on those in the lines of business and on developers in exchange for speed. But what are the potential and best practices for low-code/no-code software development?
In this episode of The New Stack Makers podcast, Charles Lamanna, corporate vice president of business apps and platform at Microsoft, discusses what the company is doing in the low-code/no-code space with its Power Platform offering, including bringing no-code/low-code professionals together to deliver applications.
Joab Jackson, Editor-in-Chief of The New Stack and Darryl Taft, News Editor of The New Stack hosted this podcast.
Developers are often faced with complexity when building and operating long-running processes that involve multiple service calls and require continuous coordination. To solve this challenge, Uber built Cadence, an open source solution for workflow orchestration introduced in 2016 that enables developers to directly express complex, long-running business logic as simple code. Since its debut, it has continued to find increased traction with developers operating large-scale, microservices-based architectures. More recently, Instaclustr announced support for a hosted version of Cadence.
In this episode of The New Stack Makers podcast, Ben Slater, Chief Product Officer at Instaclustr and Emrah Seker, Staff Software Engineer at Uber discuss Cadence, and how it is used by developers to solve various business problems by enabling them to focus on writing code for business logic, without worrying about the complexity of distributed systems.
Alex Williams, founder and publisher of The New Stack hosted this podcast, along with co-host Joab Jackson, Editor-in-Chief of The New Stack.
As the tech stack grows, the list of technologies that must be configured in cloud computing environments has grown exponentially, increasing the complexity of IT infrastructure. While every layer of the stack comes with its own implementation of encrypted connectivity, client authentication, authorization and audit, the challenge for developers and DevOps teams of properly setting up secure access to hardware and software throughout the organization will continue to grow, making IT environments increasingly vulnerable.
In this episode of The New Stack Makers podcast, Ben Arent, developer relations manager at Teleport, discusses how to address the hardware, software and peopleware complexity that comes with the cloud by using tools like Teleport 9.0 and the company's first release of Teleport Machine ID.
From cloud security providers to open source, trust has become a staple on which an organization's security is built. But with the rise of cloud-native technologies, the new ways of building applications are challenging the traditional approaches to security. The changing cloud-native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. So how should DevOps and InfoSec teams across commercial businesses and governments rethink their security approach?
In this episode of The New Stack Makers podcast, Tom Bossert, president of Trinity Cyber (and former Homeland Security Advisor to two Presidents); Patrick Hylant, client executive at VMware; and Chenxi Wang, managing general partner at Rain Capital, discuss how businesses and the U.S. government can adapt to the evolving threat landscape, including new initiatives and lessons that can be applied in this high-risk environment.
Alex Williams, founder and publisher of The New Stack, hosted this podcast. Jim Douglas, CEO of Armory, also joined as co-host of this livestream event.
"Many Ukrainians continue working. A very good opportunity is to continue working with them, to buy Ukrainian software products, to engage with people who are working [via] UpWork. Help Ukrainians by giving them the ability to work, to do some paid work," whether still in the country or as refugees abroad. If you take something from this conversation, Anastasiia Voitova's words may be the ones that should stick. After all, Ukraine has a renowned IT workforce, with IT outsourcing among its most important exports.
Voitova, the head of customer solutions and security software engineer at Cossack Labs, just grabbed her laptop and some essentials when she suddenly fled to the mountains last month to "a small village that doesn't even have a name." She doesn't have much with her, but she has more work to do than ever — to meet her clients' increasing demand for cybersecurity defenses and to support the Ukrainian defense effort. Earlier this month, her Ukraine-based team even released a new open source cryptographic framework for data protection, on time, amidst the war.
Voitova was joined in this episode of The New Stack Makers by Oleksii Holub, open source developer, software consultant and GitHub Star, and Denys Dovhan, front-end engineer at Wix. All three of them are globally known open source community contributors and maintainers. And all three had to suddenly relocate from Kyiv this February. This conversation is a reflection into the lives of these three open source community leaders during the first three weeks of the Russian invasion.
This conversation aims to help answer what the open source community and the tech community as a whole can do to support our Ukrainian colleagues and friends. Because open source is a community first and foremost.
"Open source for me is a very big part of my life. Idon't try to like gain anything out of it, I just code things. If I had a problem, I solve it, and I think to myself, why not share it with other people," Holub said.
He sees open source as an opportunity for influence in this war, but he is also acutely aware that his unpaid labor could be used to support the aggression against his country. That's why he added terms of use to his open source projects stating that use of his code implicitly means you condemn the Russian invasion. This may be controversial in the strict open source licensing world, but the semantics of OSS seem less important to Holub right now.
Of course, when talking about open source, the world's largest code repository, GitHub, comes up. Whether GitHub should block Russia is an ongoing OSS debate. On the one hand, many are concerned about further cutting off Russia — which has already restricted access to Facebook, Instagram and Twitter — from external news and facts about the ongoing conflict. On the other hand, with the widespread adoption of OSS in Russia, it's reasonable to assume swaths of open source code are directly supporting the invasion, or at least supporting the Russian government through income, taxes and some of the Kremlin's technical stack.
For Dovhan, there's a middle ground. His employer, website builder Wix, has blocked all payments in Russia, but has maintained its freemium offering there. "There is no possibility to pay for your premium website. But you still can make a free one, and that's a possibility for Russians to express themselves, and this is a space for free speech, which is limited in Russia." He proposes that GitHub similarly allows the creation of public repos in Russia, but that it blocks payments and private repos there.
Dovhan continued: "I believe [the] open source community is deeply connected, and blocking access for Russian developers might cause serious issues in infrastructure. A lot of projects are actually made by Russian developers, for example, PostCSS, Nginx and PostHTML."
These conversations will continue as this war changes the landscape of the tech world as we know it. One thing is for sure, Voitova, Dovhan and Holub have joined the hundreds of thousands of Ukrainian software developers in making a routine of work-war balance, doing everything they can, every waking hour of the day.
From cloud security providers to open source, trust has become the foundation from which an organization's security is built. But the rise of cloud native technologies such as containers and infrastructure as code (IaC) has ushered in new ways of building applications, and new requirements, that are challenging the traditional approaches to security. The changing nature of the cloud native landscape requires broader security coverage across the technology stack and more contextual awareness of the environment. But how should teams like InfoSec and DevOps rethink their approach to security?
In this episode of The New Stack Makers podcast, Guy Eisenkot, co-founder and vice president of product at Bridgecrew; Barak Schoster Goihman, senior director and chief architect at Palo Alto Networks; and Ashish Rajan, head of security and compliance at PageUp and producer and host of the Cloud Security Podcast, preview what's to come at Palo Alto Networks' Code to Cloud Summit on March 23-24, 2022, including the role of security and trust as it relates to DevOps, cloud service providers, the software supply chain, SBOMs (software bills of materials) and IBOMs (infrastructure bills of materials).
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Kubernetes is great for large-scale systems, but its complexity and lack of transparency have driven higher cloud costs, delays in deployment and developer frustration. As Kubernetes has taken off and workloads continue to move to containerized environments, optimizing resources is becoming increasingly important. In fact, the recent 2021 Cloud Native Survey revealed that Kubernetes has already crossed the chasm to mainstream, with 96 percent of organizations using or evaluating the technology.
In this episode of The New Stack Makers podcast, Matt Provo, founder and CEO of StormForge, discusses new ways to think about Kubernetes, including resource optimization achieved by empowering developers through automation. He also shared the company's latest machine learning-powered, multidimensional optimization solution, Optimize Live.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
While Java continues to be the most widely used programming language in the enterprise, how is it faring in the emerging cloud native ecosystem? Quite well, observed a panel of Oracle engineers who work on the language. In fact, they estimate that there are more than 50 million Java virtual machines running concurrently in the cloud at present.
In this latest edition of The New Stack Makers podcast, we discussed the current state of Java with Georges Saab, Oracle's vice president of software development for the Java Platform Group; Donald Smith, Oracle senior director of product management; and Sharat Chander, Oracle senior director of product management. TNS editors Darryl Taft and Joab Jackson hosted the conversation.
Two decades ago, security was an afterthought; it was often "bolted on" to existing applications, leaving businesses with a reactive approach to threat visibility and enforcement. But with the proliferation of cloud native applications and businesses employing a work-from-anywhere model, the traditional approach to security is being reimagined to play an integral role from development through operations. By identifying, assessing, prioritizing and adapting to risk across their applications, organizations are moving to a full view of their risk posture, employing security across the entire lifecycle.
In this episode of The New Stack Makers podcast, Ratan Tipirneni, president and CEO of Tigera, discusses how organizations can take an active approach to security by bringing in zero-trust principles to reduce the application's attack surface, harnessing machine learning to combat runtime security risks and enabling continuous compliance, while mitigating risks from vulnerabilities and attacks through security policy changes.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Cloud-native applications provide an advantage in terms of their scalability and velocity. Yet, despite their resiliency, the complexity of these systems has grown as the number of application components continues to increase. Understanding how these components fit together has stretched beyond what can be easily digested, further challenging organizations' ability to prepare for technical issues that may arise from these systems' complexity.
Last month, ChaosNative hosted its second annual engineering event, Chaos Carnival, where we discussed the principles of chaos engineering and using them to optimize cloud applications in today's complex IT systems.
In this episode of The New Stack Makers podcast, which features that discussion, Alex Williams, founder and publisher of The New Stack, served as the moderator, with the help of Joab Jackson, editor-in-chief of The New Stack.
If there is a secret to the success of TypeScript, it is in the type checking, ensuring that the data flowing through the program is of the correct kind of data. Type checking cuts down on errors, sets the stage for better tooling, and allows developers to map their programs at a higher level. And TypeScript itself, a statically-typed superset of JavaScript, ensures that an army of JavaScript programmers can easily enjoy these advanced programming benefits with a minimal learning curve.
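A small, self-contained example (ours, for illustration) shows the kind of mistake the type checker catches before the code ever runs:

interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

greet({ id: 1, name: 'Ada' }); // OK: the shape matches User

// Both of the following fail at compile time, not in production:
// greet({ id: '1', name: 'Ada' }); // error: string is not assignable to number
// greet({ id: 2, nmae: 'Ada' });   // error: 'nmae' does not exist in type User

// The same type information powers editor tooling: autocomplete,
// safe renames and inline documentation.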
In this latest edition of The New Stack Makers podcast, we spoke with a few of TypeScript's designers and maintainers to learn a bit more about the design of the language: Ryan Cavanaugh, a principal software engineering manager at Microsoft; Luke Hoban, chief technology officer at Pulumi and one of the original creators of TypeScript; and Daniel Rosenwasser, senior program manager at Microsoft. TNS editors Darryl Taft and Joab Jackson hosted the discussion.
While Kubernetes brings a great deal of flexibility to application management, the Cloud Foundry platform-as-a-service (PaaS) software offers the best level of standardization, observed Julian Fischer, CEO of cloud native services provider anynines.
We chatted with Fischer for this latest episode of The New Stack Makers podcast, to learn about the company's experience in managing large-scale deployments of both Kubernetes and Cloud Foundry.
"A lot of the conversation today is about Kubernetes. But the Cloud Foundry ecosystem has been very strong," especially for enterprises, noted Fischer.
Want an easy way to get started in Web3? Download a desktop copy of IPFS (Interplanetary File System) and install it on your computer, advises Dietrich Ayala, IPFS Ecosystem Growth Engineer, Protocol Labs, in our most recent edition of The New Stack Makers podcast.
We've been hearing a lot of hype about Web3 and its promise of decentralization — how it will bring the power of the web back to the people, through the use of a blockchain. So what's up with that? How do you build a Web3 stack? What can you build with a Web3 stack? How far along is the community with tooling and ease of use?
This virtual panel podcast sets out to answer all these questions.
In addition to speaking with Ayala, we spoke with Rowland Graus, head of product for Agoric, and Marko Baricevic, software engineer for The Interchain Foundation, which manages Cosmos Network, an open source technology to help blockchains interoperate. Each participant describes the role their respective technologies play in the Web3 ecosystem. These technologies are often used together, so they represent an emerging blockchain stack of sorts.
TNS Editor-in-Chief Joab Jackson hosted the discussion.
Kubernetes, containers, and cloud-native technologies offer organizations the benefits of portability, flexibility and increased developer productivity but the security risks associated with adopting them continue to be a top concern for companies. In the recent State of Kubernetes Security report, 94% of respondents experienced at least one security incident in their Kubernetes environment in the last 12 months.
In this episode of The New Stack Makers podcast, Avi Shua, CEO and co-founder of Orca Security, talks about how organizations can enhance the security of their cloud environments by taking a snapshot of Kubernetes clusters and analyzing it, without the need for an agent, and acting on critical risks such as vulnerabilities, malware and misconfigurations.
As machine learning models proliferate and become more sophisticated, deploying them to the cloud becomes increasingly expensive. The challenge of optimizing a model also affects its scale, and requires the flexibility to move models to different hardware, such as graphics processing units (GPUs) or central processing units (CPUs), to gain more advantage. The ability to accelerate the deployment of machine learning models to the cloud or edge at scale is shifting the way organizations build next-generation AI models and applications. And being able to optimize these models quickly, to save costs and sustain them over time, is moving to the forefront for many developers.
In this episode of The New Stack Makers podcast, recorded at AWS re:Invent, Luis Ceze, co-founder and CEO of OctoML, talks about how to optimize and deploy machine learning models on any hardware, cloud or edge device.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
The most attractive characteristic of open source projects is the potential to tap into the total addressable market of collaborators. But when attracting users to your project and building a community around it requires standing out from millions of other projects, how do you build a plan to monetize it?
In this podcast, Emily Omier, a positioning consultant who works with startups to stake out the right position in the cloud native / Kubernetes ecosystem, discusses how to grow your project by finding the right market category for your open-source startup.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
There's no doubt that the cognitive load developers face is ever increasing. Microservices and open source have aggravated the situation, making it nearly impossible for one developer to get up to speed with any codebase. This makes onboarding extra challenging and contributes to the roughly two-thirds of tech workers experiencing burnout. CodeSee looks to help developers get up to speed faster by visualizing a codebase in just a few clicks.
Shanea Leven, CEO and founder of CodeSee, sat down with TNS writer Jennifer Riggins on this episode of The New Stack Makers podcast to discuss workload complexity, work-life balance, and best practices for hiring and retention within the DevOps community.
Artificial intelligence (AI) and machine learning (ML) have seen a surge in adoption and advances for IT applications, especially for database management, CI/CD support and other functionalities. Robotics, meanwhile, is largely relegated to factory-floor automation. In this The New Stack Makers podcast, Pieter Abbeel, co-founder, president and chief scientist at covariant.ai, a supplier of "universal AI" for robotics, discusses why and how robotics can evolve beyond pre-programmed devices thanks to advances in IT. Abbeel also draws on his background as a professor at the University of California, Berkeley, and as host of The Robot Brains Podcast to offer his perspective.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Continuous integration and delivery (CI/CD) has seen some radical changes during the past few years, especially for continuous delivery. Not so long ago, application development and delivery were exclusively for monolithic stacks; delivering software for microservices and container environments is a very different animal.
In this The New Stack Makers podcast, recorded at KubeCon + CloudNativeCon in October, guest Rob Zuber, chief technology officer at CircleCI, discusses the evolution of CI/CD from the perspective of CircleCI's more than a decade of experience.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Improving the cadence of application delivery and updates, and maintaining application availability over internet infrastructure, remain quintessential challenges for organizations delivering distributed digital experiences. Especially palpable among DevOps teams are the challenges associated with optimizing application delivery and security infrastructure in today's increasingly cloud-centric world.
In this The New Stack Makers podcast, Pankaj Gupta, senior director of product marketing at Citrix, discusses why a radical change for application delivery is in order.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
There is much discussion about boosting application release cadences, but the fact is that most organizations have not figured out how to deploy applications more quickly. According to data from analyst firm Gartner, 90% of DevOps initiatives will fail to fully meet expectations through 2023.
In this breakfast episode of The New Stack Makers podcast, streamed live during LaunchDarkly's annual Trajectory user conference, we discussed today's DevOps struggles and challenges. Potential solutions were also covered, such as how DevOps teams are turning to self-service developer platforms to meet their cloud-deployment goals.
Cody De Arkland, principal technical marketing engineer, LaunchDarkly; Rachel Stephens, senior analyst for analyst firm RedMonk; Steve George, chief operations officer for GitOps solutions provider and Flux creator Weaveworks; and Margaret Francis, president and chief operating officer for Armory, all participated in this discussion.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
The number of Cloud Native Computing Foundation (CNCF) projects has exploded since Kubernetes came onboard, setting the stage for hundreds of tools and platforms that have achieved the various CNCF project maturity milestones of Sandbox, Incubated or Graduated.
The profound influence the adoption of these projects has had on cloud native notwithstanding, it can be easy to overlook the monumental effort contributors put into every project. In this The New Stack Makers podcast, we look at two CNCF projects that have gone from sandbox to incubation: Crossplane, a Kubernetes add-on for infrastructure assembly, and OpenTelemetry, which supports a collection of tools, APIs and SDKs for observability.
The podcast featured guests involved with the projects: Dan Mangum, senior software engineer at cloud platform provider Upbound (Crossplane); Constance Caramanolis, principal software engineer at data platform provider Splunk and a member of the OpenTelemetry Governance Committee; and Ted Young, director of developer education at observability platform provider Lightstep, an OpenTelemetry co-founder and also a member of the OpenTelemetry Governance Committee.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Cloud native is really only as good as the support and input the community provides. It is in this spirit that the Cloud Native Computing Foundation (CNCF) continues to invest heavily in the community to support new and existing projects, including Kubernetes, Prometheus and Envoy, which are among the cornerstones of cloud native today.
During this latest episode of The New Stack Makers podcast, held live at KubeCon + CloudNativeCon last month, CNCF Marketing Manager Bill Mulligan and CNCF Developer Advocate Ihor Dvoretskyi spoke of the CNCF's Cloud Native Credits and Kubernetes Community Day program, as well as why these and other initiatives are vital to building cloud native tools and infrastructure of today and in the future.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Kubernetes played a key role in maintaining Pokemon Go, Niantic's wildly popular augmented-reality game. Kubernetes, and the efficiencies it offers DevOps teams, continues to play a role at Niantic as the company opens the game's architecture to third-party developers.
In this latest episode of The New Stack Makers podcast, Ria Bhatia, senior product manager of Niantic, discusses why the Pokemon Go platform remains relevant and why Kubernetes will remain an integral part of the platform as the company hopes to bring in more “developer customers.”
Google's open source program certainly has come a long way since 2003. That was when the search engine giant could still arguably be called a startup, Android had not yet been acquired and the open source projects Kubernetes, Go and Chromium were still years away.
It was also then that Google co-founders Larry Page and Sergey Brin asked their favorite recruiter to go and find an “open source person,” recounted Chris DiBona, the company’s director for open source. Already an open source pioneer before joining Google, DiBona continues to oversee the tech giant’s open source program, which continues to have major implications for the IT industry and the open source community.
In this New Stack Makers podcast, DiBona discusses Google’s open source policy, as well as the search engine giant’s plans for its open source future. Alex Williams, founder and publisher of The New Stack, hosted this podcast.
The number of open source components inside services and applications continues to increase exponentially, and this adoption is creating a lot of change in how software is created, deployed and managed. In 2016, applications on average had 86 open source software components. Today, the average number of components is 528, according to the 2021 Open Source Security and Risk Analysis (OSSRA) report.
In this latest edition of The New Stack Makers podcast, we discuss the implications of the explosion of open source’s adoption and its effect on data center operations.
The guests were Mark Hinkle, co-founder and CEO of TriggerMesh; Shaun O'Meara, field CTO of Mirantis; Jeremy Tanner, developer relations at Equinix; and Sophia Vargas, research analyst in the open source programs office at Google.
TNS’ Founder and Publisher Alex Williams and TNS Editor Joab Jackson hosted this podcast.
In March, Daniel Prizmant, senior security researcher for Palo Alto Networks, uncovered malware targeting Windows containers, calling the exploit "Siloscape." In a blog post, he wrote that the emergence of such an attack was "not surprising given the massive surge in cloud adoption over the past few years."
In this edition of The New Stack Makers podcast, Prizmant described what makes Siloscape a threat to Kubernetes clusters, both for Linux and Windows containers.
The New Stack’s publisher and founder, Alex Williams, hosted this episode.
Since its creation almost six years ago and 120 projects later, the Cloud Native Computing Foundation (CNCF) has played a key role in the ongoing adoption of Kubernetes and associated tools and platforms for organizations making the shift to cloud native environments. In this The New Stack Makers podcast, Chris Aniszczyk, CTO, CNCF, discusses with The New Stack’s publisher and founder, Alex Williams, what’s hot in cloud native land and offers a glimpse of what is emerging.
How Kubernetes environments might be able to offer hooks for storage, databases and other sources of persistent data still is a question in the minds of many potential users. To that end, a new consortium called the Data on Kubernetes Community (DoKC) was formed to help organizations find the best ways of working with stateful data on Kubernetes.
In this latest episode of The New Stack Makers podcast, two members of the group discuss the challenges associated with running stateful workloads on Kubernetes and how DoKC can help.
Participants in this conversation were Melissa Logan, principal of Constantia.io, an open source and enterprise tech marketing firm, and director of DoKC; Patrick McFadin, vice president of developer relations and chief evangelist for the Apache Cassandra NoSQL database platform from DataStax; and Evan Powell, advisor, investor and board member at MayaData, a Kubernetes-environment storage-solution provider.
TNS Editor Joab Jackson hosted the podcast.
Five former Googlers recently started Chainguard, a newly minted supply chain security company focusing on Zero Trust principles. Their mission is to help support DevOps teams with their monumental struggles of securing application code across the development, deployment and management cycle.
“Supply chain security by default is our mission and making it really easy for developers to do the right thing,” Kim Lewandowski, founder and product, for Chainguard, said during a The New Stack Makers podcast recorded live at KubeCon + CloudNativeCon in October.
Alex Williams, founder and publisher of TNS, hosted the podcast.
Security-as-code is the practice of "building security into DevOps tools and workflows by mapping out how changes to code and infrastructure are made and finding places to add security checks, tests, and gates without introducing unnecessary costs or delays," according to tech publisher O'Reilly. In this latest "pancakes and podcast" special episode — recorded over a pancake breakfast at KubeCon + CloudNativeCon in October — we discuss how security-as-code can benefit emerging GitOps practices.
The guests were Sean O'Dell, director of developer advocacy, Accurics; Sara Joshi, who was an associate software engineer for Accurics when this recording was made; Parminder Singh, chief information security officer (CISO) for hybrid-cloud digital-transformation services provider DigitalOnUs; Brendan O'Leary, staff developer evangelist, GitLab; Cindy Blake, senior security evangelist, GitLab; and Emily Omier, contributor, The New Stack, and owner of marketing consulting provider Emily Omier Consulting.
Alex Williams, founder and publisher of TNS, hosted the podcast.
It takes more than just years of experience to become a senior software engineer — among the prerequisites are a good marketing sense, interviewing skills and other personal qualities.
In this The New Stack Makers podcast, guests Swizec Teller, a senior software engineer at Tia, a healthcare company, and an author, and Shawn Wang, head of developer experience for microservices orchestration platform provider Temporal.io, describe the mindset and other attributes required to become a senior engineer.
Darryl Taft, TNS news editor, hosted the podcast.
Software deployments increasingly involve highly distributed and decentralized application development processes, with deployments across any combination of data centers, public cloud and the edge. All the while, reliability, security and performance cannot be compromised.
In this The New Stack Makers podcast, a panel of technology executives discussed the best ways to speed up business innovation in today’s multicloud and multi-infrastructure world. They also discussed how to deliver apps and services faster to improve the customer experience — over a pancake breakfast during VMworld, VMware’s annual user’s conference.
The guests were Dormain Drewitz, senior director of product marketing for VMware Tanzu; Mandy Storbakken, cloud technologist for VMware; Shawn Bass, CTO for VMware's end-user computing business; and Jo Peterson, vice president of cloud and security services at Clarify360.
Alex Williams, founder and publisher of TNS, and Joab Jackson, TNS editor-in-chief, hosted the podcast.
Sometimes, multicloud just happens. Some organizations might have, for example, applications running on Amazon Web Services in one department, while another comes to rely on Google Cloud or other cloud provider services.
How do you make them work under one unified architecture? The difficulties of multicloud management are the main topic of this latest episode of The New Stack Makers podcast, in which we interview Chris Psaltis, CEO and co-founder of multicloud management platform provider Mist.io. We discussed the inherent difficulties of, and possible solutions for, running operations across multiple cloud services, as well as how Mist.io can help. TNS Editor Joab Jackson was the host.
Many organizations need better and tighter infrastructure policy for their distributed systems. This need has been underscored by an increasing number of misconfigurations, especially in distributed microservices and Kubernetes environments.
How policy as code extends infrastructure as code was discussed in this latest episode of The New Stack Makers podcast, another one of our “pancakes and podcast” special episodes. The guests were Deepak Giridharagopal, chief technology officer of Puppet; Tiffany Jachja, data engineering manager for Vox Media; James Turnbull, vice president of engineering of the internationally known luxury and art auctioneer Sotheby’s; and Shea Stewart, a self-professed DevOps tech nerd. Alex Williams, founder and publisher of TNS, hosted the podcast.
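To give a flavor of what policy as code looks like in practice, here is an illustrative TypeScript sketch (our example, not one from the episode) of a check that a CI pipeline could run against infrastructure-as-code resources before anything is provisioned. Real policy engines such as Open Policy Agent express rules in a dedicated policy language, but the idea is the same; the SecurityGroupRule shape and checkIngressRules function are hypothetical.

interface SecurityGroupRule {
  port: number;
  cidr: string; // source address range allowed to connect
}

interface Violation {
  rule: SecurityGroupRule;
  reason: string;
}

// Policy: SSH (port 22) must never be open to the whole internet.
function checkIngressRules(rules: SecurityGroupRule[]): Violation[] {
  return rules
    .filter((r) => r.port === 22 && r.cidr === '0.0.0.0/0')
    .map((r) => ({ rule: r, reason: 'SSH must not be open to 0.0.0.0/0' }));
}

// In CI, fail the pipeline on any violation so the change never ships.
const violations = checkIngressRules([{ port: 22, cidr: '0.0.0.0/0' }]);
if (violations.length > 0) {
  console.error(violations);
  process.exit(1); // assumes a Node.js runtime
}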
As the internet fills every nook and cranny of our lives, it runs into greater complexity for developers, operations engineers, and the organizations that employ them. How do you reduce latency? How do you comply with the regulations of each region or country where you have a virtual presence? How do you keep data near where it’s actually used?
For a growing number of organizations, the answer is to use the edge.
In this episode of Makers, the New Stack podcast, Ron Lev, general manager of Cox Edge, and Sheraline Barthelmy, head of product, marketing and customer success for Cox Edge, were joined by Chetan Venkatesh, founder and CEO of Macrometa. The trio discussed the best use cases for edge computing, the advantages it can bring, and the challenges that remain.
The podcast was hosted by Heather Joslyn, features editor of The New Stack.
Cloud native systems are, by definition, distributed — but to run databases securely and effectively on them, what's needed is not only purpose-fit technology, but a change of mindset, according to this podcast episode's guests.
In this episode of Makers, the New Stack podcast, Jim Walker, principal product evangelist, and Michelle Gienow, senior technical content manager (and a former New Stack reporter), both of Cockroach Labs, discussed how distributed systems create new challenges for databases, the paradigm shift that's needed to run databases effectively on Kubernetes, and the results of a new survey of Kubernetes users.
The podcast was hosted by Heather Joslyn, features editor of The New Stack.
It’s that time of the year again, when we gather to discuss all matters related to Kubernetes and the other assorted tooling necessary to make cloud native computing happen.
KubeCon+CloudNativeCon will be held in Los Angeles next month, October 11-15.
A key difference at this year's event — the first onsite event from the Cloud Native Computing Foundation since the beginning of the pandemic — is that the flagship cloud native conference will offer a much more significant virtual experience for those unable to travel to the venue in L.A.
The virtual aspect of this year's KubeCon+CloudNativeCon "is expected to continue indefinitely," Priyanka Sharma, general manager of the CNCF, said in this edition of The New Stack Makers podcast. Sharma was joined by conference co-chair Jasmine James, who is the Twitter developer experience lead and manager for engineering effectiveness. They discussed this year's schedule and agenda, how it will all compare to KubeCon+CloudNativeCon of years past and general cloud native trends. TNS Editor-in-Chief Joab Jackson hosted this episode of The New Stack Makers.
Database giant Oracle added a container native CI/CD platform to its cloud portfolio when it purchased Wercker in 2017. Since the acquisition, Wercker founder Micha Hernandez van Leuffen has gone on to start Fiberplane, where he is CEO. In this latest episode of The New Stack Makers podcast, van Leuffen discusses the development of Wercker and how that work has parlayed into Fiberplane, which offers collaborative notebooks for resolving incidents. Alana Anderson, founder and managing partner of Base Case Capital, offered input from a venture capital perspective as well.
Alex Williams, founder and publisher, and Joab Jackson, editor-in-chief, both of The New Stack, hosted the podcast.
An organization with any ambition to scale application deployments across cloud native environments is not going to get very far without automation.
From supporting CI/CD to increasing application deployment speed — often across different environments — to maintaining compliance and security, manually managing these processes simply stops being humanly possible for operations teams after a certain point.
In this latest episode of The New Stack Makers podcast, Abby Kearns, Chief Technology Officer and head of R&D, and Chip Childers, Puppet Chief Architect, discussed what automation for infrastructure management for cloud native deployments means for Puppet and for the IT industry. Alex Williams, founder and publisher of TNS, hosted this interview.
At last count, social media giant Twitter enjoys around 353 million active users, and streaming music service Spotify has 356 million active listeners. In both cases, open source tools and platforms for cloud native environments have served as the cornerstones for their tremendous growth.
In this latest episode of The New Stack Makers podcast, Spotify Senior Staff Engineer Dave Zolotusky and Twitter Developer Experience Lead and Manager for Engineering Effectiveness Jasmine James discussed the role of open source software in their respective organizations. Katie Gamanji, ecosystem manager of the Cloud Native Computing Foundation, and Alex Williams, founder and publisher of TNS, co-hosted this interview.
There is much discussion about technology and tool gaps when organizations make the shift to cloud environments. However, a major — and often less-discussed — challenge is how to ensure that the DevOps team has the necessary skillsets to see the project through. Making sure that the right in-house talent and DevSecOps culture is in place to make the shift without exposing the organization's data and applications to security risks is especially critical.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of TNS, guest Ashley Ward, technical director in the office of the CTO at Palo Alto Networks, discussed the DevSecOps skill set challenges associated with cloud deployments.
It's said we can all stand to make improvements when it comes to empathy. In software engineering, empathy is required to create something the end user can easily figure out; it's unacceptable to build something you think is great and then expect customers to figure it out on their own, just because you think they should. Search engine giant, cloud services leader and Kubernetes creator Google realizes this.
In this latest episode of The New Stack Makers podcast, The New Stack Founder and Publisher Alex Williams and TNS News Editor Darryl Taft sit down with Google’s Kim Bannerman, program manager for Empathetic Engineering, and Kelsey Hightower, principal developer advocate, Google Cloud Platform (GCP), to discuss Google's Customer Empathy Program and end-user satisfaction.
The definition of “low-code, no-code” remains a subject of debate. For some, it is the ability of a so-called “citizen developer” — someone who lacks the training and skills to develop software — to rely on a platform to deploy code with the same level of competence as a professional software engineer. Others describe low-code, no-code as a way to rely on a platform that facilitates software development — while automating many of the tasks in a build — both to simplify the process for inexperienced developers and to save time and resources for experienced ones. In either case, in this increasingly crowded space, low-code, no-code makes the coding and software development process simpler and more automated.
In the case of low-code, no-code platform provider gopaddle, the idea is to “unleash the power of a no-code platform for modern applications.” How low-code, no-code can be applied to Go-centric applications running in cloud native environments was the main subject of this The New Stack Makers podcast, with Vinothini Raju, founder and CEO of gopaddle, as the guest. The New Stack founder and publisher Alex Williams and TNS news editor Darryl Taft hosted the conversation.
As continuous integration and delivery provider CloudBees prepares for its annual DevOps World conference, the company also is gearing up for a new phase of growth, with a greater focus on security, AI and making DevOps easier.
DevOps World will run September 28-30. Last year, the event drew around 30,000 virtual attendees. This year the event is again virtual and is also free. With a tagline of “building the future of software delivery together,” the focus of DevOps World will be to reach out to the entire DevOps ecosystem to share knowledge on the tools, techniques and best practices currently in use and those anticipated for the future.
In this latest episode of The New Stack Makers podcast we interview Sacha Labourey, co-founder and chief strategy officer of CloudBees, about both DevOps World and the future of the company. TNS Publisher Alex Williams hosted this episode, with the help of TNS News Editor Darryl K. Taft.
Both APIs and microservices play a key role in cloud native environments. Microservices serve as the cornerstone of distributed and shared computing resources, while APIs serve as a very efficient way to streamline many operations and development tasks for DevOps teams.
However, both microservices and APIs carry with them their own security risks. All it takes is for one compromised Kubernetes node to allow for an intruder to gain root access through an API to an organization’s entire container infrastructure across multiple clusters (a worst-case scenario).
In this episode of The New Stack Makers podcast, we look at how to both secure microservices with APIs and how to rely on APIs to delegate certain security tasks to a trusted third party. Our guest is Viktor Gamov, principal developer advocate for Kong, an API-connectivity company. The episode is hosted by Alex Williams, TNS founder and publisher, and Bharat Bhat, marketing lead, developer relations, Okta.
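To make the idea of delegating security to a trusted third party concrete, here is a minimal sketch of how a service might validate API bearer tokens issued by an external identity provider before honoring a request. It assumes the PyJWT library, and the issuer URL and audience are hypothetical placeholders; it illustrates the general pattern, not Kong's or Okta's actual implementation.

```python
# Minimal sketch: validating a bearer token issued by a third-party
# identity provider before serving an API request.
# Assumes: `pip install pyjwt[crypto]`. The issuer URL and audience
# below are hypothetical placeholders.
import jwt
from jwt import PyJWKClient

JWKS_URL = "https://issuer.example.com/.well-known/jwks.json"  # hypothetical
AUDIENCE = "orders-api"  # hypothetical API identifier

jwks_client = PyJWKClient(JWKS_URL)

def authorize_request(bearer_token: str) -> dict:
    """Verify the token's signature and claims; return its payload.

    Raises jwt.PyJWTError if the token is expired, has the wrong
    audience, or was not signed by the trusted issuer.
    """
    signing_key = jwks_client.get_signing_key_from_jwt(bearer_token)
    return jwt.decode(
        bearer_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
    )
```

The design point is that the API never stores or checks passwords itself; it only verifies cryptographic proof from the identity provider, which is the delegation discussed in the episode.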
You have a teddy bear you want to love and protect. A big brother or sister takes the teddy bear and threatens to hold it for ransom until you pay up. What do you do?
The teddy bear analogy is certainly simplistic, but it also reflects the reality of the ransomware attacks that organizations increasingly face. Attackers block access to critical data in exchange for increasingly outlandish ransoms. According to a Palo Alto Networks’ Unit 42 report, the highest ransom in 2020 was $30 million, up from $15 million in 2019.
In this latest episode of The New Stack Makers podcast, we spoke with Jason Williams, product marketing manager for Prisma Cloud at Palo Alto Networks, about what organizations should do to protect themselves from ransomware attacks. Alex Williams, founder and publisher of TNS, hosted this episode.
Many organizations are finding that shifting to cloud native environments has become easier than it was in the past. However, the complexities and ensuing challenges can still mount once at-scale deployments begin.
In this episode of The New Stack Makers podcast, hosted by TNS’ Alex Williams, founder and publisher, and Joab Jackson, TNS managing editor, application-deployment standards are the discussion of the day. The featured guests are Bruno Andrade, founder, Shipa, a provider of frameworks for Kubernetes; and Bassam Tabbara, founder and CEO, Upbound, which offers a universal control plane for multi-cluster management.
Canonical's wildly popular Ubuntu Linux distribution continues to quietly play a role in the continued widespread adoption of Kubernetes. And that quiet support is as it should be, concluded Kelsey Hightower, Google Cloud Platform principal developer advocate, and Mark Shuttleworth, CEO of Canonical, in this latest episode of The New Stack Makers podcast. Alex Williams, founder and publisher of TNS, hosted this episode.
Taking a step back, Ubuntu, as well as Linux in general, has become much easier to use, expanding beyond what many once considered to be a server operating system and an esoteric alternative to Windows.
“There was this kind of inflection point where Linux has gone from like this command line server-side thing to something that you could actually run on a desktop with a meaningful UI and it felt like we were closing the gap on all the other popular open operating systems,” said Hightower.
Network connections can be likened to attending an amusement park, where the Dynamic Host Configuration Protocol (DHCP) serves as the ticket to enter the park and the domain name system (DNS) is the map around the park. Network management and security provider Infoblox made a name for itself by collapsing those two core pieces into a single platform, letting enterprises control where IP addresses are assigned and how they manage network creation and movement.
"They control their own DNS so that they can have better control over their traffic,” explained Anthony James, Infoblox vice president of product marketing, in this latest episode of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack.
When it comes to at-scale software development, is continuous delivery and release automation (CDRA) the next step in the evolution of continuous integration/continuous delivery (CI/CD)?
Forrester Research thinks so. The analyst firm describes CDRA as a way for organizations to deliver better-quality software faster and more securely, by automating digital pipelines and improving end-to-end management and visibility.
In this edition of The New Stack Makers podcast, Anders Wallgren, CloudBees vice president of technology strategy, discusses CDRA, supporting tools and the goals and challenges DevOps teams have when delivering software today. CI/CD systems provider CloudBees was named a leading CDRA vendor in the report "The Forrester Wave: Continuous Delivery And Release Automation, Q2 2020."
The episode was hosted by Alex Williams, founder and publisher of The New Stack, and co-hosted by Joab Jackson, TNS managing editor.
The amount of data created has doubled every year, presenting a host of challenges for organizations: security and privacy issues for starters, but also storage costs. What situations call for moving that data to decentralized cloud storage rather than an on-premises or even a single public cloud storage setup? What are the advantages and challenges of a decentralized cloud storage solution for data, and how can those be navigated?
On this episode of Makers, the New Stack podcast, Ben Golub, CEO of Storj, and Krista Spriggs, software engineering manager at the company, were joined by Alex Williams, founder and publisher of The New Stack, along with Heather Joslyn, TNS’ features editor. Golub and Spriggs talked about how decentralized storage for data makes sense for organizations concerned about cloud costs, security, and resiliency.
Once they have piloted Kubernetes, many organizations then want to scale up their K8s deployments, and run workloads across many clusters. But managing multiple clusters requires a new set of tools, ones that automate many routine and manual tasks. So, for its fifth Tech Radar report, the Cloud Native Computing Foundation surveyed the tools available for multicluster management, based on the input from its end-user community.
In this edition of The New Stack Analysts podcast, we talk with two people who helped assemble the report: Federico Hernandez, principal engineer at social media analysis provider Meltwater, and Simone Sciarrati, Meltwater engineering team lead. We chatted about the report's findings and how the multicluster management tool landscape is taking shape. Co-hosting this episode are Alex Williams, founder and publisher of The New Stack, and the Tech Radar's organizer, Cheryl Hung, CNCF vice president of ecosystem.
Video games continue to explode in popularity, while the number of potential attack vectors increases as well. In this The New Stack Makers podcast, host Alex Williams, publisher and founder of TNS, and co-host Bharat Bhat, marketing lead for developer relations at Okta, cover why and how video game platforms and connections should be made more secure with guest Okta Senior Developer Advocate Nick Gamb.
The gaming industry has often served as a showcase for some of the industry’s greatest programming talents. As a case in point, John Carmack’s C++ code underpinning “Doom” is considered one of the historic greats of programming, not just for gaming but for software in general. For Gamb, playing “Quake” and “Doom” while growing up, and then studying the code for these games, served as his entry point into the software industry; he noted how these games helped to “revolutionize gaming with first-person shooters (FPS).”
The internet's fabled history includes such milestones as the Advanced Research Projects Agency's (ARPA) development of packet switching (ARPANET), paving the way for today's modern infrastructure, and Tim Berners-Lee’s research that culminated in the explosive adoption of the World Wide Web in the 1990s. Today, as microservices, Kubernetes and distributed environments and connections become more prevalent, the use of the internet is becoming more decentralized as well.
In this episode of The New Stack Makers podcast hosted by Alex Williams, founder and publisher of TNS, Storj Labs' Ben Golub, chairman and interim CEO, and Katherine Johnson, head of compliance, discuss how the internet today centers on decentralization — and more importantly — how decentralization reflects the roots of the internet.
Observability is widely misunderstood, but in an age of increased security breaches and more business being conducted online, it’s never been more important. How should organizations be thinking about their resources in multicloud environments? What strategies should they adopt to catch gaps in their security before hackers do? And also, what cultural changes might DevOps teams adopt to strengthen their observability?
In this episode of The New Stack Makers podcast, Maya Levine, technical marketing engineer and cloud native and cyber security evangelist for Check Point, joined co-hosts Alex Williams, The New Stack’s publisher, and Heather Joslyn, TNS’s features editor, for a discussion of what observability means now.
Go owes its popularity to a number of factors: Golang advocates often speak of its speed, robustness and versatility, especially compared with C++, Java and JavaScript. In this The New Stack Makers podcast, TNS hosts Alex Williams, founder and publisher, and Darryl Taft, news editor, cover the reasons for decentralized storage provider Storj’s shift to Go with featured guests Storj’s JT Olio, CTO, and Natalie Villasana, software engineer.
Storj’s need for Go to support its development and operations stems from its unique requirements as the “Airbnb for hard drives,” Olio explained.
Cloud native computing is bringing about such a sea change in how applications are developed, deployed and run, that, not surprisingly, it is changing the rules for information security as well. Case in point: serverless computing.
In this latest edition of The New Stack Makers podcast, we speak with Check Point's cloud security strategist Hillel Solow, who has been at the cutting edge of these changes. Solow co-founded Protego Labs, a pioneer in serverless security. Security vendor Check Point saw the writing on the security wall early on and gobbled up Protego in 2019. The New Stack Publisher Alex Williams and TNS Editor Joab Jackson hosted this episode.
The number of services cloud providers alone have begun to offer has exploded over the past couple of years, potentially exposing the exponentially larger number of microservices that support these services — across multiple cloud and on-premises environments — to vulnerabilities.
In this The New Stack Makers podcast hosted by Jack Wallen, a correspondent for The New Stack, TJ (Tsion) Gonen, head of Cloud Security, Check Point, puts microservices security in context and describes the critical role security tools play and the support that artificial intelligence (AI) and machine learning (ML) offer.
The definitions of progressive delivery can vary, although many, if not most, would agree it represents an evolution of CI/CD. In this The New Stack Makers podcast, The New Stack’s Alex Williams, publisher and founder, and B. Cameron Gain, correspondent, cover why progressive delivery will play a large role in the future of DevOps. Nick Rendall, senior product marketing manager at CloudBees, is the featured guest.
While progressive delivery is universally accepted as important for DevOps and software development, delivery and post-deployment management, how best to implement it remains a challenge for many organizations. “Everyone understands that progressive delivery is a good thing, and now it's like, ‘okay, great, but how do we really do it, and let's take this concept and let's really build it out into our big enterprise organizations,’” said Rendall.
The tech industry is broken. We deify overworking, and think burnout comes with bragging rights. But how do we break this exhausting cycle? In this episode of The New Stack Makers, we talk with LaunchDarkly's Manager of Developer Marketing Dawn Parzych about how to identify burnout in others and in yourself, how to treat it, and how to build a psychologically safe working environment that allows folks to say no.
With a master's in psychology and a DevRel role that certainly straddles people and tech, Parzych's work often sits on the people side of what they're building.
"I love the idea ofthe socio-technical systems that we're building,like tech, doesn't exist in a bubble. People are building the technology. They're very interrelated and you can't just focus on the tech, the people are the hardest part of tech. And we spend more time talking about how tech's the hard piece,where it's reallythe people and the interrelation betweenthe people and the machines," she said.
No longer considered an ephemeral concept as it originally was, data management has become a huge issue and challenge, especially for managing stateful data in Kubernetes environments.
Cloud Native Data Management Day at the recently held KubeCon + CloudNativeCon Europe 2021 event in May, and the state of data management generally, were the subjects of discussion in this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack. The guests were Michael Cade, senior global technologist at Veeam Software, and Nigel Poulton, owner of nigelpoulton.com, which offers Kubernetes and Docker training and other services. Both Cade and Poulton were also involved in the organization of Cloud Native Data Management Day.
Today’s developer seems to be working with more tools than ever. Building a Node.js-based JavaScript application could require over a dozen tools at times to get code out into production. It's easy to get sucked down a rabbit hole and not stay focused. Debugging an application once in production can also be a challenge: You want as much context at your fingertips as needed while maintaining a reasonable signal-to-noise ratio.
Dan O’Brien, a software engineer for feature management platform provider LaunchDarkly, has a personal interest in how to avoid distraction and stay in the flow when working on a new feature or any piece of code.
In this latest episode of The New Stack Makers podcast, we ask O'Brien about the complexities he sees in today’s developer workflow, as well as his tips for staying “in the zone” when writing code. We also discuss the tools LaunchDarkly offers that can help expedite application development. TNS founder and Publisher Alex Williams, along with TNS Managing Editor Joab Jackson, hosted this podcast.
No matter how much we prepare, deployments don’t always go as planned. In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Isabelle Miller, software engineer, LaunchDarkly, describes how DevOps teams can build processes to help remove unwanted surprises during release cycles — and why they do not need to be stressful.
One of the main things Miller said she has discovered since joining LaunchDarkly at the beginning of 2020 is the importance of having procedures in place for when things do go wrong, “because things are going to go wrong,” she said.
“You need to be able to manage that problem as quickly as possible, and minimize any harm before things get out of control when that happens,” said Miller. “So, one of the great things about working at LaunchDarkly is that I get to use our products. And one of the wonderful things about LaunchDarkly’s feature flags is that you can just turn things off.”
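Miller's point that "you can just turn things off" describes the kill-switch pattern at the heart of feature flagging. Below is a minimal, generic sketch of that pattern; it is not LaunchDarkly's actual SDK, and the flag store, flag name and payment-flow functions are hypothetical placeholders.

```python
# A generic sketch of the feature-flag kill-switch pattern (not the
# LaunchDarkly SDK). Flags live in an external store that operators can
# edit at runtime; flipping a flag to false disables the risky code path
# without a redeploy. All names here are hypothetical.
import json
import pathlib

FLAGS_FILE = pathlib.Path("flags.json")  # hypothetical runtime flag store

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read the current flag state, falling back to a safe default."""
    try:
        flags = json.loads(FLAGS_FILE.read_text())
        return bool(flags.get(name, default))
    except (OSError, json.JSONDecodeError):
        return default  # if the store is unreadable, stay on the safe path

def new_payment_flow(order: dict) -> str:
    return f"order {order['id']} via new flow"     # hypothetical new path

def legacy_payment_flow(order: dict) -> str:
    return f"order {order['id']} via legacy flow"  # proven fallback

def checkout(order: dict) -> str:
    # The release is decoupled from the deploy: the new code ships dark
    # and is enabled, or instantly disabled, by editing flags.json.
    if flag_enabled("new-payment-flow"):
        return new_payment_flow(order)
    return legacy_payment_flow(order)

if __name__ == "__main__":
    print(checkout({"id": 42}))
```

The key design choice is that turning a feature off is a data change, not a code change, which is what makes recovery fast when a release goes wrong.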
How Adidas manages for scale shows how a sportswear company can, at least in terms of how much code it runs, resemble a software house.
In this episode of The New Stack Makers podcast, Alex Williams, founder and publisher of The New Stack, speaks with Adidas’ Iñaki Alzorriz, senior director platform engineering, and Rastko Vukasinovic, director solution architecture, on how Adidas scales DevOps and resiliency on Kubernetes. They also discuss how Adidas views managing at scale in three ways: technically, culturally and strategically.
Debate continues in the industry about what observability is, and more specifically, what it should offer DevOps, especially those working in operations who are often responsible for detecting those “unknown unknowns.” In this The New Stack Makers podcast hosted by Alex Williams, founder and publisher of The New Stack, Bartek Plotka, a principal engineer at Red Hat, a SIG observability tech lead for Thanos and a Prometheus maintainer, and Richard Hartmann, community director at Grafana, a Prometheus maintainer, OpenMetrics founder and a CNCF SIG observability chair member, discuss how observability should be easier to use and how it can be cost-effective.
At this year’s KubeCon EU 2021, some things were the same — it was still virtual, which meant it attracted a huge turnout from a more broadly international audience — and some things were different — like the fact that almost everyone has bought into Kubernetes and cloud native architecture; it’s now just a question of how they use it. In another KubeCon tradition, The New Stack hosted a live pancake breakfast to reflect on the maturity of Kubernetes, particularly around data persistence and storage.
Our Publisher Alex Williams hosted this year’s very early discussion (at least for him) with Itzik Reich, VP of technologists, and Nivas Iyer, senior principal product manager, both at Dell Technologies, along with pancakes regular Cheryl Hung, VP of ecosystem at the Cloud Native Computing Foundation.
The adoption of GitOps, improvements to APIs and the increasing reach of the WebAssembly (Wasm) virtual machine language are influencing the developer experience, and ultimately, how DevOps teams reach their application-deployment and -management goals. These were among the more talked-about themes at the Cloud Native Computing Foundation’s KubeCon + CloudNativeCon EU.
Putting it all into context, Alex Williams, founder and publisher, and Joab Jackson, managing editor, of The New Stack, are the hosts of this The New Stack Makers podcast. The featured guests are Bryan Liles, principal engineer, VMware and Cheryl Hung, vice president of ecosystem, CNCF.
A major part of improving developer velocity is about getting the most out of an observability platform. While that is a commonly held assumption, this best practice is also a far-reaching goal for many DevOps teams.
Hosted by Alex Williams, founder and publisher of The New Stack, this The New Stack Makers podcast — recorded during a virtual pancake breakfast — features a discussion on improving observability for developers. The featured guests were Zain Asgar, general manager of Pixie and New Relic open source and CEO and co-founder of Pixie Labs; Roopak Venkatakrishnan, engineering manager at Bolt (an e-commerce retailer tool); Ihor Dvoretskyi, developer advocate at the Cloud Native Computing Foundation (CNCF); and Christine Wang, senior solutions engineer at Grafana Labs.
As GitOps moves beyond improving how code repositories are managed for continuous integration/continuous delivery (CI/CD), the security component of GitOps has become a more pressing issue as Git, and GitOps, become more widely adopted. The open source community should also play a critical role in improving GitOps.
Hosted by Alex Williams, founder and publisher of The New Stack, this recording features Om Moolchandani, co-founder and CISO/CTO of Accurics; Cindy Blake, senior security evangelist, GitLab; Frank Kim, fellow, SANS Institute; Sanjeev Sharma, head of platform engineering, Truist Financial; and Katie Gamanji, ecosystem advocate, Cloud Native Computing Foundation (CNCF).
Developers just want to know if they have a vulnerability before putting code into production. But often, the answer back is not what the developer wants to hear.
More analysis is needed, the software security group will often reply, said Meera Rao, senior director of product management at Synopsys, in this latest episode of The New Stack Makers, hosted by Alex Williams, founder and publisher of The New Stack.
Rao is the creator of a new intelligent orchestration technology that helps developers get their issues resolved without a long wait. That long wait is remedied by relying on Synopsys’ system to let developers know what’s wrong and whether specific security holes require immediate fixing or not.
What do we know about Kubernetes? It’s a raw, gaping maw. It’s not meant for most of us. What is needed? Access to the grinding, digital gears that make up what we know as distributed architectures.
Istio is an example of a management layer for Kubernetes, said Zack Butcher, part of the founding engineering team at Tetrate, a service mesh company. In this The New Stack Makers podcast, he joins Varun Talwar, co-founder of Tetrate, for a discussion about the Istio service mesh and its role in the management of highly distributed networks, including, of course, Kubernetes. Alex Williams, founder and publisher of The New Stack, hosted this episode.
In this episode of The New Stack Makers podcast, hosted by Joab Jackson, managing editor for The New Stack, we speak with two of the fabled conference’s key organizers about what to expect and what the organizers’ goals are: Priyanka Sharma, general manager of the CNCF, and Stephen Augustus, engineering director and head of open source at Cisco.
This is no business-as-usual KubeCon conference, of course. Last year’s KubeCon EU was cancelled just a few weeks before the event was scheduled to take place. Then, many question marks remained during the early days of the pandemic about not only the future of conferences but how workers in the IT industry would continue to live and work. As it turns out, this year’s event is virtual, of course, and at the very least, there is no shortage of talks and events.
All told, for KubeCon, experts from organizations including Adobe, Apple, CERN, Nvidia and OVHcloud will deliver more than 100 sessions, keynotes, lightning talks, and breakout sessions. There will also be more than 60 sessions hosted by project maintainers – spanning beginner-level introductions, end-user case studies and technical deep dives.
This The New Stack Makers podcast explores the state of open source software today and features a case example of what is possible: Kasten by Veeam has created Kubestr to identify, validate and evaluate storage systems running in cloud native environments. As Michael Cade, a senior global technologist for Veeam, describes Kubestr, the open source tool provides information about what storage solutions are available for particular Kubernetes clusters and how well they are performing. The software project is also intended to offer DevOps teams an “easy button” to automate these processes.
Hosted by Alex Williams, founder and publisher of The New Stack, Cade and fellow guest Sirish Bathina, a software engineer for Kasten, describe Kasten’s long-standing collaboration with the open source community and how Kubestr serves as a case study of both an ambitious open source project and what is possible today for stateful storage in Kubernetes environments.
The New Stack Makers’ recent “eBay Baby! How eBay Is Working for Developer Speed” livestream podcast covered a lot of ground about eBay’s five successive reengineerings of its IT architecture. Recorded on April 1 and hosted by Alex Williams, founder and publisher of The New Stack, eBay’s challenges and achievements were certainly no joke. The eBay guests Randy Shoup, vice president, engineering and chief architect; Mark Weinberg, vice president, core product engineering; and Lakshimi Duraivenkatesh, vice president, buyer experience engineering, offered their insight and lessons learned over pancakes.
Needless to say, the ongoing COVID-19 pandemic continues to have a profound impact on remote work in a number of ways. Mohit Lad, general manager, co-founder and former CEO of ThousandEyes, has been at the front lines. He and his team at ThousandEyes have helped a number of customers meet the networking- and infrastructure-management challenges associated with tremendous surges in remote data connections during the past year.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Lad discusses ThousandEyes’ work as a network monitoring provider to meet the challenges of the day. Lad also discussed his background in networking, the evolution of networking and software in general, and the parallel growth of the internet as it relates to ThousandEyes.
In 2018, Kubernetes had become too big to run on a Raspberry Pi; for a while, even kubeadm could not run on the micro-device. K3s changed that and represents a new take on Kubernetes: stripped of excess code, K3s is a lightweight version of Kubernetes meant to run on edge devices.
Today, K3s is seeing a rise in popularity as are a host of other new services that focus on the edge for Kubernetes architectures.
It’s now at the point that the Cloud Native Computing Foundation (CNCF) is planning Kubernetes on Edge Day at KubeCon, said Bill Mulligan, marketing manager for CNCF in a podcast recording with Alex Ellis, founder at OpenFaaS and the author of a new course on K3s that will be available for KubeCon, scheduled for May 4-7.
In this The New Stack Makers livestream podcast, hosted by Alex Williams, founder and publisher of The New Stack, the security challenges associated with moving to a public cloud are the central theme. The discussion covers the different ways attackers can target an enterprise that is using public cloud infrastructure, and how enterprises can defend themselves from such attacks.
The guests are Ankur Shah, vice president of products, Prisma Cloud; Alok Tongaonkar, director of data science, Palo Alto Networks; and Gaspar Modelo-Howard, principal data scientist at Palo Alto Networks.
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, speak with guest Ev Kontsevoy, co-founder and CEO of Teleport, which offers organizations instant access to computing resources.
An organization’s cloud security processes often cover several different cloud providers, while oftentimes hundreds, if not thousands, of developers all have multiple cloud accounts. Since each account typically adheres to different security systems and policies, managing it all represents yet another security challenge DevOps teams face.
Web security is the theme of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, discuss the challenges associated with building a security-minded culture and what works and what does not work.
Culture is a cornerstone of sound security policy. However, at many — if not most — organizations, cultural changes are warranted in a number of ways, not least for security and policy.
How to build a security-minded culture is the theme of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Joe Vaccaro, head of products at ThousandEyes, discussed today’s digital supply chain for the modern app experience and managing backend interdependencies.
The days are long gone when users accessed data mainly through local area network (LAN) connections and ran applications stored on centralized servers in the data center. Conversely, in today’s highly distributed network experience, the user’s access to applications is through a vast contingent of network connections, supported by microservices and in multicloud environments. Application performance is also highly dependent on DNS and other network connections for which organizations often lack visibility into the complete digital supply chain. In many cases, for example, it is thus difficult to determine whether sub-par application performance is due to network connectivity or bad code in the stack.
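As a toy illustration of that last point, separating name-resolution time from the rest of a request's latency is one quick way to start attributing slowness to the network path rather than the code. The sketch below uses only the Python standard library; the hostname is a hypothetical placeholder, and real diagnosis would rely on purpose-built tooling such as ThousandEyes rather than a script like this.

```python
# A toy sketch that splits a request's latency into DNS lookup time and
# connect/response time, to hint at whether slowness is network-related.
# The hostname is a hypothetical placeholder; this is an illustration,
# not a real monitoring tool.
import socket
import time
import urllib.request

HOST = "app.example.com"  # hypothetical service

def timed_dns_lookup(host: str) -> float:
    """Time a DNS resolution for the host, in seconds."""
    start = time.perf_counter()
    socket.getaddrinfo(host, 443)
    return time.perf_counter() - start

def timed_request(url: str) -> float:
    """Time a full HTTPS request/response cycle, in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    dns_s = timed_dns_lookup(HOST)
    total_s = timed_request(f"https://{HOST}/")
    print(f"DNS lookup:   {dns_s * 1000:.1f} ms")
    print(f"Full request: {total_s * 1000:.1f} ms")
    # A large DNS share of total latency points at the digital supply
    # chain (resolvers, CDNs) rather than the application code itself.
```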
This episode of The New Stack Makers series with Okta, on all topics related to development and security at scale, features the development requirements for securing mobile apps. They are explored from two points of view: the database and authentication.
Guest Ian Ward, senior product manager for mobile at MongoDB, discusses synchronizing mobile data with backend databases and his related work on Realm, a mobile database; and Aaron Parecki, senior security architect at Okta, describes authentication and OAuth, for which he is the spec editor and a member of the OAuth working group. Alex Williams, founder and publisher of The New Stack, hosts with co-host Randall Degges, head of developer advocacy at Okta.
“Black women are the moral compass of this country,” Kim Crayton said, referring to the United States in this episode of The New Stack Makers. But it’s exhausting work. And repetitive, to continue to offer the same basics to white people of what’s wrong with a country, an economy, and a tech industry that’s systemically built on anti-Blackness.
“Tech always thinks in binaries, which gets on my nerves. People of color, people from marginalized communities, we survive living in the gray. There is no right, wrong, good, bad because it changes situationally. So you have people who want to flip the tables. And then folks act like the only alternative is to prepare marginalized communities to go into spaces and work in places where they’re going to be harmed,” Crayton said.
In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack, and co-host Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), discuss why secrets management is essential for DevOps teams, what the tool landscape is like and why Vault was selected as the top alternative. The CNCF Tech Radar contributors and featured guests were Steve Nolen, site reliability engineer at RStudio — which creates open source software for data science, scientific research and technical communication — and Andrea Galbusera, engineer and co-founder at AuthKeys, a SaaS platform provider for managing and auditing server authorizations and logins.
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at security services provider Okta, speak with guest Dustin Rogers, staff application security engineer, Netlify, about all things related to static Web security management.
Netlify is a popular static website hosting platform for Jamstack used by over a million web developers. But while Netlify is popular, thanks to its simplicity for uploading code to the platform from GitHub and managing Web applications once uploaded, the security it offers for the static environments is of interest as well.
Using Netlify as a case example, static websites’ security layers and related security practices are the themes of the latest episode in our new series “Security @ Scale” on The New Stack Makers with Okta. The series explores security in modern environments with stories from the trenches including security horror stories and fantastic failures.
In this The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, Vaibhav Kamra, chief technology officer, Kasten by Veeam, discussed the changes he has observed, and ultimately, the lessons learned during the past year. During this time, Kasten has provided the necessary platforms for application and data management that organizations rely on to scale across Kubernetes applications.
Okta sponsored this podcast.
This episode of The New Stack Makers series with Okta, on all topics related to development and security at scale, features guest Anant Jhingran, CEO of StepZen. Jhingran’s deep well of experience, including long stints at IBM, Apigee and Google, certainly qualifies him as a leading expert on APIs and their role in today’s DevOps environments. In this episode, co-hosted by Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at Okta, Jhingran offers his take on how APIs have evolved, their potential for the developer community, and how their success accounts, in part, for their exposure to vulnerabilities.
In this The New Stack Makers podcast, Varun Badhwar, senior vice president, product, Palo Alto Networks, puts today’s multicloud security challenges into perspective. He also describes how Prisma Cloud 2.0 offers a single and comprehensive security alternative for cloud native applications across different cloud platforms.
Welcome to our new series ‘Security @ Scale’ on The New Stack Makers with Okta exploring security in modern environments with stories from the trenches including security horror stories and fantastic failures.
In this episode, co-hosts Alex Williams, founder and publisher of The New Stack, and Randall Degges, head of developer advocacy at Okta, speak with guest Marc Rogers, vice president of cybersecurity at Okta and co-founder of the CTI League, to discuss the anatomy of what will likely be considered one of the most disruptive hacks in the history of Wall Street. It could also change how institutional and individual investors buy, sell — and short — stocks traded on U.S. exchanges in the future.
This The New Stack Makers podcast series features a number of guests who speak during Palo Alto Networks’ Cloud Native Security Virtual Event. In this segment, Alex Williams, founder and publisher of The New Stack, hosts a roundtable with Palo Alto Networks customers who share their experiences and insights about cloud native security and other related topics. The guests are Brian Cababe, director of cyber security, architecture and governance, Cognizant; Tyler Warren, director of IoT security, Prologis and Alex Jones, infosec manager, Cobalt.io.
A key talking point is how legacy on-premises practices and processes cannot be directly transferred to work for cloud native security and management. Jones noted, for example, that when moving to the cloud, the first question for threat modeling is “what are we doing?”
This edition of The New Stack Makers podcast features a number of guests who speak during Palo Alto Networks’ Cloud Native Security Virtual Event. It kicks off with none other than Seth Meyers, an Emmy Award-winning comedian of “Late Night with Seth Meyers” and “Saturday Night Live” (SNL) fame. Meyers’ interview with Palo Alto Networks founder and CTO Nir Zuk is followed by a customer roundtable hosted by Alex Williams, founder and publisher of The New Stack, with guests Brian Cababe, director of cyber security, architecture and governance, Cognizant; Tyler Warren, director of IoT security, Prologis; and Alex Jones, infosec manager, Cobalt.io. The event concludes with a talk on Prisma Cloud 2.0, given by Varun Badhwar, senior vice president, product, Palo Alto Networks.
Meyers began the session by declaring that “much like Nir Zuk, I am a cyber security luminary.” He also said he didn’t want to “brag too much” about his accomplishments, but said using your mother’s maiden name to recover passwords was his idea.
Meyers then asked Zuk, while at least feigning to be serious, what cloud native means for organizations, as well as its impact on security management.
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Security teams need a higher appetite for risk. While accepting, and even embracing, risk is widely practiced outside the sphere of IT, risk also often plays a role in DevOps operations and in developer and SRE team culture. Security teams, however, have typically yet to accept and manage risk in this way.
In this edition of The New Stack Makers podcast, hosted by Alex Williams, founder and publisher of The New Stack, the guests discuss how and why security teams need to rethink risk, with the aim of improving resiliency and achieving other benefits that have thus far remained elusive for many organizations.
The guests were Matt Chiodi, chief security officer of public cloud at Palo Alto Networks; Meera Rao, senior director of product management, Synopsys; and Tal Klein, chief marketing officer, Rezilion.
Prisma Cloud by Palo Alto Networks sponsored this podcast.
Palo Alto Networks’ John Morello, vice president of product, has long talked about the basics that come with cloud native security. In this edition of The New Stack Makers, hosted by Alex Williams, founder and publisher of The New Stack, Morello discusses how APIs are less the weakest link and more simply better known, due to their widespread use, especially in the past five years. There are more people developing APIs, more people consuming APIs and more attackers exploiting APIs — and that makes the basics more important than ever, both now and as more applications go online.
Harness sponsored this podcast.
The growing pains continue: As organizations push ahead, shifting to Kubernetes and cloud native environments at scale, the complexities of managing Kubernetes clusters increase as well. The associated challenges of adoption, and then managing these highly distributed containerized environments, remain daunting. For many DevOps teams, the advent of “Kubernetes complexity fatigue” has become a concern.
In this episode of The New Stack Makers podcast, hosted by TNS founder and Publisher Alex Williams, Kubernetes complexity fatigue, and more importantly, what can be done about it, are discussed. The guests were Ravi Lachhman, evangelist at Harness, and Frank Moley, senior technical engineering manager at DataStax.
Prisma Cloud by Palo Alto Networks sponsored this podcast.
This edition of The New Stack Makers podcast featured a news announcement: Palo Alto Networks is providing a new approach to protecting APIs with the release of its WAAS (web application and API security) offering. As botnets become more sophisticated, Palo Alto’s WAAS bot-defense platform offers API security, runtime protection and other security features for today’s cloud native environments.
Hosted by Alex Williams, founder and publisher of The New Stack, guest Ory Segal, senior distinguished research engineer at Palo Alto Networks, discussed how the company’s WAAS offers apps end-to-end protection for loosely coupled services in declarative environments, along with a range of other capabilities.
In this The New Stack Makers podcast, hosted by TNS founder and publisher Alex Williams, guest Nanda Vijaydev, distinguished technologist and lead data scientist at HPE, discusses how the concepts of loosely coupled architectures are now playing a part in data-centric applications on Kubernetes. It’s an evolution that has been taking shape, preceded by the use of Kubernetes for microservices development — as opposed to data-centric approaches, which have historically been developed on tightly coupled, monolithic architectures.
In my 2021 web development predictions, I identified two key trends heading into this year: serverless expanding into a more full-featured platform (for example, stateful apps becoming a reality on serverless), and the continued growth of JavaScript (and especially React). Jamstack is another growth area, although that’s at a much earlier stage. To discuss these and other frontend trends, I spoke to David Cramer, co-founder and CTO of Sentry, an application monitoring platform. You can hear the full discussion on The New Stack Makers podcast, but in this article, I’ll review the main talking points.
New Relic sponsored this podcast.
In this The New Stack Makers podcast, Wendy Shepperd, general vice president of engineering, New Relic, describes the challenges of migrating New Relic’s telemetry platform to a cloud native environment on Amazon Web Services (AWS). Hosted by TNS founder and publisher Alex Williams, Shepperd discussed key lessons learned about New Relic’s shift to AWS, as well as implications for observability following the move.
In this episode of The New Stack Analysts podcast, TNS founder and publisher Alex Williams virtually shared pancakes and syrup with guests to discuss how Apache Cassandra, gRPC and other tools and platforms play a role in managing data on Kubernetes.
Mya Pitzeruse, software engineer and OSS contributor from effx; Sam Ramji, chief strategy officer at Datastax; and Tom Offermann, a lead software engineer at New Relic were the guests. They offered deep perspectives about the evolution of data management on Kubernetes and the work that remains to be done.
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Infrastructure as code is a movement ready to boom. It’s also emerging as one of the three pillars in cloud security that are bringing DevOps and security together in the evolving DevSecOps market, said Varun Badhwar, senior vice president, Prisma Cloud at Palo Alto Networks, in this episode of The New Stack Makers hosted by TNS Founder and Publisher Alex Williams.
Infrastructure as code is also a major component of the DevOps’ trend to shift left. “Shift left security now means application security, it means software composition analysis and it means infrastructure as code scanning — and all of that now is available for DevOps teams to do in the pipeline,” Badhwar explained.
“And in an ideal situation,” he continued, “you want to tie all of that to the tools that your infosec teams want to use in runtime in production, such that you have one set of policies globally recognized in your enterprise. And you’re working against the same standards — it’s just a matter of fact about where you’re deploying those tools in your lifecycle.”
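To make "infrastructure as code scanning" concrete, here is a minimal sketch of the kind of check such a pipeline step performs: it reads a Terraform plan exported as JSON and flags security groups open to the world. It is a toy illustration of the pattern, not Prisma Cloud's actual scanner; the file name is assumed, and the resource layout follows Terraform's `terraform show -json` output format.

```python
# A toy sketch of an infrastructure-as-code scan step in a CI pipeline:
# flag AWS security group rules in a Terraform JSON plan that allow
# ingress from anywhere. Not Prisma Cloud's scanner; plan.json and the
# resource layout follow Terraform's `terraform show -json` output.
import json
import sys

def find_open_ingress(plan: dict) -> list[str]:
    """Return addresses of security groups with 0.0.0.0/0 ingress."""
    findings = []
    for res in plan.get("planned_values", {}) \
                   .get("root_module", {}) \
                   .get("resources", []):
        if res.get("type") != "aws_security_group":
            continue
        for rule in res.get("values", {}).get("ingress", []):
            if "0.0.0.0/0" in rule.get("cidr_blocks", []):
                findings.append(res["address"])
    return findings

if __name__ == "__main__":
    with open("plan.json") as f:  # produced by `terraform show -json`
        plan = json.load(f)
    open_groups = find_open_ingress(plan)
    for address in open_groups:
        print(f"FAIL: {address} allows ingress from 0.0.0.0/0")
    sys.exit(1 if open_groups else 0)  # non-zero exit fails the pipeline
```

Running a check like this in the pipeline, and the same policy at runtime, is the "one set of policies globally recognized in your enterprise" idea Badhwar describes.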
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews, conducted by Scalyr CEO Christine Heckart, that cover the challenges engineering managers have faced when scaling architectures to support the demands of the business.
Uber. Recall the company in 2017: the management, the scale and the post by Susan Fowler, who detailed experiences that speak to the hopes and terrible realities at the company. That’s the scenario that faced Donald Sumbry, who now heads reliability engineering at Airbnb, in this interview with Heckart. He was not aware of the issues internally at Uber due, he says, to the work and all the technical problems that needed resolving.
"In early 2017, we had the Susan Fowler blog post, and one of the things I remember the most was that some of what was what had happened was actually a surprise to me," Sumbry said, "And I realized that I was so knee-deep in the work that I was doing, that there were so many problems to solve. And we attracted the type of people that just jumped into a problem.
Joining Airbnb, Sumbry brought what he learned at Uber about looking at the big picture. He also learned to avoid the savior complex. Every company is different, no matter how much it may seem that the engineer has seen it all and can solve all the problems.
On the last The New Stack Analysts of the year, the gang got together — remotely, obviously — to reflect on this year. And oh what a year! But for a year in tech, 2020 still had a lot of hits — and some misses.
Publisher Alex Williams was joined by Libby Clark, Joab Jackson, Bruce Gain, Steven Vaughan-Nichols, and Jennifer Riggins. We looked back on the year that saw millions die, no one fly, and a lot of jobs in turmoil. It was also a year that, while many things screeched to a halt, much of the tech industry had to keep going more than ever.
KubeCon+CloudNativeCon sponsored this podcast.
Kubernetes is certainly evolving, but it will be some time before organizations deploy and run applications seamlessly in cloud native environments without today’s associated challenges of its adoption and maintenance. Amazon Web Services (AWS), of course, is both an early proponent of Kubernetes and a leading provider of cloud native services and support, and has thus been implicitly involved with its changes over the past few years.
In this The New Stack Makers podcast, AWS’ Bob Wise, general manager of Kubernetes, and Peder Ulander, head of product marketing for enterprise, developer and open source initiatives, described AWS’ role in Kubernetes and how cloud native plays into the company’s open source strategy. They also discussed how Kubernetes is evolving in the market, including in terms of how customer needs are changing, and why open source technologies are critical to fill in gaps in order for cloud native to realize its full potential.
Alex Williams, founder and publisher of The New Stack, hosted this episode.
New Relic sponsored this podcast.
The Cloud Native Computing Foundation’s (CNCF) OpenTelemetry project was created to help foster the adoption of observability by helping to improve interoperability among the different observability toolsets through a vendor-neutral framework. In this way, OpenTelemetry should help to provide a single set of APIs, libraries, agents and collector services to capture distributed traces, metrics and other information from an application for improved observability.
In this The New Stack Makers podcast, hosted by TNS Founder and Publisher Alex Williams, Ben Evans, principal engineer and JVM technologies architect, New Relic, discussed OpenTelemetry and New Relic’s contributions to OpenTelemetry and other open source projects.
The genesis of OpenTelemetry was not to create a technology for its own sake in anticipation of what observability users might need, but to serve as a common framework to meet palpable challenges organizations already face.
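As a small illustration of that framework in practice, the sketch below uses the OpenTelemetry Python SDK to emit a trace span through a console exporter. The service and span names are hypothetical, and a real deployment would typically export to an OpenTelemetry Collector or a vendor backend instead; this is a minimal sketch of the vendor-neutral API, not New Relic's specific integration.

```python
# Minimal sketch: creating a trace span with the OpenTelemetry Python
# SDK and printing it via the console exporter. A real setup would
# export to an OpenTelemetry Collector or a vendor backend instead.
# Assumes: pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire the vendor-neutral API to a concrete SDK and exporter.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "12345")  # hypothetical attribute
    # ... application work happens here; the span records its duration.
```

Because the `trace` API is vendor-neutral, swapping the console exporter for another backend changes the setup lines only, which is the interoperability point the project was created to address.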
Prisma Cloud from Palo Alto Networks sponsored this podcast.
Identity and access management (IAM) was previously relatively straightforward. Often delegated as a low-level management task to the local area network (LAN) or wide area network (WAN) admin, the process of setting permissions for tiered data access was definitely not one of the more challenging security-related duties. However, in today’s highly distributed and relatively complex computing environments, network and associated IAM are exponentially more complex. As application creation and deployment become more distributed, often among multicloud containerized environments, the resulting dependencies, as well as vulnerabilities, continue to proliferate as well, thus widening the scope of potential attack surfaces.
How to manage IAM in this context was the main topic of this episode of The New Stack Analysts podcast, as KubeCon + CloudNativeCon attendees joined TNS Founder and Publisher Alex Williams and guests live for the latest “Virtual Pancake & Podcast.” They discussed why IAM has become even more difficult to manage than in the past and offered their perspectives about potential solutions. They also showed how enjoying pancakes — or other variations of breakfast — can make IAM challenges more manageable.
The event featured Lin Sun, senior technical staff member and Master Inventor, Istio/IBM; Joab Jackson, managing editor, The New Stack; and Nathaniel “Q” Quist, senior threat researcher (Public Cloud Security – Unit 42), Palo Alto Networks. Jackson noted how the evolution of IAM has not been conducive to handling the needs of present-day distributed computing. Previously, it was “not exactly a security thing” nor a “developer problem,” and wasn’t even “a security problem,” he said.
“[IAM] really almost was a network problem: if a certain individual or a certain process wants to access another process or a resource online, then you have to have the permissions in place to meet all the policy requirements about who can ask for these particular resources,” Jackson said. “And this is an entirely new problem with distributed computing on a massive and widespread scale…it’s almost a mindset, number one, about who can figure out what to do and then how to go about doing it.”
KubeCon+CloudNativeCon sponsored this podcast.
How to manage database storage in cloud native environments continues to be a major challenge for many organizations. Database storage also came to the fore as the issue to explore in the latest Cloud Native Computing Foundation (CNCF) Tech Radar report.
In this edition of The New Stack Analysts podcast, host Alex Williams, founder and publisher of The New Stack, and co-hosts Cheryl Hung, vice president of ecosystem at the Cloud Native Computing Foundation (CNCF), and Dave Zolotusky, senior staff engineer at Spotify, discuss database storage, recent results of the report findings and perspectives from the user community.
The podcast guests — who both contributed to the CNCF Tech Radar report and hail from the database storage user community — were Jackie Fong, engineering leader for Kubernetes and developer experience at Ticketmaster, and Mya Pitzeruse, software engineer and OSS contributor, effx.
In this The New Stack Makers podcast featured during KubeCon + CloudNativeCon North America, Eric Sorenson, technical product manager for Relay at Puppet and Dave Lindquist, general manager and vice president engineering, hybrid cloud management, Red Hat, discuss the state of Kubernetes cluster configuration management, associated DevOps challenges and how problems can be solved in the future. TNS correspondent B. Cameron Gain hosted the episode.
Welcome to The New Stack Makers: Scaling New Heights, a series of interviews with engineering managers who talk about the problems they have faced and the resolutions they sought, conducted by guest host Scalyr CEO Christine Heckart.
Bhawna Singh had two mandates at Glassdoor when she started as senior vice president of engineering and CTO: open an office in San Francisco to access the region’s talent pool, and rebuild the search vertical for job results. Glassdoor is a job and recruiting site that offers services that allow people to see information such as company reviews, salary reviews, and benefits that a potential employer offers.
To improve the quality of search, the team had to set metrics that the team trusted. Performance challenges surfaced when the team focused its efforts on the tactical aspects of architecting the platform. The Glassdoor team had tuned the system for quality, building out the deployment infrastructure and adding machine learning models. The work made the system heavier and less performant.
In this The New Stack Makers podcast, Alex Williams, publisher and founder of The New Stack, spoke with Ev Kontsevoy, co-founder and CEO of Teleport, about what the shift to a widely distributed architecture means for engineers and developers and how Teleport accommodates their needs in this new dynamic.
Teleport was formerly known as Gravitational until recently, when it rebranded itself after the name of its flagship unified access plane platform. Built with Go and Google's cryptography libraries, Teleport allows engineers, among other things, to bypass layers of legacy architecture and securely take advantage of cloud resources from any location worldwide with an internet connection.
What is DataOps? Why is a real-time data platform essential to the use cases driving it? And how can you build data pipelines while taming open source complexity?
In this episode of The New Stack Makers live — yet from our respective sofas — from KubeCon North America, we talk to Andrew Stevenson, chief technical officer and co-founder of Lenses, about how Apache Kafka and Kubernetes can together dramatically increase the agility, efficiency and security of building real-time data applications.
VMware sponsored this podcast.
SaltStack’s Salt is a leading automation and security platform for configuration management in on-premises and cloud native environments. Created in Python, Salt is in use at Juniper, Cisco, Cloudflare, Nutanix, SUSE and Tieto, as well as a number of other Fortune 500 technology companies and banks. SaltStack also offers a suite of tools, including SaltStack Enterprise for Salt, Plugin Oriented Programming (POP) and Tiamat.
SaltStack’s portfolio has also been merged with VMware’s suite of offerings, following VMware’s purchase of SaltStack earlier this year.
In this The New Stack Makers podcast, SaltStack’s Thomas Hatch, founder, CTO and Salt’s creator, and Janae Andrus, community manager for Salt, discuss SaltStack’s roots, evolution and integration with VMware’s platforms and technologies. The future of SaltStack’s open source projects was also discussed.
Alex Williams, founder and publisher of The New Stack, hosted this podcast.
Accurics sponsored this podcast.
Who doesn’t love hotcakes? And to make them right, you need to wait until the batter starts to bubble up before you flip them. Immutable infrastructure management and related security challenges are also “bubbling up” these days, as many organizations make the shift to cloud native environments, with containerized, serverless and other layers.
In this The New Stack Analysts podcast, TNS founder and publisher Alex Williams served up pancakes with KubeCon attendees who joined him for a “stack” at the “Virtual Pancake Breakfast and Podcast,” as they offered their deep perspectives on what is at stake as immutable infrastructure security and other related concerns take hold.
The guests joining the virtual breakfast were Om Moolchandani, co-founder and CTO for Accurics, Rosemary Wang, developer advocate for HashiCorp, Krishna Bhagavathula, CTO, for the NBA (who also brought his own L.A. Lakers-branded spatula), Chenxi Wang, Ph.D., managing general partner of Rain Capital, and Priyanka Sharma, general manager, for the Cloud Native Computing Foundation (CNCF).