Alan and Brent talk about Modern Testing (which isn’t all that modern, and not really about testing – except when it is). Every other week or so, A&B tell stories and discuss what’s happening in the world of software – including a variety of topics like Agile, Lean, Delivery, DevOps, Data Science, Experimentation, Leadership, and more.
Intro / Outro music on the AB Testing podcast is The One by Rivet. License: CC BY
Intro / Outro music on ABT343 is from https://filmmusic.io
“Werq” by Kevin MacLeod (https://incompetech.com)
License: CC BY (http://creativecommons.org/licenses/by/4.0/)
Support this podcast: https://podcasters.spotify.com/pod/show/abtesting/support
It's been a while, but we're back to disagree about measuring developer productivity, and then talk about the doom and rebirth coming from AI.
Our usual end-of-year episode, where we reflect on 2024 and look forward to 2025.
Once again, we ramble on topics we didn't plan on talking about.
Welp - Alan asked Brent how it was going, he said it was review season, and then 40 minutes of babbling commenced.
We're lazy, so we let ChatGPT kick us off with a mailbag question. Tangents followed.
It's hard to tell what we talked about in this episode. But it involved a sleep-deprived Alan, and Brent, the Cat-Lady.
The wonderful Dagna Bieda joins us to talk about her new book, Brain Refactor.
Buy the book at https://www.amazon.com/Brain-Refactor-Engineering-Fulfillment-Opportunities/dp/B0DB69G798/ and learn more about Dagna at https://www.themindfuldev.com/
We talk about this article - https://stackoverflow.blog/2024/06/10/generative-ai-is-not-going-to-build-your-engineering-team-for-you/ - which is good, but hyperbolic at times.
Just like millions of others, we feel the need to talk about CrowdStrike - just a bit. Then we talk about leadership, engagement, and potential new podcasting ventures.
It's another episode of the AI Testing Podcast, where we discuss the advancements of AI, parallels to nuclear war, and what humans will do in the future.
Links -
OpenAI reportedly nears breakthrough with “reasoning” AI, reveals progress framework
We talk about a lot of stuff - because that's what we do. Somewhere in there, we talk about the overlap between the ways of DevOps and MT, and Brent threatens to rewrite the first principle.
We talk about "Podcast Alan", and a bit about Alan's day job. Once again, going with the minimal editing approach.
It's been a long break. Again. But there are reasons. This time, we take a mailbag question and talk about the challenges of organizational change.
In the aftermath of the episode 200 extravaganza, we talk about Fallout (the show), and take a mailbag question that evolves into yet more discussions on AI.
We recorded this episode live - sort of. We have guests. We have fun. We get bobbleheads. We talk about AI. It's EVERYTHING you could ask for on our 200th episode AND 10th anniversary.
For the first time in ten years, Brent and Alan both listen to the very first episode of AB Testing (so you don't have to).
We talk about fiefdoms - but mostly we end up talking about the growth mindset.
Somewhere in the middle of a whole lot of tangents, we take a walk through James Whittaker's post on The Resurrection of Software Testing - https://medium.com/@docjamesw/the-resurrection-of-software-testing-634423cd8411
We had a great time with a wonderful guest. Big thanks to Kat Obring for hanging out with us.
Sign up for Kat's newsletter here - https://katobring.podia.com/
You can find Kat on linkedin here - https://www.linkedin.com/in/katjaobring/
And even more about Kat and her business here - https://kato-coaching.com/
We start with a bit of a retro of the last few episodes and once again head off into the weeds of AI.
It's the remainder of the interview with Jason. It gets fun, it gets weird, and it gets cool.
We spent a little extra time with Jason and talked A LOT about AI, testing in AI, and whether or not Alan or Brent will have a quality title some day.
We are joined by Bryan Finster, who talks about his journey with CD and Developer Testing, and shares all kinds of fantastic information on delivering quality software.
You can read his 5-minute DevOps blog at blog.bryanfinster.com, see his podcast (vcast?) on YouTube (https://www.youtube.com/@devhopspodcast9535), and see some of his work on CD at minimumcd.org
It's our annual year end episode where we discuss our predictions from a year ago, reflect on the last year, and think a bit about what may happen in 2024.
We talk a bit about our friend ChatGPT, and then talk about heroes in the workplace. There's an obvious crossover to Alan's post on angryweasel.substack.com that's worth checking out.
We're back and recording on a new platform - except when we aren't. In this episode, we talk about...not AI...and instead talk about the Peter Principle and Brent finally reading the Phoenix Project.
We talk with Jason Arbon about - you guessed it - AI. Unfortunately, this recording only contains two thirds of our discussion, since Zencastr decided to do its own thing.
Yeah - we talk about AI, and (eventually) wonder if AI can be a good CEO.
In this episode, Brent and Alan answer a mailbag question, and talk about how and when they have changed between manager and IC (individual contributor - aka non-manager) roles.
So...Perze asked a question that he said was probably a thought experiment. We tried to answer it anyway, but Perze was right - it ended up being a thought experiment after all. The answer is in there somewhere, though.
We discuss an article by the same name, but also dive extensively into the topic, whether it's true or not (or maybe), and tell some stories along the way.
We're back after a mini-break. In this episode, we talk about the concept of Premature Escalation, and whether or not teams are crossing the chasm into Modern Testing. And a bunch of tangents along the way.
We are joined by Brian Pulliam, and we go waaaay deep into StrengthsFinders. It was fascinating and a ton of fun. Brian's website is https://refactorcoaching.com/, and you can find more about StrengthsFinders at https://www.gallup.com/cliftonstrengths/en/strengthsfinder.aspx
We talk a lot (again) about AI. Just like Agile was a disruptor when we began the podcast, AI is a disruptor that you can ignore at your own peril, or embrace and ride the wave. There's some other good stuff here as well.
Apologies for the delay - Anchor failed us for the first time (or maybe the problem was PEBKAC) - but we're posting 12 hours late regardless.
We spend a little time talking about our previous guest (ChatGPT), whether AI is taking our jobs away, return to office, Windows Phone, and /finally/ a bit (abstractly) about Alan's new gig.
We talk almost entirely about AI, with the bulk of the time "interviewing" ChatGPT about the Modern Testing principles. Lots of fun, and lots of fodder for future podcasts.
Beware - there are tangents and rambling. But - if you like listening to us talk about stuff in our lives, and eventually whether the emphasis should be on great testing or great quality, it should be fun.
We start with our (now) regular discussion of AI, and then talk about the deets of Alan leaving his job at Unity (and then more about AI).
It's a pretty rambly episode, but we talk more about AI and ChatGPT (sorry), touch on a mailbag question about migrations, and hit a few dozen other subjects on the way. Hold on tight.
Brent puts on his tinfoil hat and talks about how tech and AI are going to doom us all.
We talk quite a bit about the layoff situation in tech - and then tell some stories about "the old days" at Microsoft.
We are joined by Dagna Bieda from https://www.themindfuldev.com to talk about coaching, burnout, and growth as a software engineer.
We spend the whole episode talking about the reactions to Alan's post describing why most software teams don't need dedicated testers. Bring the Hate!
It's our last podcast of 2022, so we look back on the year, talk about our 2022 predictions, and look forward into 2023. And, once again, we talk quite a bit about ChatGPT.
We try not to talk about https://chat.openai.com/chat the entire episode, but we mostly fail.
We mostly talk about prioritization, but as usual, it's impossible for us to stick to a single topic.
We talk a bit about the bubbles we're in and the bubbles we're not in. And then finally, we talk about how people can move into experimentation and Modern Testing.
We record an entire podcast without getting to the point - or maybe we just created a lot of points. We talk about podcasting, how we learn, and async work - and about 30 other things.
We chat a bit about filling offices, then a lot about A/B testing, and share a few stories along the way.
After a very brief political discussion (apologies), we talk about what we'd do as testers today, and along the way, accidentally reinvent Modern Testing.
We are joined by Matt from Speedscale. We talk a little about his company, but talk a lot about everything else.
Important links
Interspersed with a ridiculous number of tangents, we talk about layoffs.
We are joined by Kirk Marple, CEO of Unstruk. We talk about data mostly, but also baseball (which is also data) - and share a few Microsoft stories.
We talk about a lot of stuff - including posting salary ranges for jobs, SDETs making devs cry, and the pesky Principle 7.
We talk about...a lot of things. From testing podcasts to A/B (with the slash) testing to the idea of Internal Developer Relations. It's a lot, so apologies in advance.
We have Darko from Semaphore - that's the good news. The bad news is that we lost the last 5-ish minutes of the podcast due to me not planning ahead. We will work with Darko to crash an episode of his podcast in the near future and complete the conversation.
We talk about setting goals...or commitments...or /PLANS/ for our team members.
We play the kanban game - but stop, because playing the kanban game (kanbanboardgame.com) on a podcast is super boring. So then we just gave up and started talking and wound up being even more boring. Enjoy!
We talk a lot about Modern Testing Principle number 5 - the one that says only customers can evaluate quality, and why - despite controversy, we believe it is true. We also discuss whether or not Deming would be a fan of the principles.
We are honored to have Anne-Marie Charrett join us this week to talk about quality coaching.
Anne-Marie currently works at CultureAmp, where she puts quality coaching into practice. You can find her on Twitter (@charrett) and on her website. Her ongoing book on quality coaching is available here, with fresh installments monthly.
Kristin Jackvony - author of The Complete Software Tester - (https://www.amazon.com/Complete-Software-Tester-Strategies-High-Quality-ebook/dp/B09NGVVCJ9) joins us for a wonderful discussion on testing, testers, and the value of a music education.
This week, we are joined by Henry Golding, who doesn't call himself a quality coach, but that's what he does anyway.
For more about Henry, check out this preview of Henry's talk at GDC, and his Game Automated Testing Resource Hub
We are joined by Al Shalloway. We talk about Agile mostly, but we finally get to talk about SAFe. You can contact Al at al.shalloway at successengineering dot works
It's our last episode of 2021, so we take some time to reflect on the last year and look forward into the next year.
It's our 150th episode. With absolutely zero celebration or anything else special, we ramble our way through discussions on Customer-Centric Monitoring, Alan's 3 biggest accomplishments in 2021, and our favorite video games.
We talk a little about workplace perks (for reasons unknown), and then talk more about the new fun challenges with Alan's new org.
Eventually, we talk about how we could gear the Modern Testing Principles more towards development teams.
We attempt to talk about coaching and quality coaching...and we do - mostly.
We chat a bit about the rise (?) of Modern Testing practices in the industry, and then look at a few TestSphere cards - and eventually end up talking a whole lot about data.
We kick off the episode with 5 painful minutes of discussion on Brent's hernia surgery before making it into a discussion on fairness, diversity, psychological safety, and other stuff. And we talk about a lot of books.
We talk a bit about Alan's recent AMA on the future of test automation and then about his past with Microsoft Teams.
We dive (wade? splash??) into some listener questions on Theory of Constraints applied to different aspects of software development and Modern Testing.
We're joined by Nick and Todd from https://reflect.run to talk about the future of test automation (sort of) and developers creating UI automation. We loved having these folks on the show, and hope you enjoy the conversation.
In this episode, we discuss deep vs. shallow testing, choosing what to automate, and the future of testing roles (sort of).
The link we discuss is https://www.onetonline.org/link/summary/15-1253.00
We discuss Alan's experiences at TestingFestival, talk a bit about DevOps, and answer (sort of) a mailbag question on BDD.
We talk more about Lencioni's working genius model, and attempt to attach it to a mailbag question about how projects get done in a modern testing world.
We discuss one of the age-old questions of software testing: how does a "manual tester" get into automation?
We dive into our history of personality type tests. Starting with the classic MBTI, we move to Insights, Strengths Finders, and then Working Genius (we also briefly mention our Harry Potter houses).
This time we talk a bit about learning (including an upcoming webinar from Alan - https://www.meetup.com/Pacific-NW-Software-Quality-Conference-PNSQC/events/276239634), and talk about quality vs. testing - which is just another framing of Principle 5 of the Modern Testing Principles.
We talk a bit about personality types (more on this in 136), touch on some thoughts on record & playback automation, and a few other tangential topics.
Books mentioned on this podcast include:
One more for the bookshelf that we didn't call out is Principles of Product Development Flow (Reinertsen)
It's a bit of a meandering discussion this time, but we dance around the role of teams that support other teams' success.
Here's the blog post link Brent refers to in the podcast - Test Doesn't Understand the Customer
We reflect a bit on Alan's recent article (inspired by episode 132), and then talk quite a bit about the correlation of test automation ownership and quality. We hope you enjoy the show.
It's our first episode of 2021 - where we talk (a little) about the US political situation, then discuss mindsets and skillsets - and then wrap things up with a discussion on estimation metrics.