447 episodes • Length: 40 min • Monthly
On The Bike Shed, hosts Joël Quenneville and Stephanie Minn discuss development experiences and challenges at thoughtbot with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire this week.
The podcast The Bike Shed is created by thoughtbot. The podcast and its artwork are embedded on this page using the public podcast feed (RSS).
For developers, impersonation can be a powerful tool, but with great power comes great responsibility. In today’s episode, hosts Stephanie and Joël explore the complexities of implementing impersonation features in software development, giving you the ability to take over someone’s account and act as the user. They delve into the pros and cons of impersonation, from how it can help with debugging and customer support to its prime drawbacks regarding security and auditing issues. Discover why the need for impersonation is often a sign of poor admin tooling, alternative solutions to true impersonation, and the scenarios where impersonation might be the most pragmatic approach. You’ll also learn why they advocate for understanding the root problem and considering alternative solutions before implementing impersonation. Tune in today for a deep dive into impersonation and the best ways to use it (or not use it)!
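As a rough illustration of the kind of feature discussed here, impersonation in a Rails app usually boils down to distinguishing the admin who actually logged in from the user being impersonated, and writing an audit record whenever impersonation starts or ends. The sketch below is a minimal, hypothetical version; the class, method, and session key names are assumptions, not code from the episode.

```ruby
# A minimal sketch of session-based impersonation in a Rails app.
# Method, model, and session key names (true_user, AuditLog, etc.)
# are illustrative assumptions, not code from the episode.
class ApplicationController < ActionController::Base
  # The user the app should behave as (possibly impersonated).
  def current_user
    @current_user ||= impersonated_user || true_user
  end

  # The person who actually logged in; audit logs should reference this user.
  def true_user
    @true_user ||= User.find_by(id: session[:user_id])
  end

  private

  def impersonated_user
    User.find_by(id: session[:impersonated_user_id]) if session[:impersonated_user_id]
  end
end

class ImpersonationsController < ApplicationController
  def create
    # A real version would also authorize that true_user is an admin.
    target = User.find(params[:user_id])
    AuditLog.create!(actor: true_user, target: target, action: "impersonation_started")
    session[:impersonated_user_id] = target.id
    redirect_to root_path
  end

  def destroy
    AuditLog.create!(actor: true_user, action: "impersonation_ended")
    session.delete(:impersonated_user_id)
    redirect_to root_path
  end
end
```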
Key Points From This Episode:
What’s new in Stephanie’s world: how Notion Calendar is helping her manage her schedule.
Joël’s quest to find a health plan: how he used a spreadsheet to compare his options.
A client request to build an impersonation feature, and why Joël has mixed feelings about it.
What an impersonation tool does: it allows you to take over someone’s account.
When it’s useful to offer impersonation as a feature, like for debugging and support.
Potential risks and responsibilities associated with impersonation.
Why the need for impersonation often indicates poor admin tooling.
Technical and security implications of impersonation.
Solutions for logging the audit trail when you’re doing impersonation.
Differentiating between the logged-in user and the user you’re rendering views for.
Building an app that isn’t as tightly coupled to the “current user.”
Suggested alternatives to true impersonation.
The value of cross-functional teams and collaborative problem-solving.
Links Mentioned in Today’s Episode:
Mailtrap
Notion Calendar
'Implementing Impersonation'
Sustainable Web Development with Ruby on Rails
The Bike Shed
Joël Quenneville on LinkedIn
Joël Quenneville on X
Support The Bike Shed
WorkOS
When is it time for a rewrite? How do you justify it? If you’re tasked with one, how do you approach it? In today’s episode of The Bike Shed, we dive into the tough question of software rewrites, sharing firsthand experiences that reveal why these projects are often more complicated and risky than they first appear. We unpack critical factors that make or break a rewrite, from balancing developer satisfaction with business value to managing stakeholder expectations when costs and timelines stretch unexpectedly. You’ll hear about real-world rewrite pitfalls like downtime and reintroducing bugs, as well as strategies for achieving similar improvements through incremental changes or refactoring instead. If you’re a developer or team lead considering a rewrite, this conversation offers a pragmatic perspective that could save your team time, effort, and potential setbacks. Tune in to learn how to make the best call for your codebase and find out when a rewrite might actually be necessary!
Key Points From This Episode:
Accessible selectors versus test IDs: best practices in Capybara and React Testing Library.
Balancing test coverage with pragmatism and risk tolerance with Good Enough Testing.
Software rewrites and the tough questions around deciding when they're necessary.
The importance of prioritizing business value over frustrations with the current codebase.
Drawbacks of rewrites, such as downtime, data loss, and reintroducing past bugs.
Risks of “grass is greener” thinking and using mocked data in demos.
Unrealistic expectations of full feature parity and why an MVP approach is better.
How incremental refactoring can achieve similar goals to a complete rewrite.
The appeal and hubris of a “fresh start” and why it’s much more complex than that.
Balancing innovation with practicality: ways to introduce new elements without rewriting.
An example that illustrates when a rewrite might actually be necessary.
Reasons that early prototypes and test builds are the best candidates for rewrites.
Links Mentioned in Today’s Episode:
Mailtrap
WorkOS
Matt Brictson: ‘Simplify your Capybara selectors’
React Testing Library Guidelines
Capybara Accessibility Selectors
Good Enough Testing
‘RailsConf 2023: The Math Every Programmer Needs by Joël Quenneville’
‘Testing Your Edge Cases’
'Working Iteratively'
'Technical Considerations to Help Scale Your Product'
Dan McKinley: ‘Choose Boring Technology'
The Bike Shed
Joël Quenneville on LinkedIn
Joël Quenneville on X
Support The Bike Shed
Does having smaller, more frequent iterations help to ease your cognitive load? During this episode, we discuss the benefits and challenges of working iteratively and whether or not it can prevent costly errors. You’ll hear about juggling individual pieces effectively, factors that incentivize and de-incentivize working iteratively, and how Joël gauges whether or not a project should be broken up into smaller tasks. It can be hard to adopt small iterations, and this conversation also touches on the idea of ‘good enough code’ and discusses how agility can reduce the cost of making changes. Tuning in, you’ll hear about the challenges of keeping up with changes as they evolve and why it is beneficial to do so. You will also be equipped with a thought experiment involving elephant carpaccio to build your understanding of working iteratively, and more. Thanks for listening.
Key Points From This Episode:
Stephanie shares a recent mishap that happened at work and what she learned from it.
Unpacking pressures and other aspects that may have contributed to the error.
Joël’s recent travels and his fresh appreciation for fall.
The cost of an incident occurring, how this increases, and the role of code review.
Benefits and pitfalls of more regular code review.
Why working with smaller chunks of work is helpful for Joël’s focus.
Juggling individual pieces effectively.
Factors that de-incentivize working iteratively such as waiting on 24-hour quality control processes.
How working iteratively can facilitate better communication.
Why Joël feels that work that spans a few days should be broken up into smaller chunks.
The idea of ‘good enough code’.
How agility can reduce the cost of making changes.
Using the elephant carpaccio exercise to bolster your understanding of working iteratively.
The challenge of keeping up with changes as they evolve and why it is beneficial to do so.
Involvement from the team and the capacity to change course.
Links Mentioned in Today’s Episode:
WorkOS
Working Incrementally
Working Iteratively
Elephant Carpaccio Exercise
The Bike Shed
Joël Quenneville on LinkedIn
Joël Quenneville on X
Support The Bike Shed
What’s the difference between solving problems and recognizing patterns, and why does it matter for developers? In this episode, Stephanie and Joël discuss transitioning from collecting solutions to identifying patterns applicable to broader contexts in software development. They explore the role of heuristics, common misconceptions among junior and intermediate developers, and strategies for leveling up from a solution-focused mindset to thinking in patterns. They also discuss their experiences of moving through this transition during their careers and share advice for upcoming software developers to navigate it successfully. They explore how learning abstraction, engaging in code reviews, and developing a strong intuition for code quality help developers grow. Uncover the issue of over-applying patterns and gain insights into the benefits of broader, reusable approaches in code development. Join us to discover how to build your own set of coding heuristics, the pitfalls of pattern misuse, and how to become a more thoughtful developer. Tune in now!
Key Points From This Episode:
Stephanie unpacks the differences between patterns and solutions.
The role of software development experience in recognizing patterns.
Why transitioning from solving problems to recognizing patterns is crucial.
Joël and Stephanie talk about the challenges of learning abstraction.
Hear pragmatic strategies for implementing patterns effectively.
How junior developers can build their own set of heuristics for code quality.
Discover valuable tools and techniques to identify patterns in your work.
Find out about approaches to documenting, learning, and sharing patterns.
Gain insights into the process of refactoring a solution into a pattern.
Outlining the common mistakes developers make and the pitfalls to avoid.
Steps for navigating disagreements and feedback in a team environment.
Links Mentioned in Today’s Episode:
WorkOS
RubyConf 2021 - The Intro to Abstraction I Wish I'd Received
'Ruby Science'
Refactoring.Guru
Thoughtbot code review guide
The Bike Shed
Joël Quenneville on LinkedIn
Joël Quenneville on X
Support The Bike Shed
Learning from other developers is an important ingredient to your success. During this episode, Joël Quenneville is joined by Stefanni Brasil, Senior Developer at thoughtbot and core maintainer of faker-ruby. To open our conversation, she shares the details of her experience at the Rails World conference in Toronto and the projects she enjoyed seeing most. Next, we explore the challenge of Mac versus Windows and how these operating systems interact with Ruby on Rails, and dive into Stefanni’s involvement in open source at thoughtbot and beyond: what she loves about it, and how she is working to educate others and expand the current limitations that people experience. This episode is also dedicated to the upcoming Open Source Summit that Stefanni is planning on 25 October 2024, what to expect, and how you can get involved. Thanks for listening!
Key Points From This Episode:
Introducing and catching up with thoughtbot Senior Developer and maintainer of faker-ruby, Stefanni Brasil.
Her experience at the Rails World conference in Toronto and the projects she found most inspiring.
Why accessibility remains a key topic.
How Ruby on Rails translates on Mac and Windows.
Stefanni’s involvement in Open Source and why she enjoys it.
Her experience as core maintainer at faker-ruby.
Ideas she is exploring around Jeremy Evans’ book Polished Ruby Programming and the direction of Faker.
Involvement in Thoughtbot’s Open Source and how it drew her in initially.
The coaching series on Open Source that she participated in earlier this year.
What motivated her to create a public Google doc on Open Source maintenance.
An upcoming event: the Open Source Summit.
The time commitment expected from attendees.
How Stefanni intends to interact with guests and the talk that she will give at the event.
Why everyone is welcome to engage at any level they are comfortable with.
Links Mentioned in Today’s Episode:
Stefanni Brasil
Stefanni Brasil on X
Thoughtbot Open Summit
Open Source Issues doc
Open Source at Thoughtbot
Polished Ruby Programming
Faker Gem
Rails World
The Bike Shed
Joël Quenneville on LinkedIn
What is a program? Your answer to this question will determine the paradigm through which you view programming. During this episode, you’ll come to understand how things change once you develop an awareness of your paradigm. To kick off this episode, Stephanie shares key insights she took from Planet Argon’s 2024 Ruby on Rails survey and dives deeper into her history with Ruby on Rails. Next, we dive into the definition of a paradigm and unpack three different paradigms you might hold as a developer: procedural, object-oriented, and functional, considering how each of these impacts the way you might approach your work and what you can learn from the ones that are less familiar to you. Joël describes his scripting style and evaluates the concept of pure functions and their place in development, and we close by digging deeper into how your paradigm might impact the code that you write. Tune in to hear all this and more.
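To make the three paradigms concrete, here is a small, invented Ruby sketch (not from the episode) that computes the same result in a procedural style, an object-oriented style, and a functional pipeline of data transformations.

```ruby
# The same small task (summing discounted prices) under three paradigms.
# The example is invented for illustration and is not from the episode.

prices = [100, 250, 75]

# Procedural: a series of instructions mutating local state step by step.
total = 0
prices.each do |price|
  total += price * 0.9
end

# Object-oriented: behavior emerges from an object you send messages to.
class Cart
  def initialize(prices)
    @prices = prices
  end

  def total
    @prices.sum { |price| price * 0.9 }
  end
end

# Functional: a pipeline of data transformations with no mutation.
functional_total = prices.map { |price| price * 0.9 }.sum

puts total                  # => 382.5
puts Cart.new(prices).total # => 382.5
puts functional_total       # => 382.5
```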
Key Points From This Episode:
The EPI feature that Joël has started to build out for his client.
Why Stephanie is excited about the results of Planet Argon’s 2024 Ruby on Rails community survey.
What the procedural paradigm is: it envisions a program as a series of instructions to a computer.
Defining the object-oriented paradigm: it envisions a program as the behavior that emerges from objects talking to each other.
How a functional paradigm envisions a program as a series of data transformations.
Alan Turing and Alonzo Church’s approach to understanding this.
How a lot of the foundations of computer science came to be built before we had computers.
Using Ruby to make judgments and assessing whether or not this is a procedural habit.
Why Joël describes his scripting style as being very procedural.
Unpacking the meaning of functional programming.
Evaluating the concept of pure functions.
Considering how your paradigm may impact the Ruby code that you write.
Links Mentioned in Today’s Episode:
2024 Ruby on Rails Community Survey
Church-Turing Thesis
Dynamic type systems are not inherently more open
What is Functional Programming?
Blocks as an abstraction vs for loops
Functional core imperative shell
Testing objects with a functional mindset
The Bike Shed
Joël Quenneville on LinkedIn
Support The Bike Shed
For a long time, Programming Ruby was the authority in the Ruby world. Now, a much-needed update has been published. During this conversation, we are joined by Noel Rappin, who shares how his frustration at the idea of static typing in Ruby motivated him to investigate why he felt this way, and how he published his findings in The Pickaxe Book. We discuss how this book differs from previous material he has published, explore a recent blog post series that explored the idea of failing fast, and address the widespread opinion that developers should take a simpler approach that is more accessible. Noel also explores the responsibility of understanding how readers consume material and the importance of providing thorough context as an author, how Programming Ruby became the most significant programming reference, and the surprising journey that led Noel to realize he was able to provide an updated version of the theory in it. Next, we dive into some of the more opinionated blog posts Noel has posted and the harshest feedback he has received in response to them. You’ll also hear about his research and learning during the act of writing the book. Join us today to hear all this and more.
Key Points From This Episode:
Noel Rappin’s recently published work, The Pickaxe Book, on current versions of Ruby.
The inception of the book during discussions about the collision of Sorbet and Ruby.
How his background made him comfortable with the idea that there are no static types.
A recent blog post series and how it answered a question about failing fast.
Considering whether developers pursue simpler things that are more accessible to a wider range of coders.
The problem of thoroughness and longevity in writing instructional material.
Developing awareness of how readers consume and contextualize theory and opinion.
How Programming Ruby became the most significant programming reference.
Noel’s updated version of this material in his latest book.
His blog posts on real-life applications of Ruby and the feedback he receives.
How he goes about framing blog posts as opinion or instruction.
Determining what community consensus is.
The bewilderment that often accompanies onboarding sessions.
Research and learning leading up to writing and publishing the book.
Feedback and reviews on the book.
Links Mentioned in Today’s Episode:
Noel Rappin
Noel Rappin on X
Programming Ruby
How Not to Use Static Typing in Ruby
David Copeland Talk
Better Know a Ruby Thing
How To Manage Duplicate Test Setup, Or Can I Interest You in Weird RSpec?
Better Know a Ruby Thing: On The Use of Private Methods
Standardrb
Rails Test Prescriptions
Programming Ruby: A Pragmatic Programmer’s Guide
The Bike Shed
Joël Quenneville on LinkedIn
Support The Bike Shed
When does it make sense to step away from Rails conventions? What are the limits of convention over configuration? While Rails conventions provide a solid foundation, there are times when customization is necessary to meet specific project needs. In this episode, Joël and Stephanie dive into the tradeoffs of breaking away from Rails defaults. They explore the limits of convention over configuration and share their experiences with customizing beyond the typical Rails setup. Joël offers insights from a recent project where the client opted for all dry-rb objects, and they unpack the benefits and potential challenges of this approach. Stephanie talks about why people tend to shy away from certain Ruby features and her lessons regarding leveraging callbacks for code development. Explore different testing frameworks, the situations when following Ruby defaults is better, the benefits of the ActiveModel ecosystem, and more! Whether you are a Rails purist or looking to bend the rules, this episode will help you understand the pros and cons of stepping outside the Ruby on Rails box. Don’t miss it!
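Since the ActiveModel ecosystem comes up as one of the benefits of staying close to Rails, here is a minimal sketch of a plain Ruby object that opts into Rails validations and type casting through ActiveModel; the SignupForm class and its attributes are invented for illustration.

```ruby
require "active_model"

# A plain Ruby object that opts into Rails conventions (validations,
# type casting, form compatibility) without a database table.
# The SignupForm class and its attributes are invented for illustration.
class SignupForm
  include ActiveModel::Model
  include ActiveModel::Attributes

  attribute :email, :string
  attribute :terms_accepted, :boolean, default: false

  validates :email, presence: true
  validates :terms_accepted, acceptance: true

  def save
    return false unless valid?

    # Hand off to the real persistence layer here.
    true
  end
end

form = SignupForm.new(email: "user@example.com", terms_accepted: "1")
form.valid?               # => true ("1" is cast to a boolean)
form.errors.full_messages # => []
```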
Key Points From This Episode:
Joël shares details about a large-scale refactoring initiative he has been working on.
Stephanie’s recent legacy-code production problem and lessons from her experience.
What Joël would have done differently when building his refactoring initiative.
The problems of renaming background applications during code development.
Why the open-closed principle is valuable for making changes to classes in a system.
Reasons that a migration strategy is vital for navigating new and legacy code.
Explore approaches for overcoming synchronization issues between systems.
Learn about the concept of connascence for coupling systems together.
Considerations for using asynchronous tools with a connascence approach.
Practical ways to maintain naming consistency during code development.
The importance of differentiating between web and business-logic layers.
Situations where relying on callbacks for connascence becomes problematic.
Other issues that callback problems can reveal during code development.
Joël unpacks the scenarios where he deviates from the Ruby on Rails standard.
Frameworks for testing code and final takeaways from Joël and Stephanie.
Links Mentioned in Today’s Episode:
'Refactoring Legacy Code with the Strangler Fig Pattern'
Connascence of Name (CoN)
ActiveModel docs
GitHub | activemodel
'Vanilla Rails is plenty'
GitHub | minitest
GitHub | test-unit
Episode 435: Cohesive Code with Jared Norman
Ruby on Rails
The Bike Shed
Joël Quenneville on LinkedIn
Support The Bike Shed
How can asynchronous programming transform your Ruby on Rails applications? Today, Stephanie sits down with Hello Weather co-creator Trevor Turk to unpack asynchronous programming in Ruby on Rails. Trevor Turk is a seasoned software developer known for his work on Hello Weather, a minimalist weather app that delivers essential weather data quickly and precisely. He’s also the creator of Weather Machine, an advanced weather data platform designed to serve reliable and highly accurate forecasts via API. With a background that includes work at innovative tech companies, Trevor brings years of experience in developing intuitive, user-friendly digital tools. Trevor talks about the focus of his API work, the complexity of web-based apps, and what makes Hello Weather unique. He explains the fundamentals of asynchronous programming within the Ruby on Rails framework and why it is an approach all programmers should consider. Explore the nuances of programming for different data sources, how he leverages fibers and threads for the Hello Weather platform, and why asynchronous programming is not a silver bullet for application development. Discover how to start using asynchronous methods, the various asynchronous tools available in Ruby, and why experimenting with concurrent programming is essential. Join us to gain insights into why including asynchronous tools is vital for the Ruby on Rails ecosystem, improving platforms through open-source development, how to help improve the adoption of asynchronous tools in Ruby, and more. Tune in now!
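As a rough illustration of the fiber-based style discussed in this episode, the sketch below uses the async gem (linked in the episode notes) and assumes Ruby 3.x, where the gem's fiber scheduler lets blocking I/O yield to other tasks; the URLs and helper method are placeholders.

```ruby
require "async"
require "net/http"

# A minimal concurrency sketch with the async gem (Ruby 3.x).
# Each task runs on its own fiber; when a request blocks on I/O the
# fiber scheduler switches to another task, so the calls overlap
# without spawning threads. The URLs below are placeholders.
def fetch(url)
  Net::HTTP.get(URI(url))
end

Async do |task|
  requests = %w[nyc chicago quebec].map do |city|
    task.async { fetch("https://example.com/forecast/#{city}") }
  end

  # .wait blocks the current fiber until that task finishes.
  requests.each { |request| puts request.wait.bytesize }
end
```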
Key Points From This Episode:
Introduction to Trevor Turk and his background in Ruby on Rails.
Details about his companies Hello Weather and Weather Machine.
The innovative features that the Hello Weather platform offers.
Hear how Hello Weather transitioned from a web-based app to a native application.
Why he needed to alter his programming approach to scale the company.
How he came across the concept of asynchronous programming.
Discover how using fibers is different from using threads in Ruby.
Find out about the different use cases of asynchronous programming.
Learn about the benefits of implementing concurrent programming.
Trevor shares the challenges of working with different versions of Ruby.
His role in enhancing asynchronous methods within the Ruby framework.
Common misconceptions of working with Ruby on Rails.
Final takeaways for those interested in asynchronous programming.
Links Mentioned in Today’s Episode:
Trevor Turk on LinkedIn
Trevor Turk on X
Trevor Turk on Threads
Hello Weather
Weather Machine
GitHub | async gem
GitHub | falcon gem
'Async Ruby on Rails'
load_async
Episode 437: Contributing to Open Source in the Midst of Daily Work with Steve Polito
GitHub | Action Cable server adapter
ActiveRecord connection checkout caching
Ruby on Rails
The Bike Shed
Joël Quenneville on LinkedIn
Support The Bike Shed
Writing abstractions in tests can be surprisingly similar to storytelling. The most masterful stories are those where the author has stripped away all of the extra information, and given you just enough knowledge to be immersed and aware of what is going on. But striking that balance can be tricky, both in storytelling and abstractions in tests. Too much information and you risk overwhelming the reader. Too little and they won’t understand why things are operating the way they are. Today, Stephanie and Joël get into some of the more controversial practices around testing, why people use them, and how to strike the right balance with your information. They discuss the most common motivations for introducing abstractions, from improved readability to simplifying the test’s purpose and the types of tests where they are most likely to introduce abstractions. Our hosts also reflect on how they feel about different abstractions in tests – like custom matchers and shared examples – outlining when they reach for them, and the tradeoffs and benefits that come with each. To learn more about how to find the perfect level of abstraction, be sure to tune in!
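For a concrete taste of one abstraction covered here, below is a minimal custom RSpec matcher with a specialized failure message; the matcher name and the order/warehouse domain are invented for illustration.

```ruby
# A custom RSpec matcher with a specialized failure message, typically
# placed in spec/support. The matcher name and the order/warehouse
# domain are invented for illustration.
RSpec::Matchers.define :be_fulfillable_by do |warehouse|
  match do |order|
    order.items.all? { |item| warehouse.stock_for(item) >= item.quantity }
  end

  failure_message do |order|
    missing = order.items.reject { |item| warehouse.stock_for(item) >= item.quantity }
    "expected order ##{order.id} to be fulfillable by #{warehouse.name}, " \
      "but these items are short on stock: #{missing.map(&:sku).join(', ')}"
  end
end

# Usage in a spec:
#   expect(order).to be_fulfillable_by(chicago_warehouse)
```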
Key Points From This Episode:
What’s new in Joël’s world; mocking out screens for processes or a new bit of UI.
The new tool Stephanie’s using for reading on the web: Reader by Readwise.
Today’s topic: controversial practices around testing.
How Stephanie and Joël feel about looping through arrays and having an it block for each element.
The most common motivations for introducing abstractions or helper methods into your tests.
Pros and cons of factories as abstractions in testing.
Types of tests where Joël and Stephanie are more likely to introduce abstractions.
Using page objects in system tests to improve user experience.
Finding the balance between too little and too much information with abstraction in testing.
Why Stephanie has been enjoying RSpec’s fancier matchers.
Top uses of custom matchers, especially for specialized error messaging.
Why Stephanie prefers custom matchers over shared examples.
Using helper methods as a lighter version of abstraction.
Differences and similarities between abstractions in tests versus application code.
A reminder to keep your goals in mind when using abstraction.
Links Mentioned in Today’s Episode:
Are you passionate about open source but struggling to find time amidst your daily work? Today on the podcast, Joël Quenneville sits down with Steve Polito to discuss practical strategies for making meaningful contributions to the open-source community, even when your schedule is packed. Steve is a developer with extensive experience in the open-source world. He’s known for his ability to integrate open-source contributions into his daily workflow, all while maintaining high productivity in his professional role. In our conversation, we explore balancing professional responsibilities with open-source contributions. Steve walks us through his process, from the importance of keeping notes to leveraging Rails issue templates. Discover strategies for contributing to open-source work during work hours, the benefits of utilizing existing processes, and why extending the success of your work to the larger developer community is essential. Join us to hear recommendations for handling pull requests with Ruby on Rails, tips for using reproduction scripts, why you should release reports early and often, and much more. Tune in and learn how to seamlessly integrate open-source contributions into your daily workflow with Steve Polito!
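One technique mentioned, reproduction scripts, follows the shape of the executable bug report templates the Rails project maintains: a single file anyone can run to see the behavior in question. The sketch below is a hypothetical example in that style; the Post model and the assertion are placeholders.

```ruby
# frozen_string_literal: true

# A self-contained reproduction script in the style of the Rails bug
# report templates: anyone can run it with `ruby repro.rb` and see the
# behavior in question. The Post model and the assertion are placeholders.
require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "rails"
  gem "sqlite3"
end

require "active_record"
require "minitest/autorun"
require "logger"

ActiveRecord::Base.establish_connection(adapter: "sqlite3", database: ":memory:")
ActiveRecord::Base.logger = Logger.new($stdout)

ActiveRecord::Schema.define do
  create_table :posts, force: true do |t|
    t.string :title
  end
end

class Post < ActiveRecord::Base
end

class BugTest < Minitest::Test
  def test_shows_the_behavior_being_reported
    post = Post.create!(title: "Hello")
    assert_equal "Hello", post.reload.title
  end
end
```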
Key Points From This Episode:
Joël and Steve catch up and share what they are currently working on.
Transitioning synchronous processing in a web request to the background.
An update on Steve’s “building in public” approach and its reception at thoughtbot.
How Steve chooses to document and track his development process.
Find out how he uses templates to enhance and increase productivity.
Why open-source work does not need to be done during your free time.
Ways you can contribute to open-source projects during normal work hours.
The benefits of sharing troubleshooting solutions with the open-source community.
Pull request lessons from his time working with Ruby on Rails.
Reasons why issues have a lower barrier to entry with Ruby on Rails.
His unique approach of using issues, pull requests, and suspenders.
Identifying aspects of everyday work that are suitable for open-source projects.
Links Mentioned in Today’s Episode:
How can we optimize our time and environment to do our best work as developers? In today’s episode, we are joined by Stephanie Viccari, former co-host of The Bike Shed and Senior Developer at thoughtbot, to unpack the steps for creating work conditions that enhance productivity. In this conversation, we delve into her unique communication style and approach to optimizing productivity within a team. She explains why she decided to hang up her consulting hat and join the product team at Cisco Meraki, her new role there, and how her consulting skills benefit her new position. Tuning in, you’ll discover the key to empathetic communication, how to unblock yourself, tips to help you navigate different communication styles, and why you should advocate for your needs. Stephanie also shares strategies for effective communication and recommendations for managing ‘deep work’ when your time is limited. Gain valuable insights into how to uncover what makes your skillset unique, why it takes a team to manage complex software, benchmarking performance, keeping motivated during stressful times, and more. To learn how to create the conditions for your best work and unlock your full potential as a developer, don’t miss this episode with Stephanie Viccari!
Key Points From This Episode:
Catch up with Stephanie: what she’s been up to since leaving thoughtbot.
How she mastered optimizing workflows and enhancing productivity.
Similarities and differences between working as a consultant versus on a product team.
Ways Stephanie’s mindset shifted from individual thinking to team-oriented strategies.
Nuances of advocating for changes as a consultant versus within a product team.
What software developers need to achieve their best work.
The role of trust between managers and developers in effective problem-solving.
Tips and recommendations for identifying and delivering your best work.
Practical advice for doing your best work, even when you feel demotivated.
Why it's important not to steal from tomorrow's productivity.
Links Mentioned in Today’s Episode:
How easy is it for a layperson to understand your systems? Jared Norman is a software consultant, speaker, and host of the Dead Code Podcast who specializes in building e-commerce applications in Ruby on Rails. This episode follows two recent talks at RailsConf and covers a theme that emerged from both of them: coupling and cohesion. Tuning in, you’ll gain insights on how to create more cohesive components to allow for change and improve your understanding of value objects, systems, and more. You’ll also hear about navigating the complexity of domain-driven design and learn how to gauge if your code is easy to understand through a simple rule of thumb. We discuss what it might look like to improve the cohesion of individual objects, identify your systems’ seams to create simplicity, and the liminal space between inheritance and composition and the role of decorators in moving through it. Join us today to hear all this and more!
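As a small illustration of the value object idea touched on here, the sketch below defines a Money object whose identity is its data; it assumes Ruby 3.2 or newer for the built-in Data class, and the example is invented rather than taken from the talk.

```ruby
# A small value object: equality is defined by its data, and the behavior
# that belongs with that data lives on the object, which keeps the
# component cohesive. Requires Ruby 3.2+ for Data; the Money example is
# invented rather than taken from the talk.
Money = Data.define(:cents, :currency) do
  def +(other)
    raise ArgumentError, "currency mismatch" unless currency == other.currency

    with(cents: cents + other.cents)
  end

  def to_s
    format("%.2f %s", cents / 100.0, currency)
  end
end

price   = Money.new(cents: 1_000, currency: "USD")
postage = Money.new(cents: 350, currency: "USD")

price == Money.new(cents: 1_000, currency: "USD") # => true, compared by value
(price + postage).to_s                            # => "13.50 USD"
```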
Key Points From This Episode:
Introducing Jared Norman, recent RailsConf speaker and Ruby on Rails specialist.
Jared’s interests outside of coding: cycling.
Themes that emerged from Jared and Stephanie’s talks: coupling and cohesion.
A rule of thumb for achieving high cohesion.
How value objects tie into the idea of cohesion.
Creating more cohesive components in order to have code and systems that are easier to change.
How relationships between objects increase cohesion, and how complex nestings of objects can hinder this.
Rearranging systems in order to find seams and create cohesion.
Simplifying code in order to facilitate it working independently to support functionality.
Improving systems by identifying opportunities for decoupling and other relationships.
Inheritance, composition, and decorators and the liminal space between.
The complexity of domain-driven design.
A rule that indicates when a system is easy to understand.
Links Mentioned in Today’s Episode:
It's Calls for Proposals (CFP) season, and in the process of helping our friends and colleagues flesh out their CFPs, we came up with a few questions to help them frame their proposals for success. After learning about the importance of finding your audience and angle of approach for your CFP, we dive into today's main topic – our Git and GitHub workflows. Joël and Stephanie walk us through their current workflows before exploring the differences between main branch and feature branch commits. Then, we explore editing commits and why it's okay to make mistakes, commit messages versus GitHub pull requests (PR), what you need to know if you're new to Git, and what you need to understand about PR sizes and Git merge strategies. To end, Joël shares the commit messages that satisfy him the most, and we discover how to make one's life easier when reviewing PRs.
Key Points From This Episode:
Our CFP framework of questions to help you build a winning proposal.
Why it's important to understand who your audience is and who you're speaking to.
Ascertaining your angle of approach - how will you tell your story?
The ins and outs of Stephanie's current work life.
How discipline and particularly, self-discipline relate to our Git and GitHub workflows.
Understanding Joël and Stephanie's workflows - how they're similar and how they differ.
The differences between main branch and feature branch commits.
Editing commits and commit history, and why it's okay to make mistakes.
Commit messages versus GitHub pull requests (PR).
Some advice and strategies for those who are new to Git.
Discussing Git merge strategies, PR sizes, and online changes.
Joël details the types of commit messages that he finds most satisfying.
How to make your life easier when reviewing PRs.
Links Mentioned in Today’s Episode:
RubyConf Rubric
'Working Iteratively’
Good Commit Messages
Shotgun Surgery
'Episode 401: Making the Right Thing Easy’
Joël Quenneville on X
Joël Quenneville on LinkedIn
Support The Bike Shed
Have you ever wondered how improvisation can revolutionize coding? In today’s episode, Stephanie sits down with Kasper Timm Hansen to discuss his innovative “riffing” approach to code development. Kasper is a long-time Ruby developer and former member of the Rails core team. He focuses on Ruby and domain modeling, developing various Ruby gems, and providing consulting services in the developer space. He has become renowned for his “riffing” approach to software development, particularly in the Ruby on Rails framework. In our conversation, we delve into his unique approach to coding, how it differs from traditional methods, and the benefits of improvisation to code development. Discover the “feeling” part of riffing, the steps to uncovering relationships between models, and why it is okay not to know how to do something. Explore how riffing enhances collaboration, improves communication with and between teams, identifies alternative code, why “clever code” does not make for good solutions, and much more! Tune in to learn how to take your coding skills to the next level and uncover the magic of riffing with Kasper Timm Hansen!
Key Points From This Episode:
Introduction to Kasper, his background in Ruby, and experience as a consultant.
An overview of his RailsConf 2024 presentation on domain modeling.
His motivation behind his presentation and the overall reception of the concept.
Unpack the concept of “riffing” with code as a developer.
Insights into his methodology and how it differs from traditional approaches.
Examples of “riffing" and how it benefits the development process.
How he determines the best code to implement during his process.
Kasper shares how he frames problems and builds solutions.
Ways riffing highlights gaps in skillsets early in the development process.
Hear about the various ways riffing fosters and improves collaboration.
Unpack how riffing can help developers communicate more effectively.
Balancing the demands of code review with the riffing approach.
Final takeaways for listeners and how to contact Kasper to begin riffing!
Links Mentioned in Today’s Episode:
Some of Kasper's open source work:
The term ‘nil’ refers to the absence of value, but we often imbue it with much more meaning than just that. Today, hosts Joël and Stephanie discuss the various ways we tend to project extra semantics onto nil and the implications of this before unpacking potential alternatives and trade-offs.
Joël and Stephanie highlight some of the key ways programmers project additional meaning onto nil (and why), like when it’s used to create a guest session, and how this can lead to bugs, confusion, and poor user experiences. They discuss solutions to this problem, like introducing objects for improved readability, before taking a closer look at the implications of excessive guard clauses in code.
Our hosts also explore the three-state Boolean problem, illustrating the pitfalls of using nullable Booleans, and why you should use default values in your database. Joël then shares insights from the Elm community and how it encourages rigorous checks and structured data modeling to manage nil values effectively. They advocate for using nil only to represent truly optional data, cautioning against overloading nil with additional meanings that can compromise code clarity and reliability. Joël also shares a fun example of modeling a card deck, explaining why you might be tempted to add extra semantics onto nil, and why the joker always inevitably ends up causing chaos!
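As a rough sketch of the alternatives discussed, the snippet below replaces a nil current user with an explicit guest null object and sidesteps the three-state Boolean with a non-nullable column default; all names are invented for illustration.

```ruby
# Instead of letting current_user be nil and guarding everywhere, return a
# null object that represents a guest session explicitly. The class and
# method names are invented for illustration (assumes a Rails controller).
class GuestUser
  def logged_in? = false
  def name = "Guest"
  def admin? = false
end

def current_user
  @current_user ||= User.find_by(id: session[:user_id]) || GuestUser.new
end
# Views and policies can call current_user.name or current_user.admin?
# without nil checks, and nil no longer doubles as "guest".

# Avoid the three-state Boolean (true / false / nil) by making the column
# non-nullable with a default at the database level.
class AddNewsletterSubscriberToUsers < ActiveRecord::Migration[7.1]
  def change
    add_column :users, :newsletter_subscriber, :boolean, null: false, default: false
  end
end
```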
Stephanie shares her newfound interest in naming conventions, highlighting a resource called "Classnames" that provides valuable names for programming and design. Joël, in turn, talks about using AI to generate names for D&D characters, emphasizing how AI can help provide inspiration and reasoning behind name suggestions. Then, they shift to Joël's interest in Roman history, where he discusses a blog by a Roman historian that explores distinctions between state and non-state peoples in the ancient Mediterranean.
Together, the hosts delve into the importance of asking questions as consultants and developers to understand workflows, question assumptions, and build trust for better onboarding. Stephanie categorizes questions by engagement stages and their social and technical aspects, while Joël highlights how questioning reveals implicit assumptions and speeds up learning. They stress maintaining a curious mindset, using questions during PR reviews, and working with junior developers to foster collaboration. They conclude with advice on documenting answers and using questions for continuous improvement and effective decision-making in development teams.
Transcript:
JOËL: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, if it has not been clear about just kind of the things I'm mentioning on the podcast the past few weeks, I've been obsessed with naming things lately [chuckles] and just thinking about how to name things, and, yeah, just really excited about...or even just having fun with that more than I used to be as a dev. And I found a really cool resource called "Classnames." Well, it's like just a little website that a designer and developer shared from kind of as an offshoot from his personal website. I'll link it in the show notes.
But it's basically just a list of common names that are very useful for programming or even design. It's just to help you find some inspiration when you're stuck trying to find a name for something. And they're general or abstract enough that, you know, it's almost like kind of like a design pattern but a naming pattern [laughs], I suppose.
JOËL: Ooh.
STEPHANIE: Yeah, right? And so, there's different categories. Like, here's a bunch of words that kind of describe collections. So, if you need to find the name for a containment or a group of things, here's a bunch of kind of words in the English language that might be inspiring. And then, there's also other categories like music for describing kind of the pace or arrangement of things. Fashion, words from fashion can describe, like, the size of things. You know, we talk about T-shirt sizes when we are estimating work.
And yeah, I thought it was really cool that there's both things that draw on, you know, domains that most people know in real life, and then also things that are a little more abstract. But yeah, "Classnames" by Paul Robert Lloyd — that's been a fun little resource for me lately.
JOËL: Very cool. Have you ever played around at all with using AI to help you come up with the naming?
STEPHANIE: I have not. But I know that you and other people in my world have been enjoying using AI for inspiration when they feel a little bit stuck on something and kind of asking like, "Oh, like, how could I name something that is, like, a group of things?" or, you know, a prompt like that. I suspect that that would also be very helpful.
JOËL: I've been having fun using that to help me come up with good names for D&D characters, and sometimes they're a little bit on the nose. But if I sort of describe my character, and what's their vibe, and a little bit of, like, what they do and their background, and, like, I've built this whole, like, persona, and then, I just ask the AI, "Hey, what might be some good names for this?" And the AI will give me a bunch of names along with some reasoning for why they think that would be a good match.
So, it might be like, oh, you know, the person's name is, I don't know, Starfighter because it evokes their connection to the night sky or whatever because that was a thing that I put in the background. And so, it's really interesting. And sometimes they're, like, just a little too obvious. Like, you don't want, you know, Joe Fighter because he's a fighter.
STEPHANIE: And his name is Joe [laughs].
JOËL: Yeah, but some of them are pretty good.
STEPHANIE: Cool. Joël, what's new in your world?
JOËL: I guess in this episode of how often does Joël think about the Roman Empire...
STEPHANIE: Oh my gosh [laughs].
JOËL: Yes [laughs].
STEPHANIE: Spoiler: it's every day [laughs].
JOËL: Whaaat? There's a blog that I enjoy reading from a Roman historian. It's called "A Collection of Unmitigated Pedantry", acoup.blog. He's recently been doing an article series on not the Romans, but rather some of these different societies that are around them, and talking a little bit about a distinction that he calls sort of non-state peoples versus states in the ancient Mediterranean. And what exactly is that distinction? Why does it matter? And those are terms I've heard thrown around, but I've never really, like, understood them. And so, he's, like, digging into a thing that I've had a question about for a while that I've been really appreciating.
STEPHANIE: Can you give, like, the reader's digest for me?
JOËL: For him, it's about who has the ability to wield violence legitimately. In a state, sort of the state has a monopoly on violence. Whereas in non-state organizations, oftentimes, it's much more personal, so you might have very different sort of nobles or big men who are able to raise, let's say, private armies and wage private war on each other, and that's not seen as, like, some, like, big breakdown of society. It's a legitimate use of force. It's just accepted that that's how society runs.
As opposed to in a state, if a, you know, wealthy person decided to raise a private army, that would be seen as a big problem, and the state would either try to put you down or, like, more generally, society would, like, see you as having sort of crossed a line you shouldn't have crossed.
STEPHANIE: Hmm, cool. I've been reading a lot of medieval fantasy lately, so this is kind of tickling my brain in that way when I think about, like, what drives different characters to do things, and kind of what the consequences of those things are.
JOËL: Right. I think it would be really fascinating to sort of project this framework forwards and look at the European medieval period through that lens. It seems to me that, at least from a basic understanding, that the sort of feudal system seems to be very much in that sort of non-state category. So, I'd be really interested to see sort of a deeper analysis of that. And, you know, maybe he'll do an addendum to this series. Right now, he's mostly looking at the Gauls, the Celtiberians, and the Germanic tribes during the period of the Roman Republic.
STEPHANIE: Cool. Okay. Well, I also await the day when you somehow figure how this relates to software [laughter] and inevitably make some mind-blowing connection and do a talk about it [laughs].
JOËL: I mean, theming is always fun. There's a talk that I saw years ago at Strange Loop that was looking at the defense policy of the Roman Emperor Diocletian and the Roman Emperor Constantine, and the ways that they sort of defended the borders of the empire and how they're very different, and then related it to how you might handle network security.
STEPHANIE: Whaat?
JOËL: And sort of like a, hey, are we using more of a Diocletian approach here, or are we using more of a Constantine approach here? And all of a sudden, just, like, having those labels to put on there and those stories that went with it made, like, what could be a really, like, dry security talk into something that I still remember 10 years later.
STEPHANIE: Yeah. Yeah. We love stories. They're memorable.
JOËL: So, I'll make sure to link that in the show notes.
STEPHANIE: Very cool.
JOËL: We've been talking a lot recently about my personal note system, where I keep a bunch of, like, small atomic notes that are all usually based around a single thesis statement. And I was going through that recently, and I found one that was kind of a little bit juicy. So, the thesis is that consultants are professional question-askers. And I'm curious, as a consultant yourself, how do you feel about that idea?
STEPHANIE: Well, my first thought would be, how do I get paid to only ask in questions [laughs] or how to communicate in questions and not do anything else [laughs]? It's almost like I'm sure that there is some, like, fantasy character, you know, where it's like, there's some villain or just obstacle where you have this monster character who only talks in questions. And it's like a riddle that you have to solve [laughs] in order to get past.
JOËL: I think it's called a three-year-old.
STEPHANIE: Wow. Okay. Maybe a three-year-old can do my job then [laughter]. But I do think it's a juicy one, and it's very...I can't wait to hear how you got there, but I think my reaction is yes, like, I do be asking questions [laughs] when I join a project on a client team. And I was trying to separate, like, what kinds of questions I ask. And I kind of came away with a few different categories depending on, like, the stage of the engagement I'm in.
But, you know, when I first join a team and when I'm first starting out consulting for a team, I feel like I just ask a lot of basic questions. Like, "Where's the Jira board [laughs]?" Like, "How do you do deployments here?" Like, "What kind of Git process do you use?" So, I don't know if those are necessarily the interesting ones. But I think one thing that has been nice is being a consultant has kind of stripped the fear of asking those questions because, I don't know, these are just things I need to know to do my work. And, like, I'm not as worried about, like, looking dumb or anything like that [laughs].
JOËL: Yeah. I think there's often a fear that asking questions might make you look incompetent or maybe will sort of undermine your appearance of knowing what you're talking about, and I think I've found that to be sort of the opposite. Asking a lot of questions can build more trust, both because it forces people to think about things that maybe they didn't think about, bring to light sort of implicit assumptions that everyone has, and also because it helps you to ramp up much more quickly and to be productive in a way that people really appreciate.
STEPHANIE: Yeah. And I also think that putting those things in, like, a public and, like, documented space helps people in the future too, right? At least I am a power Slack searcher [laughs]. And whenever I am onboarding somewhere, one of the first places I go is just to search in Slack and see if someone has asked this question before.
I think the next kind of category of question that I discerned was just, like, questions to understand how the team understands things. So, it's purely just to, like, absorb kind of like perspective or, like, a worldview this team has about their codebase, or their work, or whatever. So, I think those questions manifest as just like, "Oh, like, you know, I am curious, like, what do you think about how healthy your codebase is? Or what kinds of bugs is your team, like, dealing with?"
Just trying to get a better understanding of like, what are the challenges that this team is facing in their own words, especially before I even start to form my own opinions. Well, okay, to be honest, I probably am forming my own opinions, like, on the side [laughs], but I really try hard to not let that be the driver of how I'm showing up and especially in the first month I'm starting on a new team.
JOËL: Would you say these sorts of questions are more around sort of social organization or, like, how a team approaches work, that sort of thing? Or do you classify more technical questions in this category? So, like, "Hey, tell me a little bit about your philosophy around testing." Or we talked in a recent episode "What value do you feel you get out of testing?" as a question to ask before even, like, digging into the implementation.
STEPHANIE: Yeah, I think these questions, for me, sit at, like, the intersection of both social organization and technical questions because, you know, asking something like, "What's the value of testing for your team?" That will probably give me information about how their test suite is like, right? Like, what kinds of tests they are writing and kind of the quality of them maybe. And it also tells me about, yeah, like, maybe the reasons why, like, they only have just unit tests or maybe, like, just [inaudible 12:31] test, or whatever. And I think all of that is helpful information.
And then, that's actually a really...I like the distinction you made because I feel like then the last category of questions that I'll mention, for now, feels like more geared towards technical, especially the questions I ask to debunk assumptions that might be held by the team. And I feel like that's like kind of the last...the evolution of my question-asking. Because I have, hopefully, like, really absorbed, like, why, you know, people think the way they do about some of these, you know, about their code and start to poke a little bit on being like, "Why do you think, you know, like, this problem space has to be modeled this way?"
And that has served me well as a consultant because, you know, once you've been at an organization for a while, like, you start to take a lot of things for granted about just having to always be this way, you know, it's like, things just are the way they are. And part of the power of, you know, being this kind of, like, external observer is starting to kind of just like, yeah, be able to question that. And, you know, at the end of day, like, we choose not to change something, but I think it's very powerful to be able to at least, like, open up that conversation.
JOËL: Right. And sometimes you open up that conversation, and what you get is a link to a big PR discussion or a Wiki or something where that discussion has already been had. And then, that's good for you and probably good for anybody else who has that question as well.
STEPHANIE: I'm curious, for you, though, like, this thesis statement, atomic note, did you have notes around it, or was it just, like, you dropped it in there [laughs]?
JOËL: So, I have a few things, one is that when you come in as a consultant, and, you know, we're talking here about consultants because that's what we do. I think this is probably true for most people onboarding, especially for non-junior roles where you're coming in, and there's an assumption of expertise, but you need to onboard onto a project. This is just particularly relevant for us as consultants because we do this every six months instead of, you know, a senior developer who's doing this maybe every two to three years.
So, the note that I have here is that when you're brought on, clients they expect expertise in a technology, something like Ruby on Rails or, you know, just the web environment in general. They don't expect you as a consultant to be an expert in their domain or their practices. And so, when you really engage with this sort of areas that are new by asking a lot of questions, that's the thing that's really valuable, especially if those questions are coming from a place of experience in other similar things. So, maybe asking some questions around testing strategies because you've seen three or four other ways that work or don't work or that have different trade-offs.
Even asking about, "Hey, I see we went down a particular path, technically. Can you walk me through what were the trade-offs that we evaluated and why we decided this was the path that was valuable for us?" That's something that people really appreciate from outside experts. Because it shows that you've got experience in those trade-offs, that you've thought the deeper thoughts beyond just shipping the next ticket. And sometimes they've made the decisions without actually thinking through the trade-offs. And so, that can be an opening for a conversation of like, "Hey, well, we just went down this path because we saw a blog article that recommended this, or we just did this because it felt right. Talk us through the trade-offs."
And now maybe you have a conversation on, "Hey, here are the trade-offs that you're doing. Let me know if this sounds right for your organization. If not, maybe you want to consider changing some things or tweaking your approach." And I think that is valuable sort of at the big level where you're thinking about how the team is structured, how different parts of work is done, the technical architecture, but it also is valuable at the small level as well.
STEPHANIE: Yeah, 100%. There is a blog post I really like by Hazel Weakly, and it's called "The Power of Being New: A Proven Recipe for High Impact." And one thing that she says at the beginning that I really enjoy is that even though, like, whenever you start on a new team there's always that little bit of pressure of starting to deliver immediate value, right? But there's something really special about that period where no one expects you to do anything, like, super useful immediately [laughs]. And I feel like it is both a fleeting time and, you know, I'm excited to continue this conversation of, like, how to keep integrating that even after you're no longer new.
But I like to use that time to just identify, while I have nothing really on my plate, like, things that might have just been overlooked or just people have gotten used to that sometimes is, honestly, like, can be a quick fix, right? Like, just, I don't know, deleting a piece of dead code that you're seeing is no longer used but just gets fallen off other people's plates. I really enjoy those first few weeks, and people are almost, like, always so appreciative, right? They're like, "Oh my gosh, I have been meaning to do that." Or like, "Great find." And these are things that, like I said, just get overlooked when you are, yeah, kind of busy with other things that now are your responsibility.
JOËL: You're talking about, like, that feeling of can you add value in the, like, initial time that you join. And I think that sometimes it can be easy to think that, oh, the only value you can add is by, like, shipping code. I think that being sort of noisy and asking a lot of questions in Slack is often a great way to add value, especially at first.
STEPHANIE: Yeah, agreed.
JOËL: Ideally, I think you come in, and you don't sort of slide in under the radar as, like, a new person on the team. Like, you come in, and everybody knows you're there because you are, like, spamming the channel with questions on all sorts of things and getting people to either link you to resources they have or explaining different topics, especially anything domain-related. You know, you're coming in with an outside expertise in a technology. You are a complete new person at the business and the problem domain. And so, that's an area where you need to ask a lot of questions and ramp up quickly.
STEPHANIE: Yes. I have a kind of side topic. I guess it's not a side topic. It's about asking questions, so it's relevant [laughs]. But one thing that I'm curious about is how do you approach kind of doing this in a place where question asking is not normalized and maybe other people are less comfortable with kind of people asking questions openly and in public? Like, how do you set yourself up to be able to ask questions in a way that doesn't lead to just, like, some just, like, suspicion or discomfort about, like, why you're asking those questions?
JOËL: I think that's the beauty of the consultant title. When an organization brings in outside experts, they kind of expect you to ask questions. Or maybe it's not an explicit expectation, but when they see you asking a lot of questions, it sort of, I think, validates a lot of things that they expect about what an outside expert should be. So, asking a lot of questions of trying to understand your business, asking a lot of questions to try to understand the technical architecture, asking questions around, like, some subtle edge cases or trade-offs that were made in the technical architecture.
These are all things that help clients feel like they're getting value for the money from an outside expert because that's what you want an outside expert to do is to help you question some of your assumptions, to be able to leverage their, like, general expertise in a technology by applying it to your specific situation.
I've had situations where I'll ask, like, a very nuanced, deep technical question about, like, "Hey, so there's, like, this one weird edge case that I think could potentially happen. How do we, like, think through this?" And one of the, like, more senior people on the team who built the initial codebase responded, like, almost, like, proud that I'd discovered this, like, weird edge case, and being like, "Oh yeah, that was a thing that we did think about, and here's why. And it's really cool that, like, day one, just while reading through the code, you were like, 'Oh, this thing,' because it took us, like, a month of thinking about it before we stumbled across that."
So, it was a weird kind of fun interaction where as a new person rolling on, one of the more experienced devs in the codebase almost felt, like, proud of me for having found that.
STEPHANIE: I like that, yeah. I feel like a lot of the time...it's like, it's so easy to ask questions to help people feel seen, to be like, "Oh yeah, like, I noticed this." And, you know, if you withhold any kind of, like, judgment about it when you ask the question, people are so willing to be like, yeah, like you said, like, "Oh, I'm glad you saw that." Or like, "Isn't that weird? Like, I was feeling, you know, I saw that, too." Or, like, it opens it up, I think, for building trust, which, again, like, I don't even think this is something that you necessarily need to be new to even do. But if at any point you feel like, you know, maybe your working relationship with someone could be better, right? To the point where you feel like you're, like, really on the same page, yeah, ask questions [laughs]. It can be that easy.
JOËL: And I think what can be really nice is, in an environment where question asking is not normalized, coming in and doing that can help sort of provide a little bit of cover to other people who are feeling less comfortable or less safe doing that. So, maybe there's a lot of junior members on the team who are feeling not super confident in themselves and are afraid that asking questions might undermine their position in the company. But me coming in as a sort of senior consultant and asking a lot of those questions can then help normalize that as a thing because then they can look and say, "Oh, well he's asking all these questions. Maybe I can ask my question, and it'll be okay."
STEPHANIE: I also wanted to talk about setting yourself up and asking questions to get a good answer, asking good questions to get useful answers. One thing that has worked really well for me in the past few months has been sharing why I'm asking the question. And I think this goes back to a little bit of what I was hinting at earlier. If the culture is not really used to people asking questions and that just being a thing that is normal, sharing a bit of intention can help, like, ease maybe some nervousness that people might feel. Especially as consultants, we also are in a bit of a, I don't know, like, there are some power dynamics occasionally, where it's like, oh, like, the consultants are here. Like, what are they going to come in and change or, like, start, you know, doing to, quote, unquote, "improve," whatever, I don't know [laughs].
JOËL: Right, right.
STEPHANIE: Yeah, that's the consultant archetype, I think. Anyway.
JOËL: Just coming in and being like, "Oh, this is bad, and this is bad, and you're doing it wrong."
STEPHANIE: [laughs]
JOËL: Ooh, I would be ashamed if I was the author of this code.
STEPHANIE: Yeah, my hot take is that that is a bad consultant [laughs]. But maybe I'll say, like, "I am looking for some examples of this pattern. Where can I find them [laughs]?" Or "I've noticed that the team is struggling with, like, this particular part of the codebase, and I am thinking about improving it. What are some of your biggest challenges, like, working with this, like, model?" something like that.
And I think this also goes back to, like, proving value, right? Even if it's like, sometimes I know kind of what I want to do, and I'll try to be explicit about that. But even before I have, like, a clear action item, I might just say like, "I'm thinking about this," you know, to convey that, you know, I'm still in that information gathering stage, but the result of that will be useful to help me with whatever kind of comes out of it.
JOËL: A lot of it is about, like, genuine curiosity and a certain amount of empathetic listening. The existing team knows a lot about both the code and the business. And as a consultant coming on, or maybe even a more senior person onboarding onto a team, the existing team has so much that they can give you to help you be better at your job.
STEPHANIE: I was also revisiting a really great blog post from Julia Evans about "How to Ask Good Questions." And this one is more geared towards asking technical questions that have, like, kind of a maybe more straightforward answer. But she included a few other strategies that I liked a lot. And, frankly, I feel like I want to be even better at finding the right time to ask questions [laughs] and finding the right person to ask those questions to.
I definitely get in the habit of just kind of like, I don't know, I'll just put it out there and [laughs], hopefully, get some answers. But there are definitely ways, I think, that you can be more strategic, right? About identifying who might be the best person to provide the answers you're looking for. And I think another thing that I often have to balance in the consulting position is knowing when to, like, stop asking the really big questions because we just don't have time [laughs].
JOËL: Right. You don't want to be asking questions in a way that's sort of undermining the product, or the decisions that are being made, or the work that has to get done. Ideally, the questions that you're asking are helping move the project forward in a positive way. Nobody likes the, you know, just asking kind of person. That person's annoying.
STEPHANIE: Do you have an approach or any thoughts about like, once you get an answer, like, what do you do with that? Yeah, what happens then for you?
JOËL: I guess there's a lot of different ways it can go. A potential way if it's just, like, an answer explained in Slack, is maybe saying, "We should document this." Or maybe even like, "Is this documented anywhere? If not, can I add that documentation somewhere?" And maybe that's, you know, a code comment that we want to add. Maybe that's an entry to the Wiki. Maybe that's updating the README. Maybe that's adding a test case. But converting that into something actionable can often be a really good follow-up.
STEPHANIE: Yeah, I think that mitigates the just asking [laughs] thing that you were saying earlier, where it's like, you know, the goal isn't to ask questions to then make more work for other people, right? It's to ask questions so, hopefully, you're able to take that information and do something valuable with it.
JOËL: Right. Sometimes it can be a sort of setup for follow-up questions. You get some information and you're like, okay, so, it looks like we do have a pattern for interacting with third-party APIs, but we're not using it consistently. Tell me a little bit about why that is. Is that a new pattern that we've introduced and we're trying to, like, get more buy-in from the team? Is this a pattern that we used to have, and we found out we didn't like it? So, we stopped using it, but we haven't found a replacement pattern that we like. And so, now we're just kind of...it's a free-for-all, and we're trying to figure it out.
Maybe there's two competing patterns, and there is this, like, weird politics within the tech team where they're sort of using one or the other, and that's something I'm going to have to be careful to navigate. So, asking some of those follow-up questions and once you have a technical answer can yield a lot of really interesting information and then help you think about how you can be impactful on the organization.
STEPHANIE: And that sounds like advice that's just true, you know, regardless of your role or how long you've been in it, don't you think?
JOËL: I would say yes. If you've been in the role a long time, though, you're the person who has that sort of institutional history in your mind. You know that in 2022, we switched over from one framework to another. You know that we used to have this, like, very opinionated architect who mandated a particular pattern, and then we moved away from it. You know that we were all in on this big feature last summer that we released and then nobody used, and then the business pivoted, but there are still aspects of it that are left around. Those are things that someone new onboarding doesn't know and that, hopefully, they're asking questions about so that you can answer them.
STEPHANIE: Have you been in the position where you have all that, like, institutional knowledge? And then, like, how do you maintain that sense of curiosity or just that sense of kind of, like, what you're talking about, that superpower that you get when you're new of being able to just, you know, kind of question why things are the way they are?
JOËL: It's hard, right? We're talking about how do you keep that sort of almost like a beginner's mindset, in this case, maybe less of a, like, new coder mindset and more of a new hire mindset. It's something that I think is much more front of mind for me because I rotate onto new clients every, like, 6 to 12 months. And so, I don't have very long to get comfortable before I'm immediately thrown into, like, a new situation.
But something that I like to do is to never sort of solely be in one role or the other, a sort of, like, experienced person helping others or the new person asking for help. Likely, you are not going to be the newest person on the team for long. Maybe you came on as a cohort and you've got a group of new people, all of whom are asking different questions. And maybe somebody is asking a question that you've asked before, that you've asked in a different channel or on a call with someone. Or maybe someone joins two weeks after you; you don't have deep institutional knowledge.
But if you've been asking a lot of questions, you've been building a lot of that for yourself, and you have a little bit that you can share to the next person who knows even less than you do. And that's an approach that I took even as an apprentice developer. When I was, like, brand new to Rails and I was doing an internship, and another intern joined me a couple of weeks after, and I was like, "You know what? I barely know anything. But I know what an instance variable is. And I can help you write a controller action. Let's pair on that. We'll figure it out. And, you know, ask me another question next week. I might have more answers for you." So, I guess a little bit of paying it forward.
STEPHANIE: Yeah, I really like that advice, though, of, like, switching up the role or, like, kind of what you're working on, just finding opportunities to practice that, you know, even if you have been somewhere for a long time. I think that is really interesting advice. And it's hard, too, right? Because that requires, like, doing something new, and doing something new can be hard [laughs]. But if you, you know, aren't in a consultant role, where you're not rotating onto new projects every 6 to 12 months, that, I feel like, would be a good strategy to grow in that particular way.
JOËL: And even if you're not switching companies or in a consulting situation, it's not uncommon to have people switch from one team to another within an organization. And new team might mean new dynamics. That team might be doing a slightly different approach to project management. Their part of the code might be structured slightly differently. They might be dealing with a part of the business domain that you're less familiar with. While that might not be entirely new to you because, you know, you know a little bit of the organization's DNA and you understand the organization's mission and their core product, there are definitely a lot of things that will be new to you, and asking those questions becomes important.
STEPHANIE: I also have another kind of, I don't know, it's not even a strategy. It's just a funny thing that I do where, like, my memory is so poor that, like, even code I wrote, you know, a month ago, I'm like, oh, what was past Stephanie thinking here [laughs]? You know, questioning myself a little bit, right? And being willing to do that and recognizing that, like, I have information now that I didn't have in the past. And, like, can that be useful somehow?
You know, it's like, the code I wrote a month ago is not set in stone. And I think that's one way I almost, like, practice that skill with myself [laughs]. And yeah, it has helped me combat that, like, things are the way they are mentality, which, generally, I think is a very big blocker [laughs] when it comes to software development, but that's a topic for another day [laughs].
JOËL: I like the idea of questioning yourself, and I think that's something that is a really valuable skill for all developers. I think it can come up in things like documentation. Let's say you're leaving a comment on a method, especially one that's a bit weird, being able to answer that "Why was this weird technical decision made?" Or maybe you do this in your PR description, or your commit message, or in any of the other places where you do this, not just sort of shipping the code as is, but trying to look at it from an outsider's eyes.
And being like, what are the areas where they're going to, like, get a quizzical look and be like, "Why is this happening? Why did you make this choice?" Bonus points if you talked a little bit about the trade-offs that were decided on to say, "Hey, there were two different implementations available for this. I chose to take implementation A because I like this set of trade-offs better." That's gold. And, I guess, as a reviewer, if I'm seeing that in a PR, that's going to make my job a lot easier.
STEPHANIE: Yes. Yeah, I never thought about it that way, but yeah, I guess I do kind of apply, you know, the things that I would kind of ask to other team members to myself sometimes. And that is...it's cool to hear that you really appreciate that because I always kind of just did it for myself [laughs], but yeah, I'm sure that it, like, is helpful for other people as well.
JOËL: I guess you were asking what are ways that you can ask questions even when you are more established. And talking about these sorts of self-reflective questions in the context of review got me thinking that PRs are a great place to ask questions. They're great when you're a newcomer. One of the things I like to do when I'm new on a project is do a lot of PR reviews so I can just see the weird things that people are working on and ask a lot of questions about the patterns.
STEPHANIE: Yep. Same here.
JOËL: Do a lot of code reading. But that's a thing that you can keep doing and asking a lot of questions on PRs and not in a, like, trying to undermine what the person is doing, but, like, genuine questions, I think, is a great way to maintain that mindset.
STEPHANIE: Yeah, yeah, agreed. And I think when I've seen it done well, it's like, you get to be engaged and involved with the rest of your team, right? And you kind of have a bit of an idea about what people are working on. But you're also kind of entrusting them with ownership of that work. Like, you don't need to be totally in the weeds and know exactly how every method works. But, you know, you can be curious about like, "Oh, like, what were you thinking about this?" Or like, "What about this pattern appeals to you?" And all of that information, I think, helps you become a better, like, especially a senior developer, but also just, like, a leader on the team, I think.
JOËL: Yeah, especially the questions around like, "Oh, walk me through some of the trade-offs that you chose for this method." And, you know, for maybe a person who's more senior, that's great. They have an opportunity to, like, talk about the decisions they made and why. That's really useful information. For a more junior person, maybe they've never thought about it. They're like, "Oh, wait, there are trade-offs here?" and now that's a great learning opportunity for them.
And you don't want to come at it from a place of judgment of like, oh, well, clearly, you know, you're a terrible developer because you didn't think about the performance implications of this method. But if you come at it from a place of, like, genuine curiosity and sort of assuming the best of people on the team and being willing to work alongside them, help them discover some new concepts...maybe they've never, like, interacted so much with performance trade-offs, and now you get to have a conversation. And they've learned a thing, and everybody wins.
STEPHANIE: Yeah. And also, I think seeing people ask questions that way helps more junior folks also learn when to ask those kinds of questions, even if they don't know the answer, right? But maybe they start kind of pattern matching. Like, oh, like, there might be some other trade-offs to consider with this kind of code, but I don't know what they are yet. But now I know to at least start asking and find someone who can help me determine that. And when I've seen that, that has been always, like, just so cool because it's upskilling happening [laughs] in practice.
JOËL: Exactly. I love that phrase that you said: "Asking questions where you don't know the answers," which I think is the opposite of what lawyers are taught to do. I think the mantra lawyers have is you never ask a witness a question that you don't know the answer to. But I like to flip that for developers. Ask a lot of questions on PRs where you don't know the answer, and you'll grow, and the author will grow. And this is true across experience levels.
STEPHANIE: That's one of my favorite parts about being a developer, and maybe that's why I will never be a lawyer [laughter].
JOËL: On that note, I have a question maybe I do know the answer to. Shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie and Joël discuss the recent announcement of the call for proposals for RubyConf in November. Joël is working on his proposals and encouraging his colleagues at thoughtbot to participate, while Stephanie is excited about the conference being held in her hometown of Chicago!
The conversation shifts to Stephanie's recent work, including completing a significant client project and her upcoming two-week refactoring assignment. She shares her enthusiasm for refactoring code to improve its structure and stability, even when it's not her own. Joël and Stephanie also discuss the everyday challenges of maintaining a test suite, such as slowness, flakiness, and excessive database requests. They discuss strategies to balance the test pyramid and adequately test critical paths.
Finally, Joël emphasizes the importance of separating side effects from business logic to enhance testability and reduce complexity, and Stephanie highlights the need to address testing pain points and ensure tests add real value to the codebase.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Something that's new in my world is that RubyConf just announced their call for proposals for RubyConf in November. They're open for...we're currently recording in June, and it's open through early July, and they're asking people everywhere to submit talk ideas. I have a few of my own that I'm working with. And then, I'm also trying to mobilize a lot of other colleagues at thoughtbot to get excited to submit.
STEPHANIE: Yes, I am personally very excited about this year's RubyConf in November because it's in Chicago, where I live, so I have very little of an excuse not to go [laughs]. I feel like so much of my conference experience is traveling to just kind of, like, other cities in the U.S. that I want to spend some time in and, you know, seeing all of my friends from...my long-distance friends. And it definitely does feel like just a bit of an immersive week, right? And so, I wonder how weird it will feel to be going to this conference and then going home at the end of the night. Yeah, that's just something that I'm a bit curious about. So, yeah, I mean, I am very excited. I hope everyone comes to Chicago. It's a great city.
JOËL: I think the pitch that I'm hearing is submit a proposal to the RubyConf CFP to get a chance to get a free ticket to go to RubyConf, where you get to meet Bike Shed co-host Stephanie Minn.
STEPHANIE: Yes. Ruby Central should hire me to market this conference [laughter] and that being the main value add of going [laughs], obviously. Jokes aside, I'm excited for you to be doing this initiative again because it was so successful for RailsConf kind of internally at thoughtbot. I think a lot of people submitted proposals for the first time with some of the programming you put on. Are you thinking about doing things any differently from last time, or any new thoughts about this conference cycle?
JOËL: I think I'm iterating on what we did last time but trying to keep more or less the same formula. Among other things, people don't always have ideas immediately of what they want to speak about. And so, I have a brainstorming session where we're just going to get together and brainstorm a bunch of topics that are free for anyone to take. And then, either someone can grab one of those topics and pitch a talk on it, or it can be, like, inspiration where they see that it jogs their mind, and they have an idea that then they go off and write a proposal.
And so, that allows, I think, a lot of colleagues as well, who are maybe not interested in speaking but might have a lot of great ideas, to participate and sort of really get a lot of that energy going. And then, from there, people who are excited to speak about something can go on to maybe draft a proposal. And then, I've got a couple of other events where we support people in drafting a proposal and reviewing and submitting, things like that.
STEPHANIE: Yes, I really love how you're just involving people with, you know, just different skills and interests to be able to support each other, even if, you know, there's someone on our team who's, like, not interested in speaking at all, but they're, like, an ideas person, right? And they would love to see their idea come to life in a talk from someone else. Like, I think that's really cool, and I certainly appreciate it as a not ideas person [laughs].
JOËL: Also, I want to shout out that Ruby Central is doing CFP coaching sessions on June 24th, June 25th, and June 26th, and those are open to anyone. You can sign up. We'll put a link to the signup form in the show notes. If you've never submitted something before and you'd like some tips on what makes for a good CFP, how can you up your chances of getting accepted, or maybe you've submitted before, you just want to get better at it; I recommend joining one of those slots. So, Stephanie, what's new in your world?
STEPHANIE: So, I just successfully delivered a big project on my client work last week. So, I'm kind of riding that wave and getting into the next bit of work that I have been assigned for this team, and I'm really excited to do this. But I also, I don't know, I've been just, like, thinking about it quite a bit. Basically, I'm getting to spend two dedicated weeks to just refactoring [laughs] some really, I guess, complicated code that has led to some bugs recently and just needing some love, especially because there's some whiffs of potentially, like, really investing in this area of the product, and people wanting to make sure that the foundation does feel very stable to build on top of for extending and changing that code.
And I think I, like, surprised myself by how excited I was to do this because it's not even code I wrote. You know, sometimes when you are the one who wrote code, you're like, oh, like, I would love time to just go back and clean up all these things that I kind of missed the first time around or just couldn't attend to for whatever reason. But yeah, I think I was just a little bit in the peripheries of that code, and I was like, oh, like, just seeing some weird stuff. And now to kind of have the time to be like, oh, this is all I'm going to be doing for two weeks, to, like, really dive into it and get my hands dirty [laughs], I'm very excited.
JOËL: I think that refactoring is a thing that can be really fun. And also, when you have a larger chunk of time, like two weeks, it's easy to sort of get lost in sort of grand visions or projects. How do you kind of balance the, I want to do a lot of refactoring; I want to take on some bigger things, while maybe trying to keep some focus or have some prioritization?
STEPHANIE: Yeah, that's a great question. I was actually the one who said, like, "I want two weeks on this." And it also helped that, like, there was already some thoughts about, like, where they wanted to go with this area of the codebase and maybe what future features they were thinking about. And there are also a few bugs that I am fixing kind of related to this domain. So, I think that is actually what I started with.
And that was really helpful in just kind of orienting myself in, like, the higher impact areas and the places that the pain is felt and exploring there first to, like, get a sense of what is going on here. Because I think that information gathering is really important to be able to kind of start changing the code towards what it wants to be and what other devs want it to be.
I actually also started a thread in Slack for my team. I was, like, asking for input on what's the most confusing or, like, hard to reason about files or areas in this particular domain or feature set and got a lot of really good engagement. I was pleasantly surprised [laughs], you know, because sometimes you, like, ask for feedback and just crickets. But I think, for me, it was very affirming that I was, like, exploring something that a lot of people are like, oh, we would love for someone to, you know, have just time to get into this. And they all were really excited for me, too. So, that was pretty cool.
JOËL: Interesting. So, it sounds like you sort of budgeted some refactoring time and then, from there, broke it down into a series of a couple of debugging projects and then a couple of, like, more bounded refactoring projects, where, like, specifically, I want to restructure the way this object works or something like that.
STEPHANIE: Yeah. I think there was that feeling of wanting to clean up this area of the codebase, but you kind of caught on to that bit of, you know, it can go so many different ways. And, like, how do you balance your grand visions [laughs] of things with, I guess, a little bit of pragmatism? So, it was very much like, here's all these bugs that are causing our customers problems that are kind of, like, hard for the devs to troubleshoot. You know, that kind of prompts the question, like, why?
And so, if there can be, you know, the fixing of the bugs, and then the learning of, like, how that part of the system works, and then, hopefully, some improvements along the way, yeah, that just felt like a dream [laughs] for me. And two weeks felt about the right amount of time. I don't know if anyone kind of hears that and feels like it's too long or too little. I would be really curious. But I feel like it is complex enough that, like, context switching would, I think, make this work harder, and you kind of do have to just sit with it for a little bit to get your bearings.
JOËL: A scenario that we encounter on a pretty regular basis is a customer coming to us and telling us that they're feeling a lot of test pain and asking what are the ways that we can help them to make things better and that test pain can come under a lot of forms.
It might be a test suite that's really slow and that's hurting the team in terms of their velocity. It might be a test suite that is really flaky. It might be one that is really difficult to add to, or maybe one that has very low coverage, or one that is just really brittle. Anytime you try to make a change to it, a bunch of things break, and it takes forever to ship anything. So, there's a lot of different aspects of challenging test suites that clients come to us with.
I'm curious, Stephanie, what are some of the ones that you've encountered most frequently?
STEPHANIE: I definitely think that a slow test suite and a flaky test suite end up going hand in hand a lot, or just a brittle one, right? That is slowing down development and, like you said, causing a lot of pain. I think even if that's not something that a client is coming to us directly about, it maybe gets, like, surfaced a little bit, you know, sometime into the engagement as something that I like to keep an eye on as a consultant. And I actually think, yeah, that's one of kind of the coolest things, I think, about our consulting work is just getting to see so many different test suites [laughs]. I don't know. I'm a testing nerd, so I love that kind of stuff.
And then, I think you were also kind of touching on this idea of, like, maintaining a test suite and, yeah, making testing just a better experience. I have a theory [laughs], and I'd be curious to get your thoughts on it. But one thing that I really struggle with in the industry is when people talk about writing tests as if it's, like, the morally superior thing to do. And I struggle with this because I don't think that it is a very good strategy for helping people feel better or more confident and, like, upskill at writing tests.
I think it kind of shames people a little bit who maybe either just haven't gotten that experience or, you know, just like, yeah, like, for whatever reason, are still learning how to do this thing. And then, I think that mindset leads to bad tests [laughs] or tests that aren't really serving the purpose that you hope they would because people are doing it more out of obligation rather than because they truly, like, feel like it adds something to their work. Okay, I kind of just dropped that on you [laughs]. Do you have any reactions?
JOËL: Yeah, I guess the idea that you're just checking a box with your test rather than writing code that adds value to the codebase. They're two very different perspectives that, in the end, will generate more lines of code if you're just doing a checkbox but may or may not add a whole lot of value. So, maybe before even looking at actual, like, test practices, it's worth stepping back and asking more of a mindset question: Why does your team test? What is the value that your team feels they get out of testing?
STEPHANIE: Yeah. Yeah. I like that because I was about to say they go hand in hand. But I do think that maybe there is some, you know, question asking [laughs] to be done because I do think people like to kind of talk about the testing practices before they've really considered that. And I am, like, pretty certain from just kind of, at least what I've seen, and what I've heard, and what I've experienced on embedding into client teams, that if your team can't answer that question of, like, "What value does testing bring?" then they probably aren't following good testing practices [laughs]. Because I do think you kind of need to approach it from a perspective of like, okay, like, I want to do this because it helps me, and it helps my team, rather than, like you said, getting the check mark.
JOËL: So, once we've sort of established maybe a bit of a mindset or we've had a conversation with the team around what value they think they're getting out of tests, or maybe even you might need to sell the team a little bit on like, "Hey, here's, like, all these different ways that testing can bring value into your life, make your life as developers easier," but once you've done that sort of pre-work and you can start looking at what's actually the problem with a test suite, a common complaint from developers is that the test suite is too slow. How do you like to approach a slow test suite?
STEPHANIE: That's a good question. I actually...I think there's a lot of ways to answer that. But to kind of stay on the theme of stepping back a little bit, I wonder if assessing how well your test suite aligns with the testing pyramid would be a good place to start; at least, that could be where I start if I'm coming into a client team for the first time, right, and being asked to start assessing or just poking around. Because I think the slowness a lot of the time comes from a lot of quote, unquote, "integration tests" or, like, unit tests masquerading as integration tests, where you end up having, like, a lot of duplication of things that are being tested in ways that are integrating with some slow parts of the system like the database.
And yeah, I think even before getting into some of the more discreet reasons why you might be writing slow tests, just looking at the structure of your test suite and what kinds of things you're testing, and, again, even going back to your team and asking, like, "What kinds of things do you test?" Or like, "Do you try to test or wish to be testing more of, less of?" Like looking at the structure, I have found to be a good place to start.
JOËL: And for those who are not familiar, you used the term testing pyramid. This is a concept which says that you probably want to have a lot of small, fast unit tests, a medium amount of integration tests that test a few different components together, and then a few end-to-end tests. Because as you go up that pyramid, tests become more expensive. They take a lot longer to run, whereas the little unit tests are super cheap. You can create thousands of them, and they will barely impact your run time. Adding a dozen end-to-end tests is going to be noticeable. So, you want to balance sort of the coverage that you get from end to end with the sort of cheapness and ubiquity of the little unit tests, and then split the difference for tests that are in between.
STEPHANIE: And I think that is challenging, even, you know, you're talking about how you want the peak of your pyramid to be end-to-end tests. So, you don't want a lot of them, but you do want some of them to really ensure that things are totally plumbed and working correctly. But that does require, I think, really looking at your application and kind of identifying what features are the most critical to it. And I think that doesn't get paid enough attention, at least from a lot of my client experiences. Like, sometimes teams just end up with a lot of feature bloat and can't say like, you know, they say, "Everything's important [chuckles]," but everything can't be equally important, you know?
JOËL: Right. I often like to develop using a sort of outside-in approach, where you start by writing an end-to-end test that describes the behavior that your new feature ticket is asking for and use that to drive the work that I'm doing. And that might lead to some lower-level unit tests as I'm building out different components, but the sort of high-level behavior that we're adding is driven by adding an end-to-end spec.
Do you feel that having one new end-to-end spec for every new feature ticket that you work on is a reasonable thing to do, or do you kind of pick and choose? Do you write some, but maybe start, like, coalescing or culling them, or something like that? How do you manage that idea that maybe you would or would not want one end-to-end spec for each feature ticket?
STEPHANIE: Yeah, it's a good question. Actually, as you were saying that, I was about to ask you, do you delete some afterwards [laughs]? Because I think that might be what I do sometimes, especially if I'm testing, you know, edge cases or writing, like, the end-to-end test for error states. Sometimes, not all of them make it into my, like, final, you know, commit. But they, you know, had their value, right? And at least it prompted me to make sure I thought about them and make sure that they were good error states, right? Like things that had visible UI to the user about what was going on in case of an error. So, I would say I will go back and kind of coalesce some of them, but they at least give me a place to start. Does that match your experience?
JOËL: Yeah, I tend to mostly write end-to-end tests for happy paths and then write kind of mid-level things to cover some of my edge cases, maybe a couple of end-to-end tests for particularly critical paths. But, at some point, there's just too many paths through the app that you can't have end-to-end coverage for every single branch on every single path that can happen.
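To make that concrete, here is a minimal sketch of the kind of happy-path end-to-end test Joël describes, assuming RSpec system specs with Capybara and FactoryBot in a Rails app; the feature, routes, factories, page copy, and the Devise-style sign-in helper are all hypothetical, not taken from any codebase discussed here.

```ruby
require "rails_helper"

RSpec.describe "Placing an order", type: :system do
  it "lets a signed-in user check out successfully" do
    user = create(:user)
    product = create(:product, name: "Sticker Pack", price_cents: 500)

    sign_in user # assumes a Devise-style system spec helper is configured
    visit product_path(product)
    click_button "Add to cart"
    click_button "Check out"

    # One end-to-end assertion on the happy path; edge cases live lower down.
    expect(page).to have_content("Thanks for your order!")
  end
end
```

Error states for that same flow would then be covered by cheaper, more focused specs further down the pyramid, as the conversation turns to next.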
STEPHANIE: Yeah, I like that because if you find yourself having a lot of different conditions that you need to test in an end-to-end situation, maybe there's room for that to, like, be better encapsulated in that, like, more, like, middle layer or, I don't know, even starting to ask questions about, like, does this make sense with the product? Like, having all of these different things going on, does that line up with kind of the vision of what this feature is trying to be or should be? Because I do think the complexity can start at that high of a level.
JOËL: How do you feel about the idea that adding more end-to-end tests, at some point, has diminishing returns?
STEPHANIE: I'm not quite sure I'm following [laughs].
JOËL: So, let's say you have an end-to-end test for the happy path for every core feature of the app. And you decide, you know what, I want to add maybe some, like, side features in, or maybe I want to have more error states. And you start, like, filling in more end-to-end tests for those. Is it fair to say that adding some of those is a bit of a diminishing return? Like, you're not getting as much value as you would from the original specs. And maybe as you keep finding more and more rare edge cases, you get less and less value for your test.
STEPHANIE: Oh, yeah, I see. And there's more of a cost, too, right? The cost of the time to run, maintain, whatever.
JOËL: Right. Let's say they're roughly all equally expensive in terms of cost to run. But as you stray further and further off of that happy path, you're getting less and less value from that integration test or that end-to-end test.
STEPHANIE: I'm actually a little conflicted about this because that sounds right in theory, but then in practice, I feel like I've seen error states not get enough love [laughs], so I don't even want to, like, make any kind of claim [laughs] about it. But, you know, if you're going to start somewhere, if you have, like, a limited amount of time and you're like, okay, I'm only going to write a handful of end-to-end tests, yeah, like, write tests for your happy paths [laughs].
JOËL: I guess it's probably fair to say that error states just don't get as much love as they should throughout the entire testing stack: at the unit level, at the integration level, all the way up to end to end.
STEPHANIE: I'm curious if you were trying to get at some kind of conclusion, though, with the idea of diminishing returns.
JOËL: I guess I'm wondering if, from there, we can talk about what it looks like when the testing pyramid for a particular test suite is top-heavy, and whether there's value in maybe pushing some of these tests, some of these edge cases, some of these maybe less important features, down from that, like, top end-to-end layer into maybe more of an integration layer. So, in a Rails context, that might be moving system specs down to something like a request spec.
STEPHANIE: Yeah, I think that is what I tend to do. I'm trying to think of how I get there, and I'm not quite sure that I can explain it quite yet. Yeah, I don't know. Do you think you can help me out here? Like, how do you know it's time to start writing more tests for your unhappy paths lower on the pyramid?
JOËL: Ideally, I think a lot of your code should be unit-tested. And when you are unit testing it, those pieces all need coverage of the happy and unhappy paths. I think the way it may often happen naturally is if you're pushing logic out of your controllers because it's a little bit challenging sometimes to test Rails controllers.
And so, if you're moving things into domain objects, even service objects, depending on how you implement them, just doing that and then making sure you unit test them can give you a lot more coverage of all the different edge cases that can happen. Where things sometimes fall apart is getting out of that business layer into the web layer and saying, "Hey, if something raises an error or if the save fails or something like that, does the user get a good experience, or do we just crash and give them a 500 page?"
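As a rough illustration of what Joël is describing, here's a sketch of branching logic pulled out of a controller into a plain Ruby object so its happy and unhappy paths can be unit tested entirely in memory; the class, methods, and spec are hypothetical.

```ruby
# Plain Ruby object holding the business rule, with no web or database concerns.
class OrderTotal
  def initialize(line_items, discount: nil)
    @line_items = line_items
    @discount = discount
  end

  def amount_cents
    subtotal = @line_items.sum { |item| item.price_cents * item.quantity }
    return subtotal unless @discount

    # Guard against a discount larger than the subtotal.
    [subtotal - @discount.amount_cents, 0].max
  end
end

# Unit tests cover both paths with in-memory doubles; nothing is persisted.
RSpec.describe OrderTotal do
  it "sums line items when there is no discount" do
    items = [double(price_cents: 500, quantity: 2)]
    expect(OrderTotal.new(items).amount_cents).to eq(1_000)
  end

  it "never goes below zero when a discount exceeds the subtotal" do
    items = [double(price_cents: 500, quantity: 1)]
    discount = double(amount_cents: 2_000)
    expect(OrderTotal.new(items, discount: discount).amount_cents).to eq(0)
  end
end
```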
STEPHANIE: Yeah, that matches with a lot of what I've seen, where if you then spend too much time in that business layer and only handling errors there, you don't really think too much about how it bubbles up. And, you know, then you are digging through, like, your error monitoring [laughs] service, trying to find out what happened so that you can tell, you know, your customer support team [laughs] to help them resolve, like, a bug report they got.
But I actually think...and you were talking about outside in, but, in some ways, in my experience, I also get feedback from the bottom up sometimes that then ends up helping me adjust some of those integration or end-to-end tests about kind of what errors are possible, like, down in the depths of the code [laughs], and then finding ways to, you know, abstract that or, like, kind of be like, "Oh, like, here are all these possible, like, exceptions that might be raised." Like, what HTTP status code do I want to be returned to capture all of these things? And what do I want to say to the user? So, yeah, I'm [laughs] kind of a little lost myself, but this idea that going both, you know, outside in and then maybe even going back up a little bit has served me well before.
JOËL: I think there can be a lot of value in sort of dropping down a level in the pyramid, and maybe instead of doing sort of end-to-end tests where you, like, trigger a scenario where something fails, you can just write a request spec against the controller and say, "Hey, if I go to this controller and something raises an error, expect that you get redirected to this other location." And that's really cheap to run compared to an end-to-end test. And so, I think that, for me, is often the right compromise: handling error states at sort of the next lowest level and also in slightly more atomic pieces. So, more like, if you hit this endpoint and things go wrong, here's how things happen.
And I use endpoint not so much in an API sense, although it could be, but just your, you know, maybe you've got a flow that's multiple steps where, you know, you can do a bunch of things. But I might have a test just for one controller action to say, "Hey, if things go wrong, it redirects you here, or it shows you this error page." Whereas the end-to-end test might say, "Oh, you're going to go through the entire flow that hits multiple different controllers, and the happy path is this nice chain." But each of the exit points where things fail would be covered by a more scoped request spec on that controller.
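Here is a minimal sketch of the kind of scoped request spec Joël means, assuming RSpec request specs, FactoryBot, and Devise-style sign-in helpers; the routes, the PaymentGateway class, and its error are hypothetical stand-ins for whatever collaborator might fail.

```ruby
require "rails_helper"

RSpec.describe "Orders", type: :request do
  it "redirects back to the cart when the payment step fails" do
    user = create(:user)
    sign_in user

    # Force the unhappy path by stubbing the collaborator, rather than
    # driving a whole browser flow just to reach this failure.
    allow_any_instance_of(PaymentGateway)
      .to receive(:charge)
      .and_raise(PaymentGateway::CardDeclined)

    post orders_path, params: { order: { card_token: "tok_test" } }

    expect(response).to redirect_to(cart_path)
    expect(flash[:alert]).to match(/declined/i)
  end
end
```

Each failing exit point of the larger flow can get one of these, while a single end-to-end spec covers the happy chain.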
STEPHANIE: Yeah. Yeah. That makes sense. I like that.
JOËL: So, that's kind of how I've attempted to balance my pyramid in a way that balances complexity and time with coverage. You mentioned that another area that test suites get slow is making too many requests to the database. There's a lot of ways that that happens. Oftentimes, I think a classic is using a factory where you really don't need to, persisting data to the database when all you needed was some object in memory. So, there are different strategies for avoiding that.
It's also easy to be creating too much data. So, maybe you do need to persist some things in the database, but you're persisting a hundred objects into memory or into the database when you really meant to persist two, so that's an easy accident. A couple of years ago, I gave a talk at RailsConf titled "Your Test Suite is Making Too Many Database Requests" that went over a bunch of different ways that you can be doing a lot of expensive database requests you didn't plan on making and how that slows down your test suite. So, that is also another hot spot that I like to look at when dealing with a slow test suite.
STEPHANIE: Yeah, I mentioned earlier the idea of unit tests really masquerading as integration tests [laughs]. And I think that happens especially if you're starting with a class that may already be a little bit bigger than it should be or have more responsibilities than it should. And then, you are, like, either just defaulting to create instead of build with your factories, or you find yourself, like, not being able to fully run the code path you're trying to test without having stuff persisted.
Those are all, I think, like, test smells that, you know, are signaling a little bit of a testing anti-pattern. Like, is there a way to write, like, true unit tests for this stuff where you are only using objects in memory? And does that require breaking out some responsibilities? That is a lot of what I am kind of going through right now, actually, with my little refactoring project [laughs]: backfilling some tests, finding that I have to create a lot of records.
And you know what? Like, the first step will probably be to write those tests and commit them, and just have them live there for a little while while I figure out, you know, the right places to start breaking things up, and that's okay. But yeah, I did want to, like, just mention that if you are having to create a lot of records and then also noticing, like, your test is running kind of slow [laughs], that could be a good indicator to just give a good, hard look at what kind of style of test you think you're writing [laughs].
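As a small illustration of the build-versus-create distinction Stephanie is gesturing at, here are two versions of the same hypothetical unit test, assuming FactoryBot's syntax methods are available in specs; only the second avoids the database entirely.

```ruby
RSpec.describe User do
  # Slower: persists a record even though nothing ever reads it back.
  it "formats the display name (hits the database)" do
    user = create(:user, first_name: "Ada", last_name: "Lovelace")
    expect(user.display_name).to eq("Ada Lovelace")
  end

  # Faster: FactoryBot's build instantiates the object in memory, which is
  # enough for pure logic that never touches persistence.
  it "formats the display name (in memory only)" do
    user = build(:user, first_name: "Ada", last_name: "Lovelace")
    expect(user.display_name).to eq("Ada Lovelace")
  end
end
```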
JOËL: Yeah, your tests speak to you, and when you're feeling pain, oftentimes, it can be a sign that you should consider refactoring your implementation. And I think that's doubly true if you're writing tests after the fact rather than test driving. Because sometimes you sort of...you came up with an implementation that you thought would be good, and then you're writing tests for it, and it's really painful. And that might be telling you something about the underlying implementation that you have that maybe it's...you thought it's well scoped, but maybe it actually has more responsibilities than you initially realized, or maybe it's just really tightly coupled in a way that you didn't realize. And so, learning to listen to your tests and not just sort of accepting the world for being the way it is, but being like, "No, I can make it better."
STEPHANIE: Yeah, I've been really curious why people have a hard time, like, recognizing that pain sometimes, or maybe believing that this is the way it is and that there's not a whole lot that you can do about it. But it's not true, like, testing really does not have to be painful. And I feel like, again, this is one of those things that's like, it's hard to believe until you really experience it, at least, that was the case for me.
But if you're having a hard time with tests, it's not because you're not smart enough. Like, that, I think, is a thing that I really want to debunk right now [laughs] for anyone who has ever had that thought cross their mind. Yeah, things are just complicated and complex somehow, or software entropy happens. That's, like, not how it should be, and we don't have to accept that [laughs]. So, I really like what you said about, oh, you can change it. And, you know, that is a bit of a callback to the whole mindset of testing that we mentioned earlier at the beginning.
JOËL: Speaking of test suites, one thing we have not covered yet is parallelizing them. That could probably be its own Bike Shed episode entirely, parallelizing a test suite. We've done entire engagements where our job was to come in and help parallelize a test suite, make it faster. And there's a lot of, like, pros and cons. So, I think maybe we can save that for a different episode. And, instead, I'd like to quickly jump into some other common pain points of test suites, and I would say probably top of that list is test flakiness. How do you tend to approach flakiness in a client project?
STEPHANIE: I am, like, laughing to myself a little bit because I know that I was dealing with test flakiness on my last client engagement, and that was, like, such a huge part of my day-to-day is, like, hitting that retry button. And now that I am on a project with, like, relatively low flakiness, I just haven't thought about it at all [laughs], which is such a privilege, I think [laughs].
But one of the first things to do is just start, like, capturing metrics around it. If you, you know, are hearing about flakiness or seeing it start to plague your test suite or just, you know, cropping up in different ways, I have found it really useful to start, like, I don't know, maybe putting some of that information in a dashboard, just to, like, even make sure that you are making improvements, that things are changing, and seeing if there are any, like, patterns around what's causing the flakiness, because there are so many different causes of it.
And I think it is pretty important to figure out, like, what kind of code you're writing or just trying to wrangle. That's, you know, maybe more likely to crop up as flakiness for your particular domain or application. Yeah, I'm going to stop there and see, like, because I know you have a lot of thoughts about flakiness [laughs].
JOËL: I mean, you mentioned that there's a lot of different causes for flakiness. And I think, in my experience, they often sort of group into, let's say, like, three different buckets. Anytime you're testing code that's doing things that are non-deterministic, that's easy for tests to be flaky. And so, you might think, oh, well, you know, you have something that makes a call to random, and then you're going to assert on a particular outcome of that. Well, clearly, that's going to not be the same every time, and that might be flaky.
But there are, like, more subtle versions of that, so maybe you're relying on the system clock in some way. And, you know, depending on the time you run that test, it might give you a different value than you expect, and that might cause it to fail. And it doesn't have to be that you're asserting on, like, oh, specifically a given millisecond. You might be doing math around, like, a number of days, and when you get near, let's say, the daylight savings boundary, all of a sudden you're off by an hour, and your number-of-days calculation breaks, because relying on the clock is something that is inherently non-deterministic. Non-determinism is a bucket.
Leaky tests is another bucket of failures that I see, things where one test might impact another that gets run after the fact, oftentimes by mutating some sort of global state. So, maybe you're both relying on some sort of, like, external file that you're both writing to or maybe a cache that one is writing to that the other one is reading from, something like that. It could even just be writing records into the database in a way that's not wrapped in a transaction, such that there's more data in the database when the next test runs than it expects.
And then, finally, if you are doing any form of parallelization, that can improve your test suite speed, but it also potentially leads to race conditions, where if your resources aren't entirely isolated between parallel test runners, maybe you're sharing a database, maybe you're sharing a Redis instance or whatever, then you can run into situations where you're both kind of fighting over the same resources or overwriting each other's data, or things like that, in a way that can cause tests to fail intermittently. And I think having a framework like that of categorization can then help you think about potential solutions because debugging approaches and then solutions tend to be a little bit different for each of these buckets.
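For the non-determinism bucket specifically, a common fix is pinning the clock in the test. A minimal sketch, assuming Rails' ActiveSupport::Testing::TimeHelpers is available in specs and a hypothetical Subscription model with an expires_at derived from created_at:

```ruby
RSpec.describe Subscription do
  include ActiveSupport::Testing::TimeHelpers

  it "expires 30 days after it is created" do
    travel_to Time.zone.local(2024, 3, 9, 12, 0, 0) do
      subscription = Subscription.new(created_at: Time.zone.now)

      # With the clock pinned, the expectation is deterministic even across
      # daylight savings boundaries or slow CI runs.
      expect(subscription.expires_at).to eq(Time.zone.now + 30.days)
    end
  end
end
```

Leaky tests and parallelization races call for different remedies, such as wrapping each example in a transaction or giving each parallel runner its own database, which is why knowing which bucket you're in matters.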
STEPHANIE: Yeah, the buckets of different causes of flaky tests you were talking about, I think, also reminded me that, you know, some flakiness is caused by, like, your testing environment and your infrastructure. And other kinds of flakiness are maybe caused more from just the way that you've decided how your code should work, especially that, like, non-deterministic bucket. So, yeah, I don't know, that was just, like, something that I noticed as you were going through the different categories. And yeah, like, certainly, the solutions for approaching each kind are very different.
JOËL: I would like to pitch a talk from RubyConf last year called "The Secret Ingredient: How To Resolve And Understand Just About Any Flaky Test" by Alan Ridlehoover. Just really excellent walkthrough of these different buckets and common debugging and solving approaches to each of them. And I think having that framework in mind is just a great way to approach different types of flaky tests.
STEPHANIE: Yes, I'll plus one that talk, lots of great pictures of delicious croissants as well.
JOËL: Very flaky pastry.
STEPHANIE: [laughs] Joël, do you have any last testing anti-pattern guidances for our audience who might be feeling some test pain out there?
JOËL: A quick list, I'm going to say tight coupling that has then led to having a lot of stubbing in your tests often leads to tests that are very brittle, so tests that maybe don't fail when they should when you've actually broken things, or maybe, alternatively, tests that are constantly failing for the wrong reasons. And so, that is a thing that you can fix by making your code less coupled.
Tests that also require stubbing a lot of things because you do a lot of side effects. If you are making a lot of HTTP calls or things like that, that can both make a test more complex because it has to be aware of that. But also, it can make it more non-deterministic, more flaky, and it can just make it harder to change. And so, I have found that separating side effects from sort of business logic is often a great way to make your test suite much easier to work with.
I have a blog post on that that I'll link in the show notes. And I think this maybe also approaches the idea of a functional core and an imperative shell, which I believe was an idea pitched by Gary Bernhardt, like, over ten years ago. There's a famous video on that that we'll also link in the show notes. But that architecture for building an app can lead to much nicer tests to write. I guess the general idea being that testing code that does side effects is complicated and painful. Testing code that is more functional tends to be much more pleasant. And so, by not intermingling the two, you tend to get nicer tests that are easier to maintain.
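A rough sketch of that functional core / imperative shell split, with hypothetical class names and a hypothetical notifier client standing in for the side-effecting dependency:

```ruby
# Functional core: a pure decision, trivially unit-testable with in-memory objects.
class InvoiceReminder
  def self.overdue?(invoice, today:)
    !invoice.paid? && invoice.due_on < today
  end
end

# Imperative shell: gathers inputs and performs the side effect. Kept thin, so
# it only needs a couple of tests with the notifier stubbed or faked out.
class InvoiceReminderJob
  def initialize(notifier:)
    @notifier = notifier
  end

  def perform(invoice)
    return unless InvoiceReminder.overdue?(invoice, today: Date.current)

    @notifier.post(text: "Invoice ##{invoice.number} is overdue")
  end
end
```

The pure half gets lots of fast unit tests; the thin shell gets a handful of tests that only have to stub one collaborator.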
STEPHANIE: That's really interesting. I've not heard that guidance before, but now I am intrigued. That reminded me of another thing that I had a conversation with someone about. Because after the RailsConf talk I gave, which was about testing pain, there was some stubbing involved in the examples that I was showing because I just see a lot of that stuff. And, you know, this audience member kind of had that question of, like, "How do you know that things are working correctly if you have to stub all this stuff out?"
And, you know, sometimes you just have to for the time being [chuckles]. And I wanted to just kind of call back to that idea of having those end-to-end tests testing your critical paths to at least make sure that those things work together in the happy way. Because I have seen, especially with apps that have a lot of service objects, for some reason, those being kind of the highest-level test sometimes. But oftentimes, they end up not being composed well, being quite coupled with other service objects. So, you end up with a lot of stubbing of those in your test for them. And I think that's kind of where you can see things start to break down.
JOËL: Yep. And when the RailsConf videos come out, I recommend seeing Stephanie's talk, some great gems in there for building a more maintainable test suite. Stephanie and I and, you know, most of us here at thoughtbot, we're testing nerds. We think about this a lot. We've also written a lot about this. There are a lot of resources in the show notes for this episode. Check them out. Also, just generally, check out the testing tag on the thoughtbot blog. There is a ton of content there that is worth looking into if you want to dig further into this topic.
STEPHANIE: Yeah, and if you are wanting some, like, dedicated, customized testing training, thoughtbot offers an RSpec workshop that's tailored to your team. And if you kind of are interested in the things we're sharing, we can definitely bring that to your company as well.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie has a newfound interest in urban foraging for serviceberries in Chicago. Joël discusses how he uses AI tools like ChatGPT to generate creative Dungeons & Dragons character concepts and backstories, which sparks a broader conversation with Stephanie about AI's role in enhancing the creative process.
Together, the hosts delve into professional growth and experience, specifically how to leverage everyday work to foster growth as a software developer. They discuss the importance of self-reflection, note-taking, and synthesizing information to enhance learning and professional development. Stephanie shares her strategies for capturing weekly learnings, while Joël talks about his experiences using tools like Obsidian's mind maps to process and synthesize new information. This leads to a broader conversation on the value of active learning and how structured reflection can turn routine work experiences into meaningful professional growth.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, as of today, while we record this, it's early June, and I have started foraging a little bit for what's called serviceberries, which is a type of tree/shrub that is native to North America. And I feel like it's just one of those, like, things that more people should know about because it makes these little, tiny, you know, delicious fruit that you can just pick off of the tree and have a little snack. And what's really cool about this tree is that, like I said, it's native, at least to where I'm from, and it's a pretty common, like, landscaping tree.
So, it has, like, really pretty white flowers in the spring and really beautiful, like, orange kind of foliage in the fall. So, they're everywhere, like, you can, at least where I'm at in Chicago, I see them a lot just out on the sidewalks. And whenever I'm taking a walk, I can just, yeah, like, grab a little fruit and have a little snack on them. It's such a delight. They are a really cool tree. They're great for birds. Birds love to eat the berries, too.
And yeah, a lot of people ask my partner, who's an arborist, like, if they're kind of thinking about doing something new with the landscaping at their house, they're like, "Oh, like, what are some things that I should plant?" And serviceberry is his recommendation. And now I'm sharing it with all of our Bike Shed listeners. If you've ever wondered about [laughs] a cool and environmentally beneficial tree [laughs] to add to your front yard, highly recommend, yeah, looking out for them, looking up what they look like, and maybe you also can enjoy some June foraging.
JOËL: That's interesting because it sounds like you're foraging in an urban environment, which is typically not what I associate with the idea of foraging.
STEPHANIE: Yeah, that's a great point because I live in a city. I don't know, I take what I can get [laughs]. And I forget that you can actually forage for real out in, you know, nature and where there's not raccoons and garbage [laughs]. But yeah, I think I should have prefaced by kind of sharing that this is a way if you do live in a city, to practice some urban foraging, but I'm sure that these trees are also out in the world, but yeah, have proved useful in an urban environment as well.
JOËL: It's really fun that you don't have to, like, go out into the countryside to do this activity. It's a thing you can do in the environment that you live in.
STEPHANIE: Yeah, that was one of the really cool things that I got into the past couple of years is seeing, even though I live in a city, there's little pieces of nature around me that I can engage with and picking fruit off of people's [inaudible 03:18] [laughs], like, not people's, but, like, parkway trees. Yeah, the serviceberry is also a pretty popular one here that's planted in the Chicago parks. So, yeah, it's just been like, I don't know, a little added delight to my days [laughs], especially, you know, just when you're least expecting it and you stumble upon it. It's very fun.
JOËL: That is really fun. It's great to have a, I guess, a snack available wherever you go.
STEPHANIE: Anyway, Joël, what is new in your world?
JOËL: I've been intersecting two, I guess, hobbies of mine: D&D and AI. I've been playing a lot of one-shot games with friends, and that means that I need to constantly come up with new characters. And I've been exploring what AI can do to help me develop more interesting or compelling character concepts and backstories. And I've been pretty satisfied with the result.
STEPHANIE: Cool. Yeah. I mean, if you're playing a lot and having to generate a lot of new ideas, it can be hard if you're, you know, just feeling a little empty [laughs] in terms of, you know, coming up with a whole character. And that reminds me of a conversation that you and I had in person, like, last month as we were talking about just how you've been, you know, experimenting with AI because you had used it to generate images for your RailsConf talk.
And I think I connected it to the idea of, like, randomness [laughs] and how just injecting some of that can help spark some more, I think, creativity, or just help you think of things in a new way, especially if you're just, like, having a hard time coming up with stuff on your own. And even if you don't, like, take exactly what's kind of provided to you in a generative AI, it at least, I don't know, kind of presents you with something that you didn't see before, or yeah, it's just something to react to.
JOËL: Yeah, it's a great tool for getting unstuck from that kind of writer's block or that, like, blank page feeling. And oftentimes, it'll give you a thing, and you're like, that's not really exactly what I wanted. But it sparks another idea, which is what I actually want. Or sometimes you can be like, "Hey, here's an idea I have. I'm not sure what direction to take it in. Give me a few options." And then, you see that, and you're like, "Oh, that's actually pretty interesting."
One thing that I think is interesting is once I've come up with a little bit of the character concept, or maybe even, like, a backstory element...so, I'm using ChatGPT, and it has that concept of memory. And so, throughout the conversation, it keeps bringing it back. So, if I tell it, "Look, this is an element that's going to be core to the character," and then later on, I'm like, "Okay, help me brainstorm some potential character flaws for this character," it'll actually find things that connect back to my, like, core concept, or maybe an element of the backstory. And it'll give me like, you know, 5 or 10 different ideas, and some of them can be actually really good.
So, I've really enjoyed doing that. It's not so much to just generate me a character so much as it is like a conversation back and forth of like, "Okay, help me come up with a vibe for it. Okay, now that I have a vibe or a backstory element or, like, a concept, help me workshop this thing. And what about that?" And if I want to say, "It's going to be this character class, what are maybe some ways I could develop it that are unusual?" and just sort of step by step kind of choose your own adventure. And it kind of walking me through the process has been really fun.
STEPHANIE: Nice. Yeah, the way you're talking about it makes a lot of sense to me how asking it to help you, not necessarily do all of it, like, you know, kind of just spit out something that you're like, okay, like, that's what I'm going to use, approaching it as a tool, and yeah, that's really fun. Have you had good experiences then playing with those characters [chuckles]?
JOËL: I have. I think it's also really great for sort of padding out some of the content. So, I had a character I played who was a washed-up politician. And at one point, I knew that I was going to have to make a campaign speech. And I asked ChatGPT, "Can you help me, like...here are the themes I want to hit. Give me a, like, classic, very politician-sounding speech that sounds inspiring but also says nothing at the same time." And it did a really good job of that. And you can tell it, "Oh, that's too long. That's too short. I want three sentences. I want five sentences." And that was great. So, I saved that, brought it to the table, and read out my campaign speech, and it was a hit.
STEPHANIE: Amazing. That's really fun. I like that because, yeah, I don't think...I am so poor at just improvising things like that, even though, like, I want to really embody the character. So, that's cool that you found a way to help you be able to do that because that just feels like kind of what playing D&D can be about.
JOËL: I've never DM'd, but I could imagine a situation where, because the DMs have to improv so much, and you know what the players do, I could imagine having a tool like that available behind the DM screen being really helpful. So, all of a sudden, someone's just like, "Oh, I went to a place," and, like, all of a sudden, you have to, like, sort of generate a village and, like, ten characters on the spot for people that you didn't expect, or an organization or something like that. I could imagine having a tool like that, especially if it's already primed with elements from your world that you've created, being something really helpful. That being said, I've never DM'd myself, so I have no idea what it actually is like to be on the other side of that screen.
STEPHANIE: Cool. I mean, if you ever do try that or have a DM experience and you're like, hmm, I wonder kind of how I might be able to help me here, I bet that would be a very cool experience to share on the show.
JOËL: I definitely have to report back here.
Something that I've been thinking about a lot recently is the difference between sort of professional growth and experience, so the time that you put into doing work. Particularly maybe because, you know, we spend part of our week doing client work, and then we have part of the week that's dedicated to maybe more directly professional growth: our investment day. How do we grow from that, like, four days a week where we're doing client work? Because not all experience is created equal.
Just because I put in the hours doesn't mean that I'm going to grow. And maybe I'm going to feel like I'm in a rut. So, how do I take those four days a week that I'm doing code and transform that into some sort of growth or expansion of my knowledge as a developer? Do you have any sort of tactics that you like to use or ways you try to be a little bit more mindful of that?
STEPHANIE: Yeah, this is a fun question for me, and kind of reminds me of something we've talked a little bit about before. I can't remember if it was, like, on air or just separately, but, you know, we talk a lot about, like, different learning strategies on the show, I think, because that's just something you and I are very into. And we often, like, lean on, you know, our investment day, so our Fridays that we get to not do client work and kind of dedicate to professional development.
But you and I also try to remember that, like, most people don't have that. And most people kind of are needing to maybe find ways to just grow from the day-to-day work that they do, and that is totally possible, I think. And some of the strategies that I have are, I guess, like, it is really...it can be really challenging to, like, you know, be like, okay, I spent 40 hours doing this, and like, what did I learn [chuckles]? Feeling like you have to have something to show for it or something to point to.
And one thing that I've been really liking is these automated check-ins we have at the end of the week. And, you know, I suspect that this is not that uncommon for just, like, a workplace to be like, "Hey, like, how did your week go? Like, what are some ways that it was successful? Like, what are your challenges? Like, where do you need support or help?" And I think I've now started using that as both, like, space for giving an update on just, like, business-y things. Like, "Here's the status of this project," or, like, "Here's, you know, a roadblock that we faced that took some extra time," or whatever.
Then also being like, oh, this is a great time to make this space for myself, especially because...I don't know about you, but whenever I have, like, performance review time and I have to write, like, a self-review, I'm just like, did I do anything in the last six months [laughs], or how have I grown in the last six months? It feels like such a big question, kind of like you were talking about that blank page syndrome a little bit.
But if I have kind of just put in the 10 minutes during my Friday to be like, is there something that was kind of just for me that I can say in my check-in? I can go back and, yeah, just kind of start to see just, like, you know, pick out or just pay attention to how, like, my 40 hours is kind of serving me in growing in the ways that I want to and not just to deliver code [laughs].
JOËL: What you're describing there, that sort of weekly check-in and taking notes, reminds me of the practice of journaling. Is that something that you've ever tried to do in your, like, regular life?
STEPHANIE: Oh yeah, very much so. But I'm not nearly as, like, routine about it in my personal life. But I suspect that the routine is helpful in more of a, like, workplace setting, at least for me, because I do have, like, more clear pathways of growth that I'm interested in or just, like, something that, I don't know, not that it's, like, expected of everyone, but if that is part of your goals or, like, part of your company's culture, I feel like I benefit from that structure. And yeah, I mean, I guess maybe that's kind of my way of integrating something that I already do in my personal life to an environment where, like I said, maybe there is, like, that is just part of the work and part of your career progression.
JOËL: I'm curious about the frequency. You mentioned that you sort of do this once a week, sort of a check-in at the end of the week. Do you find that once a week is about the right frequency versus maybe something like daily? I know a lot of these sort of more modern note-taking systems, Roam Research, or Obsidian, or whatever, have this concept of, like, a daily note that's supposed to encourage something that's kind of like journaling. Have you ever tried something more on a daily basis, or do you feel like a week is about...or once a week is about the right cadence for you?
STEPHANIE: Listen, I have, like, complicated feelings about this because I think the daily note is so aspirational for me [laughs] and just not how I work. And I have finally begrudgingly come to accept this no matter how much, like, I don't know, like, bullet journal inspirational content I consume on the internet [laughs]. I have tried and failed many a time to have more frequency in that way. But, I don't know, I think it almost just, like, sets me up for failure [laughs] because I have these expectations.
And that's, like, the other thing. It's like, you can't force learning necessarily. I don't know if this is, like, a strategy, but I think there is some amount of, like, making sure that I'm in the right headspace for it and, you know, like, my environment, too, kind of is conducive to it. Like, I have, like, the time, right? If I'm trying to squeeze in, I don't know, maybe, like, in between meetings, 20 minutes to be like, what did I learn from this experience? Nothing's coming out [laughs].
That was another thing that I was kind of mulling over when this topic was proposed is this idea of, like, mindset and environment being really important because you know when you are saying, like, not all time is created equal, and I suspect that if, you know, either you or, like, the people around you and the environment you're in is not also facilitating growth, then, like, how much can you really expect for it to be happening?
JOËL: I mean, that's really interesting, right? The impact of sort of a broader company culture. And I think that definitely can act as a catalyst for growth, either to kind of propel you forward or to pull you back.
I want to dig into a little bit something you were saying about being in the right headspace to capture ideas. And I think that there's sort of almost, like, two distinct phases. There's the, like, capturing data, and information, and experiences, and then, there's synthesizing it, turning information into learning.
STEPHANIE: Yes.
JOËL: And it sounds like you're making a distinction between those two things, specifically that synthesis step is something that has to happen separately.
STEPHANIE: Ooh, I don't even...I don't know if I would necessarily say that I'm only talking about synthesis, but I do like that you kind of separated those categories because I do think that they are really important. And they kind of remind me a lot about the scientific method a little bit where, you know, you have the gathering data and, like, observations, and you have, you know, maybe some...whatever is precipitating learning that you're doing maybe differently or new.
And that also takes time, I think, or intention at least, to be like, oh, do I have what I need to, like, get information about how this is going? And then, yeah, that synthesis step that I think I was talking about a little bit more. But I don't think either is just automatic. There is, I think, quite a bit of intention involved.
JOËL: I think maybe the way I think about this is colored by reading some material on the Zettelkasten method of note-taking, which splits up the idea of fleeting notes and literature notes, which are sort of just, like, jotting down ideas, or things you've seen, things that you've learned, maybe a thought you had when you read a particular paragraph in a blog post, something like that.
And then, the permanent notes, which are more, like, fully formed thoughts that arise out of the more fleeting ones. And so, the idea is that the fleeting ones maybe you're taking those in a notebook if you're doing it pen and paper. You could be doing it in some sort of, like, daily note, or something like that. And then, those are temporary. They were there to just capture information. Later on, you process that, and then you can throw them out if you need to.
STEPHANIE: Yeah, that makes a lot of sense. This has actually been a shift for me, where I used to rely a lot more on memory and perhaps, like, didn't have a great system for taking things like fleeting notes and, like, documenting kind of [inaudible 18:28] what I was saying earlier about how do I make sure that the information is recorded, you know, for me to synthesize later? And I have found a lot more success lately in that fleeting note style of operating. And thanks to Obsidian honestly, now it's so easy to be like, oh, I'm just going to open a quick new file. And I need as little friction as possible to, like, put stuff somewhere [laughs].
And, actually, I'm excited to talk a little bit more about this with you because I think you're a little bit different where you somehow find the time [laughs] and care to create your diagrams. I'm like, if I can, for some reason, even get an Obsidian file open, I'll tab to Slack. And I send myself a lot of notes in my just own personal DM space. In fact, it's actually kind of embarrassing because I use the Command+K shortcut to navigate to my own personal DMs, which you can get to by typing me, like, M-E.
And sometimes I've accidentally just entered that into a channel chat [laughs], and then I have to delete it really quick later when I realize what I've done. So, yeah, like, I meant to navigate to my personal notes, and I just put in our team chat, "Me [laughs]." And, I don't know, I have no idea how that comes up [laughs], what people think is going on. But if anyone's listening to this podcast from thoughtbot and has seen that of me, that's what happened.
JOËL: You may not be the only one who's done that.
STEPHANIE: Thank you. Yeah [laughs], that's good to know.
JOËL: I want to step back a little bit because we've been talking about, like, introspection, and synthesis, and finding moments to capture information. And I think we've sort of...there's an unspoken assumption here that a way to kind of turbocharge learning from day-to-day experience is some form of synthesis or self-reflection. Would you agree with that statement?
STEPHANIE: Okay. This is another thing that I am perhaps, like, still trying to figure out, and we can figure it out together, which is separating, like, self-driven learning and, like, circumstance-driven learning. Because it's so much easier to want to reflect on something and find time to be, like, oh, like, how does this kind of help my goals or, like, what I want to be doing with my work? Versus when you are just asked to do something, and it could still be learning, right? It could still be new, and you need to go do some research or, you know, play around with a new tool. But there's less of that internal motivation or, like, kind of drive to integrate it. Like, do you have this distinction?
JOËL: I've definitely noticed that when there is motivation, I get more out of every hour of work that I put in in terms of learning new things. The more interest, the more motivation, the more value I get per unit of effort I put in.
STEPHANIE: Yeah. I think, for me, the other difference is, like, generative learning versus just kind of absorbing information that's already out there that someone else's...that is kind of, yeah, just absorbing rather than, like, creating something new from, like, those connections.
JOËL: Ooh.
STEPHANIE: Does that [chuckles] spark something for you?
JOËL: The gears are turning in my head because I'm almost hearing that as, like, a passive versus active learning thing. But just sort of like, I'm going to let things happen to me, and I will come out of that with some experience, and something is going to happen. Versus an active, I am going to, like, try to move in a direction and learn from that and things like that.
And I think this maybe connects back to the original question. Maybe this sort of, like, checking in at the end of the week, taking notes is a way to convert something that's a bit more of a passive experience, spending four days a week doing a project for a client, into something that's a little bit of a more active learning, where you say, "Okay, I did four weeks of this particular type of Rails work. What do I get out of it? What have I learned? What is something new that I've seen? What are some opinions I have formed, patterns I like or dislike?"
STEPHANIE: Yeah, I like that distinction because, you know, a few weeks ago, we were at RailsConf. We had kind of recapped it in a previous episode. And I think we had talked about like, oh, do we, like, to sit in talks or participate in workshops? And I think that's also another example of, like, passive versus active, right? Because I 100%, like, don't have the same type of learning by just, you know, listening to a talk that I do with maybe then going to look up, like, other things this person has put out in the world, finding them to talk to them about it, like, doing something with the content, right? Otherwise, it's just like, oh yeah, I heard this talk. Maybe one day I'll remember it when the need arises [laughs]. I, like, have a pointer to it in my brain. But until then, it probably just kind of, like, sits there, and nothing's really happened with it.
JOËL: I think maybe another thing that's interesting in that passive versus active distinction is that synthesis is inherently an act of creation. You are now creating new ideas of your own rather than just capturing information that is being thrown at you, either by sitting in a talk or by shipping tickets. The act of synthesizing and particularly, I think, making connections between ideas, either because something that, let's say you're in a talk, a speaker said that sparks an idea for yourself, or because you can connect something that speaker said with another idea that you already have or an idea that you've seen elsewhere.
So, you're like, oh, the thing this person is saying connects to this thing I read in a book or something another speaker said in an earlier session, or something like that. All of a sudden, now you're creating these new bits of knowledge, new perspectives, maybe even new mental models. We talked about mental models last week. And so, knowledge is not just the facts that you absorb or memorize. A lot of it is building the connections between those facts. And those are things that are not always given to you. You have to create them yourself.
STEPHANIE: Yeah, I am nodding my head a lot because that's resonating with, like, an experience that I'm having kind of coaching and mentoring a client developer on my team who is earlier in her career. And one thing that I've been really, like, working on with her is asking like, "Oh, like, what do you think of this?" Or like, "Have you seen this before? What are your reactions to this code, or, like this comment?" or whatever.
And I get the sense that, like, not a lot of people have prompted her to, like, come up with answers for those kinds of questions. And I'm really, really hopeful that, like, that kind of will help her achieve some of the goals that she's, like, hoping for in terms of her technical growth, especially where she's felt like she's stagnated a little bit.
And I think that calls back really well to what you said at the beginning of, like, you can spend years, right? Just kind of plugging away. But that's not the same as that really active growth. And, again, like, that's fine if that's where you're at or want to be at for a little while. But I suspect if anyone is kind of, like, wondering, like, where did that time go [laughs]...even for me, too, like, once someone started asking me those questions, I was like, oh, there's still so much to figure out or explore.
And I think you're actually really good at doing that, asking questions of yourself. And then, another thing that I've picked up from you is you ask questions about, like, what are questions other people would have? And that's a skill that I feel like I still have yet to figure out. I'm [chuckles] curious what you think about that.
JOËL: That's interesting because that kind of goes to another level. I often think of the questions other people would have from a more, like, pedagogical sense. So, I write a lot of blog posts. I write a lot of talks that I give. So, oftentimes when I'm creating that kind of material, there's a bit of an inner critic who's, you know, sitting in the audience listening to myself speak, and who's going to maybe roll their eyes at certain points, or just get lost, or maybe raise their hand with a question. And that's who I try to address those things for, so that when I go through it the next time, that inner critic is actually feeling engaged and paying attention.
STEPHANIE: Do you find that you're able to do that because you've seen that happen enough times where you're like, oh, I can kind of predict maybe what someone might feel confused about? I'm curious, like, how you got from being, like, well, I know what I would be confused about to what would someone else be unsure or, like, want more information about.
JOËL: Part of the answer there is that I'm a very harsh critic myself.
STEPHANIE: [laughs] Yes.
JOËL: So, I'm sitting in somebody else's talk, and there are probably parts where I'm rolling my eyes or being like, wait a minute, how did you get from this idea to this other thing? That doesn't follow. And so, I try to turn that back towards myself and use that as fuel to make my own work better.
STEPHANIE: Yeah, that's cool. I like that. Even if it's just framed as, like, a missed opportunity for people to have better or more comprehensive understanding. I know that's something that you're, like, very motivated to help kind of spread more of [laughs]. Understanding and learning is just important to you and to me. So, I think that's really cool that you're able to find ways to do that.
JOËL: Well, you definitely want to, I think, keep a sort of beginner's mindset for a lot of these things, and one of the best ways to do that is to work with beginners. So, I spent a lot of time, back in the day, for example, in the Elm language chat room, just helping people answer basic questions, looking up documentation, explaining sort of basic concepts.
And that, I think, helped me get a sense of like, where were newcomers to the language getting stuck? And what were the explanations of those concepts that really connected? Which I could then translate into my work. And I think that that made me a better developer and helped me build this, like, really deep understanding of the underlying concepts in a way that I wouldn't have had just writing code on my own.
STEPHANIE: Wow, forum question answering hero. I have never thought to do that or felt compelled to do that. But I remember my friend was telling me, she was like, "Yeah, sometimes I just want to feel good about myself. And I remember that I know things that other people, like, are wanting to find out," and she just will answer some easy questions on Stack Overflow, you know, about, like, basic Rails stuff or something. And she is like, "Yeah, and that's doing my good deed [laughs]." And yeah, I think that it also, you know, has the same benefits that you were just saying earlier about...because you want to be helpful, you figure out how to actually be helpful, right?
JOËL: There's maybe a sense as well that helping others, once more, forces you into more of an active mindset for growth in the same way that interrogating yourself does, except now it's a beginner who's interrogating you. And so, it forces you to think a little bit more about those whys or those places where people get stuck. And you've just sort of assumed it's a certain way, but now you have to, like, explain it and really get into some of the concepts.
STEPHANIE: So, on the show, we've talked a lot about the fun things you share in the dev channel in our Slack workspace. But I recently discovered that someone (Was it you?) created an Obsidian MD channel for our favorite note-taking software. And in it, you shared a really cool tool that is available in Obsidian called mind maps.
JOËL: Yeah, so mind maps are a type of diagram. They're effectively a tree structure, but they don't really look like that when you draw them out. You start with a sort of topic in the center, and then you just keep drawing branches off of that, going every direction. And then, maybe branches off branches and keep going as you add more content. Turns out that Mermaid.js supports mind maps as a graph type, and Obsidian embeds Mermaid diagrams. So, you can use Mermaid's little language to express a mind map. And now, all of a sudden, you have mind mapping as a tool available for you within Obsidian.
STEPHANIE: And how have you been using that to kind of process and experience or maybe, like, end up with some artifacts from, like, something that you're just doing in regular day-to-day work?
JOËL: So, kind of like you, I think I have the aspiration of doing some kind of, like, daily note journaling thing and turning that into bigger ideas. In practice, I do not do that. Maybe that's the thing that I will eventually incorporate into my practice, but that's not something that I'm currently doing. Instead, a thing that I've done is a little bit more like you, but it's a little bit more thematically chunked. So, for example, recently, I did several weeks of work that involved doing a lot of work on module-level documentation.
You know, I'd invested a lot of time learning about YARD, which is Ruby's documentation system, and trying to figure out, like, what exactly are docs that are going to be helpful for people? And I wanted that to not just be a thing I did once and then I kind of, like, move on and forget it. I wanted to figure out how can I sort of grow from that experience maximally? And so, the approach I took is to say, let's take some time after I've completed that experience and actually sort of almost interrogate it, ask myself a bunch of questions about that experience, which will then turn into more broad ideas.
And so, what I ended up doing is taking a mind-mapping approach. So, I start with a center circle that just says, "My experience writing docs," and then I kind of ring it with a series of questions. So, what are questions that might be interesting to ask someone who just recently had experience writing documentation? And so, I come up with 4, 5, 6 questions that could be interesting to ask of someone who had that experience. And here I'm trying to step away from myself a little bit.
And then, maybe I can start answering those questions, or maybe there are sub-questions that branch off of that. And maybe there are answers, or maybe there are answers that are interesting but that then trigger follow-up questions. And so I'm almost having a conversation with myself and using the mind map as a tool to facilitate that. But the first step is putting that experience in the center and then ringing it with questions, and then kind of seeing where those lead.
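For anyone who hasn't seen it, Mermaid's mind map syntax is indentation-based, so the kind of map Joël describes might look something like this inside an Obsidian note; the questions below are only illustrative placeholders, not his actual notes.

```mermaid
mindmap
  root((My experience writing docs))
    What made a doc genuinely helpful?
      Examples over prose
    What was hard about YARD?
    What would I do differently next time?
    What surprised me?
```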
STEPHANIE: Cool. Yeah, I am, like, surprised that you're still following that thread because the module docs experience was quite a while ago now. We even, you know, had an episode on it that I'll link in the show notes.
How do you manage, like, learning new things all the time and knowing what to, like, invest energy and attention into and what to kind of maybe, like, consider just like, oh, like, I don't know, that was just an experience that I had, and I might not get around to doing anything with it?
JOËL: I don't know that I have a great system. I think sometimes when I do, especially a more prolonged chunk of time doing a thing, I find it really worthwhile to say, hey, I don't want that to sort of just be a thing that was in my memory, and then it moves out. I'd like to pull out some more maybe practical or long-term ideas from it. Part of that is capture, but some of that is also synthesis.
I just spent two weeks or I just spent a month using a particular technology or doing a new kind of task. What do I have to show for it? Are there any, like, bigger ideas that I have here? Does this connect with any other technologies I've done or any other ideas or theories? Did I come up with any opinions? Did I like this technology? Did I not? Are there elements that were inspirational?
And then capturing some of that eventually with the idea of...so I do a sort of Zettelkasten-style permanent note collection, the idea being to create at least a few of those based off of the experience that I can then connect to other things. And maybe it eventually turns into other content. Maybe it's something I hold onto for a while. In the case of the module docs, it turned into a Bike Shed episode. It also turned into a blog post that was published this past week. And so, it does have a way of coming back.
STEPHANIE: Yeah. Yeah. One thing that sparked for me was that, you know, you and I spend a lot of time thinking about, like, the practice of writing software, you know, in the work we do as consultants, too. But I find that, like, you can also apply this to the actual just your work that you are getting paid for [laughs]. This was, I think, a nascent thought in the talk that I had given. But there's something to the idea of, like, you know, if you are working in some code, especially legacy code, for a long time, and you learn so much about it, and then what do you have to show for it [chuckles], you know?
I have really struggled with feeling like all of that work and learning was useful if it just, like, remains in my memory and not necessarily shared with the team or, I don't know, just, like, knowing that if I leave, especially since I am a contractor, like, just recognizing that there's value in being like, oh, I spent an hour or, like, half a day sifting through this complex legacy code just to make, like, a small change. But that small change is not the full value of all of the work that I did. And I suspect that, like, just the mind mapping stuff would be really interesting to apply to more. It's not, like, just practical work, but, like, more mundane, I don't know, like, labor [laughs], if you will.
JOËL: I can think of, like, sort of two types of knowledge that you can take out of something like that. Some of it is just understanding how this legacy system works, saying, oh, well, they have this user model that's connected to this old persona table, which is kind of unused, but we sometimes rely on in this legacy case. And you've got to have this permission flag turned on and, like, all those things that you had to just discover by reading the code and exploring. And that's going to be useful to you as long as you work in that legacy codebase, as long as you work through that path. But when you move on to another project, that knowledge probably doesn't serve you a whole lot.
There are things that you did throughout that journey, though, that you can probably pull out that are going to be useful to you on other projects. And that might be maybe you came up with a new way of navigating the code or a new way of, like, finding how different pieces were connected. Maybe it was a diagramming tool; maybe it was some sort of gem. Maybe it was just a, oh, a heuristic, like, when I see a model, I like to follow the associations first. And I always go for the has_manys over the belongs_tos because those generally lead me in the right direction. Like, that's really interesting insight, and that's something that might serve you on a following project.
You can also pull out bigger things like, are there refactoring techniques that you experimented with or that you learned on this project that you would use again elsewhere? Are there ways of maybe quarantining scary code on a legacy project that are a thing that you would want to make more consistent part of your practice? Those are all great things to pull out of, just a like, oh yeah, I did some work on a, like, old legacy part of an app. And what do I have to show for it? I think you can actually have a lot to show for it.
STEPHANIE: Yeah, that's really cool. That sounds like a sure way of multiplying the learning. And I think I didn't really consider that when I was first talking about it, too. But yeah, there are, like, both of those things kind of available to you to, like, learn from. Yeah, it's like, that time is never just kind of, like, purely wasted. Oh, I don't know, sometimes it really feels like that [laughs] when you are debugging something really silly.
But yeah, like, I would be interested in kind of thinking about it from both of those lenses because I think there's value in what you learn about that particular system in that moment of time, even if it might not translate to just future works or future projects. And, like, that's something that I think we would do better at kind of capturing, and also, there's so much stuff, too, kind of to that higher level growth that you were speaking to.
JOËL: I think some of the distinctions we're talking about here is something that was explored in an older episode on note-taking with Amanda Beiner, where we sort of explored the difference between exploratory notes, debugging notes, idea notes, and how note-taking is not a single thing. It can serve many purposes, and they can have different lifespans. And those are all just ways to aid your thinking. But being maybe aware of the kind of thinking that you're trying to do, the kind of notes you're trying to take can help you make better use of that time.
STEPHANIE: I have one last question for you before we wrap up, which is, do you find, like, the stuff we're talking about to be particularly true about software development, or it just happens to be the thing that you and I both do, and we also love to learn, and so, therefore, we are able to talk about this for, like, 50 minutes [laughs]? Are you able to make any kind of distinction there, or is it just kind of part of pedagogy in general?
JOËL: I would say that that sort of active versus passive thing is a thing that's probably true, just about anything that you do. For example, I do a lot of bouldering. Just going spending a lot of time on the wall, climbing a lot; that's going to help me get better. But a classic way that people try to improve is filming themselves or having a friend film themselves, and then you can look at it, and then you evaluate, oh, that's what I did. This is where I was struggling to get the next hold. What if I try to do something different?
So, building in an amount of, like, self-reflection into the loop all of a sudden catalyzes that learning and helps you grow at a rate that's much more than if you're just kind of mindlessly putting time into it. So, I would go so far as to say that self-reflection, synthesis—those are all things that are probably going to catalyze growth in most areas of your life if you're being a little bit more self-aware. But I've found that it's been particularly useful for me when it comes to trying to get better at the job that I do every week.
STEPHANIE: Yeah, I think, for me, it's like, yeah, getting better at being a developer rather than being, you know, a software developer at X company. Like, not necessarily just getting better at working at that company but getting better at the skill itself.
JOËL: And those two things have a way of sort of, like, folding back into themselves, right? If you're a better software developer in general, you will probably be a better developer at that company. Yes, you want domain knowledge and, like, a deep understanding of how the system works is going to make you a better developer at that company. But also, if you're able to find more generic approaches to onboard onto new things, or to debug more effectively, or to better read or understand unknown code of high complexity, those are all going to make you much better at being a developer at that company as well. And they're transferable skills, so they're all really good things to have.
STEPHANIE: On that note. Shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël explains his note-taking system, which he uses to capture his beliefs and thoughts about software development. Stephanie recalls feedback from her recent RailsConf talk, where her confidence stemmed from deeply believing in her material despite limited rehearsal. This leads to a conversation about the value of mental models in building a comprehensive understanding of a topic, which can foster confidence and adaptability during presentations and discussions.
The episode then shifts focus to the practical application of enumerators in Ruby, exploring various mental models to understand their functionality better. Joël introduces several metaphors, such as enumerators as cursors, lazy collections, and sequence generators, which help demystify their use cases.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, what's new in my world isn't exactly a new thing. I've talked about it on the podcast here before, and it's my note-taking system. I have a system where I try to capture notes that are things I believe about software or things I think are probably true about software. They're chunked up in really small pieces, such that every note is effectively one small thesis statement and a paragraph of text, and maybe a diagram or a code snippet to support that. And then, it's highly hyperlinked to other notes. So, I sort of build out some thoughts on software that way.
A thing that I've done recently that's been pretty exciting with that is introducing a sort of separate set of notes that connect to my sort of opinion notes. So, I create individual notes for public works that I've done, things like blog posts or conference talks. Because a lot of those are built on top of ideas that have been sitting in my note system for a while. Readers and listeners get to sort of see the final product, but often it sort of built up over several months or even a couple of years as I added different notes that kind of circled a topic and then eventually got to a thing.
What I did, though, was actually making those connections explicit. And so I use Obsidian. Obsidian has this cool graph view where it just sort of shows all of the notes, and it circles them with, like, connections between them where the notes connect. So, I can now see in a visual format how my thoughts cluster in different topics, but then also which clusters have talks and blog posts hanging off of them and also which ones don't, which ones are like, oh, I have a lot of thoughts on this topic, and I've not yet written about it in a public forum; maybe that would be a thing to explore. So, seeing that visual got me really excited. I was having a good time.
STEPHANIE: Yes, I have several thoughts coming to mind in response, which is, I know you love a visual. I really like the system of, even if you have created content for it, like, you have a space for, like, thoughts about it to evolve. Because you said, like, sometimes content comes out of notes that you've been...or, like, thoughts you've been having over years, but it's like, even afterwards, I'm sure there will still be new thoughts about it, too. I always have a hard time finding a place for that thing kind of once I, I don't know, it's like some of that stuff is never really considered done, right? So, that is really cool.
And I also was just thinking about an old episode of The Bike Shed back when Chris Toomey and Steph Viccari hosted the podcast called "What We Believe About Software," I think, is the title. And I was just thinking about how, like, if only we could just dump all of your notes [laughs] into some, you know, stream [laughs], and that would be really cool. If we ever do, like, an episode like that, that would be really fun. And I'm sure, you know, you already have this, like, huge bank of ideas [laughs].
JOËL: Yes. It is really fun because I build up...the thoughts are often sort of interconnected, and so they might have a topic, but they are very focused. So, I might have, like, three or four things I believe about a particular topic that cluster together. So, we could...and, actually, I have used, in the past, some of those clusters as initial food for thought for a Bike Shed episode.
STEPHANIE: Yeah, that's really neat. I like this idea of a kind of just, like, a repository for putting down what you believe about software as kind of, like, guiding principles for yourself as a developer a little bit.
I remember a piece of feedback I got about my RailsConf talk that I gave a few weeks ago, and someone said like, "Oh, you sounded really confident in what you were talking about." And that surprised me because I, like, didn't practice rehearsing giving the talk all that much [laughs]. It's because they had asked like, "Oh, like, did you practice a lot?" or something like that. And I think I realized that I, like, really believed in what I was sharing and kind of that, I think, was perhaps what they were picking up on.
And even though, like, maybe the rehearsal of the presentation itself was not where I had spent a lot of time on, I had spent a lot of time thinking about what I wanted to share and just building up my confidence around that. So, I thought that was an interesting connection.
JOËL: Yeah, you fully developed the idea. You kind of explored all the side trails, maybe a little bit on your own as well. You're on very familiar terrain. And so, that is a way of building confidence separate from just sort of memorizing a talk.
STEPHANIE: Yeah, yeah. Exactly.
JOËL: In a sense, I almost feel like that's a better sense of confidence because then you can sort of...you can roll with the punches. You know, if a slide is out of order or something, sure, it maybe messes up a little bit of the narrative that you're trying to say. But you're not like, "Oh no, what is this content?" You're like, "Oh yeah, this thing," and you can dive right into it. Somebody asks you a question, and you're not like, "Oh no, that was not in the script," because, again, you've sort of mastered your topic. You know the area as a whole, even sort of the blurry edges beyond the talk, and can react in a way that is pretty confident.
STEPHANIE: Yeah. I still definitely fear the open Q&A. I've never done it before, but maybe one day I will be able to because I just, you know, know my topic so well inside and out [laughs] that I can roll with the punches, as you say.
JOËL: Open Q&A is just...it's a roll of the dice. Sometimes, you get some really good conversation topics there, and sometimes, it's just a waste of everyone's time.
STEPHANIE: I like that take [laughs].
JOËL: Maybe that should go into the things I believe about software. So, other than receiving feedback about your RailsConf talk, what is new in your world?
STEPHANIE: Yeah, so I am wrapping up a pretty large project on my client work that we're hoping to release soon. And, in fact, it's actually being released along with a big announcement from the client company to their customers. Essentially, at a conference, they're going to say like, "Hey, like, we now have this new feature." And so, I think there's some hype generated around it. And this past week, we've been doing a lot of internal testing of the feature because there are a lot of employees of my client company who are, like, pretty big users of the product, which is cool because I think we're getting, you know, we have easy access to people who can give us good feedback.
But I am having a hard time with being on the receiving end of the feedback and figuring out, like, what is stuff I need to attend to now before, you know, this big release? And what is stuff that is just kind of, like, general feedback like, "Oh, like, I wish it did this," but, you know, it turns out that that's not really what we were building? And how do I just kind of, like, accept that?
You know, it's coming from a good place, but I can't really help them there, at least right now. And that's hard for me because I like helping people, right? And so, if someone says something like, "Oh, like, I wish it did this," or like, "Oh, that's kind of weird," I'm like, "Oh, I want to just, like, fix that for you right now [laughs]." And I suspect that a lot of other devs can relate to this, especially if, like, you know, you've been working on something for a little bit, and it feels...I'm just going to say it: it feels a little precious to me.
So, what I'm trying to do today, actually, is not look at any of the feedback at all [laughs] and come at it tomorrow with a bit of a calmer vibe and be able to separate out, like, you know, I think all feedback is informative, but not all of it is useful for you at any given moment. Like, if there are bugs, then those will be my immediate priority. If there's maybe some small tweaks that we can make the feature just a little bit more polished, then I also think those are good.
But then we are discovering a few things, too, about, like, what this feature is or could be. And I think those are the things that, you know, need to be brought into a conversation with a broader group and think about, like, is this the direction we want to go? So, that's kind of how I'm bucketing that feedback right now.
JOËL: How do you feel about receiving direct feedback versus having something filtered through something like a product team?
STEPHANIE: Ooh, that's an interesting question. Because right now we're doing, I think, a mix of both that I'm not sure that I really like. On one hand, when it's filtered, it's hard to get to the root of what someone is asking for. And oftentimes, like, it may not even include enough information after the fact to be able to come at it from a dev perspective. But then direct feedback, I think, is just a little bit overwhelming sometimes. And it can be hard to figure out what to pay attention to if you don't have that, like, input from a product team about, like, what the roadmap is looking like or where, you know, strategically their heads are at.
So, one thing that kind of has emerged from this is like, oh, I was getting, you know, notifications for the feedback coming in. And what we did was set up a meeting [laughs] so that we can...maybe all of us can, like, scan it together ahead of time and then come at it with a little bit of context about what's come in but then maybe coalesce around the things that we feel are important.
JOËL: Well, you'll have to keep us updated on how that plays out, and we can kind of hear what is the balance that ends up working well for you.
STEPHANIE: Yeah, I hope so. I think this is actually maybe something that's a bit underexplored from the dev perspective, you know, that in-between stage of you're not totally done because it's not shipped to the world yet, but, you know, you're starting to get a little bit of that input. And what you do with that? Because I think there is some value in being engaged in that process.
JOËL: So, we were talking earlier about this note-taking system that I use and sort of a renewed excitement that I have about it. And one thing that I did when I was going through and finding clusters of things that hadn't been written about was I found that I had a cluster of notes on different mental models that I had for understanding Ruby enumerators, not the enumerable module, but the enumerator object. And I decided, you know what? This would probably make for a good blog post. So, I drafted a blog post, and I've been thinking about this a little bit more recently. So, I've been really hyped about digging into enumerators because of that experience.
STEPHANIE: Yeah, that's very cool. I have to say that I feel like I did not know a lot about enumerators and the API for them kind of before you brought this topic up, and I did a bit of a deep dive in preparation for us to discuss it. I feel like most devs, you know, work with enumerators via methods on enumerable without totally knowing that they are. So, I think that this would be a really interesting episode for people to be like, oh, like, I've been using this stuff, you know, the whole time, and now I can have a different perspective or just more insight on what they can do.
JOËL: Before we dig into individual mental models, though, I want to think a little bit about the concept of mental models as a whole. Years ago, someone gave me advice to sort of pay attention to mental models, ways I think about the world or different code structures, different code approaches, and that really stuck with me. So, I've since been, like, kind of, like, collecting mental models.
And, in a way, they're like a, for me, a bit more of a concrete way to look at a particular topic. So, I can say I'm looking at this particular topic through the lens of a particular mental model that helps me build more clarity around it. And if I have three or four, then I can kind of look at it from three or four different perspectives. And now, all of a sudden, I feel like I'm seeing in three dimensions.
STEPHANIE: Whoa, the Matrix even [laughs]. That's cool. Yeah, I really like that advice. I think I'm going to steal it and start kind of suggesting it to other people because I think, in a way, on this show, that has come through a lot. And talking about things on the podcast has helped me develop a lot of my mental models. And I think we've done a few, like, episodes in the past about various ones we have for just our work because it's like, that's infinite [laughs]. But what I really have been appreciating is that mental models just need to work for you. As long as you're able to understand something, then it's valuable.
And that has really helped me also, like, just get on the same understanding with others because the goal is not necessarily to, like, explain it the way that I would think of it, but figure out what would help them kind of develop their own mental model for understanding something, and, you know, kind of as long as we both feel like we have that shared understanding, no matter what lens it's through. And, you know, sometimes it's even more effective when we are able to share it. But I feel like, you know, you can still find ways to collaborate on something with a diversity of mental models.
JOËL: Yeah, they're a great way to build self-understanding. They're a great way to sort of build understanding between two people. So, I'm a huge fan of the concept. And part of what I've been doing with my note-taking system is trying to capture those as much as possible. If I'm ever, like, trying to understand a complex topic and I'm like, oh, I think I've got a breakthrough here; I understand it; it's kind of like this, or you can imagine it in this perspective, it's like, write that down. That's gold.
STEPHANIE: Very cool. So, Joël, would you be able to share some of your mental models for enumerator?
JOËL: So, one way that I look at it is the idea that an enumerator is effectively a cursor over a collection. So, you have an array, and with a regular array, you're either in the middle of iterating through it using something like each, or you're not. You just have a collection of items. Enumerator introduces the idea that you're actually sort of at a position in the array. So, you're sort of focused on, let's say, the third item or the fourth item. You have a cursor there, and you can move that cursor forward as you sort of step through.
But the really cool thing is you can also kind of pause and just pass that cursor on to someone else, and someone else can move the cursor a few steps further down the collection, pause, pass it on to someone else. And it's totally fine. Nobody has to, like, go through an entire, like, each iteration.
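For readers following along at home, here's a minimal sketch of that cursor idea in plain Ruby (the read_two helper is just for illustration): calling each with no block returns an enumerator, and anyone holding it can step it forward with next.

```ruby
# Calling #each with no block returns an Enumerator: a cursor over the array.
letters = %w[a b c d e].each

letters.next # => "a"
letters.next # => "b"

# Hand the cursor to someone else; they pick up exactly where we left off.
def read_two(cursor)
  [cursor.next, cursor.next]
end

read_two(letters) # => ["c", "d"]
letters.next      # => "e"
```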
STEPHANIE: Yeah. So, when you were talking about cursors, that got me thinking a little bit because I actually have struggled with that concept, especially when it comes to, you know, things code-related. Like, when I've had to work with database things and stuff, like, the idea of a cursor was a little, like, difficult for me to wrap my head around. And I was looking at the methods on enumerator, like the instance methods on enumerator. And one of them actually is what helped me develop this mental model. And I'm excited to see what you think.
But there is a rewind method that basically rewinds the sequence back to its beginning, right? And what that triggered for me was a VHS tape [laughs] and just those, like, car-shaped rewinders for tapes back in the '90s. I don't know if you ever had one in your house, but I did. And I just thought that was such a cool method name because it was very, I don't know, it was just like a word that we use in the English language, right?
So, the idea of, like, tapes, you know, like, cassette tapes or VHS tape kind of also it sounds like it matches well with what you were sharing, too, where it's like, I could pass, I don't know, maybe I, like, listen to a few songs on my cassette tape, and then I give it to someone else, and they can pick up where I left off. And yeah, that was really helpful in understanding, like, a marker of a position a little more than cursor was able to for me.
JOËL: That's really interesting because now I wonder, like, how far we could push that metaphor. So, musical data is encoded on magnetic tape. Cassette tapes typically there are sort of two spools. You start off with all of the tape wound up around one spool, and then as it sort of moves across the read head, it gets wound up on sort of the, I don't know, destination spool. I guess you can call them origin and destination. And because of that, you can sort of be in a, like, partly read state where, you know, half the tape is on the destination spool, half of it is on the origin spool, and you have that read head that's in the middle, and you're just kind of paused there. And you can kind of jump forward in that.
So, I imagine something like that in your metaphor is like an enumerator. Contrast that to imagine just a single spool, which is just we have musical data encoded on magnetic tape, and we wrapped it up on a spool. I feel like that's almost more like a regular array because you don't have that concept of, like, position, or being able to read parts of it or anything like that. It's just, here's some data.
STEPHANIE: Yeah. While you were talking about the two spools, I was thinking about, like, part of what is nice about enumerator is that you can go forward or backwards, right? And that feels a little more possible with that two-spool metaphor [laughs], rather than just unraveling something, where you are kind of discarding what has already been read.
JOËL: The one caveat there is that enumerators can move forward one item at a time. They can only move backwards by jumping back to the beginning. So, you can step forward, but you can't step back.
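A quick sketch of that caveat: the enumerator only steps forward, and rewind jumps all the way back to the start.

```ruby
tape = [10, 20, 30].each

tape.next   # => 10
tape.next   # => 20
tape.rewind # there is no "step back one"; this jumps to the beginning
tape.next   # => 10
```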
STEPHANIE: Yeah, that's fair.
JOËL: You step forward, or you, like, rewind to the beginning. I think, in my mind, I was thinking a little bit more about this metaphor. And I think it's also just a metaphor for what's called the External Iterator Pattern. It's one of the classic Gang of Four Patterns, which is what enumerator, the object in Ruby, is an implementation of. I feel like I always see that in the documentation, like, oh, enumerator is an implementation of the External Iterator Pattern. And I just kind of go, what?
STEPHANIE: [laughs]
JOËL: Or maybe I kind of understand the idea of, like, okay, it's a way to, like, be able to step through a collection. But thinking in terms of a cursor or even your model of a cassette tape, I think that gives me a model, not just for enumerators, but then for better understanding that External Iterator Pattern. Like, now, if I'm ever reading through the Gang of Four book, or some other language says we're doing an External Iterator Pattern, I'll immediately be like, oh, that's a cursor, or that's a cassette tape.
STEPHANIE: Yeah, very cool. I like it.
JOËL: Another mental model that I have is thinking of enumerator in terms of a lazy collection. This is something that you tend to see more in functional programming languages, so the idea that you have a collection of potentially infinite length, or it could even be unknown length. But each element only sort of comes into being as you attempt to read it. So, it's kind of, like, a potentially infinite chain of Schrodinger's boxes. And you've got to open each of them to find out what's inside.
STEPHANIE: Do you know what this reminded me of? Like elementary school math questions that were like, "What comes next in this pattern?" And it has, like, you know, the first, like, four or five values in a sequence or something. And then, you have to figure out, like, what the next value is. But then, in some ways, you know, I think it can depend on whether your enumerator is using the previous value to determine the next one. But yeah, it's like, you can't just jump ahead to figure out what the 10th, you know, value in this pattern is without kind of knowing what's come before it.
JOËL: And sort of that needing to step through the entire collection, sort of one element at a time.
STEPHANIE: Yeah, exactly.
JOËL: I think a way that that concept is interesting, to me, is situations where a collection might be expensive, and you don't necessarily need all of it. So, you might have a bunch of calculations, but you can stop when you've hit the first one that succeeds or that matches a certain criteria. And so, it's not worth it to calculate the entire array of calculations if you're going to stop at the third one. And you could do that with some sort of, like, loop or something like that. But having it as a collection means you get to just treat it like an array, and you can call detect on it and do all the nice things that you're used to. It just happens to be a little bit more efficient in terms of not creating more data than you need to.
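As a hedged sketch of that situation (expensive_check is a made-up stand-in for a costly calculation): wrapping the work in an enumerator means only the items that detect actually reads ever get computed.

```ruby
def expensive_check(n)
  sleep 0.1 # pretend this is slow
  n * n
end

results = Enumerator.new do |yielder|
  (1..Float::INFINITY).each { |n| yielder << expensive_check(n) }
end

# #detect pulls values one at a time and stops at the first match,
# so only the first four expensive checks ever run here.
results.detect { |value| value > 10 } # => 16
```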
STEPHANIE: Yeah. And I think there's some really cool stuff you can do when you start chaining enumerators with this concept of it being lazy evaluated. So, one of the things I learned in my deep dive is that when you are using the lazy method, you're able to chain enumerators. And they work a bit differently, where the default functionality is, like, everything in the collection gets evaluated through the first method, and then it gets iterated over in the second method. Whereas if you use lazy, I believe how it works is that, like, the first value gets kind of processed by all of the methods. And then, you get, you know, the output before moving on to the second, like, the next value. Does that sound right?
JOËL: Yes. And I think that's where there's often a lot of confusion because there's sort of plain enumerator, and then there's a lazy enumerator that Ruby provides. A plain enumerator is a lazy list in the sense that items don't get evaluated unless you try to reach for them. So, if you have an enumerator and you say, "Just give me the first five items," it will do that. And even if the collection was 200 items long, the next 195 don't get evaluated. So, that's very efficient there.
Where you would get into trouble is that plain enumerators are not lazy when it comes to traversals. So, any method that would traverse the entire collection, so something like a map or a select, is not going to be lazy because it's going to traverse the entire collection, therefore forcing us to evaluate each of the items in there. Whereas something like enumerable lazy will not actually traverse the collection when you do your map or you're selecting. It will wait for you to say, "Give me the first item," or "Give me the first ten items," or something like that. But you don't always need lazy. You really only need lazy when you're doing a traversal method.
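A small example of that distinction: reading a few items from a plain enumerator is fine, but a traversal method like map forces evaluation unless you go through lazy.

```ruby
numbers = (1..Float::INFINITY)

# Just reading items is fine without lazy:
numbers.first(5) # => [1, 2, 3, 4, 5]

# A plain #map would try to traverse the whole (infinite) range and never return:
# numbers.map { |n| n * 2 }.first(5)

# With #lazy, map does no work up front; each value flows through the chain
# only when #first asks for it.
numbers.lazy.map { |n| n * 2 }.first(5) # => [2, 4, 6, 8, 10]
```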
STEPHANIE: Okay. Cool, cool, cool. That makes a lot of sense.
JOËL: I think a sort of spinoff metaphor that I have there is this idea of a lazy list. Another concept that, in my mind, is very adjacent to lazy lists is the concept of streams. And streams I typically think of them in terms of, like, files or networking, things like that. But a thing that you can do let's say you're working on data that's in a very large file, so big that you can't fit it into memory, a common solution there is streaming it. So, you don't load the entire file into memory and then operate on it. Instead, little chunks of it are loaded into memory. You operate on them, and then you release that memory and load the next chunk. So, you sort of work through that file in chunks, but you'd only have, you know, 1 line or ten lines or however big your chunk is in memory at a time.
An enumerator allows you to do that with things that are not files. So, this could be a situation where, let's say, you're reading a lot of data from the database. You just have too many rows. You can't load them all into memory at once. But you do want to traverse through them. You could chunk that using enumerator so that every, you know, it loads 100 rows at a time or 1,000 rows at a time, or something like that. And your enumerator allows you to treat that as though it's a single array, even though, in the background, it's being chunked into pieces so that you never have more than a thousand rows at a time in memory. So, it allows you to do some, like, really nice sort of memory performance things.
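A rough sketch of that chunking idea (fetch_page is a hypothetical method that loads one page of rows from the database): the enumerator stitches the pages together so callers see one flat collection, while only one page is ever held in memory.

```ruby
def each_row(batch_size: 1_000)
  Enumerator.new do |yielder|
    offset = 0
    loop do
      rows = fetch_page(offset, batch_size) # hypothetical paging query
      break if rows.empty?

      rows.each { |row| yielder << row }
      offset += batch_size
    end
  end
end

# Callers never see the batching:
#   each_row.take(10)
#   each_row.each { |row| export(row) }
```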
STEPHANIE: When would you want to use this over kind of something like batching queries?
JOËL: So, I think ActiveRecord find_in_batches does something like this under the hood.
STEPHANIE: Oh, cool.
JOËL: I don't know if they use Ruby's enumerator or if they sort of build their own custom extension to it, but it's built on this idea.
STEPHANIE: Okay, that's really neat. I have another mental model that I wanted to get your thoughts on.
JOËL: Yeah!
STEPHANIE: One of the ways that I looked up that you can construct an enumerator, an infinite enumerator like we were talking about a little bit earlier, was with the produce class method. And that actually got me thinking about a production line and this idea that, you know, you have this mechanism for, you know, producing some kind of material or, like, good or something like that. And it's just there and waiting and ready [laughs] for you to, like, kind of ask for it, like, what it needs to do. And you can do that, like, sometimes in batches, right? If you are asking for like, "Okay, I want a thousand units," and then the production line goes to work [laughs]. But yeah, that was another one of those things where I'm like, wow, they really, I think, came up with a cool method name that evoked, like, an image in my head.
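The production-line image maps pretty directly onto Enumerator.produce, which keeps generating the next value from the previous one, but only when something asks for it:

```ruby
counter = Enumerator.produce(1) { |n| n + 1 }

counter.first(5) # => [1, 2, 3, 4, 5]
```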
JOËL: That's the power of naming, right? And I think it's interesting you've mentioned twice how going through the method names on enumerator and finding different method names all of a sudden, like, turned on a light bulb in your mind. So, if you're naming things well, it can be incredibly useful for users of your library to pick up on what you're trying to do.
So, I want to circle back to something that you mentioned earlier, the idea of elementary school quizzes where you have to, like, figure out the next item in the sequence. Because that, for me, is very similar to my mental model: the idea that an enumerator is a sequence generator. So, instead of thinking of it as, oh, it's like an array or it's some kind of collection, instead, think of it as a robot that I can just ask it, hey, give me a value, and it will give me a value. And then, it will, like, keep doing that as long as I keep asking it for it. And those values, you know, they could be totally random. You can build one of those.
But you can also have it so that the values sort of come from a sequence. It's not like an array where you're like, oh, I'm going to, like, predefine an array of, I don't know, the Fibonacci sequence, and when someone asks me for the third value, I'll just go and read that third value from the array. Instead, it knows the algorithm, and it just says, "Oh, you want the next value in the Fibonacci sequence? Let me calculate it. Here it is. Oh, you want the next value? Here it is." And so, thinking from that perspective helped me really come to terms with the concept that values really do get calculated just in time. It's not really a collection. It's an object that can give you new values if you ask it.
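A sketch of that sequence-generator model: the enumerator knows the Fibonacci algorithm and calculates each value just in time, rather than reading from a predefined array.

```ruby
fib = Enumerator.new do |yielder|
  a, b = 0, 1
  loop do
    yielder << a
    a, b = b, a + b
  end
end

fib.first(8) # => [0, 1, 1, 2, 3, 5, 8, 13]
fib.next     # => 0 (it also works like a cursor)
```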
STEPHANIE: Yeah, okay. That is making a lot more sense kind of in conjunction with the lazy list model that you shared earlier, and even a little bit with the production line that I was kind of sharing where it's like, you know, in this case, kind of, it's, like, the potential for a value, right?
JOËL: Right, exactly. And, you know, these are all mental models that converge on the same ideas because they're all just slightly different perspectives on what the same object does. And so, there is going to be some overlap, some converging between all of them. I have another fun one. Can I throw it at you?
STEPHANIE: Please.
JOËL: This one's a little bit different, and it's the idea that enumerators are a tool to bring your own iteration to a collection. So, imagine a situation where you're building your own, let's say, binary tree implementation. And there are multiple ways to traverse through a binary tree. In particular, let's say you're doing depth-first search. There are sort of three classic ways to traverse that are called pre-order, post-order, and in-order traversals. And it really is just sort of what order do you visit all the children in your tree?
Now, the point of a collection, oftentimes, is you need a way to iterate through it. And a classic solution would be to include enumerable, the module. In order to do that, you have to define a way to iterate through your collection. You call that each. And then, enumerable just gives you all the other nice things for free. The question is, though, for something like a tree where there are multiple valid ways to traverse, which one do you pick to make it the each that gets sort of all the enumerable goodies, and then the others are just, like, random methods you've defined?
Because if you define, let's say, pre-order traversal as each, now your detect and select and all those are going to work in pre-order, but the others are not going to get that. So, if you map over a tree, you're forced to map over in pre-order because that's what the library author chose. But what if you want to map over a tree in post-order or in-order?
STEPHANIE: Yeah, well, I'm guessing that here's where enumerator comes in handy [laughs].
JOËL: Yes. The approach here is, instead of designating sort of one of those traversals as the sort of blessed traversal that gets to have enumerable, you build three of these, one for each of these traversals. And then, what's really nice is that because enumerators are themselves enumerable, they have map and select and all of these things built in.
Now you can do something like my_tree.preorder.map or my_tree.postorder.map. And you get all the goodies for free, but the users of your library get to basically choose which traversal they want to have. As a library author, you're not forced to pick ahead of time and sort of choose, this is the one I'm going to have. You sort of bring your own traversal by providing an enumerator, and then everything else just kind of falls into place.
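Here's a hedged sketch of bring-your-own-traversal (the Tree class and node structure are made up for illustration): each traversal returns its own enumerator, and because enumerators are enumerable, every ordering gets map, select, and friends for free.

```ruby
class Tree
  Node = Struct.new(:value, :left, :right)

  def initialize(root)
    @root = root
  end

  def preorder
    Enumerator.new { |yielder| walk_pre(@root, yielder) }
  end

  def postorder
    Enumerator.new { |yielder| walk_post(@root, yielder) }
  end

  private

  def walk_pre(node, yielder)
    return unless node

    yielder << node.value
    walk_pre(node.left, yielder)
    walk_pre(node.right, yielder)
  end

  def walk_post(node, yielder)
    return unless node

    walk_post(node.left, yielder)
    walk_post(node.right, yielder)
    yielder << node.value
  end
end

tree = Tree.new(
  Tree::Node.new(1, Tree::Node.new(2), Tree::Node.new(3))
)

tree.preorder.map { |v| v * 10 }  # => [10, 20, 30]
tree.postorder.map { |v| v * 10 } # => [20, 30, 10]
```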
STEPHANIE: Bring Your Own Traversal (BYOT) [laughter]. I like it. Yeah, that's cool. I can see how that would be really handy. I have not yet encountered a situation where I needed to get that deep into how my iteration is traversed, but that's really interesting. And, I mean, I can start even imagining, like, having an each method defined in these different ways, and then all of that being able to be composed with some of the other...just other methods. And now you have, like, so many different ways to perhaps, like, help, you know, different performance use cases.
JOËL: Yeah, it can be performance. I often tend to think of enumerator as a performance thing because of its sort of lazy properties, because it allows you to sort of stream or chunk data that you're working with. But in the case of this mental model of the Bring Your Own Traversal, it actually is more about flexibility and having sort of the beauty of Ruby without having to compromise on, oh, I have to pick a single way to traverse a collection.
STEPHANIE: But I really appreciate kind of this discussion about enumerator because this was previously, like, I don't think I have really ever used the class itself to solve a problem, but now I feel a lot more equipped to do so with a couple of the different kind of perspectives. And I think what they helped me do is just prime myself. If I see a problem that might benefit from something being iterated in a lazy way, like, being like, oh, I remember this thing, this mental model. Now I can go kind of look at the documentation for how to use it. And yeah, like, I don't know how I would have stumbled across, like, reaching for it otherwise.
JOËL: That's a really interesting thing to notice because we've been talking a lot about how mental models can be a tool for understanding. But once you build an understanding, even though it's somewhat fuzzy, they're also a great tool for sort of recall. So, not only are you thinking, okay, well, this mental model says enumerators are kind of like this, or they function in this way.
On the flip side of it, you can say, "Well, lazy evaluation problems are often enumerator problems. Like, streaming or chunked data problems are often enumerator problems. Multiple traversals are enumerator problems." So, now, even though you don't, like, fully understand it in your mind, you've got that recall where you can enter it, where you can come across that problem, and immediately you're like, oh, I'm dealing with multiple traversals here. I don't remember exactly how, but somehow, in my mind, I've got a connection that says, "Enumerators are a solution for this. Let me dig into that."
STEPHANIE: Yeah, especially as an alternative to where I would normally reach for something...a more kind of common enumerable method. Because I definitely know that feeling of like, oh, like, I wish it could just, like, do this a little bit differently, you know. And it turns out that, you know, something like that probably exists already. I just needed to know what it was [laughs].
JOËL: On that theme of I wish that I could have something that behaved just a little bit more...like, I'm doing something slightly weird, and I wish they would behave more, like, just plain Ruby does normally with my, like, collections I'm familiar with.
I'm going to pitch a talk that I gave at RubyConf Mini called "Teaching Ruby to Count." Some of these mental models actually showed up there. But the whole idea is like, oh, if you're bringing in sort of more custom objects and all of that, how can you just tweak them a little bit so that they're just as joyful to use and interact with as arrays, and numbers, and ranges? And they just sort of fit into that beauty of Ruby that we get out of the box.
STEPHANIE: Awesome. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël and Stephanie talk RailsConf! Joël shares how he performed as a D&D character, Glittersense the gnome, to make his talk on Turbo features entertaining and interactive. Stephanie's talk focused on addressing test pain by connecting it to code coupling, offering practical insights and solutions.
They agree on the importance of continuous improvement as speakers and developers and trying new approaches in talks and code design, and recommend Jared Norman's RailsConf talk on design patterns, too!
Transcript:
We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP.
Join for all seven of the weekly sessions or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I think I can speak for both of us and say what's new in our world is that you and I just came back from RailsConf in Detroit.
JOËL: Yeah, we were there for, I guess, it's a three-day conference. Both of us were giving talks.
STEPHANIE: Yeah. I don't think we've both spoken at a conference for at least a little over a year, so that was really fun kind of to catch up in person. And there was a whole crew of thoughtboters who were there. Yeah, I feel like we were hanging out, like, a lot [chuckles] all of last week, just seeing each other, talking about, you know, rehearsing our talks and spending time together on...there was, like, a hack day, and we were sitting at the table together. So, I feel like I'm totally caught up on everything that's new in your world, and that's it. That's the end of the show [laughs].
JOËL: On that note, shall we wrap up?
STEPHANIE: [laughs] That would not be very fair to our listeners.
[laughter]
JOËL: Yeah. So, how was the conference speaking experience for you?
STEPHANIE: Ooh, it was really great this year. I have not spoken at a RailsConf before, so this was actually, I think, a bigger stage than I had experienced before, and I had a great time. I met Ruby friends, new and old, and, yeah, I left feeling very gooeyed, and very energized, and just so grateful for the Rails community [laughs]. Yeah, I had a very lovely time, kind of being a little bit outside my normal life for a few days. And I think my favorite part about these things is just like, anywhere you go, you can kind of just have a shared interest with someone, and you can start a conversation with them.
JOËL: That's really interesting. Do you find yourself just reaching out to strangers at conferences like this? Or do you tend to just hang out with the people that you know?
STEPHANIE: Oh, I think a little bit of both. I like to get meals with people I know. But if I'm just hanging out in, like, the lobby or if I happen to get a seat for a talk and I'm sitting next to someone that I don't know, I find it quite easy to just be like, "Hi, like, I'm Stephanie. Are you excited for this talk?" Or, like, "What good talks have you seen recently?" There's an aspect of, like, the social butterfly that comes out of me when I'm at these things. Because I just don't get to have, like, easy access to, I don't know, people with, like, that shared interest or people who are willing to just have a conversation with you normally, I think.
JOËL: Yeah, would you describe yourself more as an introvert or an extrovert?
STEPHANIE: I am an extroverted introvert [laughter]. I feel like maybe that might be interpreted as a non-answer, but I think I lean more on the introvert side. But you know when you're with a group of people, and there's not, like, a very clear extrovert in that conversation, and then you're like, oh, I have to do the heavy [chuckles] lifting of the social lubrication [laughs] in this conversation, I can step into that role, reluctantly [laughs].
JOËL: Okay. I like the label that you used, the extrovert introvert, in that I enjoy social situations. I do well in social situations. But they also consume a lot of energy for me. I don't necessarily get sort of recharged by doing social events. So, people will be surprised when they find out that I tend to talk about myself as an introvert because, like, "Oh, but you're, like, you know, you're not awkward. You engage very well in different group situations."
STEPHANIE: You have a podcast [laughs].
JOËL: And the truth is I enjoy those things, right? I really like social interaction, but it does, after a while, wear me out.
STEPHANIE: Yeah, that makes sense. I did want to spend a little bit of time talking about the talk you gave at RailsConf this year: "Dungeons & Dragons & Rails."
JOËL: I got to have a lot of fun with the theme. The actual content was introducing people to Turbo by building an interactive Dungeons & Dragons character sheet using vanilla Rails and a little bit of Turbo. So, we're not even writing any JavaScript. We're just using the Turbo helpers, a little bit of Action Cable to mimic something a little bit like...people who are in the know might be familiar with the site D&D Beyond, which is kind of the official D&D online character sheet website. Of course, it wasn't anywhere near as fancy because it's a 30-minute talk and showcasing different features, but that's what we were aiming for.
STEPHANIE: Yeah, you know, you've talked a bit about giving talks on the show before, but I wanted to get into what made this one different because I think it could be fun for our listeners.
[laughter]
JOËL: The way I structured this talk, it has a theme. It's about Dungeons & Dragons, and we're building a character sheet. The way I wrote the talk was it's broken up into chapters. Each chapter is teaching a new feature in Turbo that I want to show off. In order to motivate learning each of these features...because I don't like to just say, "Oh, here's a thing that technology can do. Oh, here's a thing that technology can do." That's boring. You need a reason to learn that. So, I needed a reason to say, "We need to add this to a character sheet."
So, every sort of chapter of the talk opens up with a little narrative portion. We're following this character, Glittersense, the gnome, and he's on adventures. And at different points in the adventures, he's going to do different types of roles or need different stats and things. And so, when we reach the point in the adventure where we need that, we sort of freeze frame and then say, "Okay, let's add that as a feature to the character sheet."
And then, oh no, it turns out that this feature is a little bit more complicated. We're going to have to learn a new Turbo feature to do that. Who would have guessed? And then, we learn a new Turbo feature together. And then, we go back to the narrative portion. The adventures of Glittersense continue. And then, oh no, we're going to need to add another feature to the character sheet. And that's sort of how the talk is structured.
STEPHANIE: Yeah. And you did a really cool thing with the narrative portions, which was you basically performed as Glittersense, the gnome, voice and posture, and a lot of really great acting from you [laughs], in my opinion.
JOËL: That is something that came out pretty late in the talk preparation. So, I knew I wanted this kind of alternating story and code structure. Then, like, the weekend before RailsConf, I'm running through my slide deck, and I realized, you know what? What if instead of narrating Glittersense's adventures, what if I went first person for those sections? Glittersense tells his own story.
And then, from there, it wasn't a big jump to say, you know what? This is D&D. If I'm going first person and narrating, I really should do a voice. And this is a conversation I had with a couple of people at the speaker dinner. And, of course, everyone's like, "You should 100% do the voice." And I was really not feeling confident in my ability to pull it off. So, for the next two nights, because I was speaking on the third day, the next two nights at the conference, in the evenings, I'm in the hotel room in front of the mirror just practicing my gnome voice to try to get something that got the persona of Glitterense, the gnome, across to the audience.
STEPHANIE: How would you describe the persona?
JOËL: Very extra.
STEPHANIE: [laughs]
JOËL: Very high energy.
STEPHANIE: Yes. The name Glittersense is very extra, after all.
JOËL: [laughs]. I punctuated a lot of the things that he says with just high-pitched laughter. He's also...so, the framing device for all of this is that you're in a tavern listening to him tell his adventures. I wanted a little bit of the sense that Glittersense is maybe embellishing a little bit. I think it may be too much to say he's full of himself, but he's definitely making himself to be the hero of the story, and maybe making himself to be slightly cooler than he really was.
STEPHANIE: Yeah. I definitely got, like, a little bit of eccentricity, too, from the persona. And you know when you just, I don't know, meet an older person who has, like, a lot of life experience, and they want to tell you about it [laughter], but you do kind of maybe have a little bit of suspicion around how much they're exaggerating [laughs].
But it was really fun. Everyone I talked to afterwards, like, loved it. And I got to share the little nugget that, like, oh yeah, and Joël only, like, started doing the voice, like, decided that he was going to do it two days ago. And they were just all really, like, blown away because it seemed so well practiced, and it was really fun.
JOËL: I got to do something really fun, also, with physical space because Glittersense narrates his portion, sort of the story portions, but then the code portions where we're talking about Turbo, I'm talking in my own voice. And so, when I'm talking about Turbo, I'm standing at the lectern. And when I'm Glittersense, I'm kind of off to the side on the stage and doing the voice.
And so, there's this almost, like, two worlds that are inhabited: one by Joël, the speaker, and one by Glittersense, the gnome. And it got to the point where I don't say or do anything. I only move from the lectern to the, like, portion of the stage where Glittersense lives. And the audience starts chuckling and, like, nothing has happened yet, like, no jokes have been told. No voice has happened. No slides have changed. But the anticipation, people know what's coming.
STEPHANIE: Yeah. And I think the best part, what I really found just really fun and, I don't know, every time it happened, I just really enjoyed it, when you transitioned out of Glittersense, the gnome, and back to Joël because you were so nonchalant about it. You kind of, like, straighten up rather than having your little kind of crouchy gnome posture, and then just walk across back to the podium. And then, in your normal voice, go back to just, you know, sharing very...not necessarily dry, but just, like, straight to the point. "And this is, like, how you, you know, create a frame in [laughs] Turbo," as if nothing happened [laughs] when even just, like, you know, 20 seconds ago, you were just enthusing about, like, slaying the bandit chieftain [laughter] known as Glittersense.
JOËL: Uh-huh. I think, especially when I open, so I get introduced. I'm off stage. I walk onto the stage, and I'm immediately Glittersense. And I'm telling a story, and the intro goes on for, like, quite a while. It's a big story chunk. And then, at some point, I just walk over to the lectern, drop the voice, hit next slide, and it's my title slide. I'm just like, "Okay, now welcome to Dungeons & Dragons on Rails. We're going to build a character sheet together."
STEPHANIE: Yeah, that's exactly the moment I'm thinking of.
JOËL: The walking in as Glittersense and just immediately going to the voice caught everyone by surprise. And then, the, like, oh, he keeps going for this. Is the whole talk going to be like this? And then, the, like, just when you think, oh, he's really going for it, the, like, dropping it and going to the podium and title slide. It wasn't intended to be a funny moment, but I think the contrast and the fact that I just switched over was one of the biggest laughs I got.
STEPHANIE: Yeah, I mean, I think that attests to how good the delivery of it was because that contrast was very felt. So, props to you.
JOËL: I love the idea of, you know, the thought that you put into building a talk and, like, the narrative structure and the pedagogy of the stuff. And, I think, in this particular case, this is almost like a narrative approach called in media res, where you start kind of in the middle. You open your book, or your movie, or whatever in the middle of the story. And then, you kind of come back to the beginning at some point later. So, it starts with some kind of action scene that grabs your attention. So, in this case, my title slide is 10, 15 slides into the talk.
We get immediately started with Glittersense and his adventures. And then, once we're sort of all bought into this world, then we move to the title slide and talk about, okay, we're here to build a character sheet and all that stuff. And I think that it wouldn't have had the same impact if I'd, like, opened with that and then gone into Glittersense's adventures. And that's something that was not the case at the beginning. I really reworked the talk to make it in that order. And I think that the talk had a lot more impact for doing that.
STEPHANIE: Yeah, definitely. I guess I also just wanted to point out that this is very different from all your other talks. And I think it's really cool that, you know, you are a veteran speaker, but you still find ways to do something new and try something that you've never done before, and yeah, find ways, new ways to, like, speak and engage people and teach. I don't know, do you have just any thoughts about why or how you got into a position to be like, "Oh, you know, I'm going to do something super different this time around" [laughs]?
JOËL: So, every talk I give, I try to do something new, something different, to push myself as a speaker to get better. That might be in the writing of the talk; that might be in the delivery. More recently, I've been trying to do more with dynamic presence on stage. So, when I spoke at RubyConf San Diego, I was trying to not just stand at the lectern but to learn to be able to give my talk while also, you know, walking around the stage, looking at the audience, making pauses where it's necessary, not to just be so into the delivery of the talk by just standing at the podium and, like, going through my deck, which is a small thing but I think is an area I wanted to improve in.
This time, I was playing around with some more narrative framing and ended up, yeah, like, pushing it to an extreme. And it works with the theme because inhabiting a character and role-playing is the core part of D&D. Not everybody plays a D&D character by doing a voice. You are a little bit extra if you do that. But it's not uncommon for people to do a voice. And so, it kind of fit perfectly with my theme. I just needed to get the self-confidence to do it. So, thank you to everyone at the speaker dinner that was like, "No, you totally got this. You should do this," because I was feeling very unsure.
STEPHANIE: It really paid off, so...
JOËL: I'd like to circle back to your talk, though. So, you gave, basically, the first talk of the conference. You were the first session after the keynote. A theme that came up multiple times in your talk was this idea of coupling and how it affects different parts of our code and, particularly the way that we structure tests or the way that we feel test pain. How did you, when you were prepping this talk, discover that theme and decide to lift it up? Was that something that you knew ahead of time you wanted to talk about, or did it just sort of emerge as part of the talk preparation process?
STEPHANIE: That's a really great question, and I'm glad you picked up on that. So, my talk was called: "So, Writing Tests Feels Painful. What Now?" Originally, when I came up with this idea, it actually started with coupling. I realized that I wanted to give a talk about coupling because it's just something that I was struggling with or, like, had seen other people struggle with and really wanting kind of a discrete resource, wanting to provide that.
But as I was just thinking about it, I was like, oh, like, there are so many different ways that this could go. On one hand, it was a very like important topic to me, but also maybe too big of a topic. And so, I actually, like, kind of put that on the back burner. And it wasn't until later when I connected it to another...it wasn't necessarily different at all, but just, like, an extension of this idea is, oh, like, people are struggling with coupling in tests or, like, it manifests in tests. And so, I thought maybe that could be the angle that I took on this topic that kind of gave me a little bit more focus.
And I didn't even end up saying like, "Yeah, this talk was, like, born out of just, you know, wrestling with coupling or anything like that." So, it's cool, to me, that you picked up on it as a theme because it was...I had, you know, ended up not being super explicit about it, but it was certainly, like, a thing that was driving the content from my perspective.
JOËL: Interesting. So, it started as a coupling talk and then got sort of focused through the lens of testing.
STEPHANIE: Yeah. And I think there was a part of me that was like, you know, I don't know if I could just teach the concept of coupling, like, by itself without the framing of testing for people who this is, like, a new concept for them. I realized that maybe it would be more effective to be like, "Hey, like, have you experienced test pain? You know, have you had to mock out a billion objects or changed, you know, made one change and then had to fix, like, a million tests subsequently? Then this talk is for you." And then weave in the idea of coupling in it to kind of start to help people feel familiar with it or just, like, identify it without as much, like, jargon as kind of I've seen when I've tried to figure out, like, how to manage it.
JOËL: It's interesting because I think it gives you a, like, concrete, valuable thing to optimize for as opposed to, like, hey, let's lower coupling because then you're writing, you know, quote, unquote, "better code." And you get to feel better about yourself as a programmer because you're doing things the, quote, unquote, "right way." That's very kind of hand-wavy, and I think sometimes leads people down a bad path where they're optimizing things that they shouldn't be.
But the tests give you this very concrete way to say, "Hey, we're not just trying to reach the, like, low score record for the app in terms of coupling. We're trying to reduce test pain. Tests are painful. And that pain is telling us something. It's telling us that we've crossed some sort of threshold for coupling. Let's find ways to reduce it, not so that we can feel good about ourselves, but so that our tests are actually manageable."
STEPHANIE: Yeah, I am really glad you picked up on that, too, because I feel the exact same way when someone just tells me to decouple something or, like, makes a note that, like, oh, this feels really coupled. I don't know what that means necessarily. And it's not very convincing to just be like, "Oh, you should write loosely coupled code [laughs]," at least for me. What you said just now, it's like, it's not to feel good about ourselves, you know, to write code that way, but, actually, to just feel good about our code, period [laughs]. And, yeah, finding that validation through just, like, actually working with code that is easier to change that is the goal, not necessarily to, yeah, kind of pursue some totally subjective, like, metric.
JOËL: So, one of the kinds of coupling that you called out, I think, was where you hardcode a class name of some other class in your object. And that feels, like, really sort of innocuous. Like, of course, my objects can talk to other objects. And maybe I want to, like, refer to a class somewhere. Why is that such a like tricky piece of coupling to work with?
STEPHANIE: It's not necessarily intentional sometimes. Like, you just do it because you're like, well, I need access to this class somewhere, and I happen to already be in this file. So, why not just hard-code it here? I do think it's a little tricky because the file that you're writing might be, like, very far down in, like, your code flow or, like, your code path, like, very far from, like, a controller or any kind of entry point into your system, at least based on what I've seen in a lot of modern Rails apps. And so, I think that coupling gets really, really obscured.
I have found that, like, if I have to kind of write a more, like, a higher level test, like, maybe a request spec or something, there are times when I'm, like, having to deal with a lot of classes just to set stuff up in a test like that that I didn't think I would have to [chuckles] when I first went about trying to just be like, oh, like, let's just figure out how to get a 200 response [laughs] from this request. So, you're really burying perhaps the things that are needed to set up, like, that full path of execution. And sometimes, it only comes out when you're writing a test for it.
JOËL: And you mentioned briefly, in passing, the idea that oftentimes this sort of coupling manifests as a lot of extra test setup because your object that you're trying to test now also needs all these other things that are related in order to be tested. But sometimes even when you hard code a class, though, you can't even just say, "Oh, I want this particular user or something returned." So, you have to then do something like allow this class to receive class method and return, and now you're stubbing.
And I don't know how you feel about stubs in RSpec. I always treat them a little bit like a code smell in the like classic sense of it's not necessarily bad, but maybe pause, take a look, and ask yourself, "Why is that there, and should I do things differently?"
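A hedged illustration with made-up names (Invoice, PaymentGateway): the hard-coded class pushes a spec toward stubbing a class method, whereas injecting the collaborator, with the old class as the default, lets a test pass in a fake instead.

```ruby
class PaymentGateway
  def self.charge(_amount)
    # imagine a real network call here
    true
  end
end

class Invoice
  def initialize(amount, gateway: PaymentGateway)
    @amount = amount
    @gateway = gateway
  end

  # With a hard-coded PaymentGateway, a spec ends up stubbing the class itself:
  #   allow(PaymentGateway).to receive(:charge).and_return(true)
  # With injection, a test can hand in a fake and skip the stubbing entirely.
  def settle!
    @gateway.charge(@amount)
  end
end

class FakeGateway
  def charge(_amount)
    true
  end
end

Invoice.new(100).settle!                           # => true, uses PaymentGateway
Invoice.new(100, gateway: FakeGateway.new).settle! # => true, no stubbing needed
```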
STEPHANIE: Yeah. I ended up having, like, a lot of examples of stubbing in my example because the code had just been set up where that was the only way that you could access those collaborators, essentially, to, like, make an assertion on them, or have them do something different because you actually needed to go into a different path, right? And I was like, yeah, this should feel weird. You should feel a little bad [laughs] or at least, you know, kind of just pay attention to that feeling, even if you can't really do anything about it in that particular instance.
But on the flip side, you know, it's like, yes, it feels a bit strange, you know, but it's not all bad, right? Like, you're kind of learning like, oh, hey, like, I am coupled to this hard-coded class because I am needing to stub, like, a class method that returns it, or that constructs it. And at least you've exposed that, you know, for yourself.
One thing that I was running into a lot in my example, too, was that those things, like, weren't obvious when you were just reading maybe, like, the public methods and trying to figure out what was happening in them because they were wrapped in private methods. I was a little bit conflicted about this because there were times when it was already just a single method call, but then it was just kind of wrapped in a private method that actually hid [laughs] the things, like all the dependencies that were passed as arguments.
And I found that to be, sure, it looks kind of cleaner. But then all you need to do is scroll down [laughs], and then you're like, oh, actually, there's all these other things involved, but it was kind of hidden away for me. And I found that, actually, like, at least when I actually needed to change things, less helpful than I imagine what the, you know, code author intended. Do you have any thoughts about hiding details like that?
JOËL: I'm kind of a big fan.
STEPHANIE: Hmmm.
JOËL: The general idea, I think, is called the single level of abstraction principle. Whatever sort of public method that you're calling is often implemented in terms of...let's say it does a few different things. It's implemented in terms of, like, these sort of high-level concepts. So, whoever is reading the public method doesn't need to like care about the details of how each step is implemented.
So, maybe you're fetching something from an API, and then you're making a database call, and then you're doing some transformation and creating some new objects from it. Having all of the, like, HTTP calls and the ActiveRecord stuff and the, like, transformation all in the public method, yes, there's a lot of complexity happening there, and it makes that obvious. But it also makes it really hard to get a sense of what is happening.
So, I like to say, "Hey, there are four steps. Let's wrap them all each in a private method then you can call all of those in the public method." The public method now sort of reads like a very simple sort of script. First, fetch data from the HTTP API, then fetch some data from the database, then apply this transformation, then create this object. And if I'm mostly caring about what this object does and not the how let's say I'm building some other objects that interact with this, that is the information I want to know. Where I care about the actual implementation of, oh, well, exactly how is the ActiveRecord stuff done when I'm doing internal changes to the object, that's when I care about those private methods.
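A minimal sketch of that single-level-of-abstraction shape (all the names here are made up): the public method reads like a short script, and each private method hides one kind of detail.

```ruby
class SyncReport
  def call
    payload = fetch_remote_data
    records = load_local_records
    rows    = transform(payload, records)
    build_report(rows)
  end

  private

  def fetch_remote_data
    # HTTP details live here
  end

  def load_local_records
    # ActiveRecord queries live here
  end

  def transform(payload, records)
    # mapping and merging logic lives here
  end

  def build_report(rows)
    # object construction lives here
  end
end
```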
I think where it gets tricky, and I think that's the point that you were bringing up, is that if you write code in that way, it has to change the heuristics of how you read code to detect complexity. Because, oftentimes, I think a very classic heuristic for code complexity is just line length. If you have a 50-line method, probably there's a lot of complexity there. Maybe there's a lot of coupling. If it's a four-line method that is written at a high level of abstraction that just calls out to private methods, you scan over. You're like, oh, nice and clean. Nothing to see here. Move on. And so, that heuristic doesn't really hold up in a codebase where you're applying this single level of abstraction. Do you think that lines up with your experience?
STEPHANIE: Hmm. As I was listening to you, I was like, yeah, like, that makes total sense to me. But then I also clearly disagreed a little bit [laughs] in my initial...kind of what I was saying initially. And I think it's because that single layer of abstraction was not very well defined.
JOËL: Hmm. That's fair.
STEPHANIE: Yeah. Where, in fact, it was actually misleading. Like, it wanted to be at that level of abstraction, but it really wasn't. Like, it was operating on things at, like, a lower level and wasn't designed with that kind of readability in mind. So, it was more, like, it was just hiding stuff a little bit, at least for me.
And, I think, it certainly would have taken, like, more work to figure out what that code, like, really was meant to convey. It might have taken some refactoring to coalesce at that single level. And that was essentially kind of what I was showing in my talk as, like, how to get to saying, like, "Hey, we actually are operating in the lower level, but I don't think we need to."
There was some amount of, like, looking at all of the how to figure out, like, oh, maybe these things we don't even need to expose in this class. And we kind of got to a place where those details weren't, like, needed in that class at all. So, it's one of those things where it's harder than it sounds [laughs].
JOËL: It's definitely an art.
STEPHANIE: Yeah.
JOËL: And I think what you're saying about some of the coupling being, like, scattered throughout the class, it's something that I see a lot with situations where you're coupled, not so much to, like, a single class, but to something side effectful. So, you're building some kind of integration with a third-party API, and you're going to have to make a lot of HTTP calls. And each of those might be individually simple, and they're all sort of maybe in different private methods or whatever, or they're interspersed among a larger chunk of logic. And that makes your tests really complicated. But there's no, like, one place you can point at and be like, ooh, that's the one place where there's a lot of complexity.
What's happening here, though, is that your business object that's doing stuff is coupled to the network, and that coupling is going to force you to do some stubbing. It's going to force you to deal with a bunch of side effects that are non-deterministic in your code. And you used the word coalesce earlier that I really liked because I think that's often a situation where you do have to stand back and say, "Look, there's a lot of HTTP going on here. What if I coalesced it all into an object? Now I have two objects: one that's responsible for business logic, and one that's responsible for just the HTTP calls."
And, all of a sudden, the tests just totally simplify. And we've removed some coupling, but that's not something that you would have seen just from reading the code. Because, as you were saying, it's sort of scattered in little bits and pieces throughout your file that don't necessarily catch your eye.
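As a hedged sketch of that coalescing move (the names are invented): all the HTTP knowledge lives in one client object, and the business object is handed that client, so its tests can use a simple in-memory fake instead of stubbing the network in a dozen places.

```ruby
class WeatherClient
  def forecast(city)
    # request building, headers, parsing, and retries all live here
  end
end

class PackingListPlanner
  def initialize(weather_client)
    @weather_client = weather_client
  end

  def items_for(trip)
    forecast = @weather_client.forecast(trip.city)
    # pure business logic from here on, using `forecast`, with no network in sight
  end
end
```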
STEPHANIE: Yeah. Which brings me to a blog post that I had found a lot of inspiration from in the talk that I'll link. It's called "That One Thing: Reduce Coupling for More Scalable and Sustainable Software." But it's actually about tests [laughs], even though it doesn't make an appearance in the title of the blog post at all. But this is where I kind of got the idea of necessary versus unnecessary coupling in test. Because I had never thought about how, yeah, like, when you write a test, you are very correctly coupling yourself to at least the method and class under test [laughs], if not also the arguments, right? Or anything else needed to construct what you're testing.
And literally having that listed out for me in this blog post I think it's a...they use some examples in Java. And so, there's, like, a little bit more [laughs] setup involved. But I think they're like, yeah, these are six things that, like, it's mostly fine if you're coupled to these because that's kind of what needs to happen in a test. But, like, even having something to compare a test I wrote to just, like, okay, these are the things I know I need. And then, you can start to see when you've diverged from that list, when you are finding yourself coupled to some internals of your class.
I really...that was actually, like, really helpful for me because, as we talked about earlier, like, it can be kind of communicated so abstractly. But here is, like, a very clear heuristic for when you should at least, like, start to pay attention or be like, oh, this is something that was needed to get the test to run but is now starting to feel a little unnecessary because it's not on this list.
JOËL: That list reminds me, or the idea of a list of things to check out for when thinking about coupling, reminds me of the concept of connascence, which is a fancy word for almost a, like, categorization of different types of coupling because coupling comes in different flavors, some of which are tighter forms of coupling than others. And so, having that vocabulary has been really helpful for me when I'm looking at PRs and code review, or even when I'm refactoring my own code. Kind of like that list that you mentioned that you have, now I have some heuristics to look at that and say, "Oh, can I go from a connascence of position to a connascence of naming, and does that help me?"
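A tiny example of that particular trade: positional arguments couple call sites to the order of parameters (connascence of position), while keyword arguments only couple them to the names (connascence of name), a weaker form of coupling.

```ruby
# Connascence of position: every caller must remember the argument order.
def schedule(title, start_at, duration, send_reminder)
  # ...
end
schedule("1:1", Time.now, 30, true)

# Connascence of name: order no longer matters, and call sites self-document.
def schedule(title:, start_at:, duration:, send_reminder: false)
  # ...
end
schedule(title: "1:1", start_at: Time.now, duration: 30)
```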
STEPHANIE: Yeah, I like that you mentioned the positional connascence because I also came across a really great metaphor for kind of things that need to change together, like, when that makes sense. And it was basically the idea of a dishwasher and a laundry machine [laughs]. I wish I could recall, like, what book this was from.
But it was basically like, oh yeah, like, in theory, you're washing two things. So, maybe they are similar, but then you're like, no, actually, you want these to be a little bit separate because, you know, you don't want to wash your dishes and your clothes in the same machine. I don't know, maybe that exists [laughs], but I don't think it would do a very good job for either goal.
And I think that was really helpful, for me, in imagining, like, the difference between kind of coupling and cohesion, like things that...even just imagining, like, kind of where I'm doing those things in the house, right? It's like, okay, that lives in a separate room. And, like, the kitchen is for the dishes, and that could be like, you know, a module if you will. And, like, laundry happens in the laundry room, and how to kind of just separate those things, even though they also do share some qualities, too. Like, they're both appliances, right? And so, that's the way that they are similar, but they're not the same.
JOËL: You just mentioned the sort of keyword cohesion. And for our listeners who are not familiar with that term, it refers to an object sort of having one thing that it does well. Like, everything in that class sort of works towards the same goal, kind of similar to the idea of the single responsibility principle.
So, in my earlier example, where we're sort of interspersing some business logic, a lot of HTTP requests, and pulling out an object that's focused on HTTP, like everything is based around that, now that object has higher cohesion because it's all doing one thing. So, if you read classic object-oriented literature, the recommendations that you'll typically see are that objects should have high cohesion and low coupling.
STEPHANIE: Yeah. Think of a dishwasher and a washing machine next time [laughs] you come across something like that. Because I feel like those are really great, like, real-life examples of that separation.
JOËL: Did you go to Jared Norman's talk on the third day: "Undervalued: The Most Useful Design Pattern"?
STEPHANIE: No, I didn't. Can you tell me about it?
JOËL: It felt like he was addressing a lot of the same themes as you were but from more of a code perspective than a test perspective. Talking a lot about, again, forms of coupling, dependencies, and then, specifically, one of the tools that he focused on to reduce the coupling that we see is value objects and factory methods to construct those.
So, for any of our listeners who, when the talks come out, watch Stephanie's talk and are like, "Wow, I would love to learn more about this," a great follow-up, Jared Norman's talk: "Undervalued: The Most Useful Design Pattern."
STEPHANIE: Yeah, that's neat because I can see that being a solution to the hard-coded class names that we were talking about earlier. And I like how that is kind of, like, a progressive lesson in coupling a little bit. I'm really glad you shared that talk with me because now I'm excited to watch it when it comes out. And in general, I just love learning new vocabulary or finding new ways to speak about this topic with clarity. So, if any of our listeners have just additional mental models for coupling [laughs], different metaphors, different household appliances [laughs], or something like that, I would love to know.
JOËL: You would like that, given that our first episode together was about "The Value Of Specialized Vocabulary."
STEPHANIE: Yeah, it's clearly undervalued.
JOËL: Haha, I see what you did there.
STEPHANIE: Thank you. Thank you very much [laughs].
JOËL: On that terrible/wonderful pun, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël shares his preparations for his RailsConf talk, which is D&D-themed and centered around a gnome character named Glittersense. Stephanie expresses her delight in creating pod-related puns within thoughtbot's internal team structure, like "cross-podination" for inter-pod meetings and the adorable observation that her pod resembles "three peas in a pod" when using the git co-authored-by feature.
Together, Stephanie and Joël discuss bringing one's authentic self to work, balancing personal disclosure with professional boundaries, and fostering psychological safety. They highlight the value of shared interests and personal anecdotes in enhancing team cohesion, especially remotely, and stress the importance of an inclusive culture that respects individual preferences and boundaries.
Transcript:
We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP.
Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, at the time of this recording, we're recording this the week before RailsConf. I've been working on some of the visuals for my RailsConf talk and leaning on AI to generate some of these. So, my talk is D&D-themed, and it's very narrative-based. We follow the adventures of this gnome named Glittersense throughout the talk as we learn about how to use Turbo to build a D&D character sheet. And so, I wanted the AI to generate images for me.
And the problem I've had with a lot of AI-generated images is that you're like, okay, I need a gnome, you know, in a fight doing this, doing that. But then, like, every time, you get, like, totally different images. You're like, "Oh, I need an image where it's this," but then, like, the character is different in all the scenes, and there's no consistency. So, I've been leaning a little bit more into the memory aspect of ChatGPT, where you can sort of tell it, "Look, these are the things. Now, whenever I refer to Glittersense, whenever you draw an image, do it with these characteristics that we've established what the character looks like."
Sometimes I'll have, like, a text conversation kind of, like, setting up the physical characteristics. And then, it's like, okay, now every time you draw him, draw him like this, or now every time you draw him, draw him with this particular piece of equipment that we've created. And so, leaning into that memory has allowed me to create a series of images that feel a little bit more consistent in a way that's been really interesting.
STEPHANIE: Cool. Yeah, that makes sense because you are telling a story, right? And you need it to have a through line and the imagery be matching as you progress in your presentation. I actually don't know a lot about how that memory works. Does it persist across sessions? Do you have to do it all in one [laughs] go, or how does that work?
JOËL: So, there's, like, a persistent chat. So, you can start sort of multiple conversations, but each conversation is its own thread with its own memory. And it will sort of keep track of certain things. And sometimes I'll even say, "Hey..." instead of, like, prompting it for something to get a response, you could prompt it to add things to its memory. So say like, "From now on, when I ask you these types of questions, I want you to respond in this way," or, "From now on, when I ask you to generate an image, I want it done in this format." So, for example, RailsConf requires all of their slides to be 16 by 9. If I want, like, a kind of cover image or, like, something full-screen, I need an image that is 16 by 9. So, one of the things I prompt the AI with is just, "From now on, whenever you generate an image, give me an image in 16:9 aspect ratio."
STEPHANIE: Cool. I also was intrigued by your gnome's name, Glittersense. And I was wondering what the story behind that character is.
JOËL: The story behind the name is that I was playing D&D with a friend who was this very kind of eclectic Dragonborn character. And I did some sort of valiant deed and got the name Glittersense bestowed upon me by this Dragonborn for having helped him out in some, like, cool way. So, that's a fun name. And so, when I was searching for a name for my character in this talk, I was like, you know what? Let's bring back Glittersense. I like that. I think it captures a little bit of, like, the wonder and the whimsy of a gnome.
STEPHANIE: That's really cute. I like that a lot.
JOËL: So, Stephanie, what's been new in your world?
STEPHANIE: So, lately, I've been having a lot of fun with coming up with names of things. You know the saying how naming is one of the hardest things in software? Well, okay, I'm not actually going to talk about anything that I named very particularly well in my code, but I've been just coming up with a lot of puns. It's just, I don't know, my brain is kind of in that space. And one thing that...I can't recall if I have talked about this on the show before, but our team at thoughtbot is experimenting with kind of smaller sub-teams within it called pods. We have now kind of been split into pods with other people who are working on maybe similar client projects. I have been having some really good naming ideas around [laughs] pod-related puns.
So, one thing that we did as part of this experiment was setting up meetings for pods to meet each other, and spend time together, and kind of share what each other was up to. And I was the first to coin the term cross-podination, kind of like cross-pollination. And I think I just, like, said it offhand one day, and then it caught on. And I was very pleasantly surprised to see that people just leaned into it and started naming those meetings cross-podination meetings.
And then, another one that came about recently was my pod there's three of us in it, and we were pairing, or I guess it's not really called a pairing if there's three. We were mobbing or ensembling, whatever you want to call it. And sometimes we like to use the git co-authored-by feature where you can attribute, you know, commits to people that you worked on them with. And in GitHub when you, you know, add people's emails to the commit, you know, you see your little GitHub profile picture in a little circle. And when you have multiple people shared on a commit, it is just, like, squished together. And since we're a trio, I was like, "Oh, it's like we're, like, three peas in a pod."
JOËL: [chuckles]
STEPHANIE: And I realized that it was an excellent missed opportunity for our pod name. We're something else. But I am hereby reserving that name for the next pod that I am in. You heard it here first [laughs]. It looked exactly like just three snug little peas. And I, yeah, it was very cute. I was very delighted. And yeah, that's what's new for me.
JOËL: I'll also point out the fact that you are currently talking on a podcast.
STEPHANIE: Whoa, whoa. So, you and I are a pod [laughter]. We're a podcasting pod [laughs]. Wow, I didn't even think about that. My world is just pods right now [laughs], folks.
JOËL: How do you feel about puns as an art form?
STEPHANIE: [laughs] Wow, art form is a strong phrase to use. I don't hate them. I think it depends. Sometimes I will cringe, and other times I'm like, that's great. That's excellent. Yeah, I think it depends. But I guess, clearly, I'm in my pun era, so I've just accepted it.
JOËL: Are you the kind of person who is, like, ashamed but secretly proud when you make a really good pun?
STEPHANIE: Yeah, that's a very good way to describe it. I'm sure there are other people out there [laughs].
JOËL: What's interesting with puns, right? Like, some people love them, some people hate them. Some people really lean into them, like, that becomes almost, like, part of their personality. We had a former teammate who his...we made a custom Slack emoji with his face, and it was the pun emoji because he always had a good pun ready for any situation. And so, that's sort of a way that I feel like sometimes you get to bring an aspect of your personality or at least a persona to work. What parts of yourself do you like to bring to work? What parts do you like to maybe leave out?
STEPHANIE: Yeah, I am really excited about this topic because I feel like it's a little bit evergreen, maybe was kind of a trendy thing to talk about in terms of team culture in the past couple of years, but this idea of bringing an authentic or whole self to work as, like, an ideal. And I don't know that I totally agree with that [laughs] because, like you said, sometimes you have a different kind of persona, or you have a kind of way that you want to present yourself at work. And that doesn't necessarily mean it's a bad thing. I personally like some kind of separation in terms of my work self and my rest of life self [laughs]. Yeah, I just think that should be fine.
JOËL: So, you might secretly be the pun master, but you don't want your colleagues to know.
STEPHANIE: [laughs] That's true. Or I save my puns only for work [laughter]. If I ever have, like, a shower thought where I think of a really good pun, I will, like, send a Slack message to myself to find [laughs] the perfect opportunity to use this pun in a meeting [laughs]. I don't actually do that, but that would be very funny.
JOËL: I feel like there's probably a sense in which nobody is a hundred percent their authentic self or their full self in a work situation, you know, it varies by person. But I'm sure everybody, to a certain extent, has a professional persona that they inhabit during work hours.
STEPHANIE: Yeah, and I like that the way we're talking about it, too, is a professional persona doesn't necessarily mean that you're just a little...matching kind of a business speak bot [laughs], where it's kind of devoid of personality, but just using all the right language in their emails [laughs] and the correct business jargon or whatever. To me, what is important is that people are able to choose how they show up or present themselves at work. That's, like, an active choice that they're making, not out of obligation or fear of consequences. You know, like, it's fine to be a little more private at work if that's just how you want to operate. And it's also fine to be more open about sharing things going on in your personal life.
Because I've seen ways in which both have been more enforced or, like, there's pressure to perform one way or another. And that could mean, like, when people kind of encourage others to try to be more of themselves or, like, share more things about personal life. That's not always necessarily a good thing if it's not something that people are comfortable with. And I suspect that we have kind of pulled back a little bit from that, but there was certainly a time when that was a bit of an expectation. And I'm not sure that that was quite [chuckles] what we wanted to aim for in terms of just the modern workplace.
JOËL: It is interesting because I think there can be some advantages to maybe building connection with people by sharing a little bit more about your life. But, again, if there's pressure to do it, that becomes really unwholesome.
STEPHANIE: Yeah. Unwholesome is a good word to use. Like, I want that wholesome content [laughs] at work. And I actually have a couple of thoughts about how I prefer to share, like, just personal things with my team members. And I'm curious kind of where you fall on this as well. But a couple of things that our team does that I really like is we have a quarterly newsletter that one of our team leads puts together. She has an open call for submissions, and people just share any, like travel plans, any professional wins, any kind of personal life things that they want to share.
People love talking about their home improvement adventures [laughs] on our team, which is really fun. And yeah, like, just share photos and a little blurb about what they've been up to. And this happens every quarter. And it's always such a delight to remember a little bit like, oh yeah, my co-workers have lives outside of work. But I really like that it's opt-in and also not that frequent, you know? It's kind of like, this is the time to share any like, special things that have happened in the past three months. And yeah, I think every time a new dispatch of it comes out, everyone kind of gets the warm and fuzzy feelings of appreciating their co-workers and what they've been up to.
JOËL: Do you think that that kind of sharing sort of maybe helps personalize a little bit of our colleagues, especially because we're all remote and we're interacting with each other through a screen?
STEPHANIE: Yes. Yeah. That's another good distinction. I think it is, like, a little more important that there are touch points like these when we are working remote because, yeah, the water cooler conversation just doesn't really happen nearly as much as it does when you're in an office. And I feel like that's the kind of thing that I would talk about at the water cooler [laughs]. It's like, "Oh yeah, I went to Disney World, or traveled for this conference, or I built new garden beds for my yard," just stuff like that. I don't know, I don't find that...like, when you're just communicating over Slack and email, there's not a good place for that kind of stuff. And that's why I really like the newsletter.
JOËL: One thing that's interesting about the difference between in-person and remote is that, in person, a way that you can express personality in the office is you can do some things with your workspace. You might have some items on your desk that are of personal interest. And, you know, you might still do that when you're working remote, but those don't get captured by your webcam unless it's in your background.
Your background you can get real creative with. But you can also, like, really curate that to, like, show practically nothing. Whereas if you were putting things on your desk in the office, there's kind of no way for your colleagues not to see that. So, you had to be...like, it had to be things that you were willing for everyone to see. But at the same time, sometimes it's nice to be able to say, hey, I'm going to put a touch of, like, things that are meaningful to me in my work life.
STEPHANIE: Yeah, I really like that. I mean, Joël, your background is always these framed maps hanging on the wall, and that is very you, I think. Did you kind of think about how they'll just be your background whenever you're in a meeting, or they just happened to be there?
JOËL: So, these I had set up pre-pandemic. I like the décor. And then, when I started working from home in 2020, I was trying to figure out, like, where do I want to be to take meetings? And I was like, you know what? The map wall is pretty cool. I think that's going to be my background. I guess now it's almost become, like, a bit of a trademark.
STEPHANIE: Yeah, I feel that. My trademark...I have a few because I like to move around when I take meetings. So, when I'm at my desk, it's the plants in my office. When I'm in my kitchen, it's either my jars [laughs]. So, I have, like, open shelving and just all of these jars of, you know, some of it is ingredients like nuts, and grains, and stuff like that, and some of it is just empty jars that I use for drinking water. So, I have my jar collection. And then, occasionally, if I'm sitting on the other side of the table [chuckles], all of my pots and pans are hanging in the background from above my stove. So, yeah, I'm the jars, pots, and plants person [laughs] at the company.
JOËL: You know, we were talking earlier about the idea that it's harder to see your sort of workspace in a remote world. And I just remembered that we do a semi-regular...there's, like, a thread at thoughtbot where people just share pictures of their workspace, and it's opt-in. You don't have to put anything in there. But you get a little bit of, like, oh, the other side of the camera. That's pretty cool.
STEPHANIE: Yeah, I love seeing those threads. And I think a lot of people in our industry are also gear nerds, so [laughter] they love to see people's, like, fancy monitor and keyboard setups, maybe some cool lighting, oh, like, wire organization [laughs].
JOËL: Cable management.
STEPHANIE: Yep. Yep. Those are fun. And I actually think another one that we've lost since going remote is laptop stickers because that was such a great way for people to show some personality and things that they love, like programming stuff, maybe, like, you know, language stickers or organizations like thoughtbot stickers, too, and also, more personal stuff if they want. At a previous company, we were also remote, and someone came up with a really fun game where people anonymously submitted pictures of their laptop stickers. And we got together and tried to guess whose laptop belonged to who just based on the stickers.
JOËL: Oh, that's fun.
STEPHANIE: Yeah, that was really fun. I keep forgetting that I wanted to organize something like that for thoughtbot. But now I'm just thinking about it, and I feel the need to decorate my laptop with some stickers after this [laughs].
JOËL: One thing I do want to highlight, though, is the fact that several years back, when people were talking a lot about the importance of bringing your sort of authentic or whole self to work, one of the really valuable parts of that conversation was giving people the ability to do that, not forcing people to sort of hide parts of themselves, especially if they don't fit into a dominant culture or demographic, in order to be able to even function at work, right? That's a sort of key aspect of, I guess, basic inclusivity. And so, I think that's still a hundred percent true today. We want to build cultures that are inclusive, both in our in-person professional situations and for remote teams.
STEPHANIE: Yeah, 100%. I think, for me, what I think is a good measure of that is, you know, how comfortable are people disagreeing at the company kind of in public or sharing an alternative perspective? Like, that should be okay and celebrated, even, and considered, you know, with equal weight as kind of what you're saying, the dominant identity or even just opinion. Like, especially in tech, I think people have very strongly held opinions, and when they're disagreed with...I've become a little skeptical of the idea of, like, this is how we do things here or, like, we don't do that.
And I think that rather than sticking to a, like, stance like that, there's always room to incorporate, like, new approaches, new perspectives, new ways of thinking to a given problem. And that can only happen when people are comfortable with going there, you know, and kind of saying, like, "This is important to me," or, like, "This is how I feel about it." And that, in and of itself, is just equally valid [laughs] as whatever is taking the airtime currently.
JOËL: That's really interesting because I feel like now you've leaned into almost the idea of psychological safety for a team. And if you're having to sort of repress or hide elements of the way you think, or maybe even sort of core elements of your identity to fit in with a team, that's not psychologically safe, and you can't have those deeper conversations.
STEPHANIE: Yeah, 100%. I think it's two sides of the same coin, you know, it's like two ways of saying the same thing, that people should be able to conduct themselves in the way they choose to [laughs]. And I can't imagine anyone really disagreeing with a statement like that.
JOËL: So, I know you choose to not always share everything about your life or sort of...I don't want to say bring your authentic self but, like, bring everything about yourself to the workplace. Do you have a sort of a heuristic for what you decide to share or not share?
STEPHANIE: Yeah. I don't know if it's necessarily a heuristic so much as it's just what I do [laughs]. But I tend to do better with, like, smaller groups, and, actually, that's why I think pods has been working really well for me personally because I can share personal information just in a more intimate setting, which is helpful for me. And yeah, I tend to, like, find once, like, either Slack channels or Spaces, meetings are starting to get into the, like, 10, 11, 12 people territory is when I hold it back a little bit more, not because of any sort of, like, reason that I don't want to share. It's just, like, that's just not the venue for me.
But I do love when other people are, like, open, even in, like, larger spaces like that. I appreciate when other people do it just to, you know, signal that it's okay [laughs]. And I enjoy throwing a reaction or responding in a thread about, you know, something that someone shared in a bigger channel. And I think that diversity is actually really helpful because it conveys that, like, there's different ways of existing online in your work environment and that they're all acceptable. What about you? How do you kind of choose where to share things about your personal life?
JOËL: I think, kind of like you, I don't really have a heuristic. I just sort of go with gut feeling. I think I, sort of by nature, have always been maybe a little bit of having, like, separate professional and personal lives and keeping those a little bit more distinct. And, you know, there's some things that kind of cross over, like, oh, you know, I tried out this fun, new restaurant, or I did a cool activity over the weekend, or something like that.
I think I've come to see that there can be a lot of value in sharing parts of yourself with other colleagues. And so, from time to time, I'll, like, maybe bring in something a little bit deeper. And, like you said, sometimes that's more easily done in a smaller context. And then yeah, for some things, it's like, okay, I'm going to share photos from a vacation in that, you know, quarterly newsletter. That's kind of fun. But also knowing that there's no pressure that's nice.
STEPHANIE: Yeah. I think you're really good about finding the right avenues for that. I like, love when you show photos in the travel channel, even though I have that channel muted [laughs]. You'll, like, send me the link to the post in that channel. And yeah, I love that because it's a way for you to kind of, like, find the right place for it, and then also share it with any particular people if you choose to.
JOËL: I think, also, personal connections can be a way to build deeper relationships, especially in smaller groups. And you can form deeper connections with colleagues over a particular project, or a particular technology, or a tech topic, or, you know, just a passion about mechanical keyboards, or something like that. But if you're people who chat kind of more on the regular for different things, maybe separate from a client project you're on or something like that, and you do find yourself exchanging a little bit more about, oh, you know, what you're doing in your life, or what are the things that are going on for you, that often does tend to build, I think, a deeper connection between colleagues, which can be really nice.
STEPHANIE: Yeah. And I like that those relationships can also change. Like, there's different seasons in which you're more connected to some people and then less connected. Sometimes a colleague that you have shared interests with becomes someone that you kind of are in touch with more regularly, and then maybe you switch projects, and you aren't so much kind of as up to date. But, I don't know, I always think that there's, like, the right time for that kind of stuff, and it emerges.
JOËL: I'm going to throw a bit of a buzzword at you, and I'd love to get your reaction. The idea of belonging, the feeling of belonging on a team, is that a good thing, something that we should seek out? And if so, how much of that is responsibility of, like, management or, like, a property of the team or the group to make you sort of feel that belonging? And how much of that is on you having to maybe disclose things about yourself or share a little bit of your personal life to, like, create that sense of belonging?
STEPHANIE: Whoa. Yeah, that is a good way to frame it. I think there's a balance. There've been some, like, periods of my work life where I'm like, oh, I need more of a detachment from work and other times where I'm like, oh, I feel really disconnected, like, I want to feel like more of a part of this team. But I do think it's a management responsibility. And one thing that I know people to be cautious of is, you know, becoming too close at work. I don't know if your work being treated like a family, like, that kind of language can be a little bit borderline.
JOËL: Almost manipulative.
STEPHANIE: Right. Yeah, exactly. I do think there's something to be said about community at work and feeling like that kind of belonging, right? But also, that you can choose how much, like, you want to engage with that community and that being okay. I don't think it necessarily needs to be only through what you share about yourself. Like, you can have that sense of connection just by being a good colleague [chuckles], right? Like, even if the things you talk about are just within the realm of the project you're working on, like, there's still a sense of commitment and, yeah, in that relationship. And I think that is what matters when it comes to belonging.
In the past, ways that I've seen that work well in regards to kind of how you share information is just, like, I don't know, share how you're doing. Like, you don't have to provide too many details. But it could be like, "Oh, I'm kind of distracted in my personal life right now, and that's why I wasn't able to get this done." People should be understanding of that, even if you don't kind of let them in on the more personal aspects of it.
JOËL: Right. And you don't have to give any details, right?
STEPHANIE: Yeah.
JOËL: You should be in a place where people are comfortable with not knowing and not be like, "Ooh, what's going on with Stephanie's life?"
STEPHANIE: [laughs] Yeah. But I do also think, like, the knowing that, like, something is going on is, like, also important context, right? Because you don't necessarily want that to impact the commitments you do have at work.
JOËL: Right. And people tend to be a little bit more understanding if you're having to maybe shift some meetings around, or if you're struggling to focus on a particular day, or something like that.
STEPHANIE: Yeah. 100%.
JOËL: Yeah, we should normalize it of just like, "Hey, I'm having a hard day. I don't want to give details, but you know."
STEPHANIE: Yeah. Yeah. I think a way that that is always kind of weird is how people communicate they're taking a sick day [laughs]. I actually had someone tell me that they really appreciated a time when I just said, "You know, I need to take care of myself today," and didn't really say anything else [laughs] about why. Because they're like, "Oh, like, that helped normalize this idea that, like, that is fine just kind of as is." There's no need to, you know, supply any additional reasoning.
JOËL: Sometimes I feel like people almost feel the need to like, justify taking sick time. So, you've got to, like, say just how bad things are that now I'm actually taking sick time.
STEPHANIE: Yeah, which is...that's not the point, right? You know, we have it because we need it [laughs]. So, yeah, I'm glad you mentioned that because I think that's actually a really good example of the ways that people, like, approach kind of bringing themselves to work like that.
JOËL: Yeah, sometimes it's setting a boundary. An aspect I'm curious to look at is you, and I do a little bit of this with this podcast, right? Every week, we share a little bit of what's new in our world, and it goes out into the public internet. How do you tend to pick those topics and, like, how personal are you willing to get?
STEPHANIE: Yeah. Oh, that's so hard. It's always hard [laughs], I think. I generally am pretty open. You know, I have talked about plans that I have for moving. I don't know, things about my gardening. I think I've also been a little vulnerable on the show before when I've, like, had a challenge, like, at work. But yeah, it's important, to me, I think, to be, like, true. Like, I think part of what our listeners like about this show is that we show up every week, and it's just a chat between two friends [laughs].
JOËL: Uh-huh.
STEPHANIE: It also is kind of weird to know that it's just, like, out there, right? And I don't really know who's listening on the other side. I do know that, like, a lot of my friends listen. And, in some ways, I like to think that I'm talking to them, right? But yeah, sometimes I think about just, like, in a decade [laughs], it will still be out there. And on one hand, I think maybe it's kind of cool because I can listen back and be like, oh, like, that's what was going on for me in 2024.
And other times I'm like, oh my God, what if I'm one day just, like, deeply embarrassed by things I've talked about on this show [laughs]? But that's a risk, I guess, I'm willing to take because I do think that the sense of connection that we foster with our audience is really meaningful. And it gives me a lot of joy whenever I meet a listener who's like, "Oh, you, you know, talked about this one thing, and I really related to it." And yeah, I guess that's what I do this for. What about you?
JOËL: Yeah, I think kind of similar to you; tend to talk about things at work, interesting technical challenges, interesting sort of work, or even sometimes client-related challenges. Of course, you know, never calling out any clients by name, you know, talk about some hobbies and things like that. I think where I tend to draw the line a little bit is things that are a little bit more people-oriented in my personal life. So, I tend to not talk about family, and friends, and relationships, and things like that. And, you know, there are some times where there's like, those things intermix a little bit, where I'll, like, have shared, like, "This is what's new in my world." And then, like, off air, I'll follow up with you and say, "So, I didn't tell the whole story on air.
STEPHANIE: [laughs] Yeah.
JOËL: Here's what actually happened." Or, you know, "Here's this extra anecdote that I wanted you to know, but I didn't want everyone in the audience to hear."
STEPHANIE: Yeah. I think the weirdest part for me, too, is I certainly have my, like, parasocial relationships with people that I follow on the internet [laughs], like, people on YouTube, or other podcasts, and stuff like that. But I haven't thought a whole lot about just, like, what that looks like for me as a host of a podcast. I think, kind of the size of the show now it feels right for me, where it's like I run into people who listen at conferences and stuff like that, but it is kind of contained to a work-related thing. So, that feels good because it, I think, for me, helps just give the work stuff a little bit of a deeper meaning, but otherwise isn't spilling over to my regular life.
JOËL: And it's always fun when, you know, we get a listener email connecting to, you know, one of the random hobbies or something we've talked about and sharing a little bit of their experiences. I think last spring, I talked about getting a pair of bike shorts and, like, trying it out and seeing how that worked. And a listener called in and shared their experience with bike shorts, and, like, that's a lot of fun. It kind of creates that connection. So, I do enjoy that aspect.
STEPHANIE: Yeah. And just to plug, you can write in to us at [email protected], and if you have anything you want to share that was inspired by what you heard us talk about on the show.
JOËL: We'd love to have you.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie shares an intriguing discovery about the origins of design patterns in software, tracing them back to architect Christopher Alexander's ideas in architecture. Joël is an official member of the Boston bike share system, and he loves it. He even got a notification on the app this week: "Congratulations. You have now visited 10% of all docking stations in the Boston metro area." #AchievementUnlocked, Joël!
Joël and Stephanie transition into a broader discussion on data modeling within software systems, particularly how entities like companies, employees, and devices interconnect within a database. They debate the semantics of database relationships and the practical implications of various database design decisions, providing insights into the complexities of backend development.
Transcript:
We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP.
Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I learned a very interesting tidbit. I don't know if it's historical; I don't know if I would label it that. But, I recently learned about where the idea of design patterns in software came from. Are you familiar with that at all?
JOËL: I read an article about that a while back, and I forget exactly, but there is, like, a design patterns movement, I think, that predates the software world.
STEPHANIE: Yeah, exactly. So, as far as I understand it, there is an architect named Christopher Alexander, and he's kind of the one who proposed this idea of a pattern language. And he developed these ideas from the lens of architecture and building spaces. And he wrote a book called A Pattern Language that compiles, like, all these time-tested solutions to how to create spaces that meet people's needs, essentially. And I just thought that was really neat that software design adopted that philosophy, kind of taking a lot of these interdisciplinary ideas and bringing them into something technical.
But also, what I was really compelled by was that the point of these patterns is to make these spaces comfortable and enjoyable for humans. And I have that same feeling evoked when I'm in a codebase that's really well designed, and I am just, like, totally comfortable in it, and I can kind of understand what's going on and know how to navigate it. That's a very visceral feeling, I think.
JOËL: I love the kind of human-centric approach that you're using and the language that you're using, right? A place that is comfortable for humans. We want that for our homes. It's kind of nice in our codebases, too.
STEPHANIE: Yeah. I have really enjoyed this framing because instead of just saying like, "Oh, it's quote, unquote, "best practice" to follow these design patterns," it kind of gives me more of a reason. It's more of a compelling reason to me to say like, "Following these design patterns makes the codebase, like, easier to navigate, or easier to change, or easier to work with." And that I can get kind of on board with rather than just saying, "This way is, like, the better way, or the superior way, or the way to do things."
JOËL: At the end of the day, design patterns are a means to an end. They're not an end in of itself. And I think that's where it's very easy to get into trouble is where you're just sort of, I don't know, trying to rack up engineering points, I guess, for using a lot of design patterns, and they're not necessarily in service to some broader goal.
STEPHANIE: Yeah, yeah, exactly. I like the way you put that. When you said that, for some reason, I was thinking about catching Pokémon or something like filling your Pokédex [laughs] with all the different design patterns. And it's not just, you know, like you said, to check off those boxes, but for something that is maybe a little more meaningful than that.
JOËL: You're just trying to, like, hit the completionist achievement on the design patterns.
STEPHANIE: Yeah, if someone ever reaches that, you know, gets that achievement trophy, let me know [laughs].
JOËL: Can I get a badge on GitHub for having PRs that use every single Gang of Four pattern?
STEPHANIE: Anyway, Joël, what's new in your world?
JOËL: So, on the topic of completing things and getting badges for them, I am a part of the Boston bike share...project makes it sound like it's a, I don't know, an exclusive club. It's Boston's bike share system. I have a subscription with them, and I love it. It's so practical. You can go everywhere. You don't have to worry about, like, a bike getting stolen or something because, like, you drop it off at a docking station, and then it's not your responsibility anymore. Yeah, it's very convenient. I love it.
I got a notification on the app this week that said, "Congratulations. You have now visited 10% of all docking stations in the Boston metro area."
STEPHANIE: Whoa, that's actually a pretty cool accomplishment.
JOËL: I didn't even know they tracked that, and it's kind of cool. And the achievement shows me, like, here are all the different stations you've visited.
STEPHANIE: You know what I think would be really fun? Is kind of the equivalent of a Spotify Wrapped, but for your biking in a year kind of around the city.
JOËL: [laughs]
STEPHANIE: That would be really neat, I think, just to be like, oh yeah, like, I took this bike trip here. Like, I docked at this station to go meet up with a friend in this neighborhood. Yeah, I think that would be really fun [laughs].
JOËL: You definitely see some patterns come up, right? You're like, oh yeah, well, you know, this is my commute into work every day. Or this is that one friend where, you know, every Tuesday night, we go and do this thing.
STEPHANIE: Yeah, it's almost like a travelogue by bike.
JOËL: Yeah. I'll bet there's a lot of really interesting information that could surface from that. It might be a little bit disturbing to find out that a company has that data on you because you can, like, pick up so much.
STEPHANIE: That's --
JOËL: But it's also kind of fun to look at it. And you mentioned Spotify Wrapped, right?
STEPHANIE: Right.
JOËL: I love Spotify Wrapped. I have so much fun looking at it every year.
STEPHANIE: Yeah. It's always kind of funny, you know, when products kind of track that kind of stuff because it's like, oh, like, it feels like you're really seen [laughs] in terms of what insights it's able to come up with. But yeah, I do think it's cool that you have this little badge. I would be curious to know if there's anyone who's, you know, managed to hit a hundred percent of all the docking stations. They must be a Boston bike messenger or something [laughs].
JOËL: Now that I know that they track it, maybe I should go for completion.
STEPHANIE: That would be a very cool flex, in my opinion.
JOËL: [laughs] And, you know, of course, they're always expanding the network, which is a good thing. I'll bet it's the kind of thing where you get, like, 99%, and then it's just really hard to, like, keep up.
STEPHANIE: Yeah, nice.
JOËL: But I guess it's very appropriate, right? For a podcast titled The Bike Shed to be enthusiastic about a bike share program.
STEPHANIE: That's true.
So, for today's topic, I wanted to pick your brain a little bit on a data modeling question that I posed to some other developers at thoughtbot, specifically when it comes to associations and associations through other associations [laughs]. So, I'm just going to kind of try to share in words what this data model looks like and kind of see what you think about it.
So, if you had a company that has many employees and then the employee can also have many devices and you wanted to be able to associate that device with the company, so some kind of method like device dot company, how do you think you would go about making that association happen so that convenience method is available to you in the code?
JOËL: As a convenience for not doing device dot employee dot company.
STEPHANIE: Yeah, exactly.
JOËL: I think a classic, at least the other way around, is a has many through. I forget if you can do a belongs to through or not. You could also write a delegation method on the device that effectively does dot employee dot company.
STEPHANIE: Yeah. So, I had that same inkling as you as well, where at first I tried to do a belongs to through, but it turns out that belongs to does not support the through option. And then, I kind of went down the next path of thinking about if I could do a has one, a device has one company through employee, right? But the more I thought about it, the kind of stranger it felt to me in terms of the semantics of saying that a device has a company as opposed to a company having a device. It made more sense in plain English to think about it in terms of a device belonging to a company.
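A sketch of the association Stephanie describes trying, assuming the simple schema from the example (devices belong to employees, employees belong to companies); the model bodies are illustrative, not taken from any real codebase.

```ruby
class Company < ApplicationRecord
  has_many :employees
end

class Employee < ApplicationRecord
  belongs_to :company
  has_many :devices
end

class Device < ApplicationRecord
  belongs_to :employee
  # belongs_to doesn't accept the :through option, but has_one does,
  # which exposes device.company without adding a company_id column.
  has_one :company, through: :employee
end
```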
JOËL: That's interesting, right? Because those are ways of describing relationships in sort of ActiveRecord's language. And in sort of a richer situation, you might have all sorts of different adjectives to describe relationships. Instead of just belongs to has many, you have things like an employee owns a device, an employee works for a company, you know because an employee doesn't literally belong to a company in the literal sense. That's kind of messed up. So, I think what ActiveRecord's language is trying to use is less trying to, like, hit maybe, like, the English domain language of how these things relate to, and it's more about where the foreign keys are in the database.
STEPHANIE: Yeah. I like that point where even though, you know, these are the things that are available to us, that doesn't actually necessarily, you know, capture what we want it to mean. And I had gone to see what Rails' recommendation was, not necessarily for the situation I shared. But they have a section for choosing between which model should have the belongs to, as opposed to, like, it has one association on it. And it says, like you mentioned, you know, the distinction is where you place the foreign key, but you should kind of think about the actual meaning of the data. And, you know, we've talked a lot about, I think, domain modeling [chuckles] on the show.
But their kind of documentation says that...the has something relationship says that one of something is yours, that it can, like, point back to you. And in the example I shared, it still felt to me like, you know, really, the device wanted to point to the company that it is owned by. And if we think about it in real-world terms, too, if that device, like, is company property, for example, then that's a way that that does make sense.
But the couple of paths forward that I saw in front of me were to rework that association, maybe add a new column onto the device, and go down that path of codifying it at the database level. Or kind of maybe something as, like, an in-between step is delegating the method to the employee. And that's what I ended up doing because I wasn't quite ready to do that data migration.
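The delegation route Stephanie ended up choosing, sketched with ActiveSupport's delegate macro on the same assumed schema.

```ruby
class Device < ApplicationRecord
  belongs_to :employee

  # device.company simply forwards to device.employee.company, so no
  # new column or data migration is needed.
  delegate :company, to: :employee
end
```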
JOËL: Adding more columns is interesting because then you can run into sort of data consistency issues. Let's say on the device you have a company ID to see who the device belongs to. Now, there are sort of two different independent paths. You can ask, "Which company does this device belong to?" You can either check the company ID and then look it up in the company table. Or you can join on the employee and join the employee back under company. And those might give you different answers and that can be a problem with data consistency if those two need to stay in sync.
STEPHANIE: Yeah, that is a good point.
JOËL: There could be scenarios where those two are allowed to diverge, right? You can imagine a scenario where maybe a company owns the device, but an employee of a potentially different company is using the device. And so, now it's okay to have sort of two different chains because the path through the employee is about what company is using our devices versus which company actually owns them. And those are, like, two different kinds of relationships. But if you're trying to get the same thing through two different paths of joining, then that can set you up for some data inconsistency issues.
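A sketch of the consistency concern Joël raises, assuming the devices table grew its own company_id column alongside the path through employee and that the two are meant to agree (which, as he notes, isn't always the case).

```ruby
class Device < ApplicationRecord
  belongs_to :company   # direct foreign key: devices.company_id
  belongs_to :employee  # indirect path: device.employee.company

  # A validation can at least surface drift before it is written;
  # it cannot repair rows that have already diverged.
  validate :company_matches_employee

  private

  def company_matches_employee
    return if employee.nil? || company_id == employee.company_id

    errors.add(:company_id, "does not match the employee's company")
  end
end
```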
STEPHANIE: Wow. I really liked what you said there because I don't think enough thought goes into the emergent relationships between models after they've been introduced to a codebase. At least in my experience, I've seen a lot of thought go up front into how we might want to model an ActiveRecord, but then less thought into seeing what patterns kind of show up over time as we introduce more functionality to these models, and kind of understand how they should exist in our codebase. Is that something that you find yourself kind of noticing? Like, how do you kind of pick up on the cue that maybe there's some more thought that needs to happen when it comes to existing database tables?
JOËL: I think it's something that definitely is a bit of a red flag, for me, is when there are multiple paths to connect to sort of establish a relationship. So, if I were to draw out some sort of, like, diagram of the models, boxes, and arrows or something like that, and then I could sort of overlay different paths through that diagram to connect two models and realize that those things need to be in sync, I think that's when I started thinking, ooh, that's a potential danger.
STEPHANIE: Yeah, that's a really great point because, you know, the example I shared was actually a kind of contrived one based on what I was seeing in a client codebase, not, you know, I'm not actually working with devices, companies, and employees [laughs]. But it was encoded as, essentially, a device having one company. And I ended up drawing it out because I just couldn't wrap my head around that idea.
And I had, essentially, an arrow from device pointing to company when I could also see that you could go take the path of going through employee [laughs]. And I was just curious if that was intentional or was it just kind of a convenient way to have that direct method available? I don't currently have enough context to determine but would be something I want to pay attention to. Like you said, it does feel like, if not a red flag, at least an orange one.
JOËL: And there's a whole kind of science to some of this called database normalization, where they're sort of, like, they all have rather arcane names. There's the first normal form, the second normal form, the third normal form, you know, it goes on. If you look at the definition, they're all also a little bit arcane, like every element in a relation must depend solely upon the primary key. And you're just like, well, what does that mean? And how do I know if my table is compliant with that? So, I think it's worth, if you're Googling for some of these, find an article that sort of explains these a little bit more in layman's terms, if you will.
But the general idea is that there are sort of stricter and stricter levels of the amount of sort of duplicate sources of truth you can have. In a sense, it's almost like DRY but for databases, and for your database schema in particular. Because when you have multiple sources of truth, like who does this device belong to, and now you get two different answers, or three different answers, now you've got a data corruption issue. Unlike bugs in code where it's, you know, it can be a problem because the site is down, or users have incorrect behavior, but then you can fix it later, and then go to production, and disruption to your clients is the worst that happened, this sort of problem in data is sometimes unrecoverable. Like, it's just, hey, --
STEPHANIE: Whoa, that sounds scary.
JOËL: Yeah, no, data problems scare me in a way that code problems don't.
STEPHANIE: Whoa. Could you...I think I interrupted you. But where were you going to go about once you have corrupted data? Like, it's unrecoverable. What happens then?
JOËL: Because, like, if I look at the database, do I know who the real owner of this...if I want to fix it, let's say I fix my schema, but now I've got all this data where I've got devices that have two different owners, and I don't know which one is the real one. And maybe the answer is, I just sort of pick one and say, "Oh, the one that was through this association is sort of the canonical one, and we can just sort of ignore the other one." Do I have confidence in that decision? Well, maybe depending on some of the other context maybe, I'm lucky that I can have that.
The doomsday scenario is that it's a little bit of one, a little bit of the other because there were different code paths that would write to one way or another. And there's no real way of knowing. If there's not too many devices, maybe I do an audit. Maybe I have to, like, follow up with all of my customers and say, "Hey, can you tell me which ones are really your devices?" That's not going to scale. Like, real worst case scenario, you almost have to do, like, a bit of a bankruptcy, where you say, "Hey, all the data prior to this date there's a bit of a question mark on it. We're not a hundred percent sure about it." And that does not feel great. So, now you're talking about mitigation strategies.
STEPHANIE: Oof. Wow. Yeah, you did make it sound [laughs] very scary. I think I've kind of been on the periphery of a situation like this before, where it's not just that we couldn't trust the code. It's that we couldn't trust the data in the database either to tell us how things work, you know, for our users and should work from a product perspective. And I was on a previous client project where they had to, yeah, like, hire a bunch of people to go through that data and kind of make those determinations, like you said, to kind of figure it out for, you know, all of these customers to determine the source of truth there. And it did not sound like an easy feat at all, right? That's so much time and investment that you have to put into that once you get to that point.
JOËL: And there's a little bit of, like, different problems at different layers. You know, at the database layer, generally, you want all of that data to be really in a sort of single source of truth. Sometimes that makes it annoying to query because you've got to do all these joins. And so, there are various denormalization strategies that you can use to make that easier. Or sometimes it's a risk you're going to take. You're going to say, "Look, this table is not going to be totally normalized. There's going to be some amount of duplication, and we're comfortable with the risk if that comes up."
Sometimes you also build layers of abstractions on top, so you might have your data sort of at rest in database tables fully normalized and separated out, but it's really clunky to query. So, you build out a database view on top of that that returns data in sort of denormalized fashion. But that's okay because you can always get your correct answer by querying the underlying tables.
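To make that concrete, here is a rough sketch of the "normalized tables underneath, denormalized view on top" idea as a Rails migration plus a read-only model. The table names and the ownership chain are invented for illustration, not taken from the episode.

```ruby
# Hypothetical migration: ownership stays normalized in the underlying tables,
# while a view exposes an easy-to-query, denormalized shape on top.
class CreateDeviceOwnershipsView < ActiveRecord::Migration[7.1]
  def up
    execute <<~SQL
      CREATE VIEW device_ownerships AS
      SELECT devices.id     AS device_id,
             companies.id   AS company_id,
             companies.name AS company_name
      FROM devices
      JOIN locations ON locations.id = devices.location_id
      JOIN companies ON companies.id = locations.company_id;
    SQL
  end

  def down
    execute "DROP VIEW device_ownerships;"
  end
end

# A read-only model can then sit on top of the view for convenient querying.
class DeviceOwnership < ApplicationRecord
  self.primary_key = :device_id

  def readonly?
    true
  end
end
```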
STEPHANIE: Wow. Okay. I have a lot of thoughts about this because I feel like database normalization, and I guess denormalization now, are skills that I am certainly not an expert at. And so, when it comes to, like, your average developer, how much do you think that people need to be thinking about this? Or what strategies do you have for, you know, a typical Rails dev in terms of how deep they should go [laughs]?
JOËL: So, the classic advice is you probably want to go to, like, third to fourth normal form, usually three. There's also like 3.5 for some reason. That's also, I think, sometimes called BCNF, Boyce-Codd normal form. Anyway, sort of levels of how much you normalize. Some of these things are, like, really, really basic things that Rails just builds into its defaults with that convention over configuration, so things like every table should have a primary key. And that primary key should be something that's fixed and unique.
So, don't use something like combination of first name, last name as your primary key because there could be multiple people with the same name. Also, people change their names, and that's not great. But it's great that people can change their names. It's not great to rely on that as a primary key.
There are things like look for repeating columns. If you've got columns in your schema with a number prefix at the end, that's probably a sign that you want to extract a table. So, I don't know, you have a movie, and you want to list the actors for a movie. If your movie table has actor 1, actor 2, actor 3, actor 4, actor 5, you know, like, all the way up to actor 20, and you're just like, "Yeah, no, we fill, like, actor 1 through N, and if there's any space left over, we just put nulls in those columns," that's a pretty big sign that, hey, why don't you instead have a, like, actor's table, and then make a, like, has many association?
So, a lot of the, like, really basic normalization things, I think, are either built into Rails or built into sort of best practices around Rails. I think something that's really useful for developers to get as a sense beyond learning the actual different normal forms is think about it like DRY for your schema. Be wary of sort of multiple sources of truth for your data, and that will get you most of the way there.
When you're designing sort of models and tables, oftentimes, we think of DRY more in terms of code. Do you ever think about that a little bit in terms of your tables as well?
STEPHANIE: Yeah, I would say so. I think a lot of the time rather than references to another table just starting to grow on a certain model, I would usually lean towards introducing a join table there, both because it kind of encapsulates this idea that there is a connection, and it makes the space for that idea to grow if it needs to in the future.
I don't know if I have really been disciplined in thinking about like, oh, you know, there should really...every time I kind of am designing my database tables, thinking about, like, there should only be one source of truth. But I think that's a really good rule of thumb to follow. And in fact, I can actually think of an example right now where we are a little bit tempted to break that rule. And you're making me reconsider [laughter] if there's another way of doing so.
One thing that I have been kind of appreciative of lately is, on my current client project, there's just, like, a lot of data. It's a very data-intensive and sensitive application. And so, when we introduce migrations, those PRs get tagged for review by someone over from the DevOps side, just to kind of provide some guidance around, you know, making sure that we're setting up our models to scale well. One of the things that he's been asking me about on a couple of code changes I introduced was, like, when I introduced an index, like, it happened to be, like, a composite index with a couple of different columns, and the particular order of those columns mattered.
And he kind of prompted me to, like, share what my use cases for this index were, just to make sure that, like, some thought went into it, right? Like, it's not so much that the way that I had done it was wrong, but just that I had, like, thought about it. And I like that as a way of kind of thinking about things at the level of abstraction that I need to do my dev work day to day and then kind of mapping that to, like you were saying, those best practices around keeping things kind of performant at the database level.
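As an illustration, a composite index like the one being described might be added in a migration like this; the table, columns, and queries are hypothetical. Column order matters because the index can only be used efficiently via its leftmost columns.

```ruby
# Hypothetical composite index: serves queries filtering on company_id alone,
# or on company_id AND status, but not efficiently on status alone.
class AddCompanyStatusIndexToDevices < ActiveRecord::Migration[7.1]
  def change
    add_index :devices, [:company_id, :status]
  end
end

# Served well by the index:
#   Device.where(company_id: 42)
#   Device.where(company_id: 42, status: "active")
# Not served by it (would need a separate index leading with :status):
#   Device.where(status: "active")
```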
JOËL: I think there's a bit of a parallel world that people could really benefit from dipping a toe in, and that's sort of the typed programming world, this idea of making impossible states impossible or making illegal states unrepresentable. There, it's not schemas of database tables but schemas of types that you're designing, and you're trying to prevent data from getting into a state where someone could plausibly construct an instance of your object or your type that would be nonsensical in the context of your app, kind of trying to lock that down.
And I think a lot of the ways that people in those communities think about...in a sense, it's kind of like database normalization for developers. So, if you're not wanting to, like, dip your toe in more of the sort of database-centric world and, like, read an article from a DBA, it might be worthwhile to look at some of those worlds as well. And I think a great starting point for that is a talk by Richard Feldman called Making Impossible States Impossible. It's for the Elm language. And there are equivalents, I think, in many others as well.
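For a Ruby-flavored taste of that idea (the talk itself is about Elm), here is a small, hypothetical sketch using Ruby 3.2's Data classes; the domain and names are invented.

```ruby
# Each order state is its own small value object, so nonsensical combinations
# (a shipped order with no tracking number, a cancellation with no reason)
# can't be constructed at all.
module OrderStatus
  Pending   = Data.define(:requested_at)
  Shipped   = Data.define(:tracking_number)
  Cancelled = Data.define(:reason)
end

status = OrderStatus::Shipped.new(tracking_number: "1Z999AA10123456784")
# OrderStatus::Shipped.new  # => ArgumentError (missing keyword: :tracking_number)

# Pattern matching then forces callers to handle each legal state explicitly.
case status
in OrderStatus::Shipped(tracking_number:)
  puts "On its way: #{tracking_number}"
in OrderStatus::Pending(requested_at:)
  puts "Requested at #{requested_at}, not shipped yet"
in OrderStatus::Cancelled(reason:)
  puts "Cancelled: #{reason}"
end
```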
STEPHANIE: That's really cool that you are making that connection. I know we've kind of briefly talked about workshops in the past on the show. But if there were a workshop for, you know, that kind of database normalization for developers, I would be the first to sign up [laughs].
JOËL: Hint, hint, RailsConf idea. There's something from your original question that I think is interesting to circle back to, and that's the fact that it was awkward to work through in Ruby to do the work that you wanted to do because the tables were laid out in a certain way. And sometimes, there's certain ways that you need the tables to be in order to be sort of safe to represent data, but they're not the optimal way that we would like to interact with them at the Ruby level.
And I think it's okay for not everything in Ruby to be 100% reflective of the structure of the tables underneath. ActiveRecord gives us a great pattern, but everything is kind of one-to-one. And it's okay to layer on some things on top, add some extra methods to build some, like, connections in Ruby that rely on this normalized data underneath but that make life easier for you, or they better just represent or describe the relationships that you have.
STEPHANIE: 100%. I was really compelled by your idea of introducing helpers that use more descriptive adjectives for what that relationship is like. We've talked about how Rails abstracted things from the database level, you know, for our convenience, but that should not stop us from, like, leaning on that further, right? And kind of introducing our own abstractions for those connections that we see in our domain. So, I feel really inspired. I might even kind of reconsider the way I handled the original example and see what I can make of it.
JOËL: And I think your original solution of doing the delegation is a great example of this as well. You want the idea that a device belongs to a company or has an association called company, and you just don't want to go through that long chain, or at least you don't want that to be visible as an implementation detail. So, in this case, you delegate it through a chain of methods in Ruby.
It could also be that you have a much longer chain of tables, and maybe they don't all have associations in Rails and all that. And I think it would be totally fine as well to define a method on an object where, I don't know, a device, I don't know, has many...let's call it technicians, which is everybody who's ever touched this device or, you know, is on a log somewhere for having done maintenance. And maybe that list of technicians is not a thing you can just get through regular Rails associations. Maybe there's a whole, like, custom query underlying that, and that's okay.
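A sketch of both of those ideas together, with invented associations and a made-up query standing in for whatever the real one would be: the schema stays normalized, and the Ruby layer adds friendlier names on top.

```ruby
class Device < ApplicationRecord
  belongs_to :location

  # Expose the company without callers knowing about the chain underneath.
  has_one :company, through: :location
  # Or, without a Rails association, delegate through the chain:
  # delegate :company, to: :location

  # "Everyone who has ever worked on this device" isn't a plain association;
  # it's just a method backed by a custom query.
  def technicians
    User.joins(:maintenance_logs)
        .where(maintenance_logs: { device_id: id })
        .distinct
  end
end
```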
STEPHANIE: Yeah, as you were saying that, I was thinking that our Active Record models are actually kind of, like, a great spot to put those methods and that logic. And I think you've made a really good case for that.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël shares his experience with the dry-rb suite of gems, focusing on how he's been using contracts to validate input data. Stephanie relates to Joël's insights with her preparation for RailsConf, discussing her methods for presenting code in slides and weighing the aesthetics and functionality of different tools like VS Code and Carbon.sh. She also encounters a CI test failure that prompts her to consider the implications of enforcing specific coding standards through CI processes.
The conversation turns into a discussion on managing coding standards and tools effectively, ensuring that automated systems help rather than hinder development. Joël and Stephanie ponder the balance between enforcing strict coding standards through CI and allowing developers the flexibility to bypass specific rules when necessary, ensuring tools provide valuable feedback without becoming obstructions.
Transcript:
AD:
We're excited to announce a new workshop series for helping you get that startup idea you have out of your head and into the world. It's called Vision to Value. Over a series of 90-minute working sessions, you'll work with a thoughtbot product strategist and a handful of other founders to start testing your idea in the market and make a plan for building an MVP.
Join for all seven of the weekly sessions, or pick and choose the ones that address your biggest challenge right now. Learn more and sign up at tbot.io/visionvalue.
STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been working on a project that uses the dry-rb suite of gems. And one of the things we're doing there is we're validating inputs using this concept of a contract. So, you sort of describe the shape and requirements of this, like hash of attributes that you get, and it will then tell you whether it's valid or not, along with error messages. We then want to use those to eventually build some other sort of value object type things that we use in the app. And because there's, like, failure points at multiple places that you have to track, it gets a little bit clunky.
And I got to thinking a little bit about, like, forget about the internal machinery. What is it that I would actually like to happen here? And really, what I want is to say, I've got this, like, bunch of attributes, which may or may not be correct. I want to pass them into a method, and then either get back a value object that I was hoping to construct or some kind of error.
STEPHANIE: That sounds reasonable to me.
JOËL: And then, thinking about it just a little bit longer, I was like, wait a minute, this idea of, like, unstructured input goes into a method, you get back something more structured or an error, that's kind of the broad definition of parsing. I think what I'm looking for is a parser object. And this really fits well with a style of processing popularized in the functional programming community called "parse, don't validate": the idea that you use a parser like this to sort of transform data from looser to stricter values, values where you can have more assumptions.
And so, I create an object, and I can take a contract. I can take a class and say, "Attempt to take the following attributes. If they're valid according to the contract, create this class." And it, you know, does a bunch of error handling and some...under the hood, dry-rb does all this monad stuff. So, I handled that all inside of the object, but it's actually really nice.
STEPHANIE: Cool. Yeah, I had a feeling that was where you were going to go. A while back, we had talked about really impactful articles that we had read over the course of the year, and you had shared one called Parse, Don't Validate. And that heuristic has actually been stuck in my head a little bit. And that was really cool that you found an opportunity to use it in, you know, previously trying to make something work that, like, you weren't really sure kind of how you wanted to implement that.
JOËL: I think I had a bit of a light bulb moment as I was trying to figure this out because, in my mind, there are sort of two broad approaches. There's the parse, don't validate where you have some inputs, and then you transform them into something stricter. Or there's more of that validation approach where you have inputs, you verify that they're correct, and then you pass them on to someone else. And you just say, "Trust me, I verified they're in the right shape." Dry-rb sort of contracts feel like they fit more under that validation approach rather than the parse, don't validate.
Where I think the kind of the light bulb turned on for me is the idea that if you pair a validation step and an object construction step, you've effectively approximated the idea of parse, don't validate. So, if I create a parser object that says, in sort of one step, I'm going to validate some inputs and then immediately use them if they're valid to construct an object, then I've kind of done a parse don't validate, even though the individual building blocks don't follow that pattern.
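As an illustration of that pairing, here is one plausible shape for a parser object built around a dry-validation contract. The names are invented, and the dry-rb usage is a best-guess sketch rather than code from the project.

```ruby
require "dry/validation"

# Hypothetical value object (Ruby 3.2+) and contract.
Signup = Data.define(:email, :age)

class SignupContract < Dry::Validation::Contract
  params do
    required(:email).filled(:string)
    required(:age).filled(:integer)
  end
end

# The parser pairs validation and construction: callers get back either the
# value object or the errors, never half-validated attributes.
class SignupParser
  def call(raw_params)
    outcome = SignupContract.new.call(raw_params)

    if outcome.success?
      [:ok, Signup.new(**outcome.to_h)]
    else
      [:error, outcome.errors.to_h]
    end
  end
end

# status, value = SignupParser.new.call({ email: "a@example.com", age: 30 })
# status # => :ok
# value  # => #<data Signup email="a@example.com", age=30>
```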
STEPHANIE: More like a parse and validate, if you will [laughs]. I have a question for you. Like, do you own those inputs kind of in your domain?
JOËL: In this particular case, sort of. They're coming from a form, so yes. But it's user input, so never trust that.
STEPHANIE: Gotcha.
JOËL: I think you can take this idea and go a little bit broader as well. It doesn't have to be, like, the dry-rb-related stuff. You could do, for example, a JSON schema, right? You're dealing with the input from a third-party API, and you say, "Okay, well, I'm going to have a sort of validation JSON schema." It will just tell you, "Is this data valid or not?" and give you some errors.
But what if you paired that with construction and you could create a little parser object, if you wanted to, that says, "Hey, I've got a payload coming in from a third-party API, validate it against this JSON schema, and attempt to construct this shopping cart object, and give me an error otherwise." And now you've sort of created a nice, little parse, don't validate pipeline which I find a really nice way to deal with data like that.
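The same pattern sketched against a third-party payload, here using the json_schemer gem as one possible schema checker; the schema, names, and exact API calls are assumptions for illustration.

```ruby
require "json_schemer"

ShoppingCart = Data.define(:items, :currency)

class CartPayloadParser
  # Assumed json_schemer API: JSONSchemer.schema(...) and #validate(data).
  SCHEMA = JSONSchemer.schema({
    "type" => "object",
    "required" => %w[items currency],
    "properties" => {
      "items" => { "type" => "array" },
      "currency" => { "type" => "string" }
    }
  })

  def call(payload)
    errors = SCHEMA.validate(payload).to_a
    return [:error, errors] if errors.any?

    [:ok, ShoppingCart.new(items: payload["items"], currency: payload["currency"])]
  end
end

# status, cart = CartPayloadParser.new.call({ "items" => [], "currency" => "USD" })
```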
STEPHANIE: From a user perspective, I'm curious: Does this also improve the user experience? I'm kind of wondering about that. It seems like it could. But have you explored that?
JOËL: This is more about the developer experience.
STEPHANIE: Got it.
JOËL: The user experience, I think, would be either identical or, you know, you can play around with things to display better errors. But this is more about the ergonomics on the development side of things. It was a little bit clunky to sort of assemble all the parts together. And sometimes we didn't immediately do both steps together at the same time. So, you might sort of have parameters that we're like, oh, these are totally good, we promise. And we pass them on to someone else, who passes them on to someone else. And then, they might try to do something with them and hope that they've got the data in the right shape.
And so, saying, let's co-locate these two things. Let's say the validation of the inputs and then the creation of some richer object happen immediately one after another. We're always going to bundle them together. And then, in this particular case, because we're using dry-rb, there's all this monad stuff that has to happen. That was a little bit clunky. We've sort of hidden that in one object, and then nobody else ever has to deal with that.
So, it's easier for developers in terms of just, if you want to turn inputs into objects, now you're just passing them into one object, into one, like, parser, and it works. But it's a nicer developer experience, but also there's a little bit more safety in that because now you're sort of always working with these richer objects that have been validated.
STEPHANIE: Yeah, that makes sense. It sounds very cohesive because you've determined that these are two things that should always happen together. The problems arise when they start to actually get separated, and you don't have what you need in terms of using your interfaces. And that's very nice that you were able to bundle that in an abstraction that makes sense.
JOËL: A really interesting thing I think about abstractions is sometimes thinking of them as the combination of multiple other things. So, you could say that the combination of one thing and another thing, and all of a sudden, you have a new sort of combo thing that you have created. And, in this case, I think the combination of input validation and construction, and, you know, to a certain extent, error handling, so maybe it's a combination of three things gives you a thing you can call a parser. And knowing that that combination is a thing you can put a name on, I think, is really powerful, or at least it felt really powerful to me when that light bulb turned on.
STEPHANIE: Yeah, it's kind of like the whole is greater than the sum of its parts.
JOËL: Yeah.
STEPHANIE: Cool.
JOËL: And you and I did an episode on Specialized Vocabulary a while back. And that power of naming, saying that, oh, I don't just have a bunch of little atomic steps that do things. But the fact that the combination of three or four of them is a thing in and of itself that has a name that we can talk about has properties that we're familiar with, all of a sudden, that is a really powerful way to think about a system.
STEPHANIE: Absolutely. That's very exciting.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I am plugging away at my RailsConf talk, and I reached the point where I'm starting to work on slides. And this talk will be the first one where I have a lot of code that I want to present on my slides. And so, I've been playing around with a couple of different tools to present code on slides or, I guess, you know, just being able to share code outside of an editor. And the two tools I'm trying are...VS Code actually has a copy with syntax functionality in its command palette. And so, that's cool because it basically, you know, just takes your editor styling and applies it wherever you paste that code snippet.
JOËL: Is that a screenshot or that's, like, formatted text that you can paste in, like, a rich text editor?
STEPHANIE: Yeah, it's the latter.
JOËL: Okay.
STEPHANIE: That was nice because if I needed to make changes in my slides once I had already put them there, I could do that. But then the other tool that I was giving a whirl is Carbon.sh. And that one, I think, is pretty popular because it looks very slick. It kind of looks like a little Mac window and is very minimal. But you can paste your code into their text editor, and then you can export PNGs of the code. So, those are just screenshots rather than editable text. And I [chuckles] was using that, exported a bunch of screenshots of all of my code in various stages, and then realized I had a typo [laughs].
JOËL: Oh no!
STEPHANIE: Yeah, so I have not got around to fixing that yet. That was pretty frustrating because now I would have to go back and regenerate all of those exports. So, that's kind of where I'm at in terms of exploring sharing code. So, if anyone has any other tools that they would use and recommend, I am all ears.
JOËL: How do you feel about balancing sort of the quantity of code that you put on a slide? Do you tend to go with, like, a larger code slide and then maybe, like, highlight certain sections? Do you try to explain ideas in general and then only show, like, a couple of lines? Do you show, like, maybe a class that's got ten lines, and that's fine? Where do you find that balance in terms of how much code to put on a slide? Because I feel like that's always the big dilemma for me.
STEPHANIE: Yeah. Since this is my first time doing it, like, I really have no idea how it's going to turn out. But what I've been trying is focusing more on changes between each slide, so the progression of the code. And then, I can, hopefully, focus more on what has changed since the last snippet of code we were looking at. That has also required me to be more fiddly with the formatting because I don't want essentially, like, the window that's containing the code to be changing sizes [laughs] in between slide transitions. So, that was a little bit finicky.
And then, there's also a few other parts where I am highlighting with, like, a border or something around certain text that I will probably pause and talk about, but yeah, it's tough. I feel like I've seen it done well, but it's a lot harder and takes a lot more effort to [laughs] do in practice, I'm finding.
JOËL: When someone does it well, it looks effortless. And then, when somebody does it poorly, you're like, okay, I'm struggling to connect with this talk.
STEPHANIE: Yep. Yep. I hear that. I don't know if you would agree with this, but I get the sense that people who are able to make that look effortless have, like, a really deep and thorough understanding of the code they're showing and what exactly they think is important for the audience to pay attention to and understand in that given moment in their talk. That's the part that I'm finding to be a lot more work [laughs] because it means thinking about, you know, the code I'm showing from a different lens or perspective.
JOËL: How do you sort of shrink it down to only what's essential for the point that you're trying to make? And then, more broadly, not just the point you're trying to make on this one slide, but how does this one slide fit into the broader narrative of the story you're trying to tell?
STEPHANIE: Right. So, we'll see how it goes for me. I'm sure it's one of those things that takes practice and experience, and this will be my first time, and we'll learn something from it.
JOËL: That's exciting. So, this is RailsConf in Detroit this year, I believe, May 7th through 9th.
STEPHANIE: Yep. That's right. So, recently on my client work, I encountered a CI failure on a PR of mine that I was surprised by. And basically, I had introduced a new association on a model, and this CI failure was saying like, "Hey, like, we see that you introduced this association. You should consider adding this to the presenter for this model." And I hadn't even known that that presenter existed [laughs]. So, it was kind of interesting to get a CI failure nudging me to consider if I need to be, like, making a different, you know, this other change somewhere else.
JOËL: That's a really fun use of CI. Do you think that was sort of helpful for you as a newer person on that codebase? Or was it more kind of annoying and, like, okay, this CI is over the top?
STEPHANIE: You know, I'm not sure [laughs]. For what it's worth, this presenter was actually for their admin dashboard, essentially. And so, the goal of what this workflow was trying to do was help folks who are using the admin dashboard have, like, all of the capabilities they need to do that job. And it makes sense that as you add behavior to your app, sometimes those things could get missed in terms of supporting, you know, not just your customers but developers, support product, you know, the other users of your app.
So, it was cool. And that was, you know, something that they cared enough to enforce. But yeah, I think there maybe is a bit of a slippery slope or at least some kind of line, or it might even be pretty blurry around what should our test failures really be doing.
JOËL: And CI is interesting because it can be a lot more than just tests. You can run all sorts of things. You can run a linter that fails. You could run various code quality tools that are not things like unit tests. And I think those are all valid uses of the CI process. What's interesting here is that it sounds like there were two systems that needed to stay in sync. And this particular CI check was about making sure that we didn't accidentally introduce code that would sort of drift apart in those two places. Does that sound about right?
STEPHANIE: Yeah, that does sound right. I think where it gets a little fuzzy, for me, is whether that kind of check was for code quality, was for a standard, or for a policy, right? It was kind of saying like, hey, like, this is the way that we've enforced developers to keep those two things from drifting. Whereas I think that could be also handled in different ways, right?
JOËL: Yeah. I guess in terms of, like, keeping two things in sync, I like to do that at almost, like, a code level, if possible. I mean, maybe you need a single source of truth, and then it just sort of happens automatically. Otherwise, maybe doing it in a way that will yell at you. So, you know, maybe there's a base class somewhere that will raise an error, and that will get caught by CI, or, you know, when you're manually testing and like, oh yeah, I need to keep this thing in sync. Maybe you can derive some things or get fancy with metaprogramming.
And the goal here is you don't have a situation where someone adds a new file in one place and then they accidentally break an admin dashboard because they weren't aware that you needed these two files to be one-to-one. If I can't do it just at a code level, I have done that before at, like, a unit test level, where maybe there's, like, a constant somewhere, and I just want to assert that every item in this constant array has a matching entry somewhere else or something like that, so that you don't end up effectively crashing the site for someone else because that is broken behavior.
STEPHANIE: Yeah, in this particular case, it wasn't necessarily broken. It was asking you "Hey, should this be added to the admin presenter?" which I thought was interesting. But I also hear what you're saying. It actually does remind me of what we were talking about earlier when you've identified two things that should happen, like mostly together and whether the code gives you affordances to do that.
JOËL: So, one of the things you said is really interesting, the idea that adding to the presenter might have been optional. Does that mean that CI failed for you but that you could merge anyway, or how does that work?
STEPHANIE: Right. I should have been more clear. This was actually a test failure, you know, that happened to be caught by CI because I don't run [laughs] the whole test suite locally.
JOËL: But it's an optional test failure, so you're allowed to let that test fail.
STEPHANIE: Basically, it told me, like, if I want this to be shown in the presenter, add it to this method, or if not, add it to...it was kind of like an allow list basically.
JOËL: I see.
STEPHANIE: Or an ignore list, yeah.
JOËL: I think that kind of makes sense because now you have sort of, like, a required consistency thing. So, you say, "Our system requires you...whenever you add a file in this directory, you must add it to either an allow list or an ignore list, which we have set up in this other file." And, you know, sometimes you might forget, or sometimes you're new, and it's your first time adding a file in this directory, and you didn't remember there's a different place where you have to effectively register it. That seems like a reasonable check to have in place if you're relying on these sort of allow lists for other parts of the system, and you need to keep them in sync.
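A test like the one being described might look roughly like this; the presenter class, constants, and failure message are entirely hypothetical.

```ruby
RSpec.describe AdminDevicePresenter do
  it "accounts for every association on Device" do
    association_names = Device.reflect_on_all_associations.map(&:name)
    handled = AdminDevicePresenter::DISPLAYED_ASSOCIATIONS +
              AdminDevicePresenter::IGNORED_ASSOCIATIONS

    expect(handled).to match_array(association_names),
      "New association on Device? Add it to DISPLAYED_ASSOCIATIONS or IGNORED_ASSOCIATIONS."
  end
end
```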
STEPHANIE: So, I think this is one of the few instances where I might disagree with you, Joël. What I'm thinking is that it feels a bit weird to me to enforce a decision that was so far away from the code change that I made. You know, you're right. On one hand, I am newer to this codebase, maybe have less of that context of different features, things that need to happen. It's a big app. But I almost think this test reinforces this weird coupling of things that are very far away from each other [laughs].
JOËL: So, it's maybe not the test itself you object to rather than the general architecture where these admin presenters are relying on these other objects. And by you introducing a file in a totally different part of the app, there's a chance that you might break the admin, and that feels weird to you.
STEPHANIE: Yeah, that does feel weird to me. And then, also that this implementation is, like, codified in this test, I guess, as opposed to a different kind of, like, acceptance test, rather than specifying specifically like, oh, I noticed, you know, you didn't add this new association or attribute to either the allow list or the ignore list. Maybe there is a more, like, higher level test that could steer us in keeping the features consistent without necessarily dictating, like, that it needs to happen in these particular methods.
JOËL: So, you're talking something like doing an integration test rather than a unit test? Or are you talking about something entirely different?
STEPHANIE: I think it could be an integration test or a system test. I'm not sure exactly. But I am wondering what options, you know, are out there for helping keeping standards in place without necessarily, like, prescribing too much about, like, how it needs to be done.
JOËL: So, you used the word standard here, which I tend to think about more in terms of, like, code style, things like that. What you're describing here feels a little bit less like a standard and more of what I would call a code invariant.
STEPHANIE: Ooh.
JOËL: It's sort of like in this architecture the way we've set up, there must always be sort of one-to-one matching between files in this directory and entries in this array. Now, that's annoying because they're sort of, like, two different places, and they can vary independently. So, locking those two in sync requires you to do some clunky things, but that's sort of the way the architecture has been designed. These two things must remain one-to-one. This is an invariant we want in the app.
STEPHANIE: Can you define invariant for me [laughs], the way that you're using it here?
JOËL: Yeah, so something that is required to be true of all elements in this class of things, sort of a rule or a law that you're applying to the way that these particular bits of code need to behave. So, in this case, the invariant is every file in this directory must have a matching entry in this array. There's a lot of ways to enforce that. The sort of traditional idea is sort of pushing a lot of that checking...they'll sometimes talk about pushing errors to the left. So, if you can handle this earlier in the sort of code execution pipeline, can you do it maybe with a type system if you're in a type language? Can you do it with some sort of input validation at runtime?
Some languages have the concept of contracts, so maybe you enforce invariants using that. You could even do something really ad hoc in Ruby, where you might say, "Hey, at boot time, when we load this particular array for the admin, just load this directory. Make sure that the entries in the array match the entries in the directory, and if they don't, raise an error." And I guess you would catch that probably in CI just because you tried to run your test suite, and you'd immediately get this boot error because the entries don't match.
So, I guess it kind of gets [inaudible 22:36] CI, but now it's not really a dedicated test anymore. It's more of, like, a property of the system. And so, in this case, I've sort of shifted the error checking or the checking of this invariant more into the architecture itself rather than in, like, things that exercise the architecture. But you can go the other way and say, "Well, let's shift it out of the architecture into tests," or maybe even beyond that, into, like, manual QA or, you know, other things that you can do to verify it.
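That ad hoc boot-time check might look something like this; the directory, constant, and initializer names are invented.

```ruby
# config/initializers/admin_presenters_check.rb
Rails.application.config.after_initialize do
  presenter_files = Dir[Rails.root.join("app/presenters/admin/*.rb").to_s]
    .map { |path| File.basename(path, ".rb") }
    .sort

  registered = Admin::PRESENTERS.map(&:to_s).sort

  if presenter_files != registered
    raise "Admin presenter registry is out of sync with app/presenters/admin: " \
          "files=#{presenter_files} registered=#{registered}"
  end
end
```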
STEPHANIE: Hmm. That is very compelling to me.
JOËL: So, we've been talking so far about the idea of invariants, but the thing about invariants is that they don't vary. They're always true. This is a sort of fundamental rule of how this system works. The class of problems that I often struggle with how to deal with in these sorts of situations are rules that you only sometimes want to apply. They're not consistent. Have you ever run into things like that?
STEPHANIE: Yeah, I have. And I think that's what was compelling to me about what you were sharing about code invariance because I wasn't totally convinced this particular situation was a very clear and absolute rule that had been decided, you know, it seemed a little bit more ambiguous.
When you're talking about, like, applying rules that sometimes you actually don't want to apply, I think of things like linters, where we want to disable, you know, certain rules because we just can't get around implementing the way we want to while following those standards. Or maybe, you know, sometimes you just have to do something that is not accessible [laughs], not that that's what I would recommend, but in the case where there aren't other levers to change, you maybe want to disable some kind of accessibility check.
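For reference, the kind of local escape hatch being described is RuboCop's directive comments; the method and cop name here are just examples.

```ruby
# rubocop:disable Metrics/MethodLength
def generate_report
  # ...a long but deliberately explicit method...
end
# rubocop:enable Metrics/MethodLength
```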
JOËL: That's always interesting, right? Because sometimes, you might want, like, the idea of something that has an escape hatch in it, but that immediately adds a lot of complexity to things as well. This is getting into more controversial territory. But I read a really compelling article by Jeroen Engels about how being able to, like, locally disable your linter for particular methods actually makes your code, but also the linter itself, a worse tool. And it really kind of made me rethink a little bit of how I approach linters as a tool.
STEPHANIE: Ooh.
JOËL: And what makes sense in a linter.
STEPHANIE: What was the argument for the linter being a worse tool by doing that?
JOËL: You know, it's funny that you ask because now I can't remember, and it's been a little while since I've read the article.
STEPHANIE: I'll have to revisit it after the show [laughs].
JOËL: Apparently, I didn't do the homework for this episode, but we'll definitely link to that article in the show notes.
STEPHANIE: So, how do you approach either introducing a new rule to something like a linter or maybe reconsidering an existing rule? Like, how would you go about finding, like, consensus on that from your team?
JOËL: That varies a lot by organizational culture, right? Some places will do it top-down, some of them will have a broader conversation and come to a consensus. And sometimes you just straight up don't get a choice. You're pulling in a tool like standard rb, and you're saying, "Look, we don't want to have a discussion about every little style thing, so whatever, you know, the community has agreed on for the standard rb linter is the style we're using. There are no discussions. Do what the linter tells you."
STEPHANIE: Yeah, that's true. I think I have to adapt to whatever, you know, client culture is like when I join new projects. You know, sometimes I do see people being like, "Hey, I think it's kind of weird that we have this," or, "Hey, I've noticed, for example, oh, we're merging focused RSpec tests. Like, let's introduce a rule to make sure that that doesn't happen."
I also think that a different approach is for those things not to be enforced at all by automation, but we, you know, there are still guidelines. I think the thoughtbot guides are an example of pretty opinionated guidelines around style and syntax. But I don't think that those kinds of things would, you know, ever be, like, enforced in a way that would be blocking.
JOËL: Those are kind of hard because they're not as consistent as you would think, so it's not a rule you can apply every time. It's more of a, here's some things to maybe keep in mind. Or if you're writing code in this way, think about some of the edge cases that might happen, or don't default to writing it in this way because things might go wrong. Make sure you know what you're doing. I love the phrase, "Must be able to justify this," or sometimes, "Must convince your pair that this is okay." So, default to writing in style A, avoid style B unless you can have a compelling reason to do so and can articulate that on your PR or, you know, convince your pair that that's the right way to go.
STEPHANIE: Interesting. It's kind of like the honor system, then [laughs].
JOËL: And I think that's sort of the general way when you're working with developers, right? There's a lot of areas where there is ambiguity. There is no single best way to do it. And so, you rely on people's expertise to build systems that work well. There are some things where you say, look, having conversations about these things is not useful. We want to have some amount of standardization or uniformity about certain things. Maybe there's invariance you want to hold. Maybe there's certain things we're, like, this should never get to production.
Whenever you've got these, like, broad sweeping statements about things should be always true or never true, that's a great time to introduce something like a linting rule. When it's more up to personal judgment, and you just want to nudge that judgment one way or another, then maybe it's better to have something like a guide.
STEPHANIE: Yeah, what I'm hearing is there is a bit of a spectrum.
JOËL: For sure. From things that are always true to things that are, like, sometimes true. I think I'm sort of curious about the idea of going a level beyond that, though, beyond things like just code style or maybe even, like, invariance you want to hold or something, being able to make suggestions to developers based off the code that is written. So, now you're applying more like heuristics, but instead of asking a human to apply those heuristics at code review time and leave some comments, maybe there's a way to get automated feedback from a tool.
STEPHANIE: Yeah, I think we had mentioned code analysis tools earlier because some teams and organizations include those as part of their CI builds, right? And, you know, even Brakeman, right? Like, that's an analysis tool for security. But I can't recall if I've seen an organization use things like Flog metrics, which measure code complexity and things like that. How would you feel if that were a check that was blocking your work?
JOËL: So, I've seen things like that be used if you're using, like, the Code Climate plugin for GitHub. And Code Climate internally does effectively flog and other things that are fancier on your code quality. And so, you can set a threshold to say, hey, if complexity gets higher than a certain amount, fail the build.
You can also...if you're doing things via GitHub, what's nice is that you can do effectively non-blocking comments. So, instead of failing CI to say, "Hey, this method looks really complex. You cannot merge until you have made this method less complex," maybe the sort of, like, next step up in ambiguity is to just leave a comment on a PR from a tool and say, "Hey, this method here is looking really complex. Consider breaking it up."
STEPHANIE: Yeah, there is a tool that I've seen but not used called Danger, and its tagline is, Stop saying, "You forgot to..." in code review [laughs]. And it basically does that, what you were saying, of, like, leaving probably a suggestion. I can imagine it's blocking, but a suggestive comment that just automates that rather than it being a manual process that humans have to remember or notice.
JOËL: And there's a lot of things that could be specific to your organization or your architecture. So, you say, "Hey, you introduced a file here. Would you consider also making an entry to this presenter file so that it's editable on the admin?" And maybe that's a better place to handle that. Just a comment. But you wouldn't necessarily want every code reviewer to have to think about that.
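A Dangerfile nudge along those lines could look roughly like this; the file paths are hypothetical, while `git.added_files`, `git.modified_files`, and `warn` are part of Danger's Ruby DSL.

```ruby
# Dangerfile
new_models = git.added_files.select { |path| path.start_with?("app/models/") }

if new_models.any? && !git.modified_files.include?("app/presenters/admin_presenter.rb")
  warn(
    "You added #{new_models.join(', ')}; consider registering them in " \
    "app/presenters/admin_presenter.rb so they show up on the admin dashboard."
  )
end
```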
STEPHANIE: So, I do think that I am sometimes, not necessarily suspicious, but wary: I have also seen tools like that end up just getting in the way, where it just becomes something you ignore. It's something you end up always using the escape hatch for, or people just find ways around it because they're harming more than they're helping. Do you have any thoughts about how to kind of keep those things in check and make sure that the tools we introduce genuinely are kind of helping the organization do the right thing rather than kind of being these perhaps arbitrary blockers?
JOËL: I'm going to throw a fancy phrase at you.
STEPHANIE: Ooh, I'm ready.
JOËL: Signal-to-noise ratio.
STEPHANIE: Whoa, uh-huh.
JOËL: So, how often is the feedback from your tool actually helpful, and how often is it just noise that you have to dismiss, or manually override, or things like that? At some point, the ratio becomes so much that you lose the signal in all the noise. And so, maybe you even, like, because you're always just ignoring the feedback from this tool, you accidentally start overriding things that would be genuinely helpful. And, at that point, you've got the worst of both worlds.
So, sort of keeping track of what that ratio is, and there's not, like, a magic number. I'm not going to tell you, "Oh, this is an 80/20 principle. You need to have, you know, 80% of the time it's useful and only 20% of the time it's not useful." I don't have a number to give you, but keeping track of maybe, you know, is it more often than not useful? Is your team getting to the point where they're just ignoring feedback from this tool? And thinking in terms of that signal versus that noise is, to go back to that word again, a useful heuristic for managing whether a tool is still helpful.
STEPHANIE: Yeah. And I would even go on to say that, you know, I always appreciate when people in leadership roles keep an eye on these things. And they're like, "Oh, I've been hearing that people are just totally numb to this tool [laughs]" or, you know, "There's no engagement on this. People are just ignoring those signals." And if you're a developer impacted by this, it is valid to bring it up if you're getting frustrated by it or just finding yourself, you know, having all of these obstacles getting in the way of your development process.
JOËL: Sometimes, this can be a symptom that you're mixing too many classes of problems together in one tool. So, maybe there are things that are, like, really dangerous to your product to go live with them. Maybe it's, you know, something like Brakeman where you're doing security checks, and you really, ideally, would not go to production with a failing security check.
And then, you've got some random other style things in there, and you're just like, oh yeah, whatever, it's this tool because it's mostly style things but occasionally gives you a security problem. And because you ignore it all the time, now you accidentally go to production with a security problem. So, splitting that out and say, "Look, we've got blocking and unblocking because we recognize these two classes of problems can be a helpful solution to this problem."
STEPHANIE: Joël, did you just apply an object-oriented design principle to an organizational system?
[laughter]
JOËL: I may be too much of a developer.
STEPHANIE: Cool. Well, I really appreciate your input on this because, you know, I was just kind of mulling over, like, how I felt about these kinds of things that I encounter as a developer. And I am glad that we got to kind of talk about it. And I think it gives me a more expanded vocabulary to, you know, analyze or reflect when I encounter these things on different client organizations.
JOËL: And every organization is different, right? Like, you've got to learn the culture, learn the different elements of that software. What are the things that are invariant? What are the things that are dangerous that we don't want to ship without? What are the things that we're doing just for consistency? What are things which are, like, these are culturally things that we'd like to do? There's all these levels, and it's a lot to pick up.
STEPHANIE: Yeah. At the end of the day, I think what I really liked about the last thing you said was being able to identify the problem, like the class of problem, and applying the right tool for the right job. It helps me take a step back and perhaps even think of different solutions that we might not have thought about earlier because we had just gotten so used to the one way of enforcing or checking things like that.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie is back with a book recommendation: "Thinking in Systems" by Donella Meadows. This book has helped to bolster her understanding of complex systems in environmental, organizational, and software contexts, particularly through user interactions and system changes. Joël describes his transformative experience watching last week's total solar eclipse.
Together, they explore how systems thinking influences software development and team dynamics by delving into practical applications in writing and reading code, suggesting that understanding complex systems can aid developers in navigating and optimizing codebases and team interactions.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn, and together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I have a book recommendation today [laughs].
JOËL: Oh, I love book recommendations.
STEPHANIE: It's been a little while, so I wanted to share what I've been reading that I think might be interesting to this audience. I'm reading Thinking in Systems by Donella Meadows. Joël, are you familiar with systems thinking theory at all?
JOËL: Very superficially. Hearing people talk about it on, I guess, X, formerly Twitter.
STEPHANIE: Yeah. Well, what I like about this book is the subtitle is A Primer on Thinking in Systems [chuckles], which is perfect for me as someone who also just kind of understood it very loosely, as just like, oh, like, I dunno, you look at things holistically and look at the stuff, not just its parts but from a higher perspective.
JOËL: Yeah. Is that accurate sort of your pre-book reading overview? Or do you think there's a bigger thing, a bigger idea there that the book unpacks?
STEPHANIE: Yeah. I think I'm only, like, a third of the way through so far. But what I have enjoyed about it is that, you know, in some ways, like, intuitively, that makes a lot of sense about, like, oh yeah, you want to make sure that you see the forest for the trees, right?
But one thing I've been surprised by is how it's also teaching me more technical language to talk about complex systems. And, in this case, she is talking about, essentially, living systems or systems that change over time where things are happening. I think that can be a little bit confusing when we also are, you know, talking about computer systems, but, in this case, you know, systems like environments, or communities, or even, you know, companies or organizations, which is actually where I'm finding a lot of the content really valuable.
But some of the language that I've learned that I am now trying to integrate a little bit more into how I view a lot of just, like, daily problems or experiences involve things like feedback loops that might be reinforcing or balancing and different, like, inputs and output flows and what is driving those things. So, I've appreciated just having more precise language for things that I think I kind of intuited but didn't exactly know how to, like, wrap up in a way to communicate to someone.
JOËL: Do you think the idea of thinking in terms of things like self-balancing versus sort of diverging input loops is something that's useful when actually writing code? Or do you think of it a little bit more in terms of, like, teams and how they organize general problem-solving approaches, things like that?
STEPHANIE: I think the answer is both. I actually gave this quite a bit of thought because I was trying to wrap my head around her definition of a system and how we talk about systems sometimes, like, a codebase, for example. And the conclusion I came to is that, really, it's not just the code static by itself that we care about. It's how it gets exercised, how users use it, how developers change it, how we interact with it when we, like, run tests, for example.
So, that was really helpful in kind of thinking about some of the problems we see in engineering organizations as a result of software being a thing that is used and written by humans, as opposed to it just existing in memories [chuckles] or, like, it's in a storage system somewhere. Like, that means it's kind of lifeless, and it's not changing anymore. But the point of kind of this framework is trying to understand it as it changes.
JOËL: So, kind of that blurry line between humans and computers and where those two overlap is where a lot of that systems thinking almost, like, mental model or vocabulary has been most helpful for you.
STEPHANIE: Yeah, I would say so. So, Joël, what's new in your world?
JOËL: So, I did the thing. I traveled to see the total solar eclipse this past weekend. It was mind-blowing. It was incredibly cool. I really loved it. For any of our listeners who have never seen a solar eclipse, in the coming years, have an opportunity to see one. I'd say it's worth traveling to see because it is really impressive.
STEPHANIE: Cool. What did it look like when it happened, when it was 100% eclipsed?
JOËL: So, what really impressed me was the fact that, like, most of the cool stuff happens in that, like, last half a percent. So, like, 95% eclipsed, still not that impressive. If that's all I'd seen, I would be disappointed. And then, in that last little bit, all of a sudden, everything goes dark. It's sort of, like, that twilight past sunset. You've got a glow on the horizon. The stars are out.
STEPHANIE: Wow.
JOËL: The animals are behaving like it's past sunset. They're getting ready to go to sleep.
STEPHANIE: Whoa.
JOËL: The sun itself is just a black dot with this, like, big fiery ring around it. Like all those pictures, icons, photos you see online, or drawings that look over the top, those things are real. That's what it looks like.
STEPHANIE: Wow, that's really neat. Could you see it without looking through the eclipse viewers?
JOËL: So, when you hit totality, you can look at it with a naked eye, and it is, yeah, magnificent.
STEPHANIE: Oh, that's so cool. How long did it last?
JOËL: So, it depends where you are in the path of totality. I was pretty much dead center. And it lasts, I think, three and a half minutes is what we had.
STEPHANIE: That's so cool. So, for me, here in Chicago, we did not have complete totality. It was about, like, 95%. So, I was watching it, just from that perspective. And I would say, yeah, it was not nearly as cool as what you described. It kind of just was like, oh, it got dark. It almost looked like I was viewing the world through sunglasses.
I did have one of those viewers that I used to, like, look at the sun and see how much of it had been covered. But yeah, it was cool. But what you said, I think now I feel like, wow, I really should have [laughter] traveled. I could have traveled just a few hours, you know, to, like, Indianapolis or something to have been on the path. That would have been really neat. And I don't think the next one will be until 2044 or something like that.
JOËL: Yeah. And that's the thing, right? I think if you're within a few hours of the path of a total eclipse, it is absolutely worth traveling to totality. The downside of that is that everybody else has the same idea. And so, you will be fighting traffic and a lot of things, especially if it goes through some, like, populated areas, like it did this time.
STEPHANIE: Yeah. Well, that's really neat that you got to see that. That's, I don't know, it sounds like not exactly once in a lifetime, but definitely very rare.
JOËL: For sure. I think with this experience now; I would definitely consider traveling again if there's one, like, anywhere near where I live, or, you know, maybe even, like, planning a vacation around going somewhere else to see one because it's short. You know, you're there for three minutes, and you see something cool. But that was really impressive.
So, something that really struck me when you were talking earlier about systems thinking is that you mentioned that it gave you a sort of a new vocabulary to talk about things. It almost gave you a sort of different way of thinking or some other mental models that you could use to apply when you are interacting in that sort of fussy boundary between people and code.
And I think that this idea of having language and having mental models is something that is incredibly valuable for us as programmers in a few different areas. And I'd be curious to see particularly for when we're reading other code, reading code that someone else has written or, you know, yourself from six months ago, do you have any sort of mental models that you like to reach for or techniques that you like to use to sort of give yourself that almost vocabulary to understand what somebody else is trying to do with their code?
STEPHANIE: Yeah, I would say so. You know, as you were talking about, like, how do you read code? I was thinking about how I read code is different from how I would read a book [laughs]. I almost rarely just read everything line by line and, like, file by file, you know, in some order that has been presented to me. I am usually a lot more involved. It's almost, like, more like a choose your own adventure kind of book [chuckles], where it's like, oh, go to this page to check if you want to check out what happened down this code path [chuckles].
JOËL: Right, right. Oh, if you're reading a novel, are you the kind of person that will read the ending first?
STEPHANIE: Absolutely not.
[laughter]
JOËL: You have strong opinions here.
STEPHANIE: Even when I, like, really want to... okay, sometimes I will, like, maybe just kind of flip to the back and just see, like, oh, how many more pages or chapters do I have [laughs] left? If I am itching to know what might happen. But I definitely don't start a book by reading the end. I think there are people who do that, and maybe that works for them, but I don't understand it.
[laughter]
JOËL: But maybe that's the thing that you do with your code.
STEPHANIE: Yeah. When I read code, it's almost always with some kind of intention to understand a particular behavior, usually kind of kicked off by some action, like, done by the user or something automated. And I want to understand that process from start to finish. So, I'm less likely to read a whole class file [chuckles], as opposed to just following a method and the messages that are sent along the way in a process.
JOËL: That makes sense. Do you tend to sort of go from kind of the origin point and then follow it down, or sort of the opposite, find some, like, terminal node and then work your way back?
STEPHANIE: Oh.
JOËL: And I could imagine this in a more concrete sense in a Rails app. You find, like, the route that you're going to hit because you know it's a URL, and then you find the controller, and then you read through the action. And then, you maybe follow a service and something like that or look into the view. Or maybe the opposite: there's a particular page that gets rendered. You look at a method, a helper method that gets called in a view, and then you sort of, like, follow a backtrace from there.
STEPHANIE: Yeah, I think both. It depends on what information I have available to me, I think. I can think of, recently, I was trying to figure out the process by which, like, a user in this application I'm working on can downgrade the tier of their account, and I didn't know what to grep for. And so, I asked, like, "Hey, like, what are the entry points for a user being able to do this?"
And someone gave me a couple of routes, and that was great because then I got to see, oh, that this is possible in multiple ways. Like, the user can do it themselves, or the admin can do it, and that was really helpful. Other times, I think I have been able to find a keyword on a page and start from, like, a view or a component, or something like that, and then work upwards.
JOËL: I love that question that you asked, "What are the entry points for this thing?" I feel like that's a fantastic question to sort of ask yourself when you're feeling stuck, but it's also a great question to ask other people that might know.
Do you find that you read code differently when you're just trying to, like, maybe understand a broader subsystem? Maybe you're sort of new to this area and you have to add a feature, as opposed to maybe you're debugging something and trying to understand why things went wrong. Are those two different kinds of reading?
STEPHANIE: Yeah, that's also a great point because I do think there's another time when I've just scanned the file structure of an app and looked at the model's directory and just kind of been like, okay, like, maybe some things are namespaced. And that helps me just know what the main concepts that I have to be dealing with or that I will be dealing with are.
But I find that sometimes less fruitful because of kind of what I mentioned earlier about thinking in systems, where I'm not sure how important those things will be yet because I don't know how they're used. They could not be used at all [laughs]. And then, I think I'm potentially, like, storing information that is not actually relevant in my brain.
JOËL: That's tough, right? Because systems are so big, we can't hold them entirely in our brain. So, sometimes, selectively deciding what will not be loaded in there is just as important as what will.
STEPHANIE: Yes. And I think that is actually advice that I would give to devs who are trying to get better at reading code. And this one's hard because when I am working with more early-career developers, it's hard to figure out, like, what are they seeing? How are they interpreting the code on the page? Because oftentimes, I see that they are getting stuck on the details, whereas I would like to encourage them to just be like, you don't really need to know what's going on in that method right now. Does the method name kind of communicate enough to you, like, what this thing is doing without having to understand all of the details?
But my advice would be to start figuring out what to ignore [laughs] because, like you said, it's impossible to, like, hold all of that information at one time. What do you think about that advice and, like, how do you teach that to someone?
JOËL: I think you're sort of hinting at two different ways of reducing the amount you have to load in your mind. The way I think about it, I think of it sort of spatially, so you can reduce the breadth of things you have to load into your head, so, realize, wait, there's all of these methods, and I don't need to know all of the methods in the file. There's only this one entry point I care about and everything downstream of that, and you just sort of prune everything off to the side, ignore it. That's not relevant right now.
But there's also sort of a depth. How deep into the implementation do you really need to go? Maybe you only need to know about the high-level concepts. And then, you sort of, like, do this pruning where you say, "I'm not going to go deeper than this level," because the implementation is not really relevant to what I'm trying to understand right now. I mostly need to know what are these classes and how do they interact with each other? Or something along those lines.
And, ideally, you're maybe doing a little bit of both. You probably don't need to go all the way to the deep implementation of every method, but you also don't necessarily need to know all of the high-level concepts and all of the objects in the system that interact. So, being able to prune in sort of both dimensions, breadth and depth, helps you to, I think, narrow the window of what you need to learn.
STEPHANIE: Yeah, that's a really great point. I have a couple more strategies that I just thought about as you were talking about that. One is kind of on the journey to let go of some things that I can't understand in the moment. If they seem important, I will write them down and, like, put them somewhere in a list to come back to later and be like, "This is a thing I don't fully understand yet," and just be okay with that.
I think, for me, there is some anxiety of like, oh, like, what if I'll need to know about it later? And at least putting it down somewhere as like, okay, like, I've done something with that anxious [laughs] energy of, like, recognizing that I don't understand this right now, and that's okay. But I can revisit it later.
And then, another one is almost the opposite, where it's like, what are my landmarks as I'm navigating through a codebase? Like, what are the files that I'm consistently opening? Because so many of the roads lead to this object. Even when I'm kind of going through different paths, it's like, I can hook into, like, the behavior that I'm looking for from these landmark objects or models because they are really important in this domain. So, it's like, I don't necessarily need to remember every step of the way, but if I can recall some of the more important methods, then I can kind of find my way back.
JOËL: Do you just try to, like, memorize those, or do you write them down? Like, how do you make a method or an object a landmark for you?
STEPHANIE: That has felt a little more, like, it becomes more, like, muscle memory, I think, because I'm revisiting them pretty frequently. I don't know, it's somehow the act of repeating, like, going through those files just gets encoded somewhere in my brain [laughs], and I don't have to worry as much about forgetting them.
JOËL: Strengthening that neural pathway.
STEPHANIE: Yeah, exactly.
JOËL: Or whatever is happening in the brain there.
STEPHANIE: [laughs]
JOËL: I like what you were saying earlier, though, about taking notes and sort of almost, like, a breadcrumbs approach. We did an episode almost two years ago where we talked about note-taking for various purposes and note-taking as an exploration exercise, and then note-taking when debugging, where we went deeper into that topic. And I think that would be really relevant to any of our listeners. We'll link that in the show notes.
STEPHANIE: Yeah. Leaving breadcrumbs. That's a great metaphor or just a way to describe it. Because I have a little shorthand for if I am leaving myself notes in a codebase as I'm trying to understand what's happening, and it's just, like, putting my initials in a comment and, like, including some observation or commentary about what I'm seeing or a question.
JOËL: Also, just a kind of meta observation here, but in the last, you know, 10-15 minutes we've been talking about this, we're already creating our own set of metaphors, and language, and mental models around understanding code. We're talking about breadcrumbs, and landmarks, and looking at code through a broad versus deep lens. That's exactly what we're talking about.
STEPHANIE: Joël, do you have any mental models that you use that we haven't really gotten into yet?
JOËL: I don't know if they're mental models per se, but I lean very heavily into diagramming as a form of understanding code. And maybe that's a way of sort of reducing the number of concepts because instead of now sort of thinking in terms of, like, lines of code, I'm thinking in terms of maybe some boxes and arrows, and that's a much higher-level way of looking at a system and can give me some really interesting insights.
And there are a ton of different diagrams you can use for different things, and I guess all of them are based on a different maybe mental model of what a system is. So, for example, I might actually write out the method call graph starting from some endpoint and just sort of saying, "Hey, when I call this method, what are all of the methods downstream that get called? And is there anything interesting at any of those steps?"
A variation on that, if you're looking at, let's say, some kind of performance thing, would be, like, a flame graph, where you have sort of that, but then it also shows you the amount of time spent in each of the methods. And that can give you a sense of where your bottlenecks are.
Another one that I really like is thinking in terms of a finite state machine. So, sort of following data, how does it change in response to different events that can come into the system? And I'm not talking about, oh, you're using one of the, like, state machine gems out there for your Rails app. This is more of a way of thinking about programs and how they act.
You can have just a plain, old Rails app, and you're thinking about, okay, well, how does a cart turn into an order, which turns into a fulfillment request at the warehouse, which turns into a tracking number for shipping? Modeling that as a state machine. And also, you know, can it move back along that path, or does it only move forward linearly? Any kind of multi-step form, a wizard, often has paths where you move back. It's not linear. That very easily can be drawn out as a state machine. So, that is something that I really like to pull out when I'm trying to understand a, like, complex workflow.
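For illustration, here's a minimal Ruby sketch of the kind of state machine thinking Joël describes, with no state machine gem involved. All the names and transitions here are invented for the example:

```ruby
# Hypothetical sketch: an order lifecycle modeled as a finite state machine.
# States and allowed transitions are explicit, so "can we move from here to
# there?" becomes a simple lookup instead of scattered conditionals.
class OrderLifecycle
  TRANSITIONS = {
    cart: [:order],
    order: [:fulfillment_requested, :cart], # allow moving back to the cart
    fulfillment_requested: [:shipped],
    shipped: []
  }.freeze

  attr_reader :state

  def initialize(state = :cart)
    @state = state
  end

  def transition_to(new_state)
    unless TRANSITIONS.fetch(@state, []).include?(new_state)
      raise ArgumentError, "can't move from #{@state} to #{new_state}"
    end

    @state = new_state
  end
end

lifecycle = OrderLifecycle.new
lifecycle.transition_to(:order)
lifecycle.transition_to(:fulfillment_requested)
```

Writing the transition table out like this is also the sort of thing that translates directly into a diagram of boxes and arrows.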
STEPHANIE: Yeah, I think we've talked about this before a little bit, or maybe not even a little bit, a lot [laughs]. But I know that you're a big fan of Mermaid.js for creating diagrams in markdown that can be embedded in a pull request description or even in a commit message. When I was hearing you talk about state machines and just all the different paths that can lead to different states, I was like, I bet that's something that you would create using a diagram and include for yourself and others when sharing code.
JOËL: Yes, Mermaid does support state machines as a graph type, which is really cool.
Another thing that you can do is embed those in tools like Obsidian, which is my current note-taking tool. So, if I'm doing some sort of notes as a sort of exploratory tool, I will often start writing a Mermaid graph directly in line, and it will render and everything. That's really nice. If I'm not in Obsidian and I just need some sort of one-off graph, I'll often lean on Mermaid.live, which just gives you an editor where you can write up some Mermaid code. It will render it, and then you can copy the PNG into somewhere else and share that with other people. So, if I just need a one-off thing to share in Slack or something like that, I like to lean on that.
Another type of diagram that I use pretty frequently is an entity-relationship diagram, so sort of what database tables are related to what others. On larger apps, there's just so many tables, and maybe a bunch of JOINS and things like that, and it's sometimes difficult to get the picture of what is happening, so I'll often draw out a graph of those. Now, it's not worth doing the entire database because that will be huge and overwhelming. So, I'll find, like, five or six tables that are relevant to me and then try to answer the question: How are they related to each other?
STEPHANIE: Yeah, I like that. I was going to ask if you do it manually or if you use a tool because I've worked in various apps that have used the Rails ERD gem that will generate an entity-relationship diagram for you every time the schema changes. But there's something very compelling, to me, about the idea of trying to just figure out if you know the relationships, if you could draw them out, as opposed to having a tool do it for you.
JOËL: Exactly.
STEPHANIE: And I think, like, also, you do have information that might not be encoded in the system. Like, you actually know, oh, these two tables are related, even if no one has defined an association on them. I think that is important in understanding actually how the system is working in real life, I guess.
JOËL: Agreed. So, we've been talking a lot about how we can use different tools, different mental models to take code that somebody else has written and kind of, like, almost read it from disk and load it into our brains. But what about the opposite? We're faced with a business problem, and we want to sort of write it to disk, turn it into code that somebody else will then read or that a machine will execute. I hear that happens occasionally. Are there sort of mental models or ways of tackling a more, like, amorphous problem in the real world and turning that into code? Like, are they just the inverse of what we do when we read code, or are they, like, a totally different set of skills?
STEPHANIE: For me personally, I don't follow this framework very strictly, but I think more intuitively how I like to go about it is more behavior-driven where...because that is the language of maybe our cross-functional partners. They're saying like, "Hey, like, when this happens, I want to be able to do this," and I kind of start there. Maybe I'll pick up some of the keywords that they're repeating pretty frequently as like, oh, like, this is a concept.
Actually, lately, the past couple of weeks, I've been test-driving almost all of my code as I work on a totally, like, greenfield feature. And that has been working really well for me, I think, because we did explore more granular, both, like, granular and abstract concepts when we were spiking this feature. And so, we had come up with some domain models. I had kind of thought about, like, how they might interact with each other.
But when you then have to actually, like, code that, there are so many little nuances and things to keep track of that I found test driving things from, like, behavior and user stories. Those are really helpful in keeping me, like, on track to making sure that I didn't just have all these little pieces of domain concepts that then didn't really interact in a meaningful way.
JOËL: Yeah, the sort of very, like, user- or customer-centric approach to thinking about what this app is doing is a great way to think about it. And I guess the sort of translation of that, that first step of translation into code, is some sort of, like, system spec.
STEPHANIE: Yeah, exactly.
JOËL: I like that because, you know, we have all these other abstractions that we use as developers. But at the end of the day, our customers and even, you know, our product people aren't thinking in terms of, like, objects and classes and all these other fun abstractions that we have. They're thinking in terms of behaviors and, you know, maybe subsystems, workflows, things like that. And then it's up to us to translate that into whatever paradigm of our language that we're using.
STEPHANIE: Do you do things differently from me?
JOËL: I don't think that I do it necessarily differently. I think it's one of several tools I have in my tool belt. Something that is similar but from a slightly different angle is inspiring myself with a lot of the ideas from domain-driven design. You know, we've been talking a lot about this idea of, like, mental models and having a vocabulary, things like that, about sort of the way that we work, but that exists at the product level as well. And what if we could encode a lot of that into our application itself?
So, is there a distinction between a subscriber and a payer in our system? Is there specialized vocabulary around different other concepts in the app? Maybe instead of just having those be things that product people talk about, what if we made them actual named entities in the system and have maybe our object graph, at least in some way, reflect the sort of idealized model of what our business actually does?
That often means that you're thinking of things at a higher level because you're thinking of things at the level that our product people are thinking about them. You might be thinking of things in terms of user journeys, or product workflows, or things like that, because you say, "Oh, well, a new payer has been added to this group account. And that has started a subscription, which then means that a user has access to these corporate features that they didn't have when they were in a solo account."
Like, I've just thrown ten different sort of product terms out there that, you know, if there are concepts in our code can help us think about less of the implementation. What does the app do, or how does the app do it? And more in terms of, like, product terms, what does the app do? How do people experience the behavior, or maybe how does data change over the life cycle of the app? So, those perspectives, I think, have helped me distill down sort of more vague product ideas into things that I can then start turning into code.
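As a concrete illustration of that idea, here is a hypothetical Ruby sketch of giving product vocabulary first-class names in the object graph. The names (payer, subscription, group account) come from the conversation, but the classes and methods are invented for the example, not taken from any real codebase:

```ruby
# Hypothetical sketch: domain terms as named entities rather than flags and
# conditionals on a generic User.
Payer        = Struct.new(:user)
Subscription = Struct.new(:payer, :plan)

class GroupAccount
  def initialize
    @payers = []
    @subscriptions = []
  end

  # "A new payer has been added to this group account, and that has started a
  # subscription" -- expressed directly in the product's own terms.
  def add_payer(user, plan:)
    payer = Payer.new(user)
    @payers << payer
    @subscriptions << Subscription.new(payer, plan)
    payer
  end
end
```

The point is less the specific structure and more that the code now reads in the same vocabulary the product team uses.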
STEPHANIE: Absolutely. I think one way that this framework ends up falling short, at least for me a little bit sometimes, is making connections between behaviors that are similar but not exactly the same. Or when you think about them in more isolated ways, like, it's easy to miss that, like, they are the same idea and that there is, like, something a bit higher level that you can connect them, that you can create a more abstract class for, even though that's not actually how people talk about the things.
One example I can think of is things like concerns that are both related to domain language but then also, like, kind of specific to how things work in the code as a system because you might not necessarily call something a subscribable from a product perspective. Do you have any thoughts about identifying those pieces?
JOËL: So, what's interesting is I think there's a little bit of, like, layers above and below, the sort of domain layer where you're talking in terms of, like, what the product team would use. When you're doing a lot of the implementation, there will be things that are just, like, that's how we implemented them. They're in the nitty gritty, and they're not terms that the product team would necessarily use.
Things like array and string, they're low-level details. We have to use them. That's not really relevant to the world of payers, and subscribers, and things like that. So, they're sort of a lower layer. And I think it's totally fine to have things that are sort of programmer only, as long as they're sort of contained within this higher-level layer, because that allows people new to the app to sort of see what the different things in the application are and to think about things at a higher level.
It also allows for smoother communication with the product team. So, ideally, you don't have a concept in the app that is the same as something that the product team has, but you just both gave it different names, and then that's really annoying. Or maybe the dev team created something that's, like, almost exactly the same as what the product team talks about, but with some, like, slight variations. Now, you're just going to be talking past each other in every planning meeting, and that will be incredibly annoying.
STEPHANIE: Yeah. At one point, when I was trying to communicate, like, async about how a feature works, and there was like the product word for it and then the dev word for it, I would have to type out both [chuckles] because I wanted to make sure that no one was confused about what we were talking about, which was the same thing that just had two names. And yeah, I don't know how many seconds of my life I'll never get back as a result [chuckles].
JOËL: Were these concepts that were identical and had just different names, or was this like, oh, well, our internal subscribed user is almost the same as when product talks about and, I don't know, employee, but our subscribed user has a couple of other extra behaviors that employees don't have, and now there's, like, this weird, like, overlap?
STEPHANIE: Yeah, both situations I have found myself in, but I think this one they were virtually identical. Like, they could be used interchangeably to mean the same thing by people who understand both of those definitions, but the problem was that we still had two words [laughs].
JOËL: Yeah, yeah. I'm a big fan of, where possible, converging on the product team's definition. Although because code forces you to be more precise, sometimes that can then force some conversations with the product team about, like, "Hey, so we've been hand waving around this concept of a subscriber. Turns out we think there's actually two different kinds of concepts at work here: the person who's consuming the content and the person who's paying for it. And are they really the same thing, or should we sort of think about these as two different entities? And, in that case, what should the name be?" And that can force a really, I think, healthy conversation between development and product.
STEPHANIE: Yeah, I like that. You mentioned there was, like, a higher level and a lower level, but I don't think we've gotten to the higher one yet.
JOËL: Yeah. Sometimes, you want to build an abstraction sort of over that. You're talking about the idea of, like, subscribable things. I think that's where I'm a lot fuzzier. It's much more case-by-case. Where possible, I'd like to introduce some of those things as domain vocabulary so that we'd say, "Well, look, we have a, like, family of products, and they're all subscribable." And maybe, like, the adjective doesn't matter quite as much to our product people, but, you know, because we're using a module in Ruby, we want to lean into the adjective form, and that's fine. But I would at least want some loose connection there.
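For illustration, a hypothetical sketch of that adjective-form module: the Ruby concern is named Subscribable while staying loosely tied to the product idea of a family of subscribable products. All names here are invented:

```ruby
# Hypothetical sketch of a shared "subscribable" capability.
module Subscribable
  def subscribers
    @subscribers ||= []
  end

  def subscribe(subscriber)
    subscribers << subscriber
  end

  def subscribed_by?(subscriber)
    subscribers.include?(subscriber)
  end
end

class Magazine
  include Subscribable
end

class Newsletter
  include Subscribable
end
```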
STEPHANIE: Yeah, that makes sense because I think that ultimately makes for a better product. If we're thinking about, like, how to present a hierarchy of information to a user, like a navigation menu, we would want to group those things that are under that family together, ideally, so that they know how to interact with it.
JOËL: Another thing that I think falls maybe under, like, this higher-level umbrella are things like design patterns. So, maybe because we want to be able to sort of, like, swap things in and out, we're using some form of strategy pattern. That feels like maybe it's a little bit higher level. It interacts with a lot of the domain concepts, but our product team doesn't really need to think in terms of, like, oh, strategies, and swappable things, and, like, flex points in your architecture. So, those would not necessarily be domain vocabulary. Although I could see, like, maybe there's a way where they do get a domain name, and that's great.
STEPHANIE: Oh, I think maybe this is where I disagree with you a little bit. Well, actually, I agreed with what you said at the end [laughs] in terms of how maybe they should be part of the domain vocabulary because I think...I've seen product not fully understand the complexity of the application as it grows over time. And that can lead to sometimes, like, not as great product experience or experience for the user, like, interacting with this product.
And maybe that is something we want to, as developers, if we're starting to see and feel and have maybe even introduced a pattern for...I can't claim to have done this too much, but it's definitely a skill I want to hone in on. But, like, how do I communicate to product folks so that we understand, oh, like, where is it possible for these different types of a subscriber to diverge? Because that is important, I think, in determining the future of a product and, like, where we want to invest in it and where we should focus, like, new features.
JOËL: And oftentimes, when there is that kind of divergence, there probably will be some sort of product-level thinking that needs to happen there. Are we saying, "Hey, we have one of three types of subscribers, and we want to think about that"? Or maybe we want to say, "We have three different ways of processing an application." Maybe it's derived automatically. Maybe it's a dropdown that you have to pick. But let's say it's a dropdown. What do we name that dropdown with the, like, kind of processing that we want to do to an application? Whatever we decide to name that dropdown is probably a good name for that, like, group of strategies, assuming we implement it with a strategy pattern. Maybe we're doing it differently.
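As a sketch of what that could look like, assuming a strategy pattern is the implementation choice: three ways of processing an application, grouped under whatever name the product team gives the dropdown. Every class and method name below is invented for illustration, including the methods called on the application object:

```ruby
# Hypothetical strategies, one per way of processing an application.
class AutomaticProcessing
  def process(application)
    application.approve!
  end
end

class ManualProcessing
  def process(application)
    application.queue_for_review!
  end
end

class CommitteeProcessing
  def process(application)
    application.schedule_committee_vote!
  end
end

# The dropdown's name ("processing method", say) doubles as the name for this
# group of interchangeable strategies.
PROCESSING_METHODS = {
  "automatic" => AutomaticProcessing,
  "manual"    => ManualProcessing,
  "committee" => CommitteeProcessing
}.freeze

def process_application(application, processing_method)
  PROCESSING_METHODS.fetch(processing_method).new.process(application)
end
```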
STEPHANIE: Yeah. The more you talk about that, the more I'm convinced that that's, like, the way I want to be working at least, because you have to know what's there in order to, like, name it. You know, you have to face it, essentially [laughs]. Whereas I think a lot of applications I've worked on fall into the trap of all of those things are obscured way down in the depths of the user flow, where it's like, oh, suddenly, for some reason, you can, like, have a dropdown here that totally changes the behavior, even though you've gotten this far in either the stack trace or even just, like the user journey, as I know you like to branch early in your code.
JOËL: [laughs].
STEPHANIE: But you should also branch early from a user's experience [laughs].
JOËL: In general, I'm just a big fan of having a communication loop between development and product, not only sort of receiving a lot of useful information from the product team about what we want to build. But then because we're encountering this more, like, technical spec that we're writing, have those conversations bubble back to product and say, "Hey, so we talked about a dropdown where there are sort of three different ways of processing an application. Let's talk a little bit more about what it means to have three different ways of processing. And what do we want to name that? Is that accessible to everyone, or are they sort of one-to-one tied with a type of user?"
And all of a sudden, that has just generated probably a lot of questions that product never even thought to ask because they're working on an infinite canvas of possibilities. And it's really helped you as a developer to have better names to write your code and sort of better sketch out the boundaries of the problem you're trying to solve. So, I think it's a really healthy loop to have. I strongly encourage it.
So, we've spent a lot of time talking about thinking about behavior and things like the domain-driven design movement. But a few other things I want to shout out as being really helpful, one is an exercise where you take a problem statement and just underline all of the nouns. That is a great way to get a sense of, like, what is going on here.
More generally, I think a lot of what we're talking about falls under the umbrella of what you might call analysis. And so, digging into different analytic techniques can be a great way to better understand the problem that you're working through. One such tool would be decision tables. So, you have a problem, and you say, "Well, given these inputs, what should the outputs be?"
STEPHANIE: Cool. If there were any techniques or tools that we missed in terms of how you load code in your brain or generate code from your brain [laughs], we would love to know. You can write in to us at [email protected].
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël conducted a thoughtbot mini-workshop on query plans, which Stephanie found highly effective due to its interactive format. They then discuss the broader value of interactive workshops over traditional talks for deeper learning.
Addressing listener questions, Stephanie and Joël explore the strategic use of if and else in programming for clearer code, the importance of thorough documentation in identifying bugs, and the use of Postgres' EXPLAIN ANALYZE, highlighting the need for environment-specific considerations in query optimization.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville, and together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Just recently, I ran a sort of mini workshop for some colleagues here at thoughtbot to dig into the idea of query plans and how to read them, how to use them. And, initially, this was going to be more of a kind of presentation style. And a colleague and I who were sharing this decided to go for a more interactive format where, you know, this is a, like, 45-minute slot.
And so, we set it up so that we did a sort of intro to query plans in about 10 minutes then 15 minutes of breakout rooms, where people got a chance to have a query plan. And they had some sort of comprehension questions to answer about it. And then, 15 minutes together to have each group share a little bit about what they had discovered in their query plan back with the rest of the group, so trying to balance some understanding, some application, some group discussion, trying to keep it engaging. It was a pretty fun approach to sharing information like that.
STEPHANIE: Yeah. I wholeheartedly agree. I got to attend that workshop, and it was really great. Now that I'm hearing you kind of talk about the three different components and what you wanted people attending to get out of it, I am impressed because [laughs] there is, like, a lot more thought, I think, that went into just participant engagement that reflecting on it now I'm like, oh yeah, like, I think that was really effective as opposed to just a presentation. Because you had, you know, sent us out into breakout rooms, and each group had a different query that they were analyzing. You had kind of set up links that had the query set up in the query analyzer. I forget what the tool was called that you used.
JOËL: I forget the name of it, but we will link it in the show notes.
STEPHANIE: Yeah. It was helpful for me, though, because, you know, I think if I were just to have learned about it in a presentation or even just looked at, you know, screenshots of it on a slide, that's different still from interacting with it and feeling more confident to use it next time I find myself in a situation where it might be helpful.
JOËL: It's really interesting because that was sort of the goal of it was to make it a bit more interactive and then, hopefully, helping people to retain more information than just a straight up, like, presentation would be. I don't know how you feel, I find that often when I go to a place like, let's say, RailsConf, I tend to stay away from more of the workshop-y style events and focus more on the talks. Is that something that you do as well?
STEPHANIE: Yeah. I have to confess that I've never attended a workshop [laughs] at a conference. I think it's partly my learning style and also partly just honestly, like, my energy level when I'm at the conference. I kind of just want to sit back. It's on my to-do list. Like, I definitely want to attend one just to see what it's like. And maybe that might even inspire me to want to create my own workshop. But it's like, once I'm in it, and, you know, like, everyone else is also participating, I'm very easily peer pressured [laughs]. So, in a group setting, I will find myself enjoying it a lot more. And I felt that kind of same way with the workshop you ran for our team.
Though, I will say a funny thing that happened was that when I went out into my breakout group with another co-worker, and we were trying to grok this query that you gave us, we found out that we got the hardest one, the most complicated one [laughs] because there were so many things going on. There was, like, multiple, like, you know, unions, some that were, like, nested, and then just, like, a lot of duplication as well, like, some conditions that were redundant because of a different condition happening inside of, like, an inner statement. And yeah, we were definitely scratching our heads for a bit and were very grateful that we got to come back together as a group and be like, "Can someone please help? [laughs] Let's figure out what's going on here."
JOËL: Sort of close that loop and like, "Hey, here's what we saw. What does everybody else see?"
STEPHANIE: Yeah, and I appreciated that you took queries from actual client projects that you were working on.
JOËL: Yeah, that was the really fun part of it was that these were not sort of made-up queries to illustrate a point. These were actual queries that I had spent some time trying to optimize and where I had had to spend a lot of time digging into the query plans to understand what was going on. And it sounds like, for you, workshops are something that is...they're generally more engaging, and you get more value out of them. But there's higher activation energy to get started. Does that sound right?
STEPHANIE: Yeah, that sounds right. I think, like, I've watched so many talks now, both in person and on YouTube, that a lot of them are easily forgettable [laughs], whereas I think a workshop would be a lot more memorable because of that interactivity and, you know, you get out of it what you put in a little bit.
JOËL: Yeah, that's true. Have you looked at the schedule for RailsConf 2024 yet? And are there any workshops on there that you're maybe considering or that maybe have piqued your interest?
STEPHANIE: I have, in fact, and maybe I will check attending a workshop [laughs] off my bucket list this year. There are two that I'm excited about. Unfortunately, they're both at the same time slot, so I --
JOËL: Oh no. You're going to have to choose.
STEPHANIE: I know. I imagine I'll have to choose. But I'm interested in the Let's Extend Rails With A Gem workshop by Noel Rappin and the Vision For Inclusion workshop run by Todd Sedano. The Rails gem one I'm excited about because it's just something that I haven't had to do really in my dev career so far, and I think I would really appreciate having that guidance. And also, I think that would be motivation to just get that, like, hands-on experience with it. Otherwise, you know, this is something that I could say that I would want to do and then never get [chuckles] around to it.
JOËL: Right, right. And building a gem is the sort of thing that I think probably fits better in a workshop format than in a talk format.
STEPHANIE: Yeah. And I've really appreciated all of Noel's content out there. I've found it always really practical, so I imagine that the workshop would be the same.
JOËL: So, other than poring over the RailsConf schedule and planning your time there, what has been new for you this week?
STEPHANIE: I have a really silly one [laughs].
JOËL: Okay.
STEPHANIE: Which is, yesterday I went out to eat dinner to celebrate my partner's birthday, and I experienced, for the first time, robots [laughter] at this restaurant. So, we went out to Hot Pot, and I guess they just have these, like, robot, you know, little, small dish delivery things. They were, like, as tall as me, almost, at least, like, four feet. They were cat-themed.
JOËL: [laughs]
STEPHANIE: So, they had, like...shaped like cat...they had cat ears, and then there was a screen, and on the screen, there was, like, a little face, and the face would, like, wink at you and smile.
JOËL: Aww.
STEPHANIE: And I guess how this works is we ordered our food on an iPad, and if you ordered some, like, side dishes and stuff, it would come out to you on this robot cat with wheels.
JOËL: Very fun.
STEPHANIE: This robot tower cat. I'm doing a poor job describing it because I'm still apparently bewildered [laughs]. But yeah, I was just so surprised, and I was not as...I think I was more, like, shocked than delighted. I imagine other people would find this, like, very fun. But I was a little bit bewildered [laughs].
The other thing that was very funny about this experience is that these robots were kind of going down the aisle between tables, and the aisles were not quite big enough for, like, two-way traffic. And so, there were times where I would be, you know, walking up to go use the restroom, and I would turn the corner and find myself, like, face to face with one of these cat robot things, and, like, it's starting to go at me. I don't know if it will stop [laughs], and I'm the kind of person who doesn't want to find out.
JOËL: [laughs]
STEPHANIE: So, to avoid colliding with this, you know, food delivery robot, I just, like, ran away from it [laughs].
JOËL: You don't know if they're, like, programmed to yield or something like that.
STEPHANIE: Listen, it did not seem like it was going to stop.
JOËL: [laughs]
STEPHANIE: It got, like, I was, you know, kind of standing there frozen in paralysis [laughs] for a little while. And then, once it got, I don't know, maybe two or three feet away from me, I was like, okay, like, this is too close for comfort [laughs]. So, that was my, I don't know, my experience at this robot restaurant. Definitely starting to feel like I'm in the, I don't know, is this the future? Someone, please let me know [laughs].
JOËL: Is this a future that you're excited or happy about, or does this future seem a little bit dystopian to you?
STEPHANIE: I was definitely alarmed [laughter]. But I'm not, like, a super early adopter of new technology. These kinds of innovations, if you will, always surprise me, and I'm like, oh, I guess this is happening now [laughs]. And I will say that the one thing I did not enjoy about it is that there was not enough room to go around this robot. It definitely created just pedestrian traffic issues. So, perhaps this could be very cool and revolutionary, but also, maybe design robots for humans first.
JOËL: Or design your dining room to accommodate your vision for the robots. I'm sure that flying cars and robots will solve all of this, for sure.
STEPHANIE: Oh yeah [laughter]. Then I'll just have to worry about things colliding above my head.
JOËL: And for the listeners who cannot see my face right now, that was absolutely sarcasm [laughs]. Speaking of our listeners, today we're going to look at a group of different listener questions. And if you didn't know that you could send in a question for Stephanie and me to discuss, well, you can do that. Just send us an email at [email protected]. And sometimes, we put it into a regular episode. Sometimes, we combine a few and sort of make a listener question episode, which is what we're doing today.
STEPHANIE: Yeah. It's a little bit of a grab bag.
JOËL: Our first question comes from Yuri, and Yuri actually has a few different questions. But the first one is asking about Episode 349, which is pretty far back. It was my first episode when I was coming on with Chris and Steph, and they were sort of handing the baton to me as a host of the show. And we talked about a variety of hot takes or unpopular opinions.
Yuri mentions, you know, a few that stood out to him: one about SPAs being not so great, one about how you shouldn't need to have a side project to progress in your career as a developer, one about developer title inflation, one about DRY and how it can be dangerous for a mid-level dev, avoiding let in RSpec specs, the idea that every if should come with an else, and the idea that developers shouldn't be included in design and planning. And Yuri's question is specifically the question about if statements, that every if should come with an else. Is that still an opinion that we still have, and why do we feel that way?
STEPHANIE: Yeah, I'm excited to get into this because I was not a part of that episode. I was a listener back then when it was still Steph and Chris. So, I am hopefully coming in with a different, like, additional perspective to add as well while we kind of do a little bit of a throwback. So, the one about every if should come with an else, that was an unpopular opinion of yours. Do you mind kind of explaining what that means for you?
JOËL: Yeah. So, in general, Ruby is an expression-oriented language. So, if you have an if that does not include an else, it will implicitly return nil, which can burn you. There may be some super expert programmers out there that have never run into undefined method for nil:NilClass, but I'm still the kind of programmer who runs into that every now and then. And so, implicit nils popping up in my code is not something I generally like. I also generally like having an explicit else for control flow purposes, making it a little bit clearer where the flow of control goes and what the actual paths through a particular method are.
And then, finally, doing ifs and elses instead of doing them sort of inline or as trailing conditionals or things like that, by having them sort of all on their own lines and balancing out, the indentation itself helps you scan the code a little bit more. So, deeper indentation tells you, okay, we're, like, nesting multiple conditions, or something like that. And so, it makes it a little bit easier to spot complexity in the code. You can apply, and I want to say this is from Sandi Metz, the squint test.
STEPHANIE: Yeah, it is.
JOËL: Where you just kind of, like, squint at your code so you're not looking at the actual characters but more at the structure, and the indentation is actually a friend there rather than something to fight. So, that was sort of the original, I think, idea behind that. I'm curious, in your experience, if you like to balance your conditionals, your ifs with an else, or if you like to do sort of hanging ifs.
STEPHANIE: Hanging ifs, I like that phrase that you just coined there. I agree with your opinion, and I think it's especially true if you're returning values, right? I mean, in Ruby, you kind of always are. But if you are caring about return values, like you said, to avoid that implicit nil situation, I find, especially if you're writing tests for that code, it's really easy, you know, if you spot that condition, you're like, okay, great. Like, this is a path I need to test.
But then, oftentimes, you don't test that implicit path, and if you don't enter the condition, then what happens, right? So, I think that's kind of what you're referring to when you talk about both. It's, like, easier to spot in terms of control flow, like, all the different paths of execution, as well as, yeah, like, saving you the headaches of some bugs down the line.
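A minimal sketch of the implicit-nil pitfall being described; the names here are invented for illustration:

```ruby
# With no else branch, the method quietly returns nil on the other path, and
# the caller blows up somewhere far away from the real cause.
def discount_percentage(order)
  if order.bulk?
    10
  end
end

# discount_percentage(regular_order) # => nil
# nil * order.total                  # => NoMethodError: undefined method `*' for nil

# Balancing the if makes the second path explicit, greppable, and easy to test:
def discount_percentage(order)
  if order.bulk?
    10
  else
    0
  end
end
```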
One thing that I thought about when I was kind of revisiting that opinion of yours is the idea of, like, what are you trying to communicate is different or special about this condition when you are writing an if statement? And, in my head, I kind of came up with a few different situations you might find yourself in, which is, one, where you truly just have, like, a special case, and you're treating that completely differently. Another is when you have more of a, like, binary situation, where you want to kind of highlight either...more of a dichotomy, right? It's not necessarily that there is a default but that these are two opposite things. And then, a third situation in which you have multiple conditions, but you only happen to have two right now.
JOËL: Interesting. And do you think that, like, breaking down those situations would lead you to pick different structures for writing your conditionals?
STEPHANIE: I think so.
JOËL: Which of those scenarios do you think you might be more likely to reach for an if that doesn't have an else that goes with it?
STEPHANIE: I think that first one, the special case one. And in Yuri's email, he actually asked, as a follow-up, "Do we avoid guard clauses as a result of this kind of heuristic or rule?" And I think that special case situation is where a guard clause would shine because you are highlighting it by putting it at the top of a method, and then saying like, you know, "Bail out of this" or, like, "Return this particular thing, and then don't even bother about the rest of this code."
JOËL: I like that. And I think guard clauses they're not the first thing I reach for, but they're not something I absolutely avoid. I think they need to be used with care. Like you said, they have to be in the top of your method. If you're adding returns and things that break out of your method, deep inside a conditional somewhere, 20 lines into your method, you don't get to call that a guard clause anymore. That's something else entirely. I think, ideally, guard clauses are also things that will break out of the method, so they're maybe raising an exception. Maybe they're returning a value. But they are things that very quickly check edge cases and bail so that the body of the method can focus on expecting data in the correct shape.
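A small sketch of a guard clause in the sense described here: the edge case is checked at the very top and bails, so the rest of the method can assume well-shaped input:

```ruby
def average(numbers)
  return 0 if numbers.empty? # guard clause: bail out early on the edge case

  numbers.sum / numbers.size.to_f
end
```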
STEPHANIE: I have a couple more thoughts about this; one is I'm reminded of back when we did that episode on kind of retroing Sandi Metz's Rules For Developers. I think one of the rules was: methods should only be five lines of code. And I recall we'd realized, at least I had realized for the first time, that if you write an if-else condition in Ruby, that's exactly five lines [laughs].
And so, now that I'm thinking about this topic, it's cool to see that a couple of these rules converge a little bit, where there's a bit of explicitness in saying, like, you know, if you're starting to have more conditions that can't just be captured in a five-line if-else statement, then maybe you need something else there, right? Something perhaps polymorphic or just some way to have branched earlier.
JOËL: That's true. And so, even, like, you were talking about the exceptional edge cases where you might want to bail. That could be a sign that your method is doing too much, trying to like, validate inputs and also run some sort of algorithm. Maybe this needs to be some sort of, like, two-step thing, where there's almost, like, a parsing phase that's handled by a different object or a different method that will attempt to standardize your inputs and raise the appropriate errors and everything. And then, your method that has the actual algorithm or code that you're trying to run can just assume that its inputs are in the correct shape, kind of pushing the uncertainty to the edges.
And, you know, if you've only got one edge case to check, maybe it's not worth it to, like, build this in layers, or separate out the responsibilities, or whatever. But if you have a lot, then maybe it does make sense to say, "Let's break those two responsibilities out into two places."
STEPHANIE: Yeah. And then, the one last kind of situation I've observed, and I think you all talked about this in the Unpopular Opinions episode, but I'm kind of curious how you would handle it, is side effects that only need to be applied under a certain condition. Because I think that's when, if we're focusing less on return values and more just on behavior, that's when I will usually see, like, an if something, then do this that doesn't need to happen for the other path.
JOËL: Yes. I guess if you're doing some sort of side effect, like, I don't know, making a request to an API or writing to a file or something, having, like, else return nil or some other sentinel value feels a little bit weird because now you're caring about side effects rather than return values, something that you need to keep thinking of. And that's something where I think my thing has evolved since that episode is, once you start having multiple of these, how do they compose together? So, if you've got if condition, write to a file, no else, keep going. New if condition, make a request to an API endpoint, no else, continue.
What I've started calling independent conditions now, you have to think about all the different ways that they can combine, and what you end up having is a bit of a combinatorial explosion. So, here we've got two potential actions: writing to a file, making a request to an API. And we could have one or the other, or both, or neither could happen, depending on the inputs to your method, and maybe you actually want that, and that's cool.
Oftentimes, you didn't necessarily want all of those, especially once you start going to three, four, five. And now you've got that, you know, explosion, like, two to the five. That's a lot of paths through your method. And you probably didn't really need that many. And so, that can get really messy. And so, sometimes the way that an if and an else work where those two paths are mutually exclusive actually cuts down on the total number of paths through your method.
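A sketch of what "independent conditions" means for the number of paths. The helper names are invented for illustration, and the two versions are not behaviorally equivalent; the point is just the path count:

```ruby
# Two ifs with no elses: four combinations are possible -- neither side
# effect, either one alone, or both.
def publish(post)
  notify_subscribers(post) if post.public?
  write_audit_log(post) if post.audited?
end

# An if/else makes the branches mutually exclusive, so the same method has
# only two paths instead of four.
def publish(post)
  if post.public?
    notify_subscribers(post)
  else
    write_audit_log(post)
  end
end
```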
STEPHANIE: Hmm, I like that. That makes a lot of sense to me. I have definitely seen a lot of, like, procedural code, where it becomes really hard to tell how to even start relating some of these concepts together. So, if you happen to need to run a side effect, like writing to a file or, I don't know, one common one I can think of is notifying something or someone in a particular case, and maybe you put that in a condition. But then there's a different branching path that you also need to kind of notify someone for a similar reason, maybe even the same reason.
It starts to become hard to connect, like, those two reasons. It's not something that, like, you can really scan for or, like, necessarily make that connection because, at that point, you're going down different paths, right? And there might be other signals that are kind of confusing things along the way. And it makes it a lot harder, I think, to find a shared abstraction, which could ultimately make those really complicated nested conditions a little more manageable or just, like, easier to understand at a certain complexity. I definitely think there is a threshold.
JOËL: Right. And now you're talking about nested versus non-nested because when conditions are sort of siblings of each other, an if-else behaves differently from two ifs without an else. I think a classic situation where this pops up is when you're structuring code for a wizard, a multi-step form. And, oftentimes, people will have a bunch of checks. They're like, oh, if this field is present, do these things. If this field is present, do these things.
And then, it becomes very tricky to know what the flow of control is, what you can expect at what moment, and especially which actions might get shared across multiple steps. Is it safe to refactor in one place if you don't want to break step three? And so, learning to think about the different paths through your code and how different conditional structures will impact that, I think, was a big breakthrough for me in terms of taking the next logical step in terms of thinking, when do I want to balance my ifs and when do I not want to? I wrote a whole article on the topic. We'll link it in the show notes.
So, Yuri, thanks for a great question, bringing us back into a classic developer discussion. Yuri also asks or gives us a bit of a suggestion: What about revisiting this topic and doing an episode on hot takes or unpopular topics? Is that something that you'd be interested in, Stephanie?
STEPHANIE: Oh yeah, definitely, because I didn't get to, you know, share my hot topics the last episode [laughs]. [inaudible 24:23]
JOËL: You just got them queued up and ready to go.
STEPHANIE: Yeah, exactly. So, yeah, I will definitely be brainstorming some spicy takes for the show.
JOËL: So, Yuri, thanks for the questions and for the episode suggestion.
STEPHANIE: So, another listener, Kevin, wrote in to us following up from our episode on Module Docs and about a different episode about Multi-dimensional Numbers. And he mentioned a gem that he maintains. It's called Ruby Units. And it basically handles the nitty gritty of unit conversions for you, which I thought was really neat.
He mentioned that it has a numeric class, and it knows how to do math [laughs], which I would find really convenient because that is something that I have been grateful not to have to really do since college [laughs], at least those unit conversions and all the things that I'd probably learned in math and physics courses [laughs]. So, I thought that was really cool; it's definitely one to check out if you frequently work with units. It seemed like it would be something that would make sense for a domain that is more scientific or deals with that kind of data.
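Roughly what a conversion looks like with the Ruby Units gem; treat the exact method names here as approximate and check the gem's README for the current API:

```ruby
require "ruby-units"

# Build a tagged quantity and convert it to another unit.
distance = RubyUnits::Unit.new("10 km")
distance.convert_to("miles") # => roughly 6.21 mi

# Quantities participate in arithmetic, and mixing incompatible dimensions
# raises an error instead of silently producing a meaningless number.
RubyUnits::Unit.new("1 hour") + RubyUnits::Unit.new("30 min") # => 1.5 h
```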
JOËL: I'm always a huge fan of anything that tags raw numbers that you're working with with a quantity rather than just floating raw numbers around. It's so easy to make a mistake and either treat a number as a quantity you didn't think of, or make some sort of invalid operation on it, or even to think you have a value in a different unit than you do. You think you're dealing with...you know you have a time value, but you think it's in seconds. It's actually in milliseconds. And then, you get off by some big factor.
These are all mistakes that I have personally made in my career, so leaning on a library to help avoid those mistakes, have better information hiding for the things that really aren't relevant to the work that I'm trying to do, and also, kind of reify these ideas so that they have sort of a name, and they're, like, their own object, their own thing that we can interact with in the app rather than just numbers floating around, those are all big wins from my perspective.
STEPHANIE: I also just thought of a really silly use case for this that is, I don't know, maybe I'll have to experiment with this. But every now and then, I find the need to have to convert a unit, and I just pop into Google, and I'm like, please give me, you know, I'll search for 10 kilometers in miles or something [laughs]. But then I have to...sometimes Google will figure it out for me, and sometimes it will just give me a list of weird conversion websites that all have really old-school UI [laughs]. Do you know what I'm talking about here?
Anyway, I would be curious to see if I could use this gem as a command-line interface [laughs] for me without having to go to my browser and roll the dice with freecalculator.com or something like that [laughs].
JOËL: One thing that's really cool with this library that I saw is the ability to define your own units, and that's a thing that you'll often encounter having to deal with values that are maybe not one of the most commonly used units that are out there, dealing with numbers that might mean a thing that's very particular to your domain. So, that's great that the library supports that. I couldn't see if it supports multi-dimensional units. That was the episode that inspired the comment. But either way, this is a really cool library. And thank you, Kevin, for sharing this with us.
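On the custom-unit point, ruby-units does expose a way to define your own units in terms of ones it already knows. This is a rough sketch from memory, so treat the exact block API, and the made-up "sprint" unit, as assumptions to verify against the gem's README:

    require "ruby-units"

    # Define a domain-specific unit built from units the gem already knows.
    RubyUnits::Unit.define("sprint") do |unit|
      unit.definition = RubyUnits::Unit.new("2 weeks")   # hypothetical domain unit
      unit.aliases    = %w[sprint sprints]
    end

    puts RubyUnits::Unit.new("3 sprints").convert_to("days")   # => 42 days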
STEPHANIE: Kevin also mentions that he really enjoys using YARD docs. And we had done that whole episode on Module Docs and your experience writing them. So, you know, your people are out there [laughs].
JOËL: Yay.
STEPHANIE: And we talked about this a little bit; I think that writing the docs, you know, on one hand, is great for future readers, but, also, I think has the benefit of forcing the author to really think about their inputs and outputs, as Kevin mentions. He's found bugs by simply going through that process in designing his code, and he also recommends Solargraph and Solargraph's VSCode extension, which I suspect really makes it easy to navigate a complex codebase and highlight just what you need to know when working with different APIs for your classes. So, I recently kind of switched to the Ruby LSP, built by Shopify, but I'm currently regretting it because nothing is working for me right now. So, that might be the push that I need [laughs] to go back to using Solargraph.
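For anyone who hasn't written YARD docs before, the kind of annotation Kevin is describing looks something like this; the method itself is invented for the example:

    # Renders a distance for display in the UI.
    #
    # @param distance_km [Numeric] the distance in kilometers
    # @param precision [Integer] how many decimal places to keep
    # @return [String] the distance in miles, e.g. "6.21 mi"
    def display_in_miles(distance_km, precision: 2)
      miles = distance_km * 0.621371
      "#{miles.round(precision)} mi"
    end

Solargraph reads those @param and @return tags to power completion and hover documentation in the editor.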
JOËL: It's interesting that Kevin mentions finding bugs while writing docs because that has absolutely been my experience. And even in this most recent round, I was documenting some code that was just sort of there. It wasn't new code that I was writing. And so, I had given myself the rule that these would be documentation-only PRs, no code changes. And I found some weird code, and I found some code that I'm 98% sure is broken.
And I had to have the discipline to just put a notice in the documentation to be like, "By the way, this is how the method should work. I'm pretty sure it's broken," and then, maybe come back to it later to fix it. But it's amazing how trying to document code, especially code that you didn't write yourself, really forces you to read it in a different way, interact with it in a different way, and really, like, understand it in a deep way that surprised me a little bit the first time I did it.
STEPHANIE: That's cool. I imagine it probably didn't feel good to be like, "Hey, I know that this is broken, and I can't fix it right now," but I'm glad you did. That takes a lot of, I don't know, I think, courage, in my opinion [laughs], to be like, "Yeah, I found this, and I'm going to, you know, like, raise my hand acknowledging that this is how it is," as opposed to just hiding behind broken functionality that no one [laughs] has paid attention to.
JOËL: And it's a thing where if somebody else uses this method and it breaks in a way, and they're like, "Well, the docs say it should behave like this," that would be really frustrating. If the docs say, "Hey, it should behave like this, but it looks like it's broken," then, you know, I don't know, I would feel a little bit vindicated as a person who's annoyed at the code right now.
STEPHANIE: For sure.
JOËL: Finally, we have a message from Tim about using Postgres' EXPLAIN ANALYZE. Tim says, "Hey, Joël, in the last episode, you talked a bit about PG EXPLAIN ANALYZE. As you stated, it's a great tool to help figure out what's going on with your queries, but there is a caveat you need to keep in mind. The query planner uses statistics gathered on the database when making decisions about how to fetch records. If there's a big difference between your dev or staging database and production, the query may make different decisions.
For example, if a table has a low number of records in it, then the query planner may just do a table scan, but in production, it might use an index. Also, keep in mind that after a schema changes, it may not know about new indexes or whatever unless an explicit ANALYZE is done on the table." So, this is really interesting because, as Tim mentions, EXPLAIN ANALYZE doesn't behave exactly the same in production versus in your local development environment.
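If you want to try this from a Rails console instead of psql, something along these lines works; the Order model and column are placeholders, and note that EXPLAIN ANALYZE actually executes the query, so be careful with anything that writes:

    # Ask Postgres how it actually ran the query, timings included.
    relation = Order.where(status: "shipped")
    plan = ActiveRecord::Base.connection.execute("EXPLAIN ANALYZE #{relation.to_sql}")
    plan.each { |row| puts row["QUERY PLAN"] }

    # After bulk loads or schema changes, refresh the planner's statistics
    # so it isn't making decisions based on stale row counts.
    ActiveRecord::Base.connection.execute("ANALYZE orders")

That plan output is the same text you can paste into a query plan visualizer.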
STEPHANIE: When you were trying to optimize some slow queries, where were you running the ANALYZE command?
JOËL: I used a combination. I mostly worked off of production data. I did a little bit on a staging database that had not the same amount of records and things. That was pretty significant. And so, I had to switch to production to get realistic results. So, yes, I encountered this kind of firsthand.
STEPHANIE: Nice. For some reason, this comment also made me think of..., and I wanted to plug a thoughtbot shell command that we have called Parity, which lets you basically download your production database into your local dev database if you use Heroku. And that has come in handy for me, obviously, in regular development, but would be really great in this use case.
JOËL: With all of the regular caveats around security, and PII, and all this stuff that come with dealing with production data. But if you're working with real production data, you should be cleared and, like, trained for access to all of that. I also want to note that the queries that you all worked with on Friday are also from the production database.
STEPHANIE: Really?
JOËL: So, you got to see what it actually does, what the actual timings were.
STEPHANIE: I'm surprised by that because we were using, like, a web-based tool to visualize the query plans. Like, what were you kind of plugging into the tool for it to know?
JOËL: So, the tool accepts a query plan, which is the text output you get from running EXPLAIN ANALYZE on a SQL query.
STEPHANIE: Okay. So, it's just visualizing it.
JOËL: Correct. Yeah. So, you've got this query plan, which comes back as this very intimidating block of, like, text, and arrows, and things like that. And you plug it into this web UI, and now you've got something that is kind of interactive, and visual, and you can expand or collapse nodes. And it gives you tooltips on different types of information and where you're spending the most time. So, yeah, it's just a nicer way to visualize that data that comes from the query plan.
STEPHANIE: Gotcha. That makes sense.
JOËL: So, that's a very important caveat. I don't think that's something that we mentioned on the episode. So, thank you, Tim, for highlighting that. And for all of our listeners who were intrigued by leaning into EXPLAIN ANALYZE and query plan viewers to debug your slow queries, make sure you try it out in production because you might get different results otherwise.
STEPHANIE: So, yeah, that about wraps up our listener topics in recent months. On that note, Joël, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie revisits the concept of "spiking"—a phase of exploration to determine the feasibility of a technical implementation or to address unknowns in feature requests—sharing her recent experiences with a legacy Rails application. Joël brings a different perspective by discussing his involvement with a client project that heavily utilizes the dry-rb suite of gems, highlighting the learning curve associated with adapting to new patterns and libraries.
Joël used to be much more idealistic and has moved to be more pragmatic. Stephanie has moved the other way. So together, Stephanie and Joël engage in a philosophical discussion on being an idealistic versus a pragmatic programmer. They explore the concept of programming as a blend of science and art, where technical decisions are not only about solving problems but also about expressing ideas and building shared understandings within a team.
Transcript:
JOËL: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn, and together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, a few weeks ago, we did an episode on spiking in response to a listener question. And I wanted to kind of revisit that topic for a little bit because I've been doing a lot of spiking on my client project. And for those who are not familiar, the way that I understand or define spikes is kind of as an exploration phase to figure out if a technical implementation might work. Or if you have a feature request with some unknowns, you can spend some time-boxed spiking to figure out what those unknowns might be.
And I'm working on your typical legacy Rails application [laughs]. And I think one thing that we talked about last time was this idea of, at what point does spiking end up being just working on the feature [laughs]? And I think that's especially true in an older codebase, where you kind of have to go down a few rabbit holes, maybe, just to even find out if something will trip you up down the line.
And the way I approached that this time around was just, like, identifying the constraints and putting a little flag there for myself. Like, these were rabbit holes that I could go down, but, you know, towards the initial beginning phase of doing the spiking, I decided not to. I just kind of bookmarked it for later.
And once I had identified the main constraints, that was when I was like, okay, like, what kind of solutions can I come up with for these constraints? And that actually then helped me kind of decide which ones we're pursuing a little bit more to get, like, the information I needed to ultimately make a decision about whether this was worth doing, right?
It kind of kept me...I'm thinking about, you know, when you are bowling with those safety guards [laughs], it keeps your ball from just rolling into the gutter. I think it helped with not going too deep into places that I may or may not be super fruitful while also, I think, giving me enough information to have a more realistic understanding of, like, what this work would entail.
JOËL: Would you say that this approach that you're taking is inspired or maybe informed by the conversation we had on the episode?
STEPHANIE: I was especially interested in avoiding the kind of binary of like, no, we can't do this because the system just, you know, isn't able to support it, or it's just too...it would be too much work. That was something I was really, like you said, kind of inspired by after that conversation because I wanted to avoid that trap a little bit.
And I think another really helpful framing was the idea of, like, okay, what would need to be done in order to get us to a place where this could be possible? And that's why I think identifying those constraints was important because they're not constraints forever. Like, we could do something about them if we really wanted to, so kind of avoiding the, like, it's not possible, right? And saying like, "It could be. Here's all the things that we need to do in order to make it possible." But I think that helped shift the conversation, especially with stakeholders and stuff, to be a little bit more realistic and collaborative.
So, Joël, what's new in your world?
JOËL: So, I'm also on a new client project, and a thing that's been really interesting in this codebase is that they've been using the dry-rb suite of gems pretty heavily. And I've seen a lot about the suite of gems. I've read about them. Interestingly, this is kind of the first time that I've been on a codebase that sort of uses them as a main pattern in the app. So, there's been a bit of a learning curve there, and it's been really interesting.
STEPHANIE: This is exciting to me because I know you have a lot of functional programming background, also, so it's kind of surprising that you're only now, you know, using something that explicit from functional languages in Ruby. And I'm curious: what's the learning curve, if not the paradigm? Like, what are you kind of encountering?
JOËL: I think there's a little bit of just the translation. How do these gems sort of approach this? So, they have to do a couple of, like, clever Ruby things to make some of these features work. Some of these also will have different method names, so a lot of just familiarizing myself with the libraries. Like, oh, well, this thing that I'm used to having called a particular thing has a slightly different name here or maybe not having all of the utilities. I was like, oh, how do we traverse with this particular library? Then you have to, like, look it up.
So, it's a lot of like, how do I do this thing I know how to do in, let's say, Elm? How do I translate that into Ruby? But then, also, some of the interplay of how that works in code that also does some very kind of imperative side effecty things also written by a team that is getting used to the pattern. And so, you'll sort of see things where people are pulling things in, but maybe you don't fully understand the deeper underlying approach that's meant to be used.
STEPHANIE: Have you noticed any use cases where the dry-rb patterns really shine in your application?
JOËL: A thing that's nice is that I think it really forces you to think about your edge cases in a way that sometimes Ruby developers play very fast and loose with "Yeah, whatever, it will never be nil." Push to production immediately start getting NoMethodError in your bug tracker. I never do this, by the way, but you know.
STEPHANIE: [laughs].
JOËL: Speaking from a friend's experience [laughs].
STEPHANIE: Asking for a friend, yeah [laughs].
JOËL: I think a thing that I've sort of had to figure out sort of every time I deal with these patterns in different languages is just the importance of good composition and good separation. Because you're adding these sort of wrapper context around things, if you're constantly wrapping and unwrapping, you're like, check things inside, and then do the next thing, and then unwrap again and branch and check and do the next thing, that code becomes really clunky in a way that you just sort of expect to do if you're just writing code in regular Ruby with a nil. But it doesn't really work with a dry-rb maybe or a result.
So, the pattern that I have found that works really well is to extract every operation that could fail so that it gives you a result back. Extract that out into its own separate function that will construct a success or a failure, and then have your sort of main code that wants to do a bunch of these things together.
All it does is use some of the dry-rb helper methods to compose all of these together, whether that's just some sort of, like, do notation, or binding, or fmap, or something like that, which allows you to have sort of individual chunks that can fail, and then one sort of aggregator piece of code that just finds a way to combine all of them nicely. And that avoids you having to do all this repetition.
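As a concrete sketch of that shape, here's roughly what it looks like with dry-monads' Result and Do notation; the class, operations, and models are invented for illustration, and the details assume a recent 1.x version of dry-monads:

    require "dry/monads"

    class PlaceOrder
      include Dry::Monads[:result, :do]

      def call(params)
        attrs = yield validate(params)   # each step returns Success(...) or Failure(...)
        order = yield persist(attrs)
        yield notify(order)

        Success(order)                   # only reached if every step above succeeded
      end

      private

      def validate(params)
        params[:sku] ? Success(params) : Failure(:missing_sku)
      end

      def persist(attrs)
        order = Order.create(attrs)      # Order is a placeholder model
        order.persisted? ? Success(order) : Failure(:not_saved)
      end

      def notify(order)
        OrderMailer.confirmation(order).deliver_later   # placeholder mailer
        Success(order)
      end
    end

    result = PlaceOrder.new.call(sku: "ABC-123")
    if result.success?
      puts "placed order #{result.value!.id}"
    else
      puts "could not place order: #{result.failure}"
    end

Each step stays small and wraps its own success or failure, and the call method just composes them, which is the aggregator piece being described here.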
STEPHANIE: Yeah, that makes a lot of sense.
JOËL: It's a pattern, I think; I had to learn the hard way when I was working with Elm. Because if you're taking a potential nullable value and then you want to do things with it but then that potential operation is also nullable because the input was potentially null, and then that just sort of propagates all the way down the chain. So, my whole chain of functions now is doing checks for nullability. And in Ruby, I could just be like, no, I checked it in the first function. I can then just trust that it's not null down the chain.
Elm doesn't do the like, trust me, bro. The compiler will force you to validate every time, and then the code just blows up, and it gets really painful. So, I had to start thinking about new models of thinking that would separate out code that actually needs to care and code that doesn't need to care about nullability. And I wrote an article about that. That turned into actually a conference talk as well. And these sort of ideas have served me really well at Elm. And I think these translate pretty well to dry-rb as well. That's something that I'm exploring, but the principles seem like they're not tied to a particular language.
STEPHANIE: Yeah, and it's kind of cool that you experienced all of that in working with Elm, where a compiler was there to yell at you [laughs] and kind of forcing you to...I don't know if do the right thing is the right word, but kind of think in the way that it wants you to think. And I can see people who are coming from Ruby and starting to experiment with dry-rb maybe needing a bit of that since it's not built into the tooling, just in a code review or just in conversations among devs.
JOËL: [inaudible 09:26] Beyond just the idea of wrapping your values and making sure you check them all the time, there are patterns that make that easier or more painful. And even in something like Elm, the compiler would yell at me would make sure I could not have a runtime error by forgetting to check for nullability. It did not prevent me from writing monstrosities of nested repeated conditionals checking if nil, if nil, if nil. That I had to figure out some sort of, like, higher-level patterns that play nicely with that kind of software.
And I think these are things that people have to sort of encounter, feel the pain, feel the frustration, and then move into those better patterns after the fact. And sometimes that's not easy because it's not obvious why that's a valuable pattern to approach.
STEPHANIE: Yeah, I agree completely. Speaking of following patterns and kind of arriving at maybe an ideal version of [chuckles], you know, what you'd like your code to do, you know, to build what you are looking to build [laughs]...this is my very poor attempt at a smooth transition that Joël [laughter] manages to be able to do [laughs] whenever we're trying to shift into the topic of the episode. Anyway, today, we were hoping to talk a little bit about this idea between being an idealistic programmer and a pragmatic programmer and the different journeys that we've each been on in arriving kind of how to balance the two.
JOËL: Yeah, you know, I think neither of these are absolutes, right? It's a spectrum. You probably move around that spectrum from day to day, and then probably, like, more general trends over your career. But I'm curious, for you today, if you had to pick one of those labels, like, which sort of zone of the spectrum would you put yourself in? Do you think you're more idealistic or more pragmatic?
STEPHANIE: I think I'm in a more of an idealistic zone right now.
JOËL: Would you say you're kind of like middle trending idealistic or kind of, like, pretty far down the idealistic side?
STEPHANIE: Middle trending idealistic. I like that way of describing it. I want to know where you are. And then I kind of wanted to try to take a step back and even define what that means for both of us.
JOËL: Right, right. I think the way I'd probably describe myself is a recovering idealist.
STEPHANIE: Oof. Yeah [laughs].
JOËL: I think there was a time where I was really idealistic. I really like knowing sort of underlying theory of software construction, broader patterns. By patterns here, I don't mean necessarily, like, you know, the Gang of Four, but just general sort of approaches that work well and using that to guide my work. But I've also been trending a lot more into the, like, pragmatic side of things in the past few years.
STEPHANIE: So, could you kind of tell me a little bit about what does pragmatic mean for you and what does ideal mean for you?
JOËL: So, I think the pragmatic side of me it's about delivering working software. If you're not shipping anything, you know, the most beautiful piece of art that you've created just warms your heart is useless. So, I think I'm sort of at the extreme end of pragmatism, right? It's all about shipping and shipping fast. And, in the end, that's generally the goal of software.
On the more idealistic side, the sort of doing everything kind of perfect or by the book, or, you know, maybe in a way that brings you personal satisfaction, oftentimes, at the expense of shipping and vice versa. Sometimes shipping comes at the expense of writing absolutely terrible code, but, of course, you know, there's value in both. Shipping is what actually delivers value to your users, your company, yourself if you're using the software.
But if you're not following patterns and things, you're often stuck in a really short-term thinking loop, where you are maybe delivering value today at the cost of being able to deliver value tomorrow or writing code that is unreadable or code that is difficult to collaborate on. So, more than just me shipping an individual feature, I've got to think about, while I'm working with a team, how can I help them be able to ship features or build on top of my work for tomorrow? So, that's sort of how I visualize the field. I'm curious what the words idealism and pragmatism mean to you.
STEPHANIE: Yeah. I agree with you that pragmatism is, you know, this idea of delivering working software. And I think I have seen it very, you know, kind of condensed as, like, moving quickly, getting stuff out the door, basically, like, end result being, like, a thing that you can use, right?
I think I've personally been reassessing that idea a lot because I'm kind of almost wondering, like, well, what are we moving quickly for [laughs]? I sometimes have seen pragmatism just end there, being like, okay, it's all about velocity. And then, I'm kind of stuck being like, well, if you write working software for, you know, completely the wrong thing, is that still pragmatic? I don't know. So, that's kind of where I'm at these days: I'm feeling a little bit more suspect of pragmatism, at least wanting to make sure, especially with the people that I'm working with day to day, that we're agreeing on what that means and what success means.
And then, as for idealism, I think also, actually, I now have a little bit of duality in terms of how I understand that as well. One of them being, yes, definitely, like, by the book or, like, by the ideas that we've, you know, some very smart people [laughs] have figured out as, like, this is clean or good quality, or these are the patterns to, you know, make your code as, again, as clean, I don't know, kind of putting air quotes around that, as possible.
And then, I actually like what you really said about code that warms your heart [laughs] that you feel, like, really moved by or, like, just excited about or inspired by because I think that can also be a little bit different from just following theories that other people have defined.
The more I spend doing this stuff, the more I am convinced that writing software is actually a very creative practice. And that's something that I've, like, definitely had to balance with the pragmatism a bit more because there are days when it's just not coming [chuckles], you know, like, I just stare at a blank, new file. And I'm like, I can't even imagine what these classes would be because, like, that creative part of my brain just, like, isn't on that day. So, that's kind of where I'm sitting in terms of, like, what idealistic programming kind of seems to me.
JOËL: There's definitely an element of programming that feels like self-expression, you know, there are parameters around that. And working with a team, you probably all sort of, like, move towards some average. But I would definitely say that there is some element of self-expression in coding.
STEPHANIE: Yeah, 100%. Have you heard about this paper called Programming as Theory Building?
JOËL: The name sounds vaguely familiar, but I can't place the main idea in my mind right now.
STEPHANIE: It's, like, an academic-ish paper from the 80s. And I'll link to it in the show notes because I can't remember the author right now. But the idea is writing code is actually just one way of expressing a theory that we are building. In fact, that expression doesn't even....it's like, it's impossible for it to fully encapsulate everything that was involved in the building of the theory because every decision you make, you know, you decide what not to do as well, right? Like, all the things that you didn't encode in your application is still part of this theory, like stuff that you rejected in order to interpret and make abstract the things that you are translating from the quote, unquote "real world" into code.
That really stuck with me because, in that sense, I love this idea that you can create your own little world, right? Like, you're developing it when you code. And that is something that gets lost a little bit when we're just focused on the pragmatic side of things.
JOËL: Where things get tricky as well is that when you're working with a team, you're not just building your own little world. You're building a shared world with shared mental models, shared metaphors. That's where oftentimes it becomes important to make sure that the things that you are thinking about are expressed in a way that other people could read your code and then immediately pick up on what's happening. And that can be through things like documentation, code comments. It can also be through more rigorous data modeling.
So, for example, I am a huge fan of value objects in general. I tend to not have raw numbers floating around in an app. I like to wrap them in some kind of class and say, "Hey, these numbers that are floating around they actually represent a thing," and I'll name that thing so that other people can get a sense that, oh, it is one of the moving parts of this app, and then here are the behaviors that we expect on it.
And that is partly for sort of code correctness and things like that but also as a sort of way of communicating and a way of contributing to that shared reality that we're creating with the team in a way that if I just left a raw number, that would be almost, like, leaving something slightly undefined. Like, the number is there. It does a thing, but what it does is maybe a little bit more implied. I know in my mind that this is a dollar amount, and maybe there's even a comment above it that says, "Dollar amount." But it makes it a little bit harder for it to play in with everybody else's realities or view of the system than if it were its own object.
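A small sketch of the kind of value object being described; the class name and behavior are illustrative rather than from any particular codebase:

    # Wraps a raw integer so the rest of the app knows it's a dollar amount,
    # not just a number that happens to be money.
    class DollarAmount
      include Comparable

      attr_reader :cents

      def initialize(cents)
        @cents = Integer(cents)
      end

      def +(other)
        self.class.new(cents + other.cents)
      end

      def <=>(other)
        cents <=> other.cents
      end

      def to_s
        format("$%.2f", cents / 100.0)
      end
    end

    subtotal = DollarAmount.new(10_00)
    tax      = DollarAmount.new(1_25)
    puts subtotal + tax   # => $11.25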
STEPHANIE: Yeah, I like what you said about you're building a shared world with your fellow colleagues. And that helped explain to me why, as some people say, naming is the hardest part about building software because, yeah, like you said, even just saying you are wanting to make a method or class expressive. And we talked about how code is a way of expressing yourself. You could, like, name all your stuff in Wingdings [laughs], but we don't. I actually don't know if you could do that. But that was, for some reason, what I imagined. I was like, it's possible, and you could deliver software in complete gibberish [laughs].
JOËL: In theory, could you say that naming your variables as emoji is the most expressive way? Because now it's all emotions.
STEPHANIE: A picture is worth a thousand words, as they say.
JOËL: So, this variable is the frowny face, upside-down smile face. It doesn't get more expressive than that.
STEPHANIE: At a former company, in our Slack workspace, I had a co-worker who loved to use the circus tent emoji to react to things. And, like, I'm convinced that no one really knew what it meant, but we also kind of knew what it meant. We were just like, oh yeah, that's the emoji that she uses to express amusement or, like, something a little bit ironic. And we all kind of figured it out [laughs] eventually. So, again, I do think it's possible. I bet someone has done, like, a creative experiment with writing an application in just emojis. This is now going to be some research I do after this episode [laughter].
JOËL: It is fun when you have, like, a teammate. You know they have the signature emoji that they respond to on things.
STEPHANIE: Yep. Absolutely. So, you know, we kind of spent a little bit of time talking about idealism. I actually wanted to pull back to the idea of pragmatism because, in preparation for this episode, I also revisited my copy of The Pragmatic Programmer. Are you familiar with this book? Have you read it at all?
JOËL: I have read it. It's been probably ten years. We did, I think, a book club at thoughtbot to go through the book.
STEPHANIE: I was skimming the table of contents because I was curious about, again, that, like, definition of pragmatism. You and I had kind of talked about how it can be short-sighted. But what I was actually pretty impressed with, and I imagine this is why the book holds up, you know, after decades, is success for them also means being able to continue to deliver quality software. And that idea of continuity kind of implied, to me, that there was an aspect of, like, making sure the quality meets a certain threshold and, like, incorporating these theories and doing the best practices because they're thinking about success over time, right? Not just the success of this particular piece that you're delivering.
JOËL: I would say most people in our industry are sort of balancing those two objectives, right? They're like, we want to have a decent velocity and ship things, but at the same time, we want to be able to keep delivering. We want a certain threshold of quality.
In between those two objectives, there is a sea of trade-offs, and how you manage them are probably a little bit part of your personality as a developer and is probably also, to a certain extent, a function of your experience, learning sort of when to lean more into taking some shortcuts to ship faster and when to double down on certain practices that increase code quality, and what aspects of quality value more than others because not all forms of quote, unquote, "quality" are the same.
I think a sort of source of danger, especially for newer developers, is you sort of start on almost, like, a hyper-pragmatic side of things because most people get into software because they want to build things. And the ultimate way to build is to ship, and then you sort of encounter problems where you realize, oh, this code is really clunky. It's harder and harder to ship. Let me learn some elements of code quality. Let's get better at my craft so that I can build software that has fewer bugs or that I can ship more consistently. And that's great.
And then, you sort of run into some, like, broader sort of theories of programming: patterns, structures, things like that. And it becomes very easy to sort of blindly copy-paste that everywhere to the point where I think it's almost a bit of a meme, the, like, intermediate programmer who's read Clean Code or the Design Patterns book and is just now, like, applying these things blindly to every piece of code they encounter to the annoyance of the entire team.
STEPHANIE: I think you just about described my trajectory [laughter], though hopefully, I was not so obnoxious about [laughs] it for my team having to deal with my, like, discovering [laughs] theories that have long been used.
JOËL: I think we kind of all go through that journey to a certain extent, right? It's a little bit different for every one of us, but I think this is a journey that is really common for developers.
STEPHANIE: Yeah. One thing I frequently think about a lot is how much I wished I had known some of that theory earlier. But I don't think I have an answer one way or another. It's like; I'm not sure if having that knowledge earlier really would have helped me because I've also definitely been in...I'm just thinking about, like, when I was in college in lectures trying to absorb theories that made no sense to me because I had no, like, practical experience to connect it to. It's almost, like, maybe there is, like, that perfect time [laughs] where it is the most valuable for what you're doing. And I don't know. I kind of believe that there is a way to bridge that gap.
JOËL: I mean, now we're kind of getting into an element of pedagogy. Do you sort of teach the theory first, and then show how to apply it to problems? Or do you show problems and then introduce bits of theory to help people get unstuck and maybe then cap it off by like, oh, these, like, five different, like, techniques I showed you to, like, solve five different problems, turns out they all fit in some grand unified theory? And, like, here's how the five things you thought were five different techniques are actually the same technique viewed from five different perspectives. Let me blow your mind.
STEPHANIE: That's a Joël approach [laughter] to teaching if I've ever heard one.
JOËL: I'm a huge fan of that approach. Going back to some of the, like, the functional programming ideas, I think that's one that really connected for me. I struggled to learn things like monads, and functors, and things like that. And I think, in my mind, these two approaches is like the Haskell school of teaching and the Elm school of teaching.
Haskell will sort of say, "Hey, let me teach you about this theory of monads and all these things, and then, we'll look at some ways where that can be applied practically." Whereas Elm will say, "No, you don't need to know about this. Let's look at some practical problems. Oh, you've got null values you need to check. Here's how you can, like, handle nullability in a safe way. Oh, you've got a bunch of HTTP requests that might resolve in random order, and you want to, like, deal with them when they all come back. Here's some tips on how you can do that."
And then, you have three or four things, and then, eventually, it just sort of lets you say, "Wait a minute, all of these problems are sort of all the same, and it turns out they all fit in some unified theory." And then, the light bulb goes off, and you're like, "Ooh, so now when I'm dealing with unknown blobs of JSON, trying to parse data out of them, I'll bet I can use the same techniques I used for chaining HTTP requests to dig out multiple dependent pieces of JSON."
STEPHANIE: Yeah. And that's so satisfying, right? It really is kind of leveling up in that Galaxy Brain meme sort of way.
JOËL: Yeah. And that's maybe to a certain extent even a value of idealism because if you build your system in such a way that it follows some of these patterns, then insights and intuitions that people have in one part of your code can then carry to other parts of your code, and that's incredibly powerful.
STEPHANIE: Yeah. And I almost wonder because you also mentioned kind of where you end up on the spectrum is a function of your experience. I wonder if us, you know, being consultants and seeing patterns across many applications also kind of contributes to the striving for idealism [laughs].
JOËL: It's kind of both, right? Because there's very high incentive to ship pretty rapidly, especially if you're on a shorter engagement or if you're on a project that has a shorter timescale. But also, yes, because you've seen so many projects, you've seen how things can go wrong. Also, you've seen the same problem from 20 different perspectives that are all slightly different. And so, some of those broader patterns can start emerging in your head.
STEPHANIE: Yeah, honestly, I think that's kind of the work that I enjoy the most in consulting because a lot of clients bring us on when they're like, "Hey, like, we've reached a point where our velocity has slowed down. Like, can you help us unstick our developers?" And that's actually when I've found that leaning on the theories and maybe a little bit of idealism is actually really useful because I'm kind of providing those tools to developers at this time when they need it. That's kind of why I have been saying trending idealism because I have found that particularly useful at work.
JOËL: There's an element here of, like, looking at a bunch of different use cases and then finding some sort of unifying model or theory. And that's a word that I think programmers have a love-hate relationship with: Abstraction. I don't know about you, but designing abstractions is a lot of fun for me. I love designing abstractions. I have always loved designing abstractions. It's not always the best use of my time, and it's not always the best thing for a codebase.
STEPHANIE: Ooh, okay, okay. This was a good transition. I hear you that, like, yeah, love-hate relationship. It's hard. That's kind of where I've ended up. It's really hard. And I think it's because it requires that creative thinking.
JOËL: It requires that creative thinking. And then also, like, it requires you to sort of see more broadly, a more broad picture. What are the things that are connected, the things that are disconnected, even though they seem related? And, like, being able to sort of slice those similarities from each other.
STEPHANIE: Yeah. I agree. And the interesting part is that, like, a lot of the time you just don't know yet. And you kind of have to come back to reality and admit that you don't know yet, you know, got to come back to earth, take a look around, and, yeah, you can go through the thought exercise of thinking [laughs] about all of the possibilities, and I imagine you could do that forever [laughs].
JOËL: I mean, that's why we have heuristics like the rule of three that says, "Don't abstract something out or attempt to DRY code until you've seen three use cases of it." So, maybe leave a little bit of duplication or a little bit of maybe not perfectly factored code until you have a couple of more examples. And the sort of real picture starts emerging a little bit more.
STEPHANIE: So, I think we are kind of at this topic already, but was there a moment or was there something that kind of helped you realize, like, oh, I can't be in that space of imagining abstractions [laughs] forever when I have to deliver software? Like, what changed for you to be the, as you said yourself, recovering idealist and having to maybe employ some more pragmatic heuristics?
JOËL: And I think, for me, it's partly being a consultant and being in a lot of projects and having that pressure to work with deadlines and sort of not having an infinite canvas to paint with, having to sort of fit some of my grand ideas into the reality of, we've got a week or two weeks to get this thing done, and also working with a team, and some ideas don't work well with every team. Every team is kind of at a different place.
And abstractions sort of only serve you as well as they are useful to not only you but the team at large. So, if a team is not comfortable with a set of abstractions, or it's sort of, like, too far down a path, then that can be really challenging. And that's where something like the dry-rb set of gems, which has some really fun abstractions and a whole mental model for doing things, can be a really heavy lift depending on the team. And so, as much as I like those patterns, I might think long and hard before I try to push them on a whole team.
STEPHANIE: Yeah, I kind of had to navigate a situation like that recently, where I was doing a code review, and I had left some suggestions about refactoring to encapsulate some responsibilities better. And then, I was like, oh, and then I noticed another thing that we could do to make that easier. And it, you know, definitely can start to spiral. And the author, you know, kind of responded to me and said, "Hey, like, I really appreciate these comments, but we are a bit tight on deadline for this project. So, is it okay if I, like, revisit this when we've delivered it?"
And, you know, I was just like, "Yeah, it's totally up to you." At the end of the day, I want whoever's authoring this code to have, like, full agency over how they want to move forward. And it was really helpful for me to get that context of, like, oh, they're a bit tight on the deadline because then I can start to meet them where they're at. And maybe I can give some suggestions for moving towards that ideal state, but ones that are lower lift, and that is still better than nothing.
JOËL: That sounds awfully pragmatic.
STEPHANIE: [laughs]
JOËL: Moving in a positive direction, we're getting halfway. It's better than nothing. That's very pragmatic.
STEPHANIE: Hmm. Wow. But it's pragmatically moving towards idealism.
JOËL: [laughs]
STEPHANIE: If that is even possible [laughs].
JOËL: Uh-huh.
STEPHANIE: That's maybe the book that I'm going to write, not The Pragmatic Programmer, but The Pragmatically Idealistic Programmer [laughs].
JOËL: The Pragmatic Idealist.
STEPHANIE: Ooh, yeah, I like that. Okay. Watch out for that book coming 2030 [laughter], written by me and Joël.
JOËL: So, I think you brought up a really interesting point, which is the idea of pragmatism versus idealism when it comes to code review. Do you find that you think about these ideas differently when reviewing somebody else's code versus when you write your own?
STEPHANIE: Oooh, yeah. I'm not sure exactly why, but definitely, when I'm reviewing someone else's code, I'm already in the headspace of, you know, I have some separation, right? Like, I'm not in the mode of thinking very hard [laughs] about what I'm creating. I'm just, like, in the editing kind of phase. And then, I can actually pull more from different theories and ideas, and I find that actually quite easier. When I'm writing my own code, it's just whatever comes out, right? And then, hopefully, I have the time to revisit it and give it a scan, and then start to integrate the, like, idealistic theories and the patterns that I would like to be using.
But it definitely...for patterns that I feel a lot more confident about or more familiar with, they just come out mostly kind of oriented in that way if I have the time, or sometimes I will make the time, you know. I'll just say, "It's not done yet," because I know it can be better. I think that could be another, like, pragmatically idealist way of handling that.
JOËL: [laughs]
STEPHANIE: Right? It's just telling people, "I'm not done." [laughs] It's not done until I do at least give it an attempt.
JOËL: So, it's kind of a two-phase thing when you're writing your own code, whereas it's only a single phase when you're reviewing somebody else's.
STEPHANIE: Yeah. Yeah. But, like I said earlier, it's like, I also really believe that I don't want to impose any of my ideas [laughs] onto others. I really believe that people have to arrive at it on their own. So, it used to bother me a little bit more when I was just like, oh, but this way is better [laughs]. When people wouldn't get on board, I would be sad about it. But as long as I know that I, like, left that comment, then I can give myself a pat on the back for trying to move towards that ideal state. What about you [laughs]?
JOËL: I think this is probably also where I'm, like, now a recovering idealist. There was a time where I would leave a ton of comments on someone's PR. I almost had a view of like, how can I help you get your PR to be the best it can possibly be? And sometimes, if you start with something that's very rough around the edges, you're leaving a lot of comments. And I've been that guy who's left 50 comments on a PR. In retrospect, I think that was not being a good teammate.
STEPHANIE: Hmm.
JOËL: So, I think maybe my mental model or my, like, goal for PR review has changed a little bit. It's less about how can I help you make your code the best it can possibly be? And a how can I help you get your code to mergeable? And it's possible that mergeable means best that it can possibly be, but that's usually not the case. So, I'm going to give you some feedback: some things that confuse me, maybe raise one or two patterns that are existing in the app that maybe you weren't aware of that you should maybe consider applying. Maybe I'll raise a couple of ideas that are new, but that apply here.
And those might just be a, "Hey, let's just think about this. Maybe we don't want to do this in this PR, but maybe we want to look at them at some point. Or we should be thinking about this in a sort of rule of three situation. If we see this come up another time, maybe consider introducing a strategy pattern here, or maybe consider making this a value object, or separating these side effects from these pure behavior." But it's more of a dialogue about how can I help you get your PR to the point where it is mergeable?
STEPHANIE: Yeah. Another thing I thought about just now is both are meaningful or, like, both can provide meaning in different ways, and people ascribe different amounts of meaning to both; where I had worked with someone, a client developer before, who was not super interested in doing any kind of refactoring or, like, any, you know, second passes for quality. Because, for him, like, he just wanted to ship, right? That was where he found meaning in his work.
Whereas that actually made my work feel a lot more meaningless [chuckles] because I'm like, well, if we're just kind of hands on a keyboard, like robots shipping code, I don't know, that doesn't feel particularly motivating for me. You know, I do want to employ some of that craft a little bit more.
JOËL: And, I guess, yeah, idealism versus pragmatism is also...it's a personal individual thing. There's an element where it's a team decision, or at least a sense of, like, how much quality do we need at this point in the life cycle of the project? And what are the areas where we particularly want to emphasize quality? What are our quality standards? And that's, to a certain extent, consensus among the team that it's individual members. And it's also coming from team leadership.
STEPHANIE: Yeah. Yeah, exactly. I mentioned that, you know, just to, I think, shed a little bit of light that it's usually not personal, right [laughs]? There's that part of understanding that is really important to, yeah, like, keep building this shared world of writing software, and, hopefully, it should be meaningful for all of us.
JOËL: I think a few takeaways that I have would be, one, the value of, like, theory and idealism. These things help you to become a better developer. They help you to spot patterns. It's probably good to sort of have in the background always be learning some new thing, whether that's learning a new set of patterns, or learning some mental models, thinking about, oh, the difference between side effects and pure code, learning about particular ways of structuring code. These are all things that are good to have in your back pocket to be able to apply to the code that you're doing, even if it's a sort of after-the-fact, hey, I've done a similar task three different times. Is there a broader principle?
But then, also, take the time to really make sure that you're focusing on shipping code, and maybe that's learning to work in smaller chunks, working iteratively, learning to scope your work well. Because, in the end, delivering value is a thing that is something that we could all probably benefit from doing more of.
And then, finally, taking some time to self-reflect, a little bit of self-awareness in this area. What are the aspects of pragmatism and idealism that you find personally meaningful? What are the elements that you think bring value to your work, to your team? And let that sort of guide you on your next code writing or PR review.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël shares his recent project challenge with Tailwind CSS, where classes weren't generating as expected due to the dynamic nature of Tailwind's CSS generation and pruning. Stephanie introduces a personal productivity tool, a "thinking cap," to signal her thought process during meetings, which also serves as a physical boundary to separate work from personal life.
The conversation shifts to testing methodologies within Rails applications, leading to an exploration of testing philosophies, including developers' assumptions about database cleanliness and their impact on writing tests.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I'm working on a new project, and this is a project that uses Tailwind CSS for its styling. And I ran into a bit of an annoying problem with it just getting started, where I was making changes and adding classes. And they were not changing the things I thought they would change in the UI. And so, I looked up the class in the documentation, and then I realized, oh, we're on an older version of the Tailwind Rails gem. So, maybe we're using...like, I'm looking at the most recent docs for Tailwind, but it's not relevant for the version I'm using. Turned out that was not the problem.
Then I decided to use the Web Inspector and actually look at the element in my browser to see is it being overwritten somehow by something else? And the class is there in the element, but when I look at the CSS panel, it does not show up there at all or having any effects. And that got me scratching my head. And then, eventually, I figured it out, and it's a bit of a facepalm moment [laughs].
STEPHANIE: Oh, okay.
JOËL: Because Tailwind has to, effectively, generate all of these, and it will sort of generate and prune the things you don't need and all of that. They're not all, like, statically present. And so, if I was using a class that no one else in the app had used yet, it hadn't gotten generated. And so, it's just not there. There's a class on the element, but there's no CSS definition tied to it, so the class does nothing.
What you need to do is there's a rake task or some sort of task that you can run that will generate things. There's also, I believe, a watcher that you can run, some sort of, like, server that will auto-generate these for you in dev mode. I did not have that set up. So, I was not seeing that new class have any effect. Once I ran the task to generate things, sure enough, it worked. And Tailwind works exactly how the docs say they do. But that was a couple of hours of my life that I'm not getting back.
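For anyone hitting the same thing, the tasks being described typically come from the tailwindcss-rails gem, which ships a one-off build task (bin/rails tailwindcss:build) that regenerates the stylesheet and emits any newly used utility classes, plus a watcher (bin/rails tailwindcss:watch) that is usually started via bin/dev in development. The exact task names are worth double-checking against the version of the gem in your project.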
STEPHANIE: Yeah, that's rough. Sorry to hear. I've also definitely gone down that route of like, oh, it's not in the docs. The docs are wrong. Like, do they even know what they're talking about? I'm going to fix this for everyone. And similarly have been humbled by a facepalm solution when I'm like, oh, did I yarn [laughs]? No, I didn't [laughs].
JOËL: Uh-huh. I'm curious, for you, when you have sort of moments where it's like the library is not behaving the way you think it is, is your default to blame yourself, or is it to blame the library?
STEPHANIE: [laughs]. Oh, good question.
JOËL: And the follow-up to that is, are you generally correct?
STEPHANIE: Yeah. Yep, yep, yep. Hmm, I will say I externalize the blame, but I will try to at least do, like, the basic troubleshooting steps of restarting my server [laughter], and then if...that's as far as I'll go. And then, I'll be like, oh, like, something must be wrong, you know, with this library, and I turn to Google. And if I'm not finding any fruitful results, again, you know, one path could be, oh, maybe I'm not Googling correctly, but the other path could be, maybe I've discovered something that no one else has before.
But to your follow-up question, I'm almost, like, always wrong [laughter]. I'm still waiting for the day when I, like, discover something that is an actual real problem, and I can go and open an issue [chuckles] and, hopefully, be validated by the library author.
JOËL: I think part of what I heard is that your debugging strategy is basic, but it's not as basic as Joël's because you remember to restart the server [chuckles].
STEPHANIE: We all have our days [laughter].
JOËL: Next time. So, Stephanie, what is new in your world?
STEPHANIE: I'm very excited to share this with you. And I recognize that this is an audio medium, so I will also describe the thing I'm about to show you [laughs].
JOËL: Oh, this is an object.
STEPHANIE: It is an object. I got a hat [laughs].
JOËL: Okay.
STEPHANIE: I'm going to put it on now. It's a cap that says "Thinking" on it [laughs] in, like, you know, fun sans serif font with a little bit of edge because the thinking is kind of slanted. So, it is designy, if you will. It's my thinking cap. And I've been wearing it at work all week, and I love it.
As a person who, in meetings and, you know, when I talk to people, I have to process before I respond a lot of the time, but that has been interpreted as, you know, maybe me not having anything to say or, you know, people aren't sure if I'm, you know, still thinking or if it's time to move on. And sometimes I [chuckles], you know, take a long time. My brain is just spinning. I think another funny hat design would be, like, the beach ball, macOS beach ball.
JOËL: That would be hilarious.
STEPHANIE: Yeah. Maybe I need to, like, stitch that on the back of this thinking cap. Anyway, I've been wearing it at work in meetings. And then, when I'm just silently processing, I'll just point to my hat and signal to everyone what's [laughs] going on. And it's also been really great for the end of my work day because then I take off the hat, and because I've taken it off, that's, like, my signal, you know, I have this physical totem that, like, now I'm done thinking about work, and that has been working.
JOËL: Oh, I love that.
STEPHANIE: Yeah, that's been working surprisingly well to kind of create a bit more of a boundary to separate work thoughts and life thoughts.
JOËL: Because you are working from home and so that boundary between professional life and personal life can get a little bit blurry.
STEPHANIE: Yeah. I will say I take it off and throw it on the floor kind of dramatically [laughter] at the end of my work day. So, that's what's new. It had a positive impact on my work-life balance. And yeah, if anyone else has the problem of people being confused about whether you're still thinking or not, recommend looking into a physical thinking cap.
JOËL: So, you are speaking at RailsConf this spring in Detroit. Do you plan to bring the thinking cap to the conference?
STEPHANIE: Oh yeah, absolutely. That's a great idea. If anyone else is going to RailsConf, find me in my thinking cap [laughs].
JOËL: So, this is how people can recognize Bikeshed co-host Stephanie Minn. See someone walking around with a thinking cap.
STEPHANIE: Ooh. thinkingbot?
JOËL: Ooh.
STEPHANIE: Have I just designed new thoughtbot swag [laughter]? We'll see if this catches on.
JOËL: So, we were talking recently, and you'd mentioned that you were facing some really interesting dilemmas when it came to writing tests and particularly how tests interact with your test database.
STEPHANIE: Yeah. So, I recently, a few weeks ago, joined a new client project and, you know, one of the first things that I do is start to run those tests [laughs] in their codebase to get a sense of what's what. And I noticed that they were taking quite a long time to get set up before I even saw any progress in terms of successes or failures. So, I was kind of curious what was going on before the examples were even run.
And when I tailed the logs for the tests, I noticed that every time that you were running the test suite, it would truncate all of the tables in the test database. And that was a surprise to me because that's not a thing that I had really seen before. And so, basically, what happens is all of the data in the test database gets deleted using this truncation strategy. And this is one way of ensuring a clean slate when you run your tests.
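A minimal sketch of the kind of test setup being described, assuming the Database Cleaner gem that comes up later in the conversation; the file layout and exact configuration are hypothetical, not taken from the client app:

```ruby
# spec/rails_helper.rb (hypothetical)
require "database_cleaner/active_record"

RSpec.configure do |config|
  config.before(:suite) do
    # Wipe every table up front -- this truncation step runs even when you
    # only target a single file or a single line locally.
    DatabaseCleaner.clean_with(:truncation)
    DatabaseCleaner.strategy = :transaction
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end
```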
JOËL: Was this happening once at the beginning of the test suite or before every test?
STEPHANIE: It was good that it was only running once before the test suite, but since, you know, in my local development, I'm running, like, a file at a time or sometimes even just targeting a specific line, this would happen on every run in that situation and was just adding a little bit of extra time to that feedback loop in terms of just making sure your code was working if that's part of your workflow.
JOËL: Do you know what version of Rails this project was in? Because I know this was popular in some older versions of Rails as a strategy.
STEPHANIE: Yeah. So, it is Rails 7 now, recently upgraded to Rails 7. It was on Rails 6 for a little while.
JOËL: Very nice. I want to say that truncation is generally not necessary as of Rails...I forget if it's 5 or 6. But back in the day, specifically for what are now called system tests, the sort of, like, Capybara UI-driven browser tests, you had, effectively, like, two threads that were trying to access the database. And so, you couldn't have your test data wrapped in a transaction the way you would for unit tests because then the UI thread would not have access to the data that had been created in a transaction just for the test thread. And so, people would use tools like Database Cleaner to use a truncation strategy to clear out everything between tests to allow a sort of clean slate for these UI-driven feature specs.
And then, I want to say it's Rails 5, it may have been Rails 6 when system tests were added. And one of the big things there was that they now could, like, share data in a transaction instead of having to do two separate threads and one didn't have access to it. And all of a sudden, now you could go back to transactional fixtures the way that you could with unit tests and really take advantage of something that's really nice and built into Rails.
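For comparison, a sketch of what the modern setup can look like once system tests share a connection with the test thread, so Rails-managed transactions handle cleanup and no truncation step is needed:

```ruby
# spec/rails_helper.rb (hypothetical)
RSpec.configure do |config|
  # Wrap each example in a transaction that is rolled back afterwards.
  config.use_transactional_fixtures = true
end
```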
STEPHANIE: That's cool. I didn't know that about system tests and that kind of shift happening. I do think that, in this case, it was one of those situations where, in the past, the database truncation, in this case, particular using the Database Cleaner gem was necessary, and that just never got reassessed as the years went by.
JOËL: That's one of the classic things, right? When you upgrade a Rails app over multiple versions, and sometimes you sort of get a new feature that comes in for free with the new version, and you might not be aware of it. And some of the patterns in the app just kind of keep going. And you don't realize, hey, this part of the app could actually be modernized.
STEPHANIE: So, another interesting thing about this testing situation is that I learned that, you know, if you ran these tests, you would experience this truncation strategy. But the engineering team had also kind of played around with having a different test setup that didn't clean the database at all unless you opted into it.
JOËL: So, your test database would just...each test would just keep writing to the database, but they're not wrapped in transactions. Or they are wrapped in transactions, but you may or may not have some additional data.
STEPHANIE: The latter. So, I think they were also using the transaction strategy there. But, you know, there are some reasons that you would still have some data persisted across test runs. I had actually learned that the use transactional fixtures config for RSpec doesn't roll back any data that might have been created in a before context hook.
JOËL: Yep, or a before all. Yeah, the transaction wraps the actual example, but not anything that happens outside of it.
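A small illustration of that gotcha; the model and attributes are made up. Data created in a `before(:context)` hook runs outside the per-example transaction, so it survives the run unless it is cleaned up explicitly:

```ruby
RSpec.describe User do
  before(:context) do
    # Runs once for the group, *outside* the per-example transaction,
    # so this row persists after the suite unless removed manually.
    @shared_user = User.create!(email: "shared@example.com")
  end

  after(:context) do
    @shared_user.destroy
  end

  it "can see the record created in before(:context)" do
    expect(User.exists?(email: "shared@example.com")).to be(true)
  end
end
```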
STEPHANIE: Yeah, I thought that was an interesting little gotcha. So, you know, now we had these, like, two different ways to run tests. And I was chatting with a client developer about how that came to be. And we then got into an interesting conversation about, like, whether or not we each expect a clean database in the first place when we write our tests or when we run our tests, and that was an area that we disagreed.
And that was cool because I had not really, like, thought about like, oh, how did I even arrive at this assumption that my database would always be clean? I think it was just, you know, from experience having only worked in Rails apps of a certain age that really got onto the Database [laughs] Cleaner train. But it was interesting because I think that is a really big assumption to make that shapes how you then approach writing tests.
JOËL: And there's kind of a couple of variations on that. I think the sort of base camp approach of writing Rails with fixtures, you just sort of have, for the most part, an existing set of data that's there that you maybe layer on a few extra things on. But there's base level; you just expect a bunch of data to exist in your test database. So, it's almost going off the opposite assumption, where you can always assume that certain things are already there. Then there's the other extreme of, like, you always assume that it's empty. And it sounds like maybe there's a position in the middle of, like, you never know. There may be something. There may not be something, you know, spin the wheel.
STEPHANIE: Yeah. I guess I was surprised that it, you know, that was just a question that I never really asked myself prior to this conversation, but it could feel like different testing philosophies. But yeah, I was very interested in this, you know, kind of opinion that was a little bit different from mine about if you assume that your database, your test database, is not clean, that kind of perhaps nudges you in the direction of writing tests that are less coupled to the database if they don't need to be.
JOËL: What does coupling to the database mean in this situation?
STEPHANIE: So, I'm thinking about Rails tests that might be asserting on a change in database behavior, so the change matcher in RSpec is one that I see maybe sometimes used when it doesn't need to be used. And we're expecting, like, the count of records for a model to have changed after doing some work, right?
JOËL: And the change matcher from RSpec is one that allows you to not care whether there are existing records or not. It sort of insulates you from that.
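A sketch of that contrast, using a hypothetical CreatePost service to stand in for "doing some work":

```ruby
# Coupled to absolute database state: this fails if any Post already exists.
CreatePost.call(params)
expect(Post.count).to eq(1)

# The change matcher only asserts on the delta, so it doesn't care whether
# the table started out empty or not.
expect { CreatePost.call(params) }.to change(Post, :count).by(1)
```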
STEPHANIE: That's true. Though I guess I was thinking almost like, what if there was some return value to assert on instead? And would that kind of help you separate some side effects from methods that might be doing too much? And kind of when I start to see tests that have both or are asserting on something being returned, and then also something happening, that's one way of, like, figuring out what kind of coupling is going on inside this test.
JOËL: It's the classic command-query separation principle from object-oriented design.
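A tiny, hypothetical example of what that separation can look like, so a test has a query to assert on instead of reaching into the database after a command:

```ruby
class Order < ApplicationRecord
  # Command: performs the state change.
  def cancel!
    update!(cancelled_at: Time.current)
  end

  # Query: reports on state without changing anything.
  def cancelled?
    cancelled_at.present?
  end
end

# In a test, the query gives you something direct to assert on:
order.cancel!
expect(order).to be_cancelled
```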
STEPHANIE: I think another one that came to mind, another example, especially when you're talking about system tests, is when you might be using Capybara and you end up...maybe you're going through a flow that creates a record. But from the user perspective, they don't actually know what's going on at the database level. But you could assert that something was created, right? But it might be more realistic at that level of abstraction to be asserting some kind of visual element that had happened as a result of the flow that you're testing.
JOËL: Yeah. I would, in fact, go so far as to say that asserting on the state of your database in a system test is an anti-pattern. System tests are sort of, by design, meant to be all about user behavior trying to mimic the experience of a user. And a user of a website is not going to be able to...you hope they're not able to SSH into [chuckles] your database and check the records that have been created. If they can, you've got another problem.
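A hypothetical Capybara example of asserting at the user's level of abstraction rather than on database state; the path, labels, and Subscription model are made up:

```ruby
it "lets a visitor subscribe to the newsletter" do
  visit new_subscription_path
  fill_in "Email", with: "reader@example.com"
  click_button "Subscribe"

  # Assert on what the user can actually see...
  expect(page).to have_content("Thanks for subscribing!")

  # ...rather than reaching past the UI into the database:
  # expect(Subscription.count).to eq(1)
end
```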
STEPHANIE: I wonder if you could take this idea to the extreme, though. And do you think there is a world where you don't really test database-level concerns at all if you kind of believe this idea that it doesn't really matter what the state of it should be?
JOËL: I guess there's a few different things on, like, what it matters about the state of it because you are asserting on its state sort of indirectly in a sort of higher level integration test. You're asserting that you see certain things show up on the screen in a system test. And maybe you want to say, "I do certain tasks, and then I expect to see three items in an unordered list." Those three items probably come from the database, although, you know, you could have it where they come from an API or something like that.
So, the database is an implementation level. But if you had random data in your database, you might, in some tests, have four items in the list, some tests have five. And that's just going to be a flaky test, and that's going to be incredibly painful. So, while you're not asserting on the database, having control over it during sort of test setup, I think, does impact the way you assert.
STEPHANIE: Yeah, that makes sense. I was suddenly just thinking about, like, how that exercise can actually tell you perhaps, like, when it is important to, in your test setup, be persisting real records as opposed to how much you can get away with, like, not interacting with it because, like, you aren't testing at that integration level.
JOËL: That brings up a good point because a lot of tests probably you might need models, but you might not need persisted models to interact with them, if you're testing a method on a model that just does things based off its internal state and not any of the ActiveRecord database queries, or if you have some other service or something that consumes a model that doesn't necessarily need to query.
There's a classic blog post on the thoughtbot blog about when not to use FactoryBot. And, you know, we are the makers of FactoryBot. It helps set up records in your database for testing. And people love to use it all the time. And we wrote an article about why, in many cases, you don't need to create something into the database. All you need is just something in memory, and that's going to be much faster than using FactoryBot because talking to the database is expensive.
STEPHANIE: Yeah, and I think we can see that in the shift from even, like, fixtures to factories as well, where test data was only persisted as needed in individual tests, rather than seeding it and having all of those records for your entire test run. And it's cool to see that continuing, you know, that idea further of like, okay, now we have this new, popular tool that reduces some of that. But also, in most cases, we still don't need it...it's still too much.
JOËL: And from a performance perspective, it's a bit of a see-saw in that fixtures are a lot faster because they get inserted once at the beginning of your test run. So, a SQL execution at the beginning of a test run and then every test after that is just doing its thing: maybe creating a record inside of a transaction, maybe not creating any records at all. And so, it can be a lot faster as opposed to using FactoryBot where you're creating records one at a time. Every create call in a test is a round trip to the database, and those are expensive.
So, FactoryBot tests tend to be more expensive than those that rely on fixtures. But you have the advantage of more control over what data is present and sort of more locality because you can see what has been created at the test level. But then, if you decide, hey, this is a test where I can just create records in memory, that's probably the best of all worlds in that you don't need anything created ahead with fixtures. You also don't need anything to be inserted using FactoryBot because you don't even need the database for this test.
STEPHANIE: I'm curious, is that the assumption that you start with, that you don't need a persisted object when you're writing a basic unit test?
JOËL: I think I will as much as possible try not to persist and only use persisted records if necessary. There are strategies with FactoryBot that will allow you to also, like, build stubbed or just build in memory. So, there's a few different variations that will, like, partially do things for you. But oftentimes, you can just new up an object, and that's what I will often start with.
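Roughly the spectrum being described, from cheapest to most expensive ways of getting a test object; the factory name is hypothetical, and FactoryBot's short syntax methods are assumed to be included:

```ruby
user = User.new(name: "Ada")   # plain Ruby object, no FactoryBot, no SQL
user = build(:user)            # factory-built attributes, still only in memory
user = build_stubbed(:user)    # fakes an id and persisted?, still no INSERT
user = create(:user)           # actually writes a row to the database
```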
In many cases, I will already know what I'm trying to do. And so, I might not go through the steps of, oh, new up an object; oh no, I can't do the thing I need to do; now, I need to write to the database. So, if I'm testing, let's say, an ActiveRecord scope that's filtering down a series of records, I know that's a wrapper around a database query. I'm not going to start by newing up some records and then sort of accidentally discovering, oh yeah, it does need to talk to the database, because that was pretty clear to me from the beginning.
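A sketch of the kind of case where persisting is clearly needed, with a made-up Order model, scope, and factory:

```ruby
RSpec.describe Order, ".overdue" do
  it "returns only orders past their due date" do
    overdue   = create(:order, due_on: 2.days.ago)
    _upcoming = create(:order, due_on: 2.days.from_now)

    # The scope is a thin wrapper around a SQL query, so in-memory objects
    # would never show up in its results.
    expect(Order.overdue).to contain_exactly(overdue)
  end
end
```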
STEPHANIE: Yeah. Like, you have your mental shortcuts that you do. I guess I asked that question because I wonder if that is a good heuristic to share with maybe developers who are trying to figure out, like, should they create persisted records or, you know, use just regular instance in memory or, I don't know, even [laughs] use, like, a double [laughs]?
JOËL: Yeah, I've done that quite a bit as well. I would say maybe my heuristic is, is the method under test going to need to talk to the database? And, you know, I may or may not know that upfront because if I'm test driving, I'm writing the test first. So, sometimes, maybe I don't know, and I'll start with something in memory and then realize, oh, you know, I do need to talk to the database for this. And this is for unit tests, in particular.
For something more like an integration test or a system test that might require data in the database, system tests almost always do. You're not interacting with instances in memory when you're writing a system test, right? You're saying, "Given the database state is this when I visit this URL and do these things, this page reacts in such and such a way." So, system tests always write to the database to start with. So, maybe that's my heuristic there. But for unit tests, maybe think a little bit about does your method actually need to talk to the database? And maybe even almost give yourself a challenge. Can I get away with not talking to the database here?
STEPHANIE: Yeah, I like that because I've certainly seen a lot of unit tests that are integration tests in disguise [laughs].
JOËL: Isn't that the truth? So, we kind of opened up this conversation with the idea of there are different ways to manage your database in terms of, do you clean or not clean before a test run? Where did you end up on this particular project?
STEPHANIE: So, I ended up with a currently open PR to remove the need to truncate the database on each run of the test suite and just stick with the transaction for each example strategy. And I do think that this will work for us as long as we decide we don't want to introduce something like fixtures, even though that is actually also a discussion that's still in the works. But I'm hoping with this change, like, right now, I can help people start running faster tests [chuckles].
And should we ever introduce fixtures down the line, then we can revisit that. But it's one of those things that I think we've been living with this for too long [laughs]. And no one ever questioned, like, "Oh, why are we doing this?" Or, you know, maybe that was a need, however many years ago, that just got overlooked. And as a person new to the project, I saw it, and now I'm doing something about it [laughs].
JOËL: I love that new person energy on a project and like, "Hey, we've got this config thing. Did you know that we didn't need this as of Rails 6?" And they're like, "Oh, I didn't even realize that." And then you add that, and it just moves you into the future a little bit.
So, if I understand the proposed change, then you're removing the truncation strategy, but you're still going to be in a situation where you have a clean database before each test because you're wrapping tests in transactions, which I think is the default Rails behavior.
STEPHANIE: Yeah, that's where we're at right now. So, yeah, I'm not sure, like, how things came to be this way, but it seemed obvious to me that we were kind of doing this whole extra step that wasn't really necessary, at least at this point in time. Because, at least to my knowledge [laughs], there's no data being seeded in any other place.
JOËL: It's interesting, right? When you have a situation where this was sort of a very popular practice for a long time, a lot of guides mentioned that. And so, even though Rails has made changes that mean that this is no longer necessary, there's still a long tail of apps that will still have this that may be upgraded later, and then didn't drop this, or maybe even new apps that got created but didn't quite realize that the guide they were following was outdated, or that a best practice that was in their head was also outdated. And so, you have a lot of apps that will still have these sort of, like, relics of the past. And you're like, "Oh yeah, that's how we used to do things."
STEPHANIE: So yeah, thanks, Joël, for going on this journey with me in terms of, you know, reassessing my assumptions about test databases. I'm wondering, like, if this is common, how other people, you know, approach what they expect from the test database, whether it be totally clean or have, you know, any required data for common flows and use cases of your system. But it does seem that little in between of, like, maybe it is using transactions to reset for each example, but then there's also some persistence that's happening somewhere else that could be a little tricky to manage.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie introduces her ideal setup for enjoying coffee on a bike ride. Joël describes his afternoon tea ritual. Exciting news from the hosts: both have been accepted to speak at RailsConf! Stephanie's presentation, titled "So, Writing Tests Feels Painful. What now?" aims to tackle the issues developers encounter with testing while offering actionable advice to ease these pains. Joël's session will focus on utilizing Turbo to create a Dungeons & Dragons character sheet, combining his passion for gaming with technical expertise.
Their conversation shifts to artificial intelligence and its potential in code refactoring and other applications, such as enhancing the code review process and solving complex software development problems. Joël shares his venture into combinatorics, illustrating how this mathematical approach helped him efficiently refactor a database query by systematically exploring and testing all potential combinations of query segments.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn, and together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, today I went out for a coffee on my bike, and I feel like I finally have my perfect, like, on-the-go coffee setup. We have this thoughtbot branded travel mug. So, it's one of the little bits of swag that we got from the company. It's, like, perfectly leak-proof. I'll link the brand in the show notes. But it's perfectly leak-proof, which is great. And on my bike, I have a little stem bag, so it's just, like, a tiny kind of, like, cylindrical bag that sits on the, like, vertical part of my handlebars that connects to the rest of my bike. And it's just, like, the perfect size for a 12-ounce coffee.
And so, I put my little travel mug in there, and I just had a very refreshing morning. And I'd gone out on my bike for a little bit, stopping by for coffee and headed home to work. And I got to drink my coffee during my first meeting. So, it was a wonderful way to start the day.
JOËL: Do you just show up at the coffee shop with your refillable mug and say, "Hey, can you pour some coffee in this?"
STEPHANIE: Yeah. I think a lot of coffee places are really amenable to bringing your own travel mugs. So yeah, it's really nice because I get to use less plastic. And also, you know, when you get a to-go mug, it is not leak-proof, right? It could just slosh all over the place and spill, so not bike-friendly. But yeah, bring your own mug. It's very easy.
JOËL: Excellent.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Also, warm beverages. Who would have thought? It's almost like it's cold in North America or something. I've been really enjoying making myself tea in the afternoons recently. And I've been drinking this brand of tea that is a little bit extra. Every flavor of tea they have comes with a description of how the tea feels.
STEPHANIE: Ooh.
JOËL: I don't know who came up with these, but they're kind of funny. So, one that I particularly enjoy is described as feels like stargazing on an empty beach.
STEPHANIE: Wow. That's very specific.
JOËL: They also give you tasting notes. This one has tastes of candied violet, elderberry, blackberry, and incense.
STEPHANIE: Ooh, that sounds lovely. Are you drinking, like, herbal tea in the afternoon, or do you drink caffeinated tea?
JOËL: I'll do caffeinated tea. I limit myself to one pot of coffee that I brew in the morning, and then, whenever that's done, I switch to tea. Tea I allow myself anything: herbal, black tea; that's fine.
STEPHANIE: Yeah, I can't have too much caffeine in the afternoon either. But I do love an extra tea. I wish I could remember, like, what even was in this tea or what brand it was, but once I had a tea that was a purplish color. But then, when you squeeze some lemon in it, or I guess maybe anything with a bit of acid, it would turn blue.
JOËL: Oh, that's so cool.
STEPHANIE: Yeah, I'll have to find what this tea was [laughs] and update the podcast for any tea lovers out there. But yeah, it was just, like, a little bit of extra whimsy to your regular routine.
JOËL: I love adding a little whimsy to my day, even if it's just seeing a random animated GIF that a coworker has sent or Tuple has some of the, like, reactions you can send if you're pairing with someone. And I don't use those very often, so whenever one of those comes through, and it's like, ship it or yay, that makes me very happy.
STEPHANIE: Agreed.
JOËL: This week is really fun because as we were prepping for this episode, we both realized that there is a lot that's been new in our world recently. And Stephanie, in particular, you've got some pretty big news that recently happened to you.
STEPHANIE: Yeah, it turns out we're making the what's new in your world segment the entire episode today [laughs]. But my news is that I am speaking at RailsConf this year, so that is May 7th through 9th in Detroit. And so, yeah, I haven't spoken at a RailsConf before, only a RubyConf. So, I'm looking forward to it. My talk is called: So, Writing Tests Feels Painful. What now?
JOËL: Wait, is writing tests ever painful [laughs]?
STEPHANIE: Maybe not for you, but for the rest of us [laughs].
JOËL: No, it absolutely is. I, right before this recording, came from a pairing session where we were scratching our heads on an, like, awkward-to-write test. It happens to all of us.
STEPHANIE: Yeah. So, I was brainstorming topics, and I kind of realized, especially with a lot of our consulting experience, you know, we hear from developers or even maybe, like, engineering managers a lot of themes around like, "Oh, like, development is slowing down because our test suite is such a headache," or "It's really slow. It's really flaky. It's really complicated." And that is a pain point that a lot of tech leaders are also looking to address for their teams.
But I was really questioning this idea that, like, it always had to be some effort to improve the test suite, like, that had to be worked on at some later point or get, like, an initiative together to fix all of these problems, and that it couldn't just be baked into your normal development process, like, on an individual level. I do think it is really easy to feel a lot of pain when trying to write tests and then just be like, ugh, like, I wish someone would fix this, right? Or, you know, just kind of ignore the signals of that pain because you don't know, like, how to manage it yourself.
So, my talk is about when you do feel that pain, really trying to determine if there's anything you can do, even in just, like, the one test file that you're working in to make things a little bit easier for yourself, so it doesn't become this, like, chronic issue that just gets worse and worse. Is there something you could do to maybe reorganize the file as you're working in it to make some conditionals a little bit clearer?
Is there any, like, extra test setup that you're like, "Oh, actually, I don't need this anymore, and I can just start to get rid of it, not just for this one example, but for the rest in this file"? And do yourself a favor a little bit. So yeah, I'm excited to talk about that because I think that's perhaps, like, a skill that we don't focus enough on.
JOËL: Are you going to sort of focus in on the side of things where, like, a classic TDD mantra is that test pain reflects underlying code complexity? So, are you planning to focus on the idea of, oh, if you're feeling test pain, maybe take some time to refactor some of the code that's under test, maybe because there's some tight coupling? Or are you going to lean a little bit more into maybe, like, the Boy Scout rule, you know, 'Leave the campsite cleaner than you found it' for your test files?
STEPHANIE: Ooh, I like that framing. Definitely more of the former. But one thing I've also noticed working with a lot of client teams is that it's not always clear, like, how to refactor. I think a lot of intermediate developers start to feel that pain but don't know what to do about it. They don't know, like, maybe the code smells, or the patterns, or refactoring strategies, and that can certainly be taught. It will probably pull from that. But even if you don't know those skills yet, I'm wondering if there's, like, an opportunity to teach, like, developers at that level to start to reflect on the code and be like, "Hmm, what could I do to make this a little more flexible?"
And they might not know the names of the strategies to, like, extract a class, but just start to get them thinking about it. And then maybe when they come across that vocabulary later, it'll connect a lot easier because they'll have started to think about, you know, their experiences day to day with some of the more conceptual stuff.
JOËL: I really like that because I feel we've probably all heard that idea that test pain, especially when you're test driving, is a sign of maybe some anti-patterns or some code smells in the underlying code that you're testing. But translating that into something actionable and being able to say, "Okay, so my tests are painful. They're telling me something needs to be refactored. I'm looking at this code, and I don't know what to refactor." It's a big jump. It's almost the classic draw two circles; draw the rest of the owl meme. And so, I think bridging that gap is something that is really valuable for our community.
STEPHANIE: Yeah, that's exactly what I hope to do in my talk. So, Joël, you [chuckles] also didn't quite mention that you have big news as well.
JOËL: So, I also got accepted to speak at RailsConf. I'm giving a talk on Building a Dungeons & Dragons Character Sheet Using Turbo.
STEPHANIE: That's really awesome. I'm excited because I want to learn more about Turbo. I want someone else to tell me [laughs] what I can do with it. And as a person with a little bit of Dungeons & Dragons experience, I think a character sheet is kind of the perfect vehicle for that.
JOËL: Building a D&D character sheet has been kind of my go-to project to experiment with a new front-end framework because it's something that's pretty dynamic. And for those who don't know, there's a bunch of fields that you fill in with stats for different attributes that your character has, but then those impact other stats that get rendered. And sometimes there can be a chain two or three long where different numbers kind of combine together. And so, you've got this almost dependency tree of, like, a particular number.
Maybe your skill at acrobatics might depend on a number that you entered in the dexterity field, but it also depends on your proficiency bonus, and maybe also depends on the race that you picked and a few other things. And so, calculating those numbers all of a sudden becomes not quite so simple. And so, I find it's a really fun exercise to build when trying out a new interactive front-end technology.
STEPHANIE: Have you done this with a different implementation or a framework?
JOËL: I've done this, not completely, but I've attempted some parts of a D&D character sheet, I think, with Backbone.js, with Ember. I may have done an Angular one at some point in original Angular, so Angular 1. I did this with Elm. Somehow, I skipped React. I don't think I did React to build a D&D character sheet. And now I'm kind of moving a little bit back to the backend. How much can we get done just with Turbo? Or do we need to pull in maybe Stimulus? These are all things that are going to be really fun to demonstrate.
STEPHANIE: Yeah. Speaking of injecting some whimsy earlier, I think it's kind of like just a little more fun than a regular to-do app, you know, or a blog to show how you can build, you know, something that people kind of understand with a different technology.
JOËL: Another really fun thing that I've been toying with this week has been using AI to help me refactor code. And this has been using just sort of a classic chat AI, not a tool like Copilot. And I was dealing with a query that was really slow, and I wanted to restructure it in a different way. And I described to the AI how I wanted it to refactor and explicitly said, "I want this to be the same before and after." And I asked it to do the refactor, and it gave me some pretty disappointing results where it did some, like, a couple of really obvious things that were not that useful.
And I was talking to a colleague about how I was really disappointed. I was thinking, well, AI should be able to do something better than this. And this colleague suggested changing the way I was asking for things and specifically asking for a step-by-step and asking it to prove every step using relational algebra, which is the branch of math that deals with everything that underlies relational databases, so the transformations that you would do where you keep everything the same, but you're saying, "Hey, these equations are all equivalent." And it sure did. It gave me a, like, 10-step process with all these, like, symbols and things.
My relational algebra is not that strong, and so I couldn't totally follow along. But then I asked it to give me a code example, like, show me the SQL at every step of this transformation and at the end. And, you know, it all kind of looked all right. I've not fully tested the final result it gave me to see if it does what it says on the tin. But I'm cautiously optimistic. I think it looks very similar to something that I came up with on my own. And so, I'm somewhat impressed, at least, like, much better than things were in the beginning with that first round. So, I'm really curious to see where I can take this.
STEPHANIE: Yeah, I think that's cool that you were able to prompt it differently and get something more useful. One of the reasons why I personally have been a little bit hesitant to get into the large language models is because I would love to see the AI show its work, essentially, like, tell me a little bit more about how it got from question to answer. And I thought that framing of kind of step-by-step show me code was a really interesting way, even to just, like, get some different results that do the same thing.
But you can kind of evaluate that a little bit more on your own rather than just using that first result that it gave you that was like, eh, like, I don't know if this really did anything for me. So, it would be cool, even if you don't end up using, like, the final one, right? If something along the way also is an improvement from what you started with that would be really interesting.
JOËL: Honestly, I think you kind of want the same thing whether you're chatting with an AI chatbot or having a conversation in Slack with a colleague. You're just like, "Hey, can you help me refactor this?" And then back comes a sort of, like, totally different chunk of code. And it's just like, "Trust me, it works."
STEPHANIE: [laughs].
JOËL: And maybe it does. Maybe you plug it into your codebase and run the tests against it, and the tests are still green. And so, you trust that it works, but you don't really understand where it came from. That doesn't always feel good, even when it comes from a human. So, what I've appreciated with colleagues has been when they've given me a step-by-step. Sometimes, they give me the final product. They just say, "Hey. Try this. Does this work?" Plug it in to the test. It does pass. It's green. Great. "Tell me what black magic you did to get to that."
And then they give me the step-by-step and it's like, oh, that's so good because not only do I get a better understanding of what happens at every step, but now I'm equipped the next time I run into this problem to apply the same technique to figure it out on my own.
STEPHANIE: Yeah. And I liked, also, that relational algebra pro tip, right? It kind of ensures that what you're getting makes sense or is equivalent along the way [laughs].
JOËL: We think, right? I don't know enough relational algebra to check its work. It is quite possible that it is making some subtle mistakes along the way, or, like, making inferences that it shouldn't be. I'm not going to say I trust that. But I think, specifically, when asking for SQL transformations, prompting it to do so using relational algebra in a step-by-step way seemed to be a way to get it to do something more reliably or at least give more interesting results.
STEPHANIE: Cool.
JOËL: I was interested in trying this out in part because I've been more curious about AI tools recently, and also because we're hoping to do a deeper dive into AI on a Bike Shed episode at some point later, so very much still in the gathering information phase. But this was a really cool experience. So, having an AI refactor a query for me using relational algebra, definitely something that's new in my world this week.
STEPHANIE: Speaking of refactoring and this idea of making improvements to your code and trying to figure out how to get from what you currently have to something new, I have been thinking a lot about how to make code reviews more actionable. And that's because, on my current client project, our team is struggling a little bit with code reviews, especially when you kind of want to give feedback on more of a design change in the code or thinking about some different abstractions. I have found that that is really hard to communicate async and also in a, like, a GitHub code review format where you can really just comment, like, line by line.
And I've found that, you know, when someone is leaving feedback, that's like, "I'm having a hard time reading this. And I'm imagining that we could organize the code a bit differently in these three different layers or abstractions," there's a lot of assumptions there, right [laughs]? That your message is being communicated to the author and that they are able to, like, visualize, or have a mental model for what you're explaining as well.
And then kind of what I've been seeing in this dynamic is, like, not really knowing what to do with that and to kind of just, like, I don't know where to go from here. So, I guess the next step is just to, like, merge it. Is that something you've experienced before or encountered when it comes to feedback?
JOËL: Broader changes are often challenging to explain, especially when they're...sometimes you get so abstract you can just write a quick paragraph. And sometimes it's like, hey, what if we, like, totally change our approach? I've definitely done the thing where I'll just ping someone and say, "Hey, can we talk about this synchronously? Can we get on a call and have a deeper conversation?"
How do you tend to approach if you're not going to hop on a call with someone and, like, have a 20 or 30-minute conversation? How do you approach doing that asynchronously on a pull request? Are you the type of person to put, like, a ton of, like, code blocks, like, "Here's what I was thinking. We could instead have this class and this thing"? And, like, pretty soon, it's, like, a page and a half of text. Or do you have another approach that you like to use?
STEPHANIE: Yeah. And I think that's where it can get really interesting. Because my process is, I'll usually just start commenting and maybe if I'm seeing some things that can be done differently. If it's not just, like, a really obvious change that I could just use English to describe, I'll add a little suggested change. But I also don't want to just rewrite this person's code [laughs] in a code review.
JOËL: That's the challenge, right?
STEPHANIE: Yeah. And I've definitely seen that be done before, too. Once I notice I'm at, like, four plus comments, and then they're not just, like, nitpicks about, like, syntax or something like that, that helps me clue into the idea that there is some kind of bigger change that I might be asking of the author. And I don't want to overwhelm them with, like, individual comments that really are trying to convey something more holistic.
JOËL: Right. I wonder if having a, like, specialized yet more abstract language is useful for these sorts of things where a whole paragraph in English or, you know, a ton of code examples might be a bit much. If you're able to say something like, "Hey, how would you feel about using a strategy pattern approach here instead of, you know, maybe a template object or some custom thing that we've built here?" that allows us to say a lot in a fairly sort of terse way. And it's the thing that you can leave more generically on the PR instead of, like, individually commenting in a bunch of places. And that can start a broader conversation at more of an architecture level.
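A hypothetical sketch of the kind of suggestion that shorthand points at: each formatting strategy is its own object sharing a common interface, and the caller picks one rather than branching internally. None of these class names come from a real codebase:

```ruby
class PlainTextFormatter
  def call(report)
    report.lines.join("\n")
  end
end

class HtmlFormatter
  def call(report)
    report.lines.map { |line| "<p>#{line}</p>" }.join
  end
end

class ReportRenderer
  def initialize(formatter)
    @formatter = formatter
  end

  def render(report)
    @formatter.call(report)
  end
end

# Usage: ReportRenderer.new(HtmlFormatter.new).render(report)
```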
STEPHANIE: Yes, I really like that. That's a great idea. I would follow that up with, like, I think at the end of the day, there are some conversations that do need to be had synchronously. And so, I like the idea of leaving a comment like that and just kind of giving them resources to learn what a strategy pattern is and then offering support because that's also a way to shorten that feedback loop of trying to communicate an idea. And I like that it's kind of guiding them, but also you're there to add some scaffolding if it ends up being, like, kind of a big ask for them to figure out what to do.
JOËL: There's also oftentimes, I think, a tone thing to manage where, especially if there's a difference in seniority or experience between the two people, it can be very easy for something to come across as an ask or a demand rather than a like, "Hey, let's think about some alternatives here." Or, like, "I have some concerns with your implementation. Let's sort of broadly explore some possible alternatives. Maybe a strategy pattern works."
But the person reading that who wrote the original code might be, like, receiving that as "Your code is bad. You should have done a strategy pattern instead." And that's not the conversation I want to have, right? I want to have a back-and-forth about, "Hey, what are the trade-offs involved? Do you have a third architecture you'd like to suggest?" And so, that can be a really tricky thing to avoid.
STEPHANIE: Yeah, I like that what you're saying also kind of suggested that it's okay if you don't have an idea yet for exactly how it should look like. Maybe you just are like, oh, like, I'm having a hard time understanding this, but I don't think just leaving it at that gives the author a lot to go on.
I think there's something to it about maybe the action part of actionable is just like, "Can you talk about it with me?" Or "Could you explain what you're trying to do here?" Or, you know, leave a comment about what this method is doing. There's a lot of ways, I think, that you can reach some amount of improvement, even if it doesn't end up being, like, the ideal code that you would write.
JOËL: Yes. There's also maybe a distinction in making it actionable by giving someone some code and saying, "Hey, you should copy-paste this code and make that..." or, you know, use a GitHub suggested code or something, which works on the small. And in the big, you can give some maybe examples and say, "Hey, what if you refactored in this way?"
But sometimes, you could even step back and let them do that work and say, "Hey, I have some concerns with the current architecture. It's not flexible in the ways that we need to be flexible. Here's my understanding of the requirements. And here's sort of how I see maybe this architecture not working with that. Let's think of some different ways we could approach this problem." And oftentimes, it's nice to give at least one or two different ideas to help start that. But it can be okay to just ask the person, "Hey, can you come up with some alternate implementations that would fulfill these sets of requirements?"
STEPHANIE: Yeah, I like that. And I can even see, like, maybe you do that work, and you don't end up pursuing it completely in addressing that feedback. But even asking someone to do the exercise itself, I think, can then spark new ideas and maybe other improvements.
In general, I like to think about...I'm a little hesitant to use this metaphor because I'm not actually giving code, like, letter grades when I review them, but the idea that, like, not all code has to get, like, an A [chuckles], but maybe getting it, like, from one letter grade up to, like, half a letter grade, like, higher, that is valuable, even if it's not always practical to go through multiple rounds of code review. And I think just making it actionable enough to be a little bit better, like, that is, in my opinion, the sweet spot.
JOËL: That's true. The sort of over-giving feedback to someone to try to get code perfect, rather than just saying, "Hey, can we make it slightly better?" And, you know, there are probably some minimum standards you need to hit. But at some point, it's a trade-off of like, how much time do we need to put polishing this versus shipping something?
STEPHANIE: Yeah, and I think that it is cumulative over time, right? That's how people learn. Yeah, it's like one of the biggest opportunities for developers to level up is from that feedback. And that's why I think it's important that it's actionable because, you know, and you put the time into, like, giving that review, and it's not just to make sure the code works, but it's also, like, one of the touch points for collaboration.
JOËL: So, if you had to summarize what makes code review comments actionable, do you have, like, top three tips that make a comment really actionable as opposed to something that's not helpful? Or maybe that's more of the journey that you're on, and you've not distilled it down to three pithy tips that you can put in a listicle.
STEPHANIE: Honestly, I think it does kind of just distill down to one, which is for every comment, you should have an idea of what you would like the author to do about it. And it's okay if it's nothing, but then tell them that it's nothing. You could just be expressing, "I thought this was kind of weird, [laughs]" or "This is not my favorite thing, but it's okay."
JOËL: And it can be okay for the thing you want the author to do. It doesn't have to be code. It could be a conversation.
STEPHANIE: Yeah, exactly. It could be a conversation. It could be asking for information, too, right? Like, "Did you consider alternatives, and could you share them with me?" But that request portion, I think is really important because, yeah, I think there's so much miscommunication that can happen along the way. So, definitely still trying to figure out how to best support that kind of code review culture on my team.
JOËL: This week's episode has been really fun because it's just been a combination of a lot of things that are new in our world, things that we've been trying, things that we've been learning. And kind of in an almost, like, a meta sense, one of the things I've been digging into is combinatorics, the branch of math that looks at how things combine and particularly how it works with combining a bunch of ActiveRecord query fragments where there's potential branching, so things like doing a union of two sort of sub queries or doing an or where you're combining two different where queries and trying to figure out what are the different paths through that.
STEPHANIE: Wow, what a great way to combine what we were talking about, Joël [laughs]. Did you apply combinatorics to this podcast episode [laughs]?
JOËL: Somehow, topics multiply with each other, something, something.
STEPHANIE: Yeah, that makes sense to me [laughs]. Okay. Will you tell me more about what you've been using it for in your queries?
JOËL: So, one thing I'm trying to do is because I've got these different branching paths through a query, I want to see sort of all the different ways because these are defined as ActiveRecord scopes, and I'm chaining them together. And it looks linear because I'm calling scope1 dot scope2 dot scope3. But each of those have branches inside of them. And so, there's all these different ways that data could get used or not.
And one way that I figured out, like, what are the different paths here, was actually drawing out a matrix, just putting together a table. In this case, I had two scopes, each of which had a two-way branch inside, and so I made a two by two matrix. And that gave me all of the combinations of, oh, if you go down one branch in one scope and down another branch in the other scope. And what I went through is then I went in in each square and filled in how many records I would expect to get back from the query from some basic set that I was working on in each of these combinations.
And one thing that was really interesting is that some of those combinations were sort of mutually exclusive, where a scope further down the line was filtering on the same field as an earlier one and would conflict with it; you can't have both of those things be true at the same time. So, I'm looking for something that has a particular manager ID, and then I'm looking for something that has a particular different manager ID. And the way Rails combines these, if you just chain scopes with where, is to and them together. There are no records that have both manager ID 1 and manager ID 2. You can only have one manager ID.
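A concrete version of that situation; the model and column names are made up:

```ruby
# Chaining where clauses ANDs them together, so this relation can never
# return anything -- a row has only one manager_id.
Project.where(manager_id: 1).where(manager_id: 2).to_sql
# => SELECT "projects".* FROM "projects"
#    WHERE "projects"."manager_id" = 1 AND "projects"."manager_id" = 2
```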
And so, as I'm filling out my matrix, there's some sections I can just zero out and be like, wait, this will always return zero record. And then I can start focusing on the parts that are not zeroed out. So, I've got two or three squares. What's special about those? And that helped me really understand what the combination of these multiple query fragments together were actually trying to do as a holistic whole.
STEPHANIE: Wow, yeah, that is really interesting because I hear you when you say it looks linear. And it would be really surprising to me for there to be branching paths. Like, that's not really what I think about when I think about SQL. But that makes a lot of sense that it could get so complicated that it's just impossible to get a certain kind of result. Like, what's going to be the outcome of applying combinatorics to this? Is there a refactoring opportunity, or is it really just to even understand what's going on?
JOËL: So, this was a refactoring that I was trying to do, but I didn't really understand the underlying behavior of the chain of scopes. I just knew that they were doing some complex things that were inefficient from a SQL perspective. And so, I was looking at ways to refactor, but I also wanted to get a sense of what is this actually trying to do other than just chaining a bunch of random bits of code together? So, the matrix really helped for that.
The other way that I used it was to write some tests because this query I was trying to refactor, this chain of scopes, was untested. And I wanted to write tests that were very thorough because I wanted to make sure that my refactor didn't break any edge cases. And I'm, you know, writing a few tests. Okay, well, here's a record that I definitely want to get returned by this query, and maybe here are a couple of records I don't want to get returned.
And the more I was, like, going into this and trying to write test cases, the more I was finding more edge cases that I didn't want to and, oh, but what about this? And what about the combination of these things? And it got to the point where it was just messing with my mind. I was, like, confusing myself and really struggling to write tests that would do anything useful.
STEPHANIE: Wow. Yeah. Honestly, I have already started to become a little bit suspicious of complex scopes, and this further pushes me in that direction [laughs] because yeah, once you start to...like, the benefit of them is that you can chain them, but it really hides a lot of the underlying behavior. So, you can easily just turn yourself around or, like, go, you know, kind of end up [laughs] in a little bit of a bind.
JOËL: Definitely, especially once it grows a little bit harder to hold in your head. And I don't know exactly where that level is for me. But in this particular situation, I identified, I think, five different dimensions that would impact the results of this query. And then each dimension had maybe three or four different values that we might care about. And, eventually, I just took the time to write this out.
So, I created five arrays and then just said, "Hey, here are the different managers that we care about. Here are the different project types we care about. Here are the different..." and we had, like, five of these, and each array had three or four elements in it. And then, in a series of nested loops, I iterated through all of these arrays and at the innermost loop, created the data that I wanted that matched that particular set of values.
Now, we're often told you should not be doing things in nested loops because you end up sort of multiplying all of these together, but, in this case, this is actually what I wanted to do. You know, it turns out that I had a hundred-ish records I had to create to sort of create a data set that would be all the possible edge cases I might want to filter on. And creating them all by hand with all of the different variations was going to be too much. And so, I ended up doing this with arrays and nested loops. And it got me the data that I needed. And it gave me then the confidence to know that my refactor did indeed work the way I was expecting.
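A reduced sketch of that data-generation approach, with three hypothetical dimensions instead of five and made-up factory names, one record per combination so every path through the scopes has a row that should (or should not) match it:

```ruby
managers = [manager_a, manager_b, nil]
types    = %w[internal client]
statuses = %w[draft active archived]

managers.each do |manager|
  types.each do |type|
    statuses.each do |status|
      create(:project, manager: manager, project_type: type, status: status)
    end
  end
end
```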
STEPHANIE: Wow. That's truly hero's work [chuckles]. I'm, like, very excited because it sounds like that's a huge opportunity for some performance improvements as well.
JOËL: For the underlying code, yes. The test might be a little bit slow because I'm creating a hundred records in the database. And you might say, "Oh, do you really need to do that? Can you maybe collapse some of these cases?" In this particular case, I really wanted to have high confidence that the refactor was not changing anything. And so, I was okay creating a hundred records over a series of nested iterations. That was a price I was willing to pay. The refactored query, it turns out, I was able to write it in a way that was significantly faster.
STEPHANIE: Yeah, that's what I suspected.
JOËL: So, I had to rewrite it in a way that didn't take advantage of all the chained scopes. I had to just sort of write something custom from scratch, which is often the case, right? Performance and reusability sometimes fight against each other, and it's a trade-off. So, I'm not reusing the scopes. I had to write something from scratch, but it's multiple hundreds of times faster.
STEPHANIE: Wow. Yeah. That seems worth it for a slow test [laughs] for the user experience to be a lot better, especially when you just reach that level of complexity. And it's a really awesome strategy that you applied to figure that out. I think it's a very unique one [laughs]. That's for sure.
JOËL: I've had an interest in sort of analytical tools to help me understand domain models, to help understand problems, to help understand code that I'm working with for a while now, and I think an understanding of combinatorics fits into that. And then, particular tools within that, such as drawing things out in a table, in a two by two matrix, or an end-by-end matrix to get something visual, that's a great tool for debugging or understanding a problem.
Thinking of problems as data that exists in multiple dimensions and then asking about the cardinality of that set it's the kind of analysis I did a lot when I was modeling using algebraic data types in Elm. But now I've sort of taken some of the tools and analysis I use from that world into thinking about things like SQL records, things like dealing with data in Ruby. And I'm able to bring those tools and that way of thinking to help me solve some problems that I might struggle to solve otherwise.
For any of our listeners who this, like, kind of piques their interest, combinatorics falls under a broader umbrella of mathematics called discrete math. And within that, there's a lot that I think is really useful, a lot of tools and techniques that we can apply to our day-to-day programming. We have a Bike Shed episode where we talked about is discrete math relevant to day-to-day programmers and what are the ways it's so? We'll link that in the show notes. I also gave a talk at RailsConf last year diving into that titled: The Math Every Programmer Needs. So, if you're looking for something that's accessible to someone who's not done a math degree, those are two great jumping-off points.
STEPHANIE: Yeah. And then, maybe you'll start drawing out arrays and applying combinatorics to figure out your performance problems.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Joël talks about his difficulties optimizing queries in ActiveRecord, especially with complex scopes and unions, resulting in slow queries. He emphasizes the importance of optimizing subqueries in unions to boost performance despite challenges such as query duplication and difficulty reusing scopes. Stephanie discusses upgrading a client's app to Rails 7, highlighting the importance of patience, detailed attention, and the benefits of collaborative work with a fellow developer.
The conversation shifts to Ruby's reduce method (inject), exploring its complexity and various mental models to understand it. Joël and Stephanie discuss when it's preferable to use reduce over other methods like each, map, or loops and the importance of understanding the underlying operation you wish to apply to two elements before scaling up with reduce. The episode also touches on monoids and how they relate to reduce, suggesting that a deep understanding of functional programming concepts can help simplify reduce expressions.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been doing a bunch of fiddling with query optimization this week, and I've sort of run across an interesting...but maybe it's more of an interesting realization because it's interesting in the sort of annoying way. And that is that, using ActiveRecord scopes with certain more complex query pieces, particularly unions, can lead to queries that are really slow, and you have to rewrite them differently in a way that's not reusable in order to make them fast.
In particular, if you have sort of two other scopes that involve joins and then you combine them using a union, you're unioning two sort of joins. Later on, you want to chain some other scope that does some wheres or something like that. That can end up being really expensive, particularly if some of the underlying tables being joined are huge. Because your database, in my case, Postgres, will pull a lot of this data into a giant sort of in-memory table as it's, like, building all these things together to filter them out. And it doesn't have the ability to optimize the way it would on a more traditional relation.
A solution to this is to make sure that the sort of subqueries that are getting unioned are optimized individually. And that can mean moving conditions that are outside the union inside. So, if I'm chaining, I don't know, where active is true on the outer query; on the union itself, I might need to move that inside each of the subqueries. So, now, in the two or three subqueries that I'm unioning, each of them needs to have a 'where active true' chained on it.
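A rough sketch of the shape of that change, using hypothetical scope and model names rather than anything from the actual client code:

  # Slow: build the union first, then filter it from the outside.
  union_sql = "(#{Post.written_by_authors.to_sql} UNION #{Post.written_by_editors.to_sql}) AS posts"
  Post.from(union_sql).where(active: true)

  # Faster: push the same filter into each subquery before unioning.
  active_authors = Post.written_by_authors.where(active: true)
  active_editors = Post.written_by_editors.where(active: true)
  Post.from("(#{active_authors.to_sql} UNION #{active_editors.to_sql}) AS posts")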
STEPHANIE: Interesting. I have heard this about using ActiveRecord scopes before, that if the scopes are quite complex, chaining them might not lead to the most performant query. That is interesting. By optimizing the subqueries, did you kind of change the meaning of them? Was that something that ended up happening?
JOËL: So, the annoying thing is that I have a scope that has the union in it, and it does some things sort of on its own. And it's used in some places. There are also other places that will try to take that scope that has the union on it, chain some other scopes that do other joins and some more filters, and that is horribly inefficient. So, I need to sort of rewrite the sort of subqueries that get unioned to include all these new conditions that only happen in this one use case and not in the, like, three or four others that rely on that union.
So, now I end up with some, like, awkward query duplication in different call sites that I'm not super comfortable about, but, unfortunately, I've not found a good way to make this sort of nicely reusable. Because when you want to chain sort of more things onto the union, you need to shove them in, and there's no clean way of doing that.
STEPHANIE: Yeah. I think another way I've seen this resolved is just writing it in SQL if it's really complex and it becoming just a bespoke query. We're no longer trying to use the scope that could be reusable.
JOËL: Right. Right. In this case, I guess, I'm, like, halfway in between in that I'm using the ActiveRecord DSL, but I am not reusing scopes and things. So, I sort of have the, I don't know, naive union implementation that can be fine in all of the simpler use cases that are using it. And then the query that tries to combine the union with some other fancy stuff it just gets its own separate implementation different than the others that it has optimized. So, there are sort of two separate paths, two separate implementations. I did not drop down to writing raw SQL because I could use the ActiveRecord DSL. So, that's what I've been working with.
What's new in your world this week?
STEPHANIE: So, a couple of weeks ago, I think, I mentioned that I was working on a Rails 7 upgrade, and we have gotten it out the door. So, now the client application I'm working on is on Rails 7, which is exciting for the team. But in an effort to make the upgrade as incremental as possible, we did, like, back out of a few of the new application config changes that would have led us down a path of more work. And now we're kind of following up a little bit to try to turn some of those configs on to enable them.
And it was very exciting to kind of, like, officially be on Rails 7. But I do feel like we tried to go for, like, the minimal amount of work possible in that initial big change. And now we're having to kind of backfill a little bit on some of the work that was a little bit more like, oh, I'm not really sure, like, how big this will end up being.
And it's been really interesting work, I think, because it requires, like, two different mindsets. Like, one of them is being really patient and focused on tedious work. Like, okay, what happens when we enable this config option? Like, what changes? What errors do we see? And then having to turn it back off and then go in and fix them.
But then another, I think, like, headspace that we have to be in is making decisions about what to do when we come to a crossroads around, like, okay, now that we are starting to see all the changes that are coming about from enabling this config, is this even what we want to do? And it can be really hard to switch between those two modes of thinking.
JOËL: Yeah. How do you try to balance between the two?
STEPHANIE: So, I luckily have been pairing with another dev, and I've actually found that to be really effective because he has, I guess, just, like, a little bit more of that patience to do the more tedious, mundane [laughs] aspects of, like, driving the code changes. And I have been riding along.
But then I can sense, like, once he gets to the point of like, "Oh, I'm not sure if we should keep going down this road," I can step in a little bit more and be like, "Okay, like, you know, I've seen us do this, like, five times now, and maybe we don't want to do that." Or maybe being like, "Okay, we don't have a really clear answer, but, like, who can we talk to to find out a little bit more or get their input?"
And that's been working really well for me because I've not had a lot of energy to do more of that, like, more manual or tedious labor [chuckles] that comes with working on that low level of stuff. So yeah, I've just been pleasantly surprised by how well we are aligning our superpowers.
JOËL: To use some classic business speech, how does it feel to be in the future on Rails 7?
STEPHANIE: Well, we're not quite up, you know, up to modern days yet, but it does feel like we're getting close. And, like, I think now we're starting to entertain the idea of, like, hmm, like, could we be even on main? I don't think it's really going to happen, but it feels a little bit more possible. And, in general, like, the team thinks that that could be, like, really exciting. Or it's easier, I think, once you're a little bit more on top of it. Like, the worst is when you get quite behind, and you end up just feeling like you're constantly playing catch up. It just feels a little bit more manageable now, which is good.
JOËL: I learned this week a fun fact about Rails 7.1, in particular, which is that the explain method on ActiveRecord queries, which allows you to sort of get SQL EXPLAIN statements, now has the ability to take a couple of extra parameters. So, there are symbols you can pass in, things like analyze or verbose, which allows you to get sort of more data out of your EXPLAIN query, which can be quite nice when you're debugging for performance.
So, if you're in the future and you're on Rails 7.1 and you want sort of the in-depth query plans, you don't need to copy the SQL into a Postgres console to get access to the sort of fully developed EXPLAIN plan. You can now do it by passing arguments to EXPLAIN, which I'm very happy for.
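On Rails 7.1, that looks something like this (the model and association here are placeholders, not from the episode):

  # Passes the options through to EXPLAIN (ANALYZE, VERBOSE) on the generated SQL
  User.where(active: true).joins(:orders).explain(:analyze, :verbose)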
STEPHANIE: That's really nice.
JOËL: So, we've mentioned before that we have a developers' channel on Slack here at thoughtbot, and there's all sorts of fun conversations that happen there. And there was one recently that really got me interested, where people were talking about Ruby's reduce method, also known as inject. And it's one of those methods that's kind of complicated, or it can be really confusing.
And there was a whole thread where people were talking about different mental models that they had around the reduce method and how they sort of understand the way it works. And I'd be curious to sort of dig into each other's mental models of that today. To kick us off, like, how comfortable do you feel with Ruby's reduce method? And do you have any mental models to kind of hold it in your head?
STEPHANIE: Yeah, I think reduce is so hard to wrap your head around, or it might be one of the most difficult, I guess, like, functions a new developer encounters, you know, in trying to understand the tools available to them. I always have to look up the order of the arguments [laughs] for reduce.
JOËL: Every time.
STEPHANIE: Yep. But I feel like I finally have a more intuitive sense of when to use it. And my mental model for it is collapsing a collection into one value, and, actually, that's why I prefer calling it reduce rather than the inject alias because reduce kind of signals to me this idea of going from many things to one canonical thing, I suppose.
JOËL: Yeah, that's a very common use case for reducing, and I guess the name itself, reducing, kind of has almost that connotation. You're taking many things, and you're going to reduce that down to a single thing.
STEPHANIE: What was really interesting to me about that conversation was that some people kind of had the opposite mental model where it made a bit more sense for them to think about injecting and, specifically, like, the idea of the accumulator being injected with values, I suppose. And I kind of realized that, in some ways, they're kind of antonyms [chuckles] a little bit because if you're focused on the accumulator, you're kind of thinking about something getting bigger. And that kind of blew my mind a little bit when I realized that, in some ways, they can be considered opposites.
JOËL: That's really fascinating. It is really interesting, I think, the way that we can take the name of a method and then almost, like, tell ourselves a story about what it does that then becomes our way of remembering how this method works. And the story we tell for the same method name, or in this case, maybe there's a few different method names that are aliases, can be different from person to person.
I know I tend to think of inject less in terms of injecting things into the accumulator and more in terms of injecting some kind of operator between every item in the collection. So, if we have an array of numbers and we're injecting plus, in my mind, I'm like, oh yeah, in between each of the numbers in the collection, just inject a little plus sign, and then do the math. We're summing all the items in the collection.
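That mental model lines up directly with the symbol form of the method:

  [1, 2, 3, 4].inject(:+) # read as 1 + 2 + 3 + 4 => 10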
STEPHANIE: Does that still hold up when the operator becomes a little more complex than just, you know, like, a mathematical operator, like, say, a function?
JOËL: Well, when you start passing a block and doing custom logic, no, that mental model kind of falls apart. In order for it to work, it also has to be something that you can visualize as some form of infix operator, something that goes between two values rather than, like, a method name, which is typically in prefix position.
I do want to get at this idea, though: the difference between sort of the block version versus passing a symbol. There are ways where you can just do a symbol, and that will call a method on each of the items. Because I have a bit of a hot take when it comes to writing reduce blocks or inject blocks that are more accessible, easier to understand.
And that is, generally, that you shouldn't, or more specifically, you should not have a big block body. In general, you should be either using the symbol version or just calling a method within the block, and it's a one-liner. Which means that if you have some complex behavior, you need to find a way to move that out of this sort of collection operation and into instance methods on the objects being iterated.
STEPHANIE: Hmm, interesting. By one-liner do you mean passing the name of the method as a proc or actually, like, having your block that then calls the method? Because I can see it becoming even simpler if you have already extracted a method.
JOËL: Yeah, if you can do symbol to proc, that's amazing, or even if you can use just the straight-up symbol way of invoking reduce or inject. That typically means you have to start thinking about the types of objects that you are working with and what methods can be moved onto them. And sometimes, if you're working with hashes or something like that that don't have domain methods for what you want, that gets really awkward. And so, then maybe that becomes maybe a hint that you've got some primitive obsession happening and that this hash that sort of wants a domain object or some kind of domain method probably should be extracted to its own object.
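As a sketch of that hot take, assuming a hypothetical LineItem object rather than anything from a real codebase:

  # A big block body buries the domain logic inside the reduce...
  total = line_items.reduce(0) do |sum, item|
    price = item.unit_price * item.quantity
    sum + price
  end

  # ...versus moving that logic onto the object (LineItem#subtotal) and keeping the reduce tiny.
  total = line_items.map(&:subtotal).reduce(0, :+)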
STEPHANIE: I'll do you with another kind of spicy take. I think, in that case, maybe you don't want a reduce at all. If you're starting to find that...well, okay, I think it maybe could depend because there could be some very, like, domain-specific logic. But I have seen reduce end up being used to transform the structure of the initial collection when either a different higher-order function can be used or, I don't know, maybe you're just better off writing it with a regular loop [laughs]. It could be clearer that way.
JOËL: Well, that's really interesting because...so, you mentioned the idea that we could use a different higher-order function, and, you know, higher-order function is that fancy term, just a method that accepts another method as an argument. In Ruby, that just means your method accepts a block. Reduce can be used to implement pretty much the entirety of enumerable. Under the hood, enumerable is built in terms of each. You could implement it in terms of reduce.
So, sometimes it's easy to re-implement one of the enumerable methods yourself, accidentally, using reduce. So, you've written this, like, complex reduce block, and then somebody in review comes and looks at it and is like, "Hey, you realize that's just map. You've just recreated map. What if we used map here?"
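The kind of thing that reviewer might flag (the names here are just illustrative):

  # An accidental reimplementation of map...
  names = users.reduce([]) { |acc, user| acc << user.name }
  # ...which is just
  names = users.map(&:name)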
STEPHANIE: Yeah. Another one I've seen a lot in JavaScript land, where there are, you know, fewer utility functions, is what we now have in Ruby: tally. I feel like that was a common one I would see a lot when you're trying to count instances of something, and I've seen it done with reduce. I've seen it done with a forEach. And, you know, I'm sure there are libraries that actually provide a tally-like function for you in JS. But I guess that actually makes me feel even more strongly about this idea that reduce is best used for collapsing something as opposed to just, like, transforming a data structure into something else.
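For illustration, the hand-rolled version next to Ruby's built-in tally:

  words = %w[ruby js ruby elm ruby]

  words.reduce(Hash.new(0)) { |counts, word| counts[word] += 1; counts }
  # versus
  words.tally # => { "ruby" => 3, "js" => 1, "elm" => 1 }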
JOËL: There's an interesting other mental model for reduce that I think is hiding under what we're talking about here, and that is the idea that it is a sort of mid-level abstraction for dealing with collections, as opposed to something like map or select or some of those other enumerable helpers because those can all be implemented in terms of reduce. And so, in many cases, you don't need to write the reduce because the library maintainer has already used reduce or something equivalent to build these higher-level helpers for you.
STEPHANIE: Yeah, it's kind of in that weird point between, like, very powerful [chuckles] so that people can start to do some funky things with it, but also sometimes just necessary because it can feel a little bit more concise that way.
JOËL: I've done a fair amount of functional programming in languages like Elm. And there, if you're building a custom data structure, the sort of lowest-level way you have of looping is doing a recursion, and recursions are messy. And so, what you can do instead as a library developer is say, "You know what, I don't want to be writing recursions for all of these." I don't know; maybe I'm building a tree library. I don't want to write a recursion for every different function that goes over trees if I want to map or filter or whatever. I'm going to write reduce using recursion, and then everything else can be written in terms of reduce.
And then, if people want to do custom things, they don't need to recurse over my tree. They can use this reduce function, which allows them to do most of the traversals they want on the tree without needing to touch manual recursion. So, there's almost, like, a low-level, mid-level, high-level in the library design, where, like, lowest level is recursion. Ideally, nobody touches that. Mid-level, you've got reducing that's built out on top of recursion. And then, on top of that, you've got all sorts of other helpers, like mapping, like filtering, things like that.
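A toy Ruby version of that layering, with a tree structure invented just for the example:

  Tree = Struct.new(:value, :children) do
    # Low level: the only method that touches recursion directly.
    def reduce(acc, &block)
      children.reduce(yield(acc, value)) { |a, child| child.reduce(a, &block) }
    end

    # Higher-level helpers built on top of reduce; no recursion needed here.
    def all_values
      reduce([]) { |list, v| list + [v] }
    end

    def sum
      reduce(0) { |total, v| total + v }
    end
  end

  tree = Tree.new(1, [Tree.new(2, []), Tree.new(3, [Tree.new(4, [])])])
  tree.sum        # => 10
  tree.all_values # => [1, 2, 3, 4]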
STEPHANIE: Hmm. I'm wondering, do you know of any performance considerations when it comes to using reduce built off a recursion?
JOËL: So, one of the things that can be really nice is that writing a recursion yourself is dangerous. It's so easy to, like, accidentally introduce a stack overflow. You could also write a really inefficient one. So, ideally, what you do is that you write a reduce that is safe and that is fast. And then, everybody else can just use that to not have to worry about the sort of mechanics of traversing the collection. And then, just use this. It already has all of the safety and speed features built in. You do have to be careful, though, because reduce, by nature, traverses the entire collection. And if you want to break out early of something expensive, then reduce might not be the tool for you.
STEPHANIE: I was also reading a little bit about how, in JavaScript, a lot of developers like to stick to that idea of a pure function and try to basically copy the entire accumulator for every iteration and creating a new object for that. And that has led to some memory issues as well. As opposed to just mutating the accumulator, having, especially when you, you know, are going through a collection, like, really large, making that copy every single time and creating, yeah [chuckles], just a lot of issues that way. So, that's kind of what prompted that question.
JOËL: Yeah, that can vary a lot by language and by data structure. In more functional languages that try to not mutate, they often have this idea of what they call persistent data structures, where you can sort of create copies that have small modifications that don't force you to copy the whole object under the hood. They're just, like, pointers. So, like, hey, we, like, are the same as this other object, but with this extra element added, or something like that. So, if you're growing an array or something like that, you don't end up with 10,000 copies of the array with, like, a new element every time.
STEPHANIE: Yeah, that is interesting. And I feel like trying to adopt different paradigms for different tools, you know, is not always as straightforward as some wish it were [laughs].
JOËL: I do want to give a shout-out to an academic paper that is...it is infamously dense. The title of it is Functional Programming with Bananas, Lenses, and Barbed Wire.
STEPHANIE: It doesn't sound dense; it sounds fun. Well, I don't know about barbed wire.
JOËL: It sounds fun, right?
STEPHANIE: Yeah, but certainly quirky [laughs].
JOËL: It is incredibly dense. And they've, like, created this custom math notation and all this stuff. But the idea that they pioneered there is really cool, this idea that kind of like I was talking about sort of building libraries in different levels. Their idea is that recursion is generally something that's unsafe and that library and language designers should take care of all of the recursion and instead provide some of these sort of mid-level helper methods to do things. Reducing is one of them, but their proposal is that it's not the only one. There's a whole sort of family of similar methods that are there that would be useful in different use cases.
So, reduce allows you to sort of traverse the whole thing. It does not allow you to break out early. It does not allow you to keep sort of track of a sort of extra context element if you want to, like, be traversing a collection but have a sort of look forward, look back, something like that. So, there are other variations that could handle those. There are variations that are the opposite of reduce, where you're, like, inflating, starting from a few parameters and building a collection out of them.
So, this whole concept is called recursion schemes, and you can get, like, really deep into some theory there. You'll hear fancy words like catamorphisms and anamorphisms. There's a whole world to explore in that area. But at its core, it's this idea that you can sort of slice up things into this sort of low-level recursion, mid-level helpers, and then, like, kind of userland helpers built on top of that.
STEPHANIE: Wow. That is very intense; it sounds like [chuckles]. I'm happy not to ever have to write a recursion ever again, probably [laughs]. Have you ever, as just a web developer in your day-to-day programming, found a really good use case for dropping down to that level? Or are you kind of convinced that, like, you won't really ever need to?
JOËL: I think it depends on the paradigm of the language you're working in. In Ruby, I've very rarely needed to write a recursion. In something like Elm, I've had to do that, eh, not infrequently. Again, it depends, like, if I'm doing more library-esque code versus more application code. If I'm writing application code and I'm using an existing, let's say, tree library, then I typically don't need to write a recursion because they've already written traversals for me. If I'm making my own and I have made my own tree libraries, then yes, I'm writing recursions myself and building those traversals so that other people don't have to.
STEPHANIE: Yeah, that makes sense. I'd much rather someone who has read that paper [laughs] write some traversal methods for me.
JOËL: And, you know, for those who are curious about it, we will put a link to this paper in the description.
So, we've talked about a sort of very academic mental model way of thinking about reducing. I want to shift gears and talk about one that I have found is incredibly practical, and that is the idea that reduce is a way to scale an operation that works on two objects to an operation that works on sort of an unlimited number of objects.
To make it more concrete, take something like addition. I can add two numbers. The plus operator allows me to take one number, add another, get a sum. But what if I want to not just add two numbers? I want to add an arbitrary number of numbers together. Reduce allows me to take that plus operator and then just scale it up to as many numbers as I want. I can just plug that into, you know, I have an array of numbers, and I just call dot reduce plus operator, and, boom, it can now scale to as many numbers as I want, and I can sum the whole thing.
STEPHANIE: That dovetails quite nicely with your take earlier about how you shouldn't pass a block to reduce. You should extract that into a method. Don't you think?
JOËL: I think it does, yes. And then maybe it's, like, sort of two sides of a coin because I think what this leads to is an approach that I really like for reducing because sometimes, you know, here, I'm starting with addition. I'm like, oh, I have addition. Now, I want to scale it up. How do I do that? I can use reduce. Oftentimes, I'm faced with sort of the opposite problem. I'm like, oh, I need to add all these numbers together. How do I do that? I'm like, probably with a reduce. But then I start writing the block, and, like, I get way too into my head about the accumulator and what's going to happen.
So, my strategy for writing reduce expressions is to, instead of trying to figure out how to, like, do the whole thing together, first ask myself, how do I want to combine any two elements that are in the array? So, I've got an array of numbers, and I want to sum them all. What is the thing I need to do to combine just two of those? Forget the array. Figure that out.
And then, once I have that figured out, maybe it's an existing method like plus. Maybe it's a method I need to define on it if it's a custom object. Maybe it's a method that I write somewhere. Then, once I have that, I can say, okay, I can do it for two items. Now, I'm going to scale it up to work for the whole array, and I can plug it into reduce. And, at that point, the work is already basically done, so I don't end up with a really complex block. I don't end up, like, almost ending in, like, a recursive infinite loop in my head because I do that.
STEPHANIE: [laughs].
JOËL: So, that approach of saying, start by figuring out what is the operation you want to do to combine two elements, and then use reduce as a way to scale that to your whole array is a way that I've used to keep things simple in my mind.
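A quick sketch of that strategy, assuming a hypothetical Money value object that already knows how to add two of itself:

  # Step one: figure out how to combine just two items.
  subtotal = prices[0] + prices[1] # Money#+

  # Step two: let reduce scale that same operation up to the whole array.
  total = prices.reduce(:+)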
STEPHANIE: Yeah, I like that a lot as a supplement to the model I shared earlier because, for me, when I think about reducing as, like, collapsing into a value, you kind of are just like, well, okay, I start with the collection, and then somehow I get to my single value. But the challenge is figuring out how that happens [laughs], like, the magic that happens in between that.
And I think another alias that we haven't mentioned yet for reduce that is used in a lot of other languages is fold. And I actually like that one a lot, and I think it relates to your mental model. Because when I think about folding, I'm picturing folding up a paper like an accordion. And you have to figure out, like, what is the first fold that I can make? And just repeating that over and over to get to your little stack of accordion paper [laughs]. And if you can figure out just that first step, then you pretty much, like, have the recipe for getting from your initial input to, like, your desired output.
JOËL: Yeah. I think fold is interesting in that some languages will make a distinction between fold and reduce. They will have both. And typically, fold will require you to pass an initial value, like a starting accumulator, to start it off. Whereas reduce will sort of assume that your array can use the first element of the array as the first accumulator.
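Ruby's reduce supports both styles:

  [1, 2, 3].reduce { |sum, n| sum + n }     # first element seeds the accumulator => 6
  [1, 2, 3].reduce(10) { |sum, n| sum + n } # explicit starting value, fold-style  => 16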
STEPHANIE: Oh, I just came up with another visual metaphor for this, which is, like, folding butter into croissant pastry when the butter is your initial value [laughs].
JOËL: And then the crust is, I guess, the elements in the array.
STEPHANIE: Yeah. Yeah. And then you get a croissant out of it [laughs]. Don't ask me how it gets to a perfectly baked, flaky, beautiful croissant, but somehow that happens [laughs].
JOËL: So, there's an interesting sort of subtlety here that I think happens because there are sort of two slightly different ways that you can interact with a reduce. Sometimes, your accumulator is of the same type as the elements in your array. So, you're summing an array of numbers, and your accumulator is the sum, but each of the elements in the array are also numbers. So, it's numbers all the way through. And sometimes, your accumulator has a different type than the items in the array. So, maybe you have an array of words, and you want to get the sum of all of the characters and all the words. And so, now your accumulator is a number, but each of the items in the array are strings.
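For example:

  # Accumulator and items share a type: numbers in, number out.
  [1, 2, 3].reduce(:+) # => 6

  # Accumulator is a different type: strings in, a character count out.
  %w[foo bar bazinga].reduce(0) { |count, word| count + word.length } # => 13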
STEPHANIE: Yeah, that's an interesting distinction because I think that's where you start to see the complex blocks being passed and reduced.
JOËL: The complex blocks, definitely; I think they tend to show up when your accumulator has a different type than the individual items. So, maybe that's, like, a slightly more complicated use case. Oftentimes, too, the accumulator ends up being some, like, more complex, like, hash or something that maybe would really benefit from being a custom object.
STEPHANIE: I've never done that before, but I can see why that would be really useful. Do you have an example of when you used a custom object as the accumulator?
JOËL: So, I've done it for situations where I'm working with objects that are doing tally-like operations, but I'm not doing just a generic tally. There's some domain-specific stuff happening. So, it's some sort of aggregate counter on multiple dimensions that you can use, and that can get really ugly. And you can either do it with a reduce or you can have some sort of, like, initial version of the hash outside and do an each and mutate the hash and stuff like that. All of these tend to be a little bit ugly. So, in those situations, I've often created some sort of custom object that has some instance methods that allow you to sort of easily add new elements to it.
STEPHANIE: That's really interesting because now I'm starting to think, what if the elements in the collection were also a custom object? [chuckles] And then things could, I feel like, could be really powerful [laughs].
JOËL: There's often a lot of value, right? Because if the items in the collection are also a custom object, you can then have methods on them. And then, again, the sort of complexity of the reduce can sort of, like, fade away because it doesn't own any of the logic. All it does is saying, hey, there's a thing you can do to combine two items. Let's scale it up to work on a collection of items. And now you've sort of, like, really simplified what logic is actually owned inside the reduce.
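A rough sketch of that shape (every name here is invented for illustration, not taken from Joël's project):

  # The accumulator is a small domain object instead of a raw hash.
  class PageViewStats
    def initialize(counts = Hash.new(0))
      @counts = counts
    end

    def add(visit)
      PageViewStats.new(@counts.merge(visit.country => @counts[visit.country] + 1))
    end

    def to_h
      @counts
    end
  end

  stats = visits.reduce(PageViewStats.new) { |acc, visit| acc.add(visit) }
  stats.to_h # => { "CA" => 12, "US" => 30, ... }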
I do want to shout out for those listeners who are theory nerds and want to dig into this. When you have a reduce, and you've got an operation where all the values are of the same type, including the accumulator, typically, what you've got here is some form of monoid. It may be a semigroup. So, if you want to dig into some theory, those are the words to Google and to go a deep dive on.
The main thing about monoids, in particular, is that monoids are any objects that have both a sort of a base case, a sort of empty version of themselves, and they have some sort of combining method that allows you to combine two values of that type. If your object has these things and follows a few rules that have to be true, you have a monoid. And they can then be sort of guaranteed to be folded nicely because you can plug in their base case as your initial accumulator. And you can plug in their combining method as just the value of the block, and everything else just falls into place.
A classic here is addition for numbers. So, if you want to add two numbers, your combining operator is a plus. And your sort of empty value is a zero. So, you would say, reduce initial value is zero, array of numbers. And your block is just plus, and it will sum all of the numbers. You could do something similar with strings, where you can combine strings together with plus, and, you know, your empty string is your base case. So, now you're doing sort of string concatenation over an arbitrary number of strings.
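Both of those examples fit on one line each:

  [1, 2, 3, 4].reduce(0, :+)     # identity 0, combine with +  => 10
  ["a", "b", "c"].reduce("", :+) # identity "", combine with + => "abc"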
Turns out there's a lot of operations that fall into that, and you can even define some of those on your custom object. So, you're like, oh, I've got a custom object. Maybe I want some way of, like, combining two of them together. You might be heading in the direction of doing something that is monoidal, and if so, that's a really good hint to know that it can sort of, like, just drop into place with a fold or a reduce and that that is a tool that you have available to you.
STEPHANIE: Yeah, well, I think my eyes, like, widened a little bit when you first dropped the term monoid [laughs]. I do want to spend the last bit of our time talking about when not to use reduce, and, you know, we did talk a lot about recursion. But when do you think a regular old loop will just be enough?
JOËL: So, you're suggesting when would you want to use something like an each rather than a reduce?
STEPHANIE: Yeah. In my mind, you know, you did offer, like, a lot of ways to make reduce simpler, a lot of strategies to end up with some really nice-looking syntax [chuckles], I think. But, oftentimes, I think it can be equally as clear storing your accumulator outside of the iteration and that, like, is enough for me to understand. And reduce takes a little bit of extra overhead to figure out what I'm looking at. Do you have any thoughts about when you would prefer to do that? Or do you think that you would usually reach for something else?
JOËL: Personally, I generally don't like the pattern of using each to iterate over a collection and then mutate some external accumulator. That, to me, is a bit of a code smell. It's a sign that each is not quite powerful enough to do the thing that I want to do and that I'm probably needing some sort of more specialized form of iteration. Sometimes, that's reduce. Oftentimes, it's not, because each can suffer from the same problem you mentioned with reduce, where it's like, oh, you're doing this thing where you mutate an external accumulator. Turns out what you're really doing is just map. So, use map or use select or, you know, some of the other built-in iterators from the enumerable library.
There's a blog post on the thoughtbot blog that I continually link to people. And when I see the pattern of, like, mutating an external variable with each, yeah, I tend to see that as a bit of a code smell. I don't know that I would never do it, but whenever I see that, it's a sign to me to, like, pause and be like, wait a minute, is there a better way to do this?
STEPHANIE: Yeah, that's fair. I like the idea that, like, if there's already a method available to you that is more specific to go with that. But I also think that sometimes I'd rather, like, come across that pattern of mutating a variable outside of the iteration over, like, someone trying to do something clever with the reduce.
JOËL: Yeah, I guess reduce, especially if it's got, like, a giant block and you've got then, like, things in there that break or call next to skip iterations and things like that, that gets really mind-bending really quickly. I think a case where I might consider using an each over a reduce, and that's maybe generally when I tend to use each, is when I'm doing side effects. If I'm using a reduce, it's because I care about the accumulated value at the end. If I'm using each, it's typically because I am trying to do some amount of side effects.
STEPHANIE: Yeah, that's a really good call out. I had that written down in my notes, and I'm glad you brought it up because I've seen them get conflated a little bit, and perhaps maybe that's the source of the pain that I'm talking about. But I really like that heuristic of reduce as, you know, you're caring about the output, as opposed to what's going on inside. Like, you don't want any unexpected behavior.
JOËL: And I think that applies to something like map as well. My sort of heuristic is, if I'm doing side effects, I want each. If I want transformed values that are sort of one-to-one in the collection, I want map. If I want a single sort of aggregate value, then I want reduce.
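Spelled out in code, that heuristic looks something like this (the mailer and models are placeholders):

  users.each { |user| WelcomeMailer.with(user: user).welcome.deliver_later } # side effects
  names = users.map(&:name)                                                  # one-to-one transformation
  total = orders.map(&:amount).reduce(0, :+)                                 # single aggregate value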
STEPHANIE: I think that's the cool thing about mixing paradigms sometimes, where all the strategies you talked about in terms of, you know, using custom, like, objects for your accumulator, or the elements in your collection, like, that's something that we get because, you know, we're using an object-oriented language like Ruby. But then, like, you also are kind of bringing the functional programming lens to, like, when you would use reduce in the first place. And yeah, I am just really excited now [chuckles] to start looking for some places I can use reduce after this conversation and see what comes out of it.
JOËL: I think I went on a bit of an interesting journey where, as a newer programmer, reduce was just, like, really intense. And I struggled to understand it. And I was like, ban it from code. I don't want to ever see it. And then, I got into functional programming. I was like, I'm going to do reduce everywhere. And, honestly, it was kind of messy.
And then I, like, went really deep on a lot of functional theory, and I think understood some things that then I was able to take back to my code and actually write reduce expressions that are much simpler so that now my heuristic is like, I love reduce; I want to use it, but I want as little as possible in the reduce itself. And because I understand some of these other concepts, I have the ability to know what things can be extracted in a way that will feel very natural, in a way that myself from five years ago would have just been like, oh, I don't know. I've got this, you know, 30-line reduce expression that I know is complicated, but I don't know how to improve.
And so, a little bit of the underlying theory, I don't think it's necessary to understand these simplified reduces, but as an author who's writing them, I think it helps me write reduces that are simpler. So, that's been my journey using reduce.
STEPHANIE: Yeah. Well, thanks for sharing. And I'm really excited. I hope our listeners have learned some new things about reduce and can look at it from a different light.
JOËL: There are so many different perspectives. And I think we keep discovering new mental models as we talk to different people. It's like, oh, this particular perspective. And there's one that we didn't really dig into but that I think makes more sense in a functional world that's around sort of deconstructing a structure and then rebuilding it with different components. The shorthand name of this mental model, which is a fairly common one, is constructor replacement. For anyone who's interested in digging into that, we'll link it in the show notes.
I gave a talk at an Elm meetup where I sort of dug into some of that theory, which is really interesting and kind of mind-blowing. Not as relevant, I think, for Rubyists, but if you're in a language that particularly allows you to build custom structures out of recursive types or what are sometimes called algebraic data types, or tagged unions, or discriminated unions, this thing goes by a bajillion names, that is a really interesting other mental model to look at.
And, again, I don't think the list that we've covered today is exhaustive. You know, I would love it for any of our listeners; if you have your own mental models for how to think about folding, injecting, reducing, send them in: [email protected]. We'd love to hear them.
STEPHANIE: And on that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie shares about her vacation at Disney World, particularly emphasizing the technological advancements in the park's mobile app that made her visit remarkably frictionless. Joël had a conversation about a topic he loves: units of measure, and he got to go deep into the idea of dimensional analysis with someone this week.
Together, Joël and Stephanie talk about module documentation within software development. Joël shares his recent experience writing module docs for a Ruby project using the YARD documentation system. He highlights the time-consuming nature of crafting good documentation for each public method in a class, emphasizing that while it's a demanding task, it significantly benefits those who will use the code in the future. They explore the attributes of good documentation, including providing code examples, explaining expected usage, suggesting alternatives, discussing edge cases, linking to external resources, and detailing inputs, outputs, and potential side effects.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn, and together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I recently was on vacation, and I'm excited [chuckles] to tell our listeners all about it. I went to Disney World [laughs]. And honestly, I was especially struck by the tech that they used there. As a person who works in tech, I always kind of have a little bit of a different experience knowing a bit more about software, I suppose, than just your regular person [laughs], citizen. And so, at Disney World, I was really impressed by how seamlessly the like, quote, unquote, "real life experience" integrated with their use of their branded app to pair with, like, your time at the theme park.
JOËL: This is, like, an app that runs on your mobile device?
STEPHANIE: Yeah, it's a mobile app. I haven't been to Disney in a really long time. I think the last time I went was just as a kid, like, this was, you know, pre-mobile phones. So, I recall when you get into the line at a ride, you can skip the line by getting what's called a fast pass. And so, you kind of take a ticket, and it tells you a designated time to come back so that you could get into the fast line, and you don't have to wait as long.
And now all this stuff is on your mobile app, and I basically did not wait in [laughs] a single line for more than, like, five minutes to go on any of the rides I wanted. It just made a lot of sense that all these things that previously had more, like, physical touchstones, were made a bit more convenient. And I hesitate to use the word frictionless, but I would say that accurately describes the experience.
JOËL: That's kind of amazing; the idea that you can use tech to make a place that's incredibly busy also feel seamless and where you don't have to wait in line.
STEPHANIE: Yeah and, actually, I think the coolest part was it blended both your, like, physical experience really well with your digital one. I think that's kind of a gripe I have as a technologist [laughs] when I'm just kind of too immersed in my screen as opposed to the world around me. But I was really impressed by the way that they managed to make it, like, a really good supplement to your experience being there.
JOËL: So, you're not hyped for a future world where you can visit Disney in VR?
STEPHANIE: I mean, I just don't think it's the same. I rode a ride [laughs] where it was kind of like a mini roller coaster. It was called Expedition Everest. And there's a moment, this is, like, mostly indoors, but there's a moment where the roller coaster is going down outside, and you're getting that freefall, like, drop feeling in your stomach. And it also happened to be, like, drizzling that day that we were out there, and I could feel it, you know, like, pelting my head [laughs]. And until VR can replicate that experience [chuckles], I still think that going to Disney is pretty fun.
JOËL: Amazing.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I'm really excited because I had a conversation about a topic that I like to talk about: units of measure. And I got to go deep into the idea of dimensional analysis with someone this week. This is a technique where you can look at a calculation or a function and sort of spot-check whether it's correct by looking at whether the unit for the measure that would come out match what you would expect. So, you do math on the units and ignore the numbers coming into your formula. And, you know, let's say you're calculating the speed of something, and you get a distance and the amount of time it took you to take to go that distance.
And let's say your method implements this as distance times time. Forget about doing the actual math with the numbers here; just look at the units and say, okay, we've got our meters, and we've got our seconds, and we're multiplying them together. The unit that comes out of this method is meters times seconds. You happen to know that speeds are not measured in meters times seconds. They're measured in meters divided by seconds or meters per second. So, immediately, you get a sense of, like, wait a minute, something's wrong here. I must have a bug in my function.
STEPHANIE: Interesting. I'm curious how you're representing that data to, like, know if there's a bug or not. In my head, when you were talking about that, I'm like, oh yeah, I definitely recall doing, like, math problems for homework [laughs] where I had, you know, my meters per second. You have your little fractions written out, and then when you multiply or divide, you know how to, like, deal with the units on your piece of paper where you're showing your work. But I'm having a hard time imagining what that looks like as a programmer dealing with that problem.
JOËL: You could do it just all in your head based off of maybe some comments that you might have or the name of the variable or something. So, you're like, okay, well, I have a distance in meters and a time in seconds, and I'm multiplying the two. Therefore, what should be coming out is a value that is in meters times seconds. If you want to get fancier, you can do things with value objects of different types. So, you say, okay, I have a distance, and I have a time. And so, now I have sort of a multiplication of a distance and a time, and sort of what is that coming out as?
That can sometimes help you prevent from having some of these mistakes because you might have some kind of error that gets raised at runtime where it's like, hey, you're trying to multiply two units that shouldn't be multiplied, or whatever it is. You can also, in some languages, do this sort of thing automatically at the type level. So, instead of looking at it yourself and sort of inferring it all on your own based off of the written code, languages like F# have built-in unit-of-measure systems where once you sort of tag numbers as just being of a particular unit of measure, any time you do math with those numbers, it will then tag the result with whatever compound unit comes from that operation.
So, you have meters, and you have seconds. You divide one by the other, and now the result gets tagged as meters per second. And then, if you have another calculation that takes the output of the first one and it comes in, you can tell the compiler via type signature, hey, the input for this method needs to be in meters per second. And if the other calculation sort of automatically builds something that's of a different unit, you'll get a compilation error. So, it's really cool what it can do.
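In Ruby, a value-object version of that idea could be sketched like this; the Quantity class is invented for the example and only tracks units as strings:

  Quantity = Struct.new(:value, :unit) do
    def *(other)
      Quantity.new(value * other.value, "#{unit}*#{other.unit}")
    end

    def /(other)
      Quantity.new(value / other.value, "#{unit}/#{other.unit}")
    end
  end

  distance = Quantity.new(100.0, "m")
  time     = Quantity.new(9.58, "s")

  distance / time # => Quantity(10.43..., "m/s"), a speed, as expected
  distance * time # => Quantity(958.0, "m*s"), not a speed, so the formula is suspect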
STEPHANIE: Yeah, that is really neat. I like all of those built-in guardrails, I suppose, to help you, you know, make sure that your answer is correct. Definitely could have used that [chuckles]. Turns out I just needed a calculator to take my math test with [laughs].
JOËL: I think what I find valuable more than sort of the very rigorous approach is the mindset. So, anytime you're dealing with numbers, thinking in your mind, what is the unit of this number? When I do math with it with a different number, is it the same unit? Is it a different unit? What is the unit of the thing that's coming out? Does this operation make sense in the domain of my application? Because it's easy to sometimes think you're doing a math operation that makes sense, and then when you look at the unit, you're like, wait a minute, this does not make sense.
And I would go so far as to say that, you know, you might think, oh, I'm not doing a physics app. I don't care about units of measure. Most numbers in your app that are actually numbers are going to have some kind of unit of measure associated to them. Occasionally, you might have something where it's just, like, a straight-up, like, quantity or something like that. It's a dimensionless number. But most things will have some sort of unit. Maybe it's a number of dollars. Maybe it is an amount of time, a duration. It could be a distance. It could be all sorts of things. Typically, there is some sort of unit that should attach to it.
STEPHANIE: Yeah. That makes sense that you would want to be careful about making sure that the mathematical operations you're doing when you're working with objects make sense. And we did talk about this in the last episode about multidimensional numbers a little bit. And I suppose I appreciate you saying that because I think I have mostly benefited from other people having thought in that mindset before and encoding, like I mentioned, those guardrails.
So, I can recall an app where I was working with, you know, some kind of currency or money object, and that error was raised when I would try to divide by zero because rather than kind of having to find out later with some, not a number or infinite [laughs] amount of money bug, it just didn't let me do that. And that wasn't something that I had really thought about, you know, I just hadn't considered that zero value edge case when I was working on whatever feature I was building.
JOËL: Yeah, or even just generally the idea of dividing money. What does that even mean? Are you taking an amount of money and splitting it into two equivalent piles to split among multiple people? That kind of makes sense. Are you dividing money by another money value? That's now asking a very different kind of question.
You're asking, like, what is the ratio between these two, I guess, piles of money if we want to make it, you know, in the physical world? Is that a thing that makes sense in your application? But also, realize that that ratio that you get back is not itself an amount of money. And so, there are some subtle bugs that can happen around that when you don't keep track of what your quantities are.
So, this past week, I've been working on a project where I ended up having to write module docs for the code in question. This is a Ruby project, so I'm writing docs using the YARD documentation system, where you effectively just write code comments at the sort of high level covering the entire class and then, also, individual documentation comments on each of the methods. And that's been really interesting because I have done this in other languages, but I'd never done it in Ruby before. And this is a piece of code that was kind of gnarly and had been tricky for me to figure out. And I figured that a couple of these classes could really benefit from some more in-depth documentation.
And I'm curious, in your experience, Stephanie, as someone who's writing code, using code from other people, and who I assume occasionally reads documentation, what are the things that you like to see in good sort of method-level docs?
STEPHANIE: Personally, I'm really only reading method-level docs when, you know, at this point, I'm, like, reaching for a method. I want to figure out how to use it in my use case right now [laughs]. So, I'm going to search API documentation for it. And I really am just scanning for inputs, especially, I think, and maybe looking at, you know, some potential various, like, options or, like, variations of how to use the method. But I'm kind of just searching for that at a glance and then moving on [laughs] with my day. That is kind of my main interaction with module docs like that, and especially ones for Ruby and Rails methods.
JOËL: And for clarity's sake, I think when we're talking about module docs here, I'm generally thinking of, like, any sort of documentation that's sort of comments in code meant to document. It could be the whole module or class. It could be on a per-method level, things like RDoc or YARD docs on Ruby classes. You used the word API docs here. I think that's a pretty similar idea.
STEPHANIE: I really haven't given the idea of writing this kind of documentation a lot of thought because I've never had to do too much of it before, but I know, recently, you have been diving deep into it because, you know, like you said, you found these classes that you were working with a bit ambiguous, I suppose, or just confusing. And I'm wondering what kind of came out of that journey. What are some of the most interesting aspects of doing this exercise?
JOËL: One of the big ones, and it's not a fun one, is that it is time-consuming. Writing good docs per method for a couple of classes takes a lot of time, and I understand why people don't do it all the time.
STEPHANIE: What kinds of things were you finding warranted that time? Like, you know, you had to, at some point, decide, like, whether or not you're going to document any particular method. And what were some of the things you were looking out for as good reasons to do it?
JOËL: I was making the decisions to document or not document on a class level, and then every public method gets documentation. If there's a big public API, that means every single one of those methods is getting some documentation comments, explaining what they do, how they're meant to be used, things like that. I think my kind of conclusion, having worked with this, is that the sort of sweet spot for this sort of documentation is for anything that is library-like, so a lot of things that maybe would go into a Rails lib directory might make sense. Anything you're turning into a gem that probably makes sense.
And sometimes you have things in your Rails codebase that are effectively kind of library-like, and that was the case for the code that I was dealing with. It was almost like a mini ORM style kind of ActiveRecord-inspired series of base classes that had a bunch of metaprogramming to allow you to write models that were backed by not a database but a headless CMS, a content management system. And so, these classes are not extracted to the lib directory or, like, made into a gem, but they feel very library-esque in that way.
STEPHANIE: Library-like; I like that descriptor a lot because it immediately made me think of another example of a time when I've used or at least, like, consumed this type of documentation in a, like, SaaS repo. Rather, you know, I'm not really seeing that level of documentation around domain objects, but I noticed that they really did a lot of extending of the application record class because they just had some performance needs that they needed to write some, like, custom code to handle.
And so, they ended up kind of writing a lot of their own ORM-like methods for just some, like, custom callbacks on persisting and some just, like, bulk insertion functionality. And those came with a lot of different ways to use them. And I really appreciated that they were heavily documented, kind of like you would expect those ActiveRecord methods to be as well.
JOËL: So, I've been having some conversations with other members at thoughtbot about when they like to use the style of module doc. What are some of the alternatives? And one that kept coming up for different people that they would contrast with this is what they would call the big README approach, and this could be for a whole gem, or it could be maybe some directory with a few classes in your application that's got a README in the root of the directory.
And instead of documenting each method, you just write a giant README trying to answer sort of all of the questions that you anticipate people will ask. Is that something that you've seen, and how do you feel about that as a tool when you're looking for help?
STEPHANIE: Yes. I actually really like that style of documentation. I find that I just want examples to get me started. I guess this is especially true for libraries that I'm not super familiar with but need to get a working knowledge of kind of immediately. So, I like to see examples, the getting started, the just, like, here's what you need to know. And as I start to use them, that will get me rolling. But then, if I find I need more details, then I will try to seek out more specific information that might come in the form of class method documentation.
But I'm actually thinking about how FactoryBot has one of the best big README-esque [laughs] style of documentation, and I think they did a really big refresh of the docs not too long ago. It has all that high-level stuff, and then it has more specific information on how to use, you know, the most common methods to construct your factories. But those are very detailed, and yet they do sit, like, separately from inline, like, code documentation in the style of module docs that we're talking about.
So, it is kind of an interesting mix of both that I think is helpful for me personally when I want both the “what do I need to know now?” And the, “like, okay, I know where to look for if I need something a little more detailed.”
JOËL: Yeah. The two don't need to be mutually exclusive. I thought it was interesting that you mentioned how much examples are valuable to you because...I don't know if this is controversial, but an opinion that I have about sort of per-method documentation is that you should always default to having a code example for every method. I don't care how simple it is or how obvious it is what it does. Show me a code example because, as a developer, examples are really, really helpful. And so, seeing that makes documentation a lot more valuable than just a couple of lines that explain something that was maybe already obvious from the title of the method. I want to see it in action.
STEPHANIE: Interesting. Do you want to see it where the method definition is?
JOËL: Yes. Because sometimes the method definition, like, the implementation, might be sort of complex. And so, just seeing a couple of examples, like, oh, you call with this input, you get that. Call with this other input; you get this other thing. And we see this in, you know, some of the core docs for things like the enumerable methods where having an example there to be like, oh, so that's how map works. It returns this thing under these circumstances. That sort of thing is really helpful.
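As a rough illustration of that per-method style, a hypothetical method documented with a YARD @example might look like this (the method and values are made up, not from the codebase discussed):

```ruby
# Returns the document's display title, falling back to a title-cased
# version of the slug when no title has been set.
#
# @example
#   page = Page.new(title: nil, slug: "about-us")
#   page.display_title # => "About Us"
#
# @return [String]
def display_title
  title || slug.tr("-", " ").split.map(&:capitalize).join(" ")
end
```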
And then, I'll try to do it at a sort of a bigger level for that class itself. You have a whole paragraph about here's the purpose of the class. Here's how you should use it. And then, here's an example of how you might use it. Particularly, if this is some sort of, like, base class you're meant to inherit from, here's the circumstances you would want to subclass this, and then here's the methods you would likely want to override.
And maybe here are the DSLs you might want to have and to kind of package that in, like, a little example of, in this case, if you wanted a model that read from the headless CMS, here's what an example of such a little model might look like. So, it's kind of that putting it all together, which I think is nice in the module docs. It could probably also live in the big README at some level.
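A sketch of that kind of class-level comment for a library-esque base class might look like this; the CmsRecord name and its DSL methods are invented for illustration:

```ruby
# Base class for models backed by the headless CMS instead of the database.
#
# Subclasses declare the CMS content type they map to and the fields they
# expose, then read like lightweight ActiveRecord-style models.
#
# @example Defining and using a CMS-backed model
#   class Article < CmsRecord
#     content_type "article"
#     field :title
#     field :body
#   end
#
#   Article.find_by_uid("abc123").title
class CmsRecord
  # ...
end
```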
STEPHANIE: Yeah. As you are saying that, I also thought about how I usually go search for tests to find examples of usage, but I tend to get really overwhelmed when I see, like, that much inline documentation. I have to, like, either actively choose to ignore it or be like, okay, I'm reading this now [laughs]. Because it just takes up so much visual space, honestly.
And I know you put a lot of work into it, a lot of time, but maybe it's because of the color of my editor theme where comments are just that, like, light gray [laughs]. I find them quite easy to just ignore. But I'm sure there will be some time where I'm like, okay, like, if I need them, I know they're there.
JOËL: Yeah, that is, I think, a downside, right? It makes it harder to browse the code sometimes because maybe your entire screen is almost taken up by documentation, and then, you know, you have one method up, and you've got to, like, scroll through another page of documentation before you hit the next method, and that makes it harder to browse. And maybe that's something that plays into the idea of that separation between library-esque code versus application code.
When you browse library-esque code, when you're actually browsing the source, you're probably doing it for different reasons than you would for code in your application because, at that point, you're effectively source diving, sometimes being like, oh, I know this class probably has a method that will do the thing I want. Where is it? Or you're like, there's an edge case I don't understand on this method. I wonder what it does. Let me look at the implementation. Or even some existing code in the app is using this library method. I don't know what it does, but they call this method, and I can't figure out why they're using it. Let me look at the source of the library and see what it does under the hood.
STEPHANIE: Yeah. I like the distinction of it is kind of a different mindset that you're reading the code at, where, like, sometimes my brain is already ready to just read code and try to figure out inputs and outputs that way. And other times, I'm like, oh, like, I actually can't parse this right now [chuckles]. Like, I want to read just English, like, telling me what to expect or, like, what to look out for, especially when, like you said, I'm not really, like, trying to figure out some strange bug that would lead me to diving deep in the source code. It's I'm at the level where I'm just reaching for a method and wanting to use it.
So, you were writing these YARD docs, and I think I also heard you mention that you gave some, like, tips or maybe some gotchas about how to use certain methods. I'm curious why that couldn't have been captured in a more, like, self-documenting way. Was there a way you could have written the code so that it didn't need to be a comment? Could the method names have been clearer, to signal, like, the intention you were trying to convey through your documentation?
JOËL: I'm a big fan of using method names as a form of documentation, but they're frequently not good enough. And I think comments, whether they're just regular inline comments or more official documentation, can be really good to help avoid sort of common pitfalls. And one that I was working with was, there were two methods, and one would find by a UID, so it would search up a document by UID. And another one would search by ID.
And when I was attempting to use these before I even started documenting, I used the wrong one, and it took me a while to realize, oh wait, these things have both UIDs and IDs, and they're slightly different, and sometimes you want to use one or the other. The method names, you know, said like, "Find by ID" or "Find by UID." I didn't realize there were both at the time because I wasn't browsing the source. I was just seeing a place where someone had used it. And then, when I did find it in the source, I'm like, well, what is the difference?
And so, something that I did when I wrote the docs was to sort of call out on both of those methods: by the way, there is also find by UID. If what you're actually searching by is a UID, consider using that one instead. If you don't know what the difference is, here's a sentence summarizing the difference. And then, here's a link to external documentation if you want to dive into the nitty-gritty of why there are two and what the differences are. And I think that's something you can't capture in just a method name.
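As a rough sketch, that kind of cross-reference might be captured with YARD's @note and @see tags; the method names and URL here are stand-ins:

```ruby
# Finds a document by its numeric ID.
#
# @note Documents also carry a separate UID. If the value you're holding is
#   a UID, use {.find_by_uid} instead; see the linked CMS docs for how the
#   two identifiers differ.
# @see .find_by_uid
# @see https://example.com/cms-docs/ids-vs-uids
def self.find_by_id(id)
  # ...
end
```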
STEPHANIE: Yeah, that's true. I like that a lot. Another use case you can think of is when method names are aliased, and it's like, I don't know how I would have possibly known that until I, you know, go through the journey of realizing [laughs] that these two methods do the same thing or, like, stumbling upon where the aliasing happens.
But if that were captured in, like, a little note when I'm in, like, a documentation viewer or something, it's just kind of, like, a little tidbit of knowledge [laughs] that I get to gain along the way that ends up, you know, being useful later because I will have just kind of...I will likely remember having seen something like that. And I can at least start my search with a little bit more context than when you don't know what you don't know.
JOËL: I put a lot of those sorts of notes on different methods. A lot of them are probably based on a personal story where I made a mistaken assumption about this method, and then it burned me. But I'm like, okay, nobody else is going to make that mistake. By the way, if you think this is what the method does, it does something slightly different and, you know, here's why you need to know that.
STEPHANIE: Yeah, you're just looking out for other devs.
JOËL: And, you know, trying to, like, take my maybe negative experience and saying like, "How can I get value out of that?" Maybe it doesn't feel great that I lost an hour to something weird about a method. But now that I have spent that hour, can I get value out of it? Is the sort of perspective I try to have on that.
So, you mentioned kind of offhand earlier the idea of a documentation viewer, which would be separate from just reading these, I guess, code comments directly in your code editor. What sort of documentation viewers do you like to use?
STEPHANIE: I mostly search in my browser, you know, just the official documentation websites for Rails, at least. And then I know that there are also various options for Ruby as well. And I think I had mentioned it before, but I use DuckDuckGo as my search engine, and I have nice bang commands that will just take me straight to the search for those websites, which is really nice. Though I have paired with people before who used various, like, macOS applications to do something similar. I think Alfred might have some built-in workflows for that. And then, a former co-worker used to use one called Dash, which I have seen before, too. So, it's another one of those just handy, like, search productivity tools.
JOËL: You mentioned the Rails documentation, and this is separate from the guides. But the actual Rails docs are generated from comments like this inline in code. So, all the different ActiveRecord methods, when you search on the Rails documentation you're like, oh yeah, how does find_by work? And they've got a whole, like, paragraph explaining how it works with a couple of examples. That's this kind of documentation. If you open up that particular file in the source code, you'll find the comments.
And it makes sense for Rails because Rails is more of, you know, library-esque code. And you and I search these docs pretty frequently, although we don't tend to do it, like, by opening the Rails gem and, like, grepping through the source to find the code comment. We do it through either a documentation site that's been compiled from that source or that documentation that's been extracted into an offline tool, like you'd mentioned, Dash.
STEPHANIE: Yeah, I realized how conflicting, I suppose, it is for me to say that I find inline documentation really overwhelming or visually distracting, whereas I recognize that the only reason I can have that nice, you know, viewing experience is because documentation viewers use the code comments in that format to be generated.
JOËL: I wonder if there's like a sort of...I don't know what this pattern is called, but a bit of a, like, middle-quality trap where if you're going to source dive, like, you'd rather just look at the code and not have too much clutter from sort of mediocre comments. But if the documentation is really good and you have the tooling to read it, then you don't even need to source dive at all. You can just read the documentation, and that's sufficient.
So, both extremes are good, but that sort of middle kind of one foot in each camp is sort of the worst of both worlds experience. Because I assume when you look for Rails documentation, you never open up the actual codebase to search. The documentation is good enough that you don't even need to look at the files with the comments and the code.
STEPHANIE: Yeah, and I'm just recalling now there's, like, a UI feature to view the source from the documentation viewer page.
JOËL: Yes.
STEPHANIE: I use that actually quite a bit if the comments are a little bit sparse and I need just the code to supplement my understanding, and that is really nice. But you're right, like, I very rarely would be source diving, unless it's a last resort [laughs], let's be honest.
JOËL: So, we've talked about documentation viewers and how they can make things nice, and you're able to read documentation for things. But a lot of other tooling can benefit from this sort of module documentation as well, and I'm thinking, in particular, of Solargraph, which is a language server for Ruby. It has plugins for VS Code, for Vim, for a few different editors, and it takes advantage of those comments to provide all sorts of things.
So, you can get smart expansion of code and good suggestions. You can get documentation for what's under your cursor. Maybe you're reading somebody else's code that they've written, and you're like, why are they calling this parameterized method here? What does that even do? Like, in VS Code, you could just hover over it, and it will pop up and show you documentation, including the, like, inputs and return types, and things like that. That's pretty nifty.
STEPHANIE: Yeah, that is cool. I use VS Code, but I've not seen that too much yet because I don't think I've worked in enough codebases with really comprehensive [laughs] YARD docs. I'm actually wondering, tooling-wise, did you use any helpful tools when you were writing them or were you hand-documenting each?
JOËL: I was hand-documenting everything.
STEPHANIE: Class. Okay.
JOËL: The thing that I did use is the YARD gem. You don't need to have the gem to write YARD-style documentation, but if you have it, you can run a local server and then preview a documentation site that is generated from your comments, with everything in there. And that was incredibly helpful for me as I was trying to sort of see an overview of, okay, what would someone who's looking at the docs generated from this see when they're trying to look up what a particular method does?
STEPHANIE: Yeah, and that's really nice.
JOËL: Something that I am curious about that I've not really had a lot of experience with is whether or not having extra documentation like that can help AI tools give us better suggestions.
STEPHANIE: Yeah, I don't know the answer to that either, but I would be really curious to know if that is already something that happens with something like Copilot.
JOËL: Do better docs help machines, or are they for humans only?
STEPHANIE: Whoa, that's a very [laughs] philosophical question, I think. It would make sense, though, that if we already have ways to parse and compile this kind of documentation, then incorporating them into the types of, like, generative problems that AI quote, unquote "solves" [chuckles] would be really interesting to find out about. But if anyone listening kind of knows the answer to that or has experience working with AI tools and various types of code comment documentation, we would be really curious to know what your experience is like and if it improves your development workflow.
So, for people who might be interested in getting better at documenting their code in the style of module docs, what would you say are some really great attributes of good documentation in this form?
JOËL: I think, first of all, you have to write from the motivation of, like, if you were confused and wanting to better understand what a method does, what would you like to see? Coming from that perspective helps. In my case, I had been that person, and then I was like, okay, now that I've figured it out, I'm going to write it down so that the next person is not confused.
I have five or six things that I think were really valuable to add to the docs, a few of which we've already mentioned. But rapid fire, first of all, code example. I love code examples. I want a code example on every method. An explanation of expected usage: here's what the method does, here's how we expect you to use this method, and any extra context about sort of intended use.
Callouts for suggested alternatives. If there are methods that are similar, or maybe there's a sort of common mistake where you would reach for this method, put some sort of callout to say, "Hey, you probably came here trying to do X. If that's what you were actually trying to do, you should use method Y." Beyond that, a discussion of edge cases, so any sort of weird ways the method behaves. You know, when you pass nil to it, does it behave differently? If you call it in a different context, does it behave differently? I want to know that so that I'm not totally surprised.
Links to external resources–really great if I want to, like, dig deeper. Is this method built on some sort of, like, algorithm that's documented elsewhere? Please link to that algorithm. Is this method integrating with some, like, third-party API? You know, they have some documentation that we could link to to go deeper into, like, what these search options do. Link to that. External links are great. I could probably find it by Googling myself, but you are going to make me very happy as a developer if you already give me the link.
You'd mentioned capturing inputs and outputs. That's a great thing to scan for. Inputs and outputs, though, are sometimes more than just the arguments and return values. Speaking of arguments, if there's any sort of options hash, please document the keys that go in it, because that's often not obvious from the code. And I've spent a lot of time source diving and jumping between methods trying to figure out, like, what are the options I can pass in this hash?
Beyond the explicit inputs and outputs, though, anything that is global state that you rely on. So, do you need to read something from an environment variable or even a global variable or something like that that might make this method behave differently in different situations? Please document that. Any situations where you might raise an error that I might not expect or that I might want to rescue from, let me know what are the potential errors that might get raised.
And then, finally, any sorts of side effects. Does this method make a network call? Are you writing to the file system? I'd like to know that rather than have to, like, figure it out by trial and error. And sometimes, it will be obvious from just the description of the method, right? Oh, this method pulls data from a third-party API. That's pretty clear. But maybe it does some sort of, like, caching in the background or writes something to a file that's not really important. But maybe I'm trying to write a unit test that involves this, and now, all of a sudden, I have to do some weird stubbing. I'd like to know that upfront.
So, those are kind of all the things I would love to have in my sort of ideal documentation comment that would make my life easier as a developer when trying to use some code.
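Pulling those rapid-fire items together, a single method's docs might end up looking something like this sketch; every name, option key, error class, and URL below is hypothetical:

```ruby
# Fetches documents from the CMS matching the given filters.
#
# Makes a network call to the CMS API on every invocation (no caching),
# which matters if you're stubbing this in unit tests.
#
# @param filters [Hash] search options forwarded to the CMS
# @option filters [String] :locale restrict results to a single locale
# @option filters [Integer] :limit maximum number of documents to return
# @raise [Cms::TimeoutError] if the CMS does not respond in time
# @see https://example.com/cms-api/search the upstream search API docs
#
# @example Fetching the five most recent English articles
#   CmsRecord.search(locale: "en", limit: 5)
def self.search(filters = {})
  # ...
end
```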
STEPHANIE: Wow. What a passionate plea [laughs]. I was very into listening to you list all of that. You got very animated. And it makes a lot of sense because I feel like these are kind of just the day-to-day developer issues we run into in our work, and it would be so awesome if, especially as the, you know, author of a method or a class who has figured all of this stuff out, you could just kind of tell us these things so we don't have to figure it out ourselves.
I guess I also have to respond to that by saying, on one hand, I totally get, like, you want to be saved [chuckles] from those common pitfalls. But I think that part of our work is just going through that and playing around and exploring with the code in front of us, and we learn all of that along the way. And, ultimately, even if that is all provided to you, there is something about, like, going through it yourself that gives you a different perspective on it.
And, I don't know, maybe it's just my bias against [laughs] all the inline text, but I've also seen a lot of that type of information captured at different levels of documentation. So, maybe it is a Confluence doc or in a wiki talking about, you know, common gotchas for this particular problem that they were trying to solve. And I think what's really cool is that, you know, everyone can kind of be served and that people have different needs that different styles of documentation can meet.
So, for anyone diving deep in the source code, they can see all of those examples inline. But, for me, as a big Googler [laughs], I want to see just a nice, little web app to get me the information that I need to find. I'm happy having that a little bit more, like, extracted from my source code.
JOËL: Right. You don't want to have to read the source code with all the comments in it. I think that's a fair criticism and, yeah, probably a downside of this. And I'm wondering, there might be some editor tooling that allows you to just collapse all comments and hide them if you wanted to focus on just the code.
STEPHANIE: Yeah, someone, please build that for me. That's my passionate plea [laughs]. And on that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Bye.
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël discusses the challenges he encountered while optimizing slow SQL queries in a non-Rails application. Stephanie shares her experience with canary deploys in a Rails upgrade. Together, Stephanie and Joël address a listener's question about replacing the wkhtmltopdf tool, which is no longer maintained.
The episode's main topic revolves around the concept of multidimensional numbers and their applications in software development. Joël introduces the idea of treating objects containing multiple numbers as single entities, using the example of 2D points in space to illustrate how custom classes can define mathematical operations like addition and subtraction for complex data types. They explore how this approach can simplify operations on data structures, such as inventories of T-shirt sizes, by treating them as mathematical objects.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've recently been trying to do some performance enhancements to some very slow queries. This isn't a Rails app, so we're sort of combining together a bunch of different scopes. And the way they're composing together is turning out to be really slow. And I've reached for a tool that is just really fun. It's a visualizer for SQL query plans.
You can put the SQL keywords in front of a query: 'EXPLAIN ANALYZE,' and it will then output a query plan, sort of how it's going to attempt to do the work. And that might be like, oh, we're going to use this index on this table to join on this other thing, and then we're going to...maybe this is a table that we think we're going to do a sequential scan through and, you know, it builds out a whole thing.
It's a big block of text, and it's kind of intimidating to look at. So, there are a few websites out there that will do this. You just paste a query plan in, and they will build you a nice, little visualization, almost like a tree of, like, tasks to be done. Oftentimes, they'll also annotate it with metadata that they pulled from the query plan. So, oh, this particular node is the really expensive one because we're doing a sequential scan of this table that has 15 million rows in it. And so, it's really useful to then sort of pinpoint what are the areas that you could optimize.
STEPHANIE: Nice. I have known that you could do that EXPLAIN ANALYZE on a SQL query, but I've never had to do it before. Is this your first time, or is it just your first time using the visualizer?
JOËL: I've played around with EXPLAIN ANALYZE a little bit before. Pro tip: In Rails, if you've got a scope, you can just chain dot explain on the end, and instead of running the query, it will run the EXPLAIN version of it and return the query plan. So, you don't need to, like, turn it into SQL and then manually run it in your database system to get the EXPLAIN. You can just tack a dot explain on there to get the query plan.
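A quick sketch of that pro tip; the model and association here are hypothetical:

```ruby
# Instead of returning records, .explain asks the database for its query plan
# for the query this scope would run.
User.where(active: true).joins(:orders).explain
```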
It's still kind of intimidating, especially if you've got a really complex query that's...this thing might be 50 lines long of EXPLAIN with all this indentation and other stuff. So, putting it into a sort of online visualizer was really helpful for the work that I was doing. So, it was my first time using an online visualizer. There are a few out there. I'll link to the one that I used in the show notes. But I would do that again, would recommend.
STEPHANIE: Nice.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I actually just stepped away from being in the middle of doing a Rails upgrade [chuckles] and releasing it to production just a few minutes before getting on to record with you on this podcast. And the reason I was able to do that, you know, without feeling like I had to just monitor to see how it was going is because I'm on a project where the client is using canary deploys. And I was so pleasantly surprised by how easy it made this experience where we had decided to send the canary release earlier this morning.
And the way that they have it set up is that the canary goes to 10% of traffic. 10% of the users were on Rails 7 for their sessions. And we saw a couple of errors in our error monitoring service. And we are like, "Okay, like, let's take a look at this, see what's going on." And it turns out it was not too big of a deal because it had to do with, like, a specific page. And, for the most part, if a user did encounter this error, they probably wouldn't again after refreshing because they had, like, a 90% chance [chuckles] of being directed to the previous version where everything is working.
And we were kind of making that trade-off of like, oh, we could hotfix this right now on the canary release. But then, as we were starting to debug a little bit, it was a bit hairier than we expected originally. And so, you know, I said, "I have to hop on to go record The Bike Shed. So, why don't we just take this canary down just for the time being to take that time pressure off? And it's Friday, so we're heading into the weekend. And maybe we can revisit the issue with some fresh eyes." So, I'm feeling really good, actually. And I'm glad that we were able to do something that seems scary, but there were guardrails in place to make it a lot more chill.
JOËL: Yay for the ability to roll back. You used the term canary release. That's not one that I'm familiar with. Can you explain what a canary release is?
STEPHANIE: Oh yeah. Have you heard of the phrase 'Canary in the coal mine'?
JOËL: I have.
STEPHANIE: Okay. So, I believe it's the same idea where you are, in this case, releasing a potentially risky change, but you don't want to immediately make it available to, like, all of your users. And so, you send this change to, like, a small reach, I suppose, and give it a little bit of a test and see [chuckles] what comes back. And that can help inform you of any issues or risks that might happen before kind of committing to deploying a potentially risky change with a bigger impact.
JOËL: Is this handled with something like a feature flag framework? Or is this, like, at an infrastructure level where you're just like, "Hey, we've got the canary image in, like, one container on one server, and then we'll redirect 10% of traffic to that to be served by that one and the other 90% to be served by the old container or something like that"?
STEPHANIE: Yeah, in this case, it was at the infrastructure level. And I have also seen something similar at a feature flag level, too, where you're able to have some more granularity around what percent of users are seeing a feature. But I think with something like a Rails upgrade, it was nice to be able to have that at that infrastructure level. It's not necessarily, like, a particular page or feature to show or not show.
JOËL: Yeah, I think you would probably want that at a higher level when you're changing over the entire app. Is this something that you had to custom-build yourself or something that just sort of came out of the box with some of the infrastructure tools you're using?
STEPHANIE: It came out of the box, actually. I just joined this client project this week and was very delighted to see just some really great deployment infrastructure and getting to meet the DevOps engineers, too, who built it. And they're really proud of it. They kind of walked us through our first release earlier this week. And he was telling me, the DevOps engineer, that this was actually his favorite part of the job, is walking people through their first release and being their buddy while they do it. Because I think he gets to also see users interact with the tool that he built, and he had a lot of pride in that, so it was a very delightful experience.
JOËL: That's so wonderful. I've been on so many projects where the sort of infrastructure side of things is not the team's strong point, and releasing can be really scary. And it's great to hear the opposite of that.
We recently received a question for Stephanie based on an earlier episode. So, the question asks, "In episode 413, Stephanie discussed a recent issue she encountered with wkhtmltopdf. The episode turned into a deeper discussion about package management, but I don't think it ever cycled back to the conclusion. I'm curious: how did Stephanie solve this dilemma? We're facing the same issue on a project that my team maintains. It's an old codebase, and there are bits of old code that use wkhtmltopdf to generate print views of our data in our application.
The situation is fairly dire. wkhtmltopdf is no longer maintained. In fact, it won't even be available to install from our operating system's package repositories in June. We're on FreeBSD, but I assume the same will eventually be true for other operating systems. And so, unless you want to maintain some build step to check out and compile the source code for an application that will no longer receive security updates, just living with it isn't really an option.
There are three options we're considering. One, eliminate the dependency entirely. Based on user feedback, it sounds like our old developers were using this library to generate PDFs when what users really wanted was an easy way to print. So, instead of downloading a PDF, just ensure the screen has a good print style sheet and register an onload handler to call window dot print. We're thinking we could implement this as an A/B test of the feature to test this theory. Or two, replace wkhtmltopdf with a call to Headless Chrome and use that to generate the PDF. Or, three, replace wkhtmltopdf with a language-level package. For us, that might be the dompdf library available via Composer because we're a PHP shop."
Yeah, a lot to unpack here. Any high-level thoughts, Stephanie?
STEPHANIE: My first thought while I was listening to you read that question is that wkhtmltopdf is such a mouthful [laughs]. And I was impressed by how you managed to say it at least, like, five times.
JOËL: So, I try to say that five times fast.
STEPHANIE: And then, my second high-level thought was, I'm so sorry to Brian, our listener who wrote in, because I did not really solve this dilemma [chuckles] for my project and team. I kind of kicked the can down the road, and that's because this was during a support and maintenance rotation that I've talked a little bit about before on the show. I was only working on this project for about a week.
And what we thought was a small bug to figure out why PDFs were a little bit broken turned out, as you mentioned, to be this kind of big, dire dilemma where I did not feel like I had enough information to make a good call about what to do. So, I kind of just shared my findings that, like, hey, there is kind of a risk and hoping that someone else [laughs] would be able to make a better determination.
But I really was struck by the options that you were considering because it was actually a bit of a similar situation to the bug I was sharing, where the PDF that was being generated was slightly broken. I don't think it was, like, super valuable to our users that it be in the form of a PDF. It really was just a way for them to print something to have on handy as a reference from, you know, some data that was generated from the app. So, yeah, based on what you're sharing, I feel really excited about the first one. Joël, I'm sure you have some opinions about this as well.
JOËL: I love sort of the bigger picture thinking that Brian is doing here, sort of stepping back and being like, wait, why do we even need PDF here, and how are our customers using it? I think those are the really good questions to ask before sinking a ton of time into coming up with something that might be, like, a bit of a technical wonder. Like, hey, we managed to, like, do this PDF generation thing that we had to, like, cobble together so many other things. And it's so cool technically, but does it actually solve the underlying problem? So, shout out to Brian for thinking about it in those terms. I love that.
Second cool thing that I wanted to shout out, because I think this is a feature of browsers that not many people are aware of; you can have multiple style sheets for your page, and you can tag them to be for different media. So, you can have a style sheet that only gets applied when you print versus when you display on screen. And there are a couple of others. I don't remember exactly what they are. I'll link to the docs in the show notes.
But taking advantage of this, like, this is old technology, but making that available and saying, "Yeah, we'll make it so that it's nice when you print, and we'll maybe even add, you know, a link or a button with JavaScript. So, you could just Command-P or Control-P to print, but we'll have a button in there as well that will allow you to print to PDF," and that solves your problem right there.
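A minimal sketch of that idea in a Rails view, assuming a separate print.css stylesheet exists in the asset pipeline:

```erb
<%# Only applied when the page is printed (or saved to PDF from the print dialog) %>
<%= stylesheet_link_tag "print", media: "print" %>

<%# A plain button that opens the browser's print dialog %>
<button onclick="window.print()">Print this page</button>
```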
STEPHANIE: Yeah, that's really cool. I didn't know that about being able to tag style sheets for different media types. That's really fascinating. And I like that, yeah, we're just eliminating this dependency on something, like, potentially really complex with a, hopefully, kind of elegant and modern solution, maybe.
JOËL: And your browser is already able to do so many of these things. Why do we sort of try to recreate it? Printing is a thing browsers have been able to do for a long time. Printing to PDF is a thing that you can do for a long time. I will sometimes use that on sites where I need to, let's say I'm purchasing something, and I need some sort of receipt to expense, but they won't give me a download, a PDF download that I can send to the accounting team, so I will print to PDF the, like, HTML view. And that works just fine. It's kind of a workaround hack.
Sometimes, it doesn't work well because the HTML page is just not well set up to, like, show up on a PDF page. You get some, like, weird, like, pagination issues or things like that. But, you know, just a little bit of thought for a print style sheet, especially for something you know that people are likely going to want to print or to save to PDF, that's a nice touch.
STEPHANIE: Yeah. So, good luck, Brian, and let us know how this goes and any outcomes you find successful. So, for today's longer topic, I was excited because I saw, Joël, you dropped something in our topic backlog: Multidimensional Numbers. I'm curious what prompted this idea and what you wanted to say about it.
JOËL: We did an episode a while back where we talked about value objects, wrapping numbers, wrapping collections. This is Episode 386, and we were talking about tallying, specifically working with collections of T-shirt sizes and doing math on these sort of objects that might contain multiple numbers. And a sort of sidebar from that that we didn't really get into is the idea that objects that contain sort of multiple numbers can be treated as a number themselves.
And I think a great example of this is something like a point in two-dimensional space. It's got an x coordinate, a y coordinate. It's two numbers, but you can treat sort of the combination of the two of them together as a single number. There's a whole set of coordinate math that you can do to do things like add coordinates together, subtract them, find the distance between them. There's a whole field of vector math that we can do on those.
And I think learning to recognize that numbers are not just instances of the integer or the float class but that there could be these more complex things that are also numbers is maybe an important realization and something that, as developers, if we think of these sort of more complex values as numbers, or at least mathematical objects, then that will help us write better code.
STEPHANIE: Cool. Yeah. When you were first talking about 2D points, I was thinking about if I have experience working with that before or, like, having to build something really heavily based off of, like, a canvas or, you know, a coordinate system. And I couldn't think of any really good examples until I thought about, like, geographic locations.
JOËL: Oh yeah, like a latitude, longitude.
STEPHANIE: Yeah, exactly. Like, that is a lot more common, I think, for various types of just, like, production applications than 2D points if you're not working on, like, a video game or something like that, I think.
JOËL: Right, right. I think you're much more likely to be working with 2D points on some more sort of front-end-heavy application. I was talking with someone this week about managing a seat map for concerts and events like that and sort of creating a seat map and have it be really interactive, and you can, like, click on seats and things like that. And depending on the level of libraries you're using to build that, you may have to do a lot of 2D math to make it all come together.
STEPHANIE: Yeah. So, I would love to get into, you know, maybe we've realized, okay, we have some kind of compound number. What are some good reasons for using them differently than you would a primitive?
JOËL: So, you mentioned primitives, and I think this is where maybe I'm developing a reputation about, like, always wanting value objects for everything. But it would be really easy, let's say, for an xy point to be just an array of two numbers or maybe even a hash with an x key and a y key.
What's tricky about that is that then you don't have the ability to do math on them. Arrays do define the plus operator, but it doesn't do what you want for points; it's concatenation. So, adding two points would not at all do what you want, and neither would subtracting two points. So, instead, if you have a custom 2D point class, you can define plus and minus on there to do the right thing. Now they're not pairs of numbers, two values; they're a single value, and you can treat them as if they were just a single number.
STEPHANIE: You mentioned that arrays don't do the right thing when you try to add them up. What is the right thing that you're thinking of then?
JOËL: It probably depends a little bit on the type of object you're working with. So, with 2D points, you're probably trying to do vector addition where you're effectively saying almost, like, "Shift this point in 2D space by the amount of this other point." Or if you're doing a subtraction, you might even be asking, like, "What is the distance between these two points?" Euclidean distance, I think, is the technical term for this.
There are also a couple of different ways you can multiply values. You can multiply a 2D point, not by another point, but by just an integer. That's called scaling. So, you're just like, oh, take this point in 2D space, but make it bigger, make it five times bigger or five times further from the origin. Or you can do some stuff with other points. But what you don't want to do, if you're starting with arrays, is turn this into an array of four numbers. When you add two points in 2D space, you're not trying to create a point in 4D space.
STEPHANIE: Whoa, I mean [laughs], maybe you're not.
JOËL: You could but -- [laughter]
STEPHANIE: Yeah. While you were saying that, I guess that is what is really cool about wrapping, encapsulating them in objects is that you get to decide what that means for you and your application, and --
JOËL: Yeah. Well, plus can mean different things, right?
STEPHANIE: Yeah.
JOËL: On arrays, plus means combining two arrays together. On integers, it means you do integer math. And on points, it might be vector addition.
STEPHANIE: Are there any other arithmetic operators you can think of that would be useful to implement if you were trying to create some functionality on a point?
JOËL: That's a good question because I think realizing the inverse of that is also a really powerful thing. Just because you create a sort of new mathematical object, a point in 2D space, doesn't mean that necessarily every arithmetic operator makes sense on it. Does it make sense to divide a point by another point? Maybe not. And so, instead of going with the mindset of, oh, a point is a mathematical object, I now need to implement all of arithmetic on this, instead, think in terms of your domain. What are the operations that make sense? What are the operations you need for this point?
And, you know, maybe the answer is look up what are the common sort of vector math operations and implement those on your 2D point. Some of them will map to arithmetic operators like plus and minus, and then some of them might just be some sort of custom method where maybe you say, "Oh, I want the Euclidean distance between these two points." That's just a thing. Maybe it's just a named instance method on there. But yeah, don't feel like you need to implement all of the math operators because that's a mistake that I have made and then have ended up, like, implementing nonsensical things.
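A minimal sketch of that curated approach: a 2D point value object that defines only the operations that make sense for the domain (illustrative, not from any particular library):

```ruby
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  # Vector addition: shift this point by another point.
  def +(other)
    Point.new(x + other.x, y + other.y)
  end

  def -(other)
    Point.new(x - other.x, y - other.y)
  end

  # Scaling: multiply by a plain number, not by another point.
  def *(scalar)
    Point.new(x * scalar, y * scalar)
  end

  # Euclidean distance between two points.
  def distance_to(other)
    Math.sqrt((x - other.x)**2 + (y - other.y)**2)
  end
end

Point.new(1, 2) + Point.new(3, 4)            # => a Point with x: 4, y: 6
Point.new(3, 4).distance_to(Point.new(0, 0)) # => 5.0
```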
STEPHANIE: [laughs] Creating your own math.
JOËL: Yes, creating my own math. I've done this even where I've done value objects to wrap single values. I was writing a class to represent currency, and I was like, well, clearly, you need, like, methods to, like, add or subtract your currency. And that's another nice thing: if you have, let's say, a plus method, now you can plug it into, let's say, reduce plus. And you can just sum a list of these currency objects and get back a new currency. It's not even going to give you back an integer. You just get a sort of new currency object that is the sum of all the other ones, and that's really nice.
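A sketch of that currency idea, assuming a simple hand-rolled class rather than an existing gem:

```ruby
class Currency
  attr_reader :amount, :code

  def initialize(amount, code)
    @amount = amount
    @code = code
  end

  def +(other)
    raise ArgumentError, "can't add #{code} to #{other.code}" unless code == other.code

    Currency.new(amount + other.amount, code)
  end
end

line_items = [Currency.new(3, :USD), Currency.new(5, :USD), Currency.new(2, :USD)]
line_items.reduce(:+) # => a Currency with amount 10, code :USD
```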
STEPHANIE: Yeah, that's really cool. It reminds me of all the magic of enumerable that you had talked about in a previous conference talk, where, you know, you just get so much out of implementing those basic operators that, like, kind of scales in handiness.
JOËL: Yes. Turns out Ruby is actually a pretty nice system. If you have objects that respond to some common methods and you plug them into Enumerable, it just all kind of works.
STEPHANIE: So, one thing you had said earlier that I've felt kind of excited about and wanted to highlight was you mentioned all the different ways that you could represent a 2D point with more primitive data stores, so, you know, an array of two integers, a hash with xy keys. It got me thinking about how, yeah, like, maybe if your system has to talk to another system and you're importing data or exporting data, it might eventually need to take those forms.
But what is cool about having an encapsulated object in your application is you can kind of control those boundaries a little bit and have more confidence in terms of the data types that you're using within your system by having various ways to construct that, like, domain object, even if the data coming in is in a different shape.
JOËL: And I think that you're hitting on one of the real beauties of object-oriented programming, where the sort of users of your object don't need to know about the internal representation. Maybe you store an array internally. Maybe it's two separate instance variables. Maybe it's something else entirely. But all that the users of your, let's say, 2D point object really need to care about is, hey, the constructor wants values in this shape, and then I can call these domain methods on it, and then the rest just sort of happens. It's an implementation detail. It doesn't matter.
And you alluded, I think, to the idea that you can sort of create multiple constructors. You called them constructors. I tend to call them that as well. But they're really just class methods that will kind of, like, add some sugar on top of the constructor. So, you might have, like, a from array pair or from hash or something like that that allows you to maybe do a little bit of massaging of the data before you pass it into your constructor that might want some underlying form. And I think that's a pattern that's really nice.
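Those convenience constructors are just class methods that massage the input before calling new; a sketch, reusing the hypothetical Point class from above:

```ruby
class Point
  # Build a Point from an [x, y] pair.
  def self.from_array(pair)
    new(pair[0], pair[1])
  end

  # Build a Point from a hash with :x and :y keys.
  def self.from_hash(attrs)
    new(attrs.fetch(:x), attrs.fetch(:y))
  end
end

Point.from_array([1, 2])
Point.from_hash(x: 1, y: 2)
```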
STEPHANIE: Yeah, I agree.
JOËL: Something that can be interesting there, too, is that mathematically, there are multiple ways you can think of a 2D point. An xy coordinate pair is a common one, but another sort of system for representing a point in 2D space is called the polar coordinate system. So, you have some sort of, like, origin point, your 0,0. And then, instead of saying so many over and so many up from that origin point, you give an angle and a distance, and that's where your point is. So, an angle and a distance; I think, you know, theta and magnitude are the fancy terms for those.
You could, instead of creating a separate, like, oh, I have a polar coordinate point and a Cartesian coordinate point, and those are separate things, you can say, no, I just have a point in 2D space. They can be constructed from either an xy coordinate pair or a magnitude angle pair.
Internally, maybe you convert one to the other for internal representation because it makes the math easier or whatever. Your users never need to know that. They just pass in the values that they want, use the constructor that is most convenient for them, and it might be both. Maybe some parts of the app require polar coordinates; some require Cartesian coordinates. You could even construct one of each, and now you can do math with each other because they're just instances of the same class.
STEPHANIE: Whoa. Yeah, I was trying to think about transforming between the two types as well. It's all possible [laughs].
JOËL: Yes. Because you could have reader-type methods on your object that say, oh, for this point, give me its x coordinate; give me its y coordinate. Give me its distance from the origin. Give me its angle from the origin. And those are all questions you can ask that object, and it can calculate them. And you don't need to care what its internal representation is to be able to get all four of those.
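A sketch of that same hypothetical Point exposing both coordinate systems while storing x and y internally:

```ruby
class Point
  # Build a Point from polar coordinates; internally we still store x and y.
  def self.from_polar(angle:, magnitude:)
    new(magnitude * Math.cos(angle), magnitude * Math.sin(angle))
  end

  # Distance from the origin.
  def magnitude
    Math.sqrt(x**2 + y**2)
  end

  # Angle from the origin, in radians.
  def angle
    Math.atan2(y, x)
  end
end
```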
So, we've been talking about a lot of these sort of composite numbers, not composite numbers, that's a separate mathematical thing, but numbers that are composed of sort of multiple sub-numbers. And what about situations where you have two things, and one of them is not a number? I'm thinking of all sorts of units of measure. So, I don't just have three. I have three, maybe...and we were talking about currency earlier, so maybe three U.S. dollars. Or I don't just have five; I have five, you know, let's say, meters of distance. Would you consider something like that to be one of these compound number things?
STEPHANIE: Right. I think I was conflating the two when we were originally talking about this. But I realized that, you know, just because we're adding context to a number and potentially packaging it as a value object, it's still different from what we're talking about today, where, you know, there are multiple components to the number that are integral or required for it to mean what we intended it to mean, if that makes sense.
JOËL: Yeah.
STEPHANIE: So yeah, I guess we did want to kind of make a distinction with value objects where, while the additional context is important and you can implement a lot of different functionality based on what it represents, at the end of the day, it only kind of has one magnitude, like, one number that encapsulates it. Does that sound right?
JOËL: Yeah. You did throw out the words encapsulation and value object. So, in a situation maybe where I have three US dollars, would you create some kind of custom object to wrap that? Or is that a situation where you'd be more comfortable using some kind of primitive? Like, I don't know, maybe an array pair of three and the symbol USD or something like that.
STEPHANIE: Oh, I would definitely not do that [laughter]. Yeah. Like I, you know, for the most part, I think I've seen that as a currency object, and that expands the world of what we can do with it, converting into a lot of different other currencies. And yeah, just making sure those things don't get divorced from each other because that context is what gives it meaning. But when it comes to our compound numbers, it's like, without all of the components, it doesn't make sense, or it doesn't even represent the same, like, numerical value that we were trying to convey.
JOËL: Right. You need both, or, you know, it could be more than two. It could be three, four, or five numbers together to mean something. You mentioned conversions, which I think is something that's also interesting because a lot of units of measure have sort of multiple ways of measuring, and you often want to convert between them. And maybe that's another case where encapsulation is really nice where, you know, maybe you have a distance object. And you have five meters, and you put that into your distance object, but then somebody wants it in feet somewhere else or in centimeters, or something like that. And it can just do all the conversion math safely inside that object, and the user doesn't have to worry about it.
STEPHANIE: Right. This is maybe a bit of a tangent, but as a Canadian living in the U.S., I don't know [laughs] if you have any opinions about converting meters and feet.
JOËL: The one I actually do the most often is converting Celsius to Fahrenheit and vice versa. You know, I've been here, what, 11 years now? I don't have a great intuition for Fahrenheit temperatures. So, I'm converting in my head just [laughs] on a daily basis.
STEPHANIE: Yeah, that makes sense. Conversions: they're important. They help out our friends who [laughs] are on different systems of measurement.
JOËL: There's a classic story that I love about unit conversions. I think it's one of the NASA Mars missions.
STEPHANIE: Oh yeah.
JOËL: You've heard of this one. It was trying to land on Mars, and it burned up in the atmosphere because two different teams had been building different components and used different unit systems, both according to spec for their own module. But then, when the modules try to talk to each other, they're sending over numbers in meters instead of feet or something like that. And it just caused [laughs] this, like, multi-year, multi-billion dollar project to just burn up.
STEPHANIE: That's right. So, lesson of the day is don't do that. I can think of another example where there might be a little bit of misconceptions in terms of how to represent it. And I'm thinking about time and when that has been represented in multiple parts, such as in hours and, minutes and seconds. Do you have any initial impressions about a piece of data like that?
JOËL: So, that's really interesting, right? Because, at first glance, it looks like, oh, it's, like, a triplet of hour, minute, seconds. It's sort of another one of these sort of compound numbers, and I guess you could implement it that way. But in reality, you're tracking a single quantity, the amount of time elapsed, and that can be represented with a single number.
So, if you're representing, let's say, time of day, what would show up on your clock? That could be, depending on the resolution, number of, let's say, seconds since midnight, and that's a single counter. And then, you can do some math on it to get hours, minutes, seconds for a particular moment. But really, it's a single quantity, and we can do that with time. We can't do that with a 2D point. Like, it has to have two components.
STEPHANIE: So, do you have a recommendation for what unit of time time would best be stored? I'm just thinking of all the times that I've had to do that millisecond, you know, that conversion of, you know, however many thousands of milliseconds in my head into something that actually means [laughs] something to me as a human being who measures time in hours and minutes.
JOËL: My recommendation is absolutely go for a single number that you store in your, let's say, time of day object. It makes the math so much easier. You don't have to worry about, like, overflowing from one number into another when you're doing math or anything like that. And then the number that you count should be at whatever the smallest resolution is that you care about.
So, is there ever any time where you want to distinguish between two different milliseconds in time? Or maybe you're like, you know what? These are, like, we're tracking time of day for appointments. We don't care about the difference between two milliseconds. We don't need to track them independently. We don't even care about seconds. The most granular we ever care about things is by the minute. And so, maybe then your internal number that you track is a counter of minutes since midnight. But if you need more precision, you can go down to seconds or milliseconds or nanoseconds.
But yeah, find sort of the coarsest resolution you can get away with and then make that the unit of measure for a single counter in your object. And then encapsulate that so that nobody else needs to care that, internally, your time of day object is doing milliseconds because nobody wants to do that math. Just give me a nice, like, hours and minutes method on your object, and I will use that. I don't need to know internally what it's using.
Please don't just pass around integers; wrap them in an object, especially because, with integers, there are enough times where you're doing seconds versus milliseconds. And when I just have an integer, I never know if the person storing this integer means seconds or milliseconds. So, I'm just like, oh, I'm going to pass this, like, user object a, like, time integer. And unless there's a comment or a constant, you know, that's named something like duration in milliseconds, or sometimes even, like, one year in milliseconds, there's no way of knowing.
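A minimal sketch of that single-counter approach, assuming minutes since midnight is the chosen resolution:

```ruby
class TimeOfDay
  def initialize(minutes_since_midnight)
    @minutes_since_midnight = minutes_since_midnight
  end

  def hours
    @minutes_since_midnight / 60
  end

  def minutes
    @minutes_since_midnight % 60
  end

  # Adding a duration is just integer math on the internal counter;
  # there's no hour/minute overflow to juggle by hand.
  def +(duration_in_minutes)
    TimeOfDay.new(@minutes_since_midnight + duration_in_minutes)
  end
end

TimeOfDay.new(135).hours   # => 2
TimeOfDay.new(135).minutes # => 15
```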
STEPHANIE: Yeah. That makes a lot of sense. When you kind of choose a standard unit, it's, like, possible to make it easier [laughs].
JOËL: So, circling back to sort of the initial thing that sparked this conversation, the previous episode about T-shirt inventories, there we were dealing with what started off as, like, a hash of different T-shirt sizes and quantities of T-shirts that we had in that size, so small (five), medium (three), large (four). And then, we eventually turned that into a value object that represented...I think we called it a tally, but maybe we called it inventory.
And this may be wrong, so tell me if I'm wrong here, I think we can kind of treat that as a number, as, like, one of these compound numbers. It's a sort of multidimensional number where you say, well, we have sort of three dimensions where we can have numbers that sort of increase and decrease independently. We can do math on these because we can take inventories or tallies and add and subtract them. And that's what we ended up having to do. We created a value object. We implemented plus and minus on it. There are rules for how the math works. I think this is a multidimensional number with the definition we're working with on this show. Am I wrong here?
STEPHANIE: I wouldn't say that you're wrong. I think I would have to think a little [laughs] more to say definitively that you're right. But I know that this example came from, you know, an application I was actually working on. And one of the main things that we had to do with these representations [laughs], I'm hesitant to call them a number, especially, but we had to compare these representations frequently because, for an inventory in a warehouse, for example, we wanted to make sure that it is equal to, or that there's enough of, the inventory if someone was placing an order, which would also contain, like, a representation of T-shirt size inventory.
And that was kind of where some of that math happened because, you know, maybe we don't want to let someone place an order if the inventory at the warehouse is smaller than their order, right? So, there is something really compelling about the comparison operations that we were doing that kind of leans me in the direction of, like, yeah, it makes sense to me to use this in a way that I would compare, like, quantities or numbers of something.
JOËL: I think one thing that was really compelling to me, and that kind of blew my mind, was that we were trying to, like, figure out some things like, oh, we've got so many people with these size preferences, and we've got so many T-shirts across different warehouses. And we're summing them up and we're trying to say like, "How many do we need to purchase if there is a deficit?"
And we can come up with effectively a formula for this. Before we introduce sizes, when it's just like, oh, people have T-shirts, we sum these numbers: we take the count of people and the count of T-shirts in our warehouse, and we find, you know, the difference between them. And there's a few extra math operations we do.
Then you introduce size, and you break it down by, oh, we've got so many of each. And now the whole thing gets really kind of messy and complicated. And you're doing these reduces and everything. When we start treating the tally of T-shirts as an object, and now it's a number that responds to plus and minus, all of a sudden, you can just plug those back into the original formula, and it all just works. The original formula doesn't care whether the numbers you're doing this formula on are simple integers or these sort of multidimensional numbers. And that blew my mind, and it was so cool.
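A rough sketch of the kind of tally object being described, with plus, minus, and a comparison defined so it can stand in for a plain integer in the same formula. The names and operations here are illustrative, not taken from the codebase discussed in the episode:

    # A "multidimensional number": counts per T-shirt size that respond
    # to the same math operations as a plain integer.
    class Tally
      attr_reader :counts

      def initialize(counts = {})
        @counts = counts # e.g. { small: 5, medium: 3, large: 4 }
      end

      def +(other)
        combine(other) { |a, b| a + b }
      end

      def -(other)
        combine(other) { |a, b| a - b }
      end

      # "Do we have at least this much in every size?"
      def >=(other)
        other.counts.all? { |size, count| counts.fetch(size, 0) >= count }
      end

      private

      def combine(other)
        sizes = counts.keys | other.counts.keys
        self.class.new(sizes.to_h { |size| [size, yield(counts.fetch(size, 0), other.counts.fetch(size, 0))] })
      end
    end

    warehouse = Tally.new(small: 5, medium: 3, large: 4)
    order     = Tally.new(medium: 2, large: 1)
    warehouse >= order          # => true
    (warehouse - order).counts  # => { small: 5, medium: 1, large: 3 }

Because plus and minus are defined, a formula written against plain integer counts keeps working unchanged when the counts become tallies.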
STEPHANIE: Yeah, that is really neat. And you get a lot of added benefits, too.
I think the other important piece in the T-shirt size example was kind of tracking the state change, and that's so much easier when you have an object. There's just a lot more you can do with it. And even if, you know, you're not persisting every single version of the representation, you know, because sometimes you don't want to, sometimes you're really just kind of only holding it in memory to figure out if you need to, you know, do something else. But other times, you do want to persist it. And it just plugs in really well with, like, the rest of object-oriented programming [laughs] in terms of interacting with the rest of your business needs, I think, in your app.
JOËL: Yeah, turns out objects, they're kind of nice. And you can do math with them. Who knew? Math is not just about integers.
STEPHANIE: And on that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie has a delightful and cute Ruby thing to share: Honeybadger, the error monitoring service, has created exceptionalcreatures.com, where they've illustrated and characterized various common Ruby errors into little monsters, and they're adorable. Meanwhile, Joël encourages folks to submit proposals for RailsConf.
Together, Stephanie and Joël delve into the nuances of adapting to and working within new codebases, akin to aligning with a shared mental model or vision. They ponder several vital questions that every developer faces when encountering a new project: the balance between exploring a codebase to understand its structure and diving straight into tasks, the decision-making process behind adopting new patterns versus adhering to established ones, and the strategies teams can employ to assist developers who are familiarizing themselves with a new environment.
Honeybadger's Exceptional Creatures
RailsConf CFP coaching sessions
HTTP Cats
Support and Maintenance Episode
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I have a delightful and cute Ruby thing to share I'd seen just in our internal company Slack. Honeybadger, the error monitoring service, has created a cute little webpage called exceptionalcreatures.com, where they've basically illustrated and characterized various common Ruby errors into little monsters [laughs], and I find them adorable. I think their goal is also to make it a really helpful resource for people encountering these kinds of errors, learning about them for the first time, and figuring how to triage or debug them.
And I just think it's a really cool way of, like, making it super approachable, debugging and, you know, when you first encounter a scary error message, can be really overwhelming, and then Googling about it can also be equally [chuckles] overwhelming. So, I just really liked the whimsy that they kind of injected into something that could be really hard to learn about. Like, there are so many different error messages in Ruby and in Rails and whatever other libraries you're using. And so, that's kind of a...I think they've created a one-stop shop for, you know, figuring out how to move forward with common errors.
And I also like that it's a bit of a collective effort. They're calling it, like, a bestiary for all the little creatures [laughs] that they've discovered. And I think you can, like, submit your own favorite Ruby error and any guidance you might have for someone trying to debug it.
JOËL: That's adorable. It reminds me a little bit of the HTTP status codes as cat memes site. It has that same energy. One thing that I think is really interesting is that because it's Honeybadger, they have stats on, like, the frequency of these errors, and a lot of these ones are tied to...I think they're picking some of the most commonly surfaced errors.
STEPHANIE: Yeah, there's little, like, ratings, too, for how frequently they occur, kind of just like, I don't know, Pokémon [laughs] [inaudible 02:31]. I think it's really neat that they're using learnings from their business or maybe even some, like, proprietary information and sharing it with the world so that we can learn from it.
JOËL: I think one thing that's worth specifying as well is that these are specific exception classes that get raised. So, they're not just, like, random error strings that you see in the wild. They don't often have a whole lot of documentation around them, so it's nice to see a dedicated page for each and a little bit of maybe how this is used in the real world versus maybe how they were designed to be used. Maybe there's a line or two in the core Ruby docs about, you know, when a NoMethodError should be raised. How does NoMethodError actually get used, you know, in real life, in the exceptions that Honeybadger is capturing? That's really interesting to see.
STEPHANIE: Yeah, I like how each page for the exception class, and I'm glad you made that distinction, is kind of, like, crowdsourced guidance and information from the community, so I think you could even, you know, contribute to it if you wanted. But yeah, just a fun, little website to bring you some delight when you're on your next head-smacking, debugging adventure [laughs].
JOËL: And I love that it brings some joy to the topic, but, honestly, I think it's a pretty good reference. I could see myself linking to this anytime I want to have a deeper discussion on exceptions. So, maybe there's a code review, and maybe I want to suggest that we raise a different error than the one that we're doing. I could see myself in that GitHub comment being like, "Oh, instead of, you know, raising an exception here, why don't we instead raise a NoMethodError or something like that?" And then link to the bestiary page.
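As a small illustration of that kind of review suggestion (the messages and method name here are invented for the example):

    # Raising a bare string produces a generic RuntimeError...
    raise "this object doesn't support that"

    # ...while raising a specific class tells the reader what went wrong and
    # gives them something to look up, rescue, or search error reports for.
    raise NoMethodError, "this object doesn't support #publish"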
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, just recently, RailsConf announced their call for proposals. It's a fairly short period this year, only about three-ish weeks long. So, I've been really encouraging colleagues to submit and trying to be a resource for people who are interested in speaking at conferences. We did a Q&A session with a fellow thoughtboter, Aji Slater, who's also a former RailsConf speaker, about what makes for a good talk, what is it like to submit to a call for proposals, you know, kind of everything from the process from having an idea all the way to stage presence and delivering. And there's a lot of great questions that got asked and some good discussion that happened there.
STEPHANIE: Nice. Yeah, I think I have noticed that you are doing a lot more to help, especially first-time speakers give their first conference talk this year. And I'm wondering if there's anything you've learned or any hopes and dreams you have for kind of the amount of time you're investing into supporting others.
JOËL: What I'd like to see is a lot of people submitting proposals; that's always a great thing. And, a proposal, even if it doesn't get accepted, is a thing that you can resubmit. And so, having gone through the effort of building a proposal and especially getting it maybe peer-reviewed by some colleagues to polish your idea, I think is already just a really great exercise, and it's one that you can shop around. It's one that you can maybe convert into a blog post if you need to. You can convert that into some kind of podcast appearance. So, I think it's a great way to take an idea you're excited about and focus it, even if you can't get into RailsConf.
STEPHANIE: I really like that metric for success. It reminds me of a writer friend I have who actually was a guest on the show, Nicole Zhu. She submits a lot of short stories to magazines and applications to writing fellowships, and she celebrates every rejection. I think at the end of the year, she, like, celebrates herself for having received, you know, like, 15 rejections or something that year because that meant that she just went for it and, you know, did the hard part of doing the work, putting yourself out there. And that is just as important, you know, if not more than whatever achievement or goal or the idea of having something accepted.
JOËL: Yeah, I have to admit; rejection hurts. It's not a fun thing to go through. But I think even if you sort of make it to that final stage of having written a proposal and it gets rejected, you get a lot of value out of that journey sort of regardless of whether you get accepted or not. So, I encourage more people to do that.
To any of our listeners who are interested, the RailsConf call for proposals goes through February 13th, 2024. So, if you are listening before then and are inspired, I recommend submitting. If you're unsure of what makes for a good CFP, RailsConf is currently offering coaching sessions to help craft better proposals. They have one on February 5th, one on February 6th, and one on February 7th, so those are also options to look into if this is maybe your first time and you're not sure. There's a signup form. We'll link to it in the show notes.
STEPHANIE: So, another update I have that I'm excited to get into for the rest of the episode is my recent work on our support and maintenance team, which I've talked about on the show before. But for any listeners who don't know, it's a kind of sub-team at thoughtbot that is focused on helping maintain multiple client projects at a time. But, at this point, you know, there's not as much active feature development, but the work is focused on keeping the codebase up to date, making any dependency upgrades, fixing any bugs that come up, and general support. So, clients have a team to kind of address those things as they come up.
And when I had last talked about it on the podcast, I was really excited because it was a bit of a different way of working. I felt like it was very novel to be, you know, have a lot of different projects and domains to be getting into. And knowing that I was working on this team, like, short-term and, you know, it may not be me in the future continuing what I might have started during my rotation, I thought it was really interesting to be optimizing towards, like, completion of a task. And that had kind of changed my workflow a bit and my process.
JOËL: So, now that you've been doing work on the support and maintenance team for a while and you've kind of maybe gotten more comfortable with it, how are you generally feeling about this idea of sort of jumping into new codebases all the time?
STEPHANIE: It is both fun and more challenging than I thought it would be. I tend to actually really enjoy that period of joining a new team or a project and exploring, you know, a codebase and getting up to speed, and that's something that we do a lot as consultants. But I think I started to realize that it's a bit of a tricky balance to figure out how much time should I be spending understanding what this codebase is doing? Like, how much of the application do I need to be understanding, and how much poking around should I be doing before just trying to get started on my first task, the first starter ticket that I'm given?
There's a bit of a balance there because, on one hand, you could just immediately start on the task and kind of just, you know, have your blinders [chuckles] on and not really care too much about what the rest of the code is doing outside of the change that you're trying to make. But that also means that you don't have that context of why certain things are the way they are. Maybe, like, the way that you want to be building something actually won't work because of some unexpected complexity with the app.
So, I think there, you know, needs to be time spent digging around a little bit, but then you could also be digging around for a long time [chuckles] before you feel like, okay, I finally have enough understanding of this new codebase to, like, build a feature exactly how a seasoned developer on the team might.
JOËL: I imagine that probably varies a little bit based on the task that you're doing. So, something like, oh, we want to upgrade this codebase to Ruby 3.3, probably requires you to have a very different understanding of the codebase than there's a bug where submitting a comment double posts it, and you have to dig into that. Both of those require you to understand the application on very different levels and kind of understand different mental models of what the app is doing.
STEPHANIE: Yeah, absolutely. That's a really good point that it can depend on what you are first asked to work on. And, in fact, I actually think that is a good guidepost for where you should be looking because you could develop a mental model that is just completely unrelated [chuckles] to what you're asked to do. And so, I suppose that is, you know, usually a good place to start, at least is like, okay, I have this first task, and there's some understanding and acceptance that, like, the more you work on this codebase, the more you'll explore and discover other parts of it, and that can be on a need to know kind of basis.
JOËL: So, I'm thinking that if you are doing something like a Ruby upgrade or even a Rails upgrade, a lot of what you care about the app is going to be on a more mechanical level. So, you want to know what gems you're using. You want to know what different patterns are being used, maybe how callbacks are happening, any particular features that are version-specific that are being used, things like that.
Whereas if you're, you know, say, fixing a bug, you might care a lot more about some of the product-level concerns. What are we actually trying to do here? What is the expected user experience? How does this deviate from that? What were the underlying mental models of the developers? So, there's almost, like, two lenses you can look at the code. Now, I almost want to make this a two-dimensional thing, where you can look at it either from, like, a very kind of mechanical lens or a product lens in one axis.
And then, on the other axis, you could look at it from a very high-level 10,000-foot view and maybe zoom in a little bit where you need, versus a very localized view; here's where the bug is happening on this page, and then sort of zoom out as necessary. And I could see different sorts of tasks falling in different quadrants there of, do I need a more mechanical view? Do I need a more product-focused view? And do I need to be looking locally versus globally?
STEPHANIE: Wow. I can't believe you just created a Cartesian graph [laughs] for this problem on the fly. But I love it because I do think that actually lines up with different strategies I've taken before. It's like, how much do you even look at the code before deciding that you can't really get a good picture of it, of what the product is, without just poking around from the app itself?
I actually think that I tend to start from the code. Like, maybe I'll see a screenshot that someone has shared of the app, you know, like a bug or something that they want me to fix, and then looking for that text in the code first, and then trying to kind of follow that path, whereas it's also, you know, perfectly viable to try to see the app being used in production, or staging, or something first to get a better understanding of some of the business problems it's trying to solve.
JOËL: When you jump into a new codebase, do you sort of consciously take the time to plan your approach or sort of think about, like, how much knowledge of this new codebase do I need before I can, like, actually look at the problem at hand?
STEPHANIE: Ooh, that's kind of a hard question to answer because I think my experience has told me enough times that it's never what I think it's going to [laughs] be, not never, but it frequently surprises me. It has surprised me enough times that it's kind of hard to know off the bat because it's not...as much as we work in frameworks that have opinions and conventions, a lot of the work that happens is understanding how this particular codebase and team does things and then having to maybe shift or adjust from there.
So, I think I don't do a lot of planning. I don't really have an idea about how much time it'll take me because I can't really know until I dive in a little bit. So, that is usually my first instinct, even if someone is wanting to, like, talk to me about an approach or be, like, "Hey, like, how long do you think this might take based on your experience as a consultant?" This is my first task. Oftentimes, I really can't say until I've had a little bit of downtime to, in some ways, like, acquire the knowledge [chuckles] to figure that out or answer that question.
JOËL: How much knowledge do you like to get upfront about an app before you dive into actually doing the task at hand? Are there any things, like, when you get access to a new codebase, that you'll always want to look at to get a sense of the project before you look at any tickets?
STEPHANIE: I actually start at the model level. Usually, I am curious about what kinds of objects we're working with. In fact, I think that is really helpful for me. They're like building blocks, in order for me to, like, conceptually understand this world that's being represented by a codebase. And I kind of either go outwards or inwards from there. Usually, if there's a model that is, like, calling to me as like, oh, I'll probably need to interact with, then I'll go and seek out, like, where that model is created, maybe through controllers, maybe through background jobs, or something like that, and start to piece together entry points into the application.
I find that helpful because a lot of the times, it can be hard to know whether certain pages or routes are even used at all anymore. They could just be dead code and could be a bit misleading. I've certainly been misled [chuckles] more than once. And so, I think if I'm able to pull out the main domain objects that I notice in a ticket or just hear people talk about on the team, that's usually where I gravitate towards first. What about you? Do you have a place you like to start when it comes to exploring a new codebase like that?
JOËL: The routes file is always a good sort of overview of, like, what is going on in the app. Scanning the models directory is also a great start in a Rails app to get a sense of what is this app about? What are the core nouns in our vocabulary? Another thing that's good to look for in a codebase is what are the big types of patterns that they tend to use?
The Rails ecosystem goes through fads, and, over time, different patterns will be more popular than others. And so, it's often useful to see, oh, is this an app where everything happens in service objects, or is this an app that likes to rely on view components to render their views? Things like that. Once you get a sense of that, you get a little bit of a better sense of how things are architected beyond just the basic MVC.
STEPHANIE: I like that you mentioned fads because I think I can definitely tell, you know, how modern an app is or kind of where it might be stuck in time [chuckles] a little bit based on those patterns and libraries that it's heavily utilizing, which I actually find to be an interesting and kind of challenging position to be in because how do you approach making changes to a codebase that is using a lot of patterns or styles from back in the day? Would you continue following those same patterns, or do you feel motivated to introduce something new or kind of what might be trendy now?
JOËL: This is the boring answer, but it's almost never worth it to, like, rewrite the codebase just to use a new pattern. Just introducing the new pattern in some of the new things means there are now two patterns. That's also not a great outcome for the team. So, without some other compelling reason, I default to using the established patterns.
STEPHANIE: Even if it's something you don't like?
JOËL: Yes. I'm not a huge fan of service objects, but I work in plenty of codebases that have them, and so where it makes sense, I will use service objects there. Service objects are not mutually exclusive with other things, and so sometimes it might make sense to say, look, I don't feel like I can justify a service object here. I'll do this logic in a view, or maybe I'll pull this out into some other object that's not a service object and that can live alongside nicely. But I'm not necessarily introducing a new pattern. I'm just deciding that this particular extraction might not necessarily need a service object.
STEPHANIE: That's an interesting way to describe it, not as a pattern, but as kind of, like, choosing not to use the existing [chuckles] pattern. But that doesn't mean, like, totally shifting the architecture or even how you're asking other people to understand the codebase. And I think I'm in agreement. I'm actually a bit of a follower, too, [laughs], where I want to, I don't know, just make things match a little bit with what's already been created, follow that style. That becomes pretty important to me when integrating with a team in a codebase.
But I actually think that, you know, when you are calibrating to a codebase, you're in a position where you don't have all that baggage and history about how things need to be. And maybe you might be empowered to have a little bit more freedom to question existing patterns or bring some new ideas to the team to, hopefully, like, help the code evolve. I think that's something that I struggle with sometimes is feeling compelled to follow what came before me and also wanting to introduce some new things just to see what the team might think about them.
JOËL: A lot of that can vary depending on what is the pattern you want to introduce and sort of what your role is going to be on that team. But that is something that's nice about someone new coming onto a project. They haven't just sort of accepted that things are the way they are, especially for things that the team already doesn't like but doesn't feel like they have the energy to do anything better about it.
So, maybe you're in a codebase where there's a ton of Ruby code in your ERB templates, and it's not really a pattern that you're following. It's just a thing that's there. It's been sort of the path of least resistance for a long time, and it's easier to add more lines in there, but nobody likes it. New person joins the team, and their naive exuberance is just like, "We can fix it. We can make it better."
And maybe that's, you know, going back and rewriting all of your views. That's probably not the best use of their time. But it could be maybe the first time they have to touch one of these views, cleaning up that one and starting a conversation among the team. "Hey, here are some patterns that we might like to clean up some of these views instead," or "Here are maybe some guidelines for anything new that we write that we want to do to keep our views clean," and sort of start moving the needle in a positive direction.
STEPHANIE: I like the idea of moving the needle. Even though I tend to not want to stir the pot with any big changes, one thing that I do find myself doing is in a couple of places in the specs, just trying to refactor a bit away from using lets. There were some kind of forward-thinking decisions made before, when RSpec was basically going to deprecate using the describe block without prepending it with the RSpec module, so just kind of throwing that in there whenever I would touch a spec and asking other people to do the same.
And then, recently, one kind of, like, small syntax thing that I hadn't seen before, and maybe this is just because of the age of the codebases in which I'm working, the argument forwarding syntax in Ruby that has been new, I mean, it's like not totally new anymore [laughs], but throwing that in there a little every now and then to just kind of shift away from this, you know, dated version of the code kind of towards things that other people are seeing and in newer projects.
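Two tiny examples of the incremental changes Stephanie mentions; the class and method names are placeholders:

    # Prepending the RSpec module, so the spec keeps working even when the
    # monkey-patched bare `describe` is disabled or eventually removed.
    RSpec.describe OrderProcessor do
      # ...
    end

    # Ruby's argument forwarding syntax (2.7+): forward every argument a
    # method receives straight through to another call.
    def create_order(...)
      OrderFactory.build(...)
    end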
JOËL: I love harnessing that energy of being new on a project and wanting to make things better. How do you avoid just being, you know, that developer, though, that's new, comes in, and just wants to change everything for the sake of change or for your own personal opinions and just kind of moves things around, stirs the pot, but doesn't really contribute anything net positive to the team? Because I've definitely seen that as well, and that's not a good first contribution or, you know, contribution in general as a newer team member. How do we avoid being that person while still capitalizing on that energy of being someone new and wanting to make a positive impact?
STEPHANIE: Yeah, that's a great point, and I kind of alluded to this earlier when I asked, like, oh, like, even if you don't like an existing syntax or pattern you'll still follow it? And I think liking something a different way is not a good enough reason [chuckles]. But if you are able to have a good reason, like I mentioned with the RSpec prepending, you know, it didn't need to happen now, but if we would hope to upgrade that gem eventually, then yeah, that was a good reason to make that change as opposed to just purely aesthetic [laughs].
JOËL: That's one where there is pretty much a single right answer to. If you plan to keep staying up to date with versions of RSpec, you will eventually need to do all these code changes because, you know, they're deprecating the old way. Getting ahead of that gradually as we touch spec files, there's kind of no downside to it.
STEPHANIE: That's true, though maybe there is a person who exists out there who's like, "I love this old version of RSpec, and I will die on this hill that we have to stay on [laughs] it."
But I also think that I have preferences, but I'm not so attached to them. Ideally, you know, what I would love to receive is just, like, curiosity about like, "Oh, like, why did you make this change?" And just kind of share my reasoning. And sometimes in that process, I realize, you know, I don't have a great reason, and I'll just say, "I don't have a great reason. This is just the way I like it. But if it doesn't work for you, like, tell me, and I'll consider changing it back. [chuckles]"
JOËL: Maybe that's where there's a lot of benefit is the sort of curiosity on the part of the existing team and sort of openness to both learn about existing practices but also share about different practices from the new teammate. And maybe that's you're coming in, and you have a different style where you like to write tests, maybe without using RSpec's let syntax; the team is using it. Maybe you can have a conversation with the team. It's almost certainly not worth it for you to go and rewrite the entire test suite to not use let and be like, "Hey, first PR. I made your test better."
STEPHANIE: Hundreds of files changed, thousands [laughs] of lines of code. I think that's actually a good segue into the question of how can a team support a new hire or a new developer who is still calibrating to a codebase? I think I'm curious about this being different from onboarding because, you know, there are a lot of things that we already kind of expect to give some extra time and leeway for someone who's new coming in. But what might be some ways to support a new developer that are less well known?
JOËL: One that I really like is getting them involved as early as possible in code review because then they get to see the patterns that are coming in, and they can be involved in conversations on those. The first PR you're reviewing, and you see a bunch of tests leaning heavily on let, and maybe you ask a question, "Is this a pattern that we're following in this codebase? Did we have a particular motivation for why we chose this?"
And, you know, and you don't want to do it in a sort of, like, passive-aggressive way because you're trying to push something else. It has to come from a place of genuine curiosity, but you're allowing the new teammate to both see a lot of the existing patterns kind of in very quick succession because you see a pretty good cross-section of those when you review code.
And also, to have conversations about them, to ask anything like, "Oh, that's unusual. I didn't know we were doing that." Or, "Hey, is this a pattern that we're doing kind of just local to this subsystem, or is this something that's happening all the way? Is this a pattern that we're using and liking? Is this a thing that we were doing five years ago that we're phasing out, but there's still a few of them left?" Those are all, I think, great questions to ask when you're getting started.
STEPHANIE: That makes a lot of sense. It's different from saying, "This is how we do things here," and expecting them to adapt or, you know, change to fit into that style or culture, and being open to letting it evolve based on the new team, the new people on the team and what they might be bringing to the table.
I like to ask the question, "What do you need to know?" Or "What do you need to be successful?" as opposed to telling them what I think they need [laughs]. I think that is something that I actually kind of recently, not regret exactly, but I was kind of helping out some folks who were going to be joining the team and just trying to, like, shove all this information down their throats and be like, "Oh, and watch out for these gotchas. And this app uses a lot of callbacks, and they're really complex."
And I think I was maybe coloring their [chuckles] experience a little bit and expecting them to be able to drink from the fire hose, as opposed to trusting that they can see for themselves, you know, like, what is going on, and form opinions about it, and ask questions that will support them in whatever they are looking to do. When we talked earlier about the four different quadrants, like, the kind of information they need to know will differ based off of their task, based off of their experience. So, that's one way that I am thinking about to, like, make space for a new developer to help shape that culture, rather than insisting that things are the way they are.
JOËL: It can be a fine balance where you want to be open to change while also you have to remain kind of ruthlessly pragmatic about the fact that change can be expensive. And so, a lot of changes you need to be justified, and you don't want to just be rewriting your patterns for every new employee or, you know, just to follow the latest trends because we've seen a lot of trends come and go in the Rails ecosystem, and getting on all of them is just not worth our time.
STEPHANIE: And that's the hard truth of there's always trade-offs [laughs] in software development, isn't that right?
JOËL: It sure is. You can't always chase the newest shiny, as fun as that is.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël shares his recent experience with Turbo, a JavaScript framework that simplifies adding interactivity to websites without extensive JavaScript coding. Stephanie gives an update on her quest to work from her office more, and the birds have arrived—most notably, chickadees.
Stephanie and Joël address a listener question from Edward about the concept of a "spike" in software development. They discuss the nature of spikes, emphasizing that they are typically throwaway work aimed at learning and de-risking rather than producing final code, and explore how spikes can lead to better decision-making and prioritization in software development, especially in complex codebases.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I'm pretty excited because this week, I actually got to use a little bit of Turbo for the first time. Turbo is Rails'...I guess it's not technically just for Rails. It's a sort of unobtrusive JavaScript framework that allows you to build a lot of interactive functionality without actually having to write a lot of JavaScript yourself, just by writing some HTML in a certain way. And you can add a lot of functionality and interactivity to your site without having to drop to custom writing some JavaScript.
STEPHANIE: Cool. Yeah, that is exciting. I personally have not gotten to use too much of it in a production/client setting; only played around with it a little bit on my own to keep up with what's new and just kind of reading about how other people are excited to use it. So, what are your first impressions so far?
JOËL: It's pretty nice. It, you know, works as advertised. My situation, I was rendering a calendar view of a lot of events, and this is completely server-rendered. And I realized, wait a minute, there are some days where I've got, like, 20 events, and I really, like, I want my calendar squares to stay sort of equally sized. So, I wanted to limit myself to only showing four or five events per calendar day.
And so, I added a little link at the bottom of the calendar day that says, you know, "See more." And when you click that link, it does some Turbo stuff, and it pulls in other events so that you can now sort of expand it to get the whole day. So, it's just a little bit of interactivity that you kind of get for free with Turbo just by wrapping a particular HTML tag around it and having the Turbo library loaded.
STEPHANIE: That's cool. I'm excited to try it out next time I'm working on a Rails project that just needs a little bit of that interactivity, you know, just to make that experience a little bit richer. And it seems like a really good, like, low-effort way to add some of those enhancements. Based on what you described, it sounds really easy.
JOËL: Yeah, I was impressed with just how low effort it all was, which is what you want, right? It works out of the box. So, for anyone who's kind of curious about it, Turbo Frames is the little bit that I used, and it worked really well.
Oh, something I'm actually excited about it as well; it plays nicely with clients that have disabled JavaScript. So, this link that I click to pull in the rest of the events, if somebody has JavaScript disabled, or if they command-click or control-click to open in a new tab, it doesn't just do nothing like it would often do in many sort of front-end framework-y places that have hijacked the URL click handler. Instead, it actually opens up the full list of items in a new page, just as if you'd clicked a normal link.
So, it really gives you that progressive enhancement feel where I can click a link, and it goes to another page with a list of all the 30 events if I don't have JavaScript. But if I do, maybe I get a slightly better experience where, instead of taking me to a new page, it just expands the list, and I get to see the full list. So, it plays nicely on both sides.
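A rough ERB sketch of that pattern; turbo_frame_tag comes with turbo-rails, while the frame IDs, paths, and partials here are made-up placeholders:

    <%# Calendar view: show the first few events for a day inside a frame %>
    <%= turbo_frame_tag "day_#{day.id}_events" do %>
      <%= render day.events.first(4) %>
      <%= link_to "See more", calendar_day_path(day) %>
    <% end %>

    <%# The day's own page: a frame with the same ID containing all the events %>
    <%= turbo_frame_tag "day_#{day.id}_events" do %>
      <%= render day.events %>
    <% end %>

With JavaScript enabled, Turbo replaces the frame's contents with the matching frame from the linked page; without JavaScript, or on a command-click, the same link simply navigates to the full page.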
STEPHANIE: That's really cool. As someone who's just starting to dabble in some alternative browsers outside of the main popular ones [chuckles], I have noticed how many websites do not work for me anymore [laughs]. And that sounds, like, nice from a user perspective.
JOËL: So, other than dabbling with the new browsers, what's new in your world?
STEPHANIE: A few weeks ago, I talked about [laughs] sitting more at my desk and, you know, various incentives that I gave myself to do that. And I'd like to say that I've been doing a pretty good job [laughs]. So, what's new in my world is that I've followed up on my commitment to sit at my desk more, feel a little bit more organized in my workday. And that's especially true because the birds have finally discovered my bird feeder [laughs].
JOËL: Oh, that's really cool.
STEPHANIE: There were a few weeks where I was not really getting any visitors, and, you know, I was just like, when are they going to come and eat this delicious birdseed that I've [laughs] put out for them? And it seems like a flock of chickadees that normally like to hang out on the apple tree in my backyard have figured out this new source of food, and they'll sometimes, five of them at a time, will come, and sometimes they even fight [laughs] to get on the ledge to hang out at the bird feeder.
And yeah, it turns out that the six pounds of bird feed that I bought, I'll start to turn through [laughs] that a little bit quicker now, so I'm excited about that and just to also see other birds and species come and go as time goes on. So, that's been an exciting new development.
JOËL: So, the six pounds of birdseed might not last you through the winter.
STEPHANIE: I was debating between six pounds and, like, a 20-pound bag [laughs], which that would have been a lot. And so far, I think the six pounds has been serving me well. We'll see how long it lasts, but yeah, it's finally starting. I might have to refill it soon, so, you know, I was hopefully not going to have to store all that bird feed [laughs] just, like, in my house for a long time.
JOËL: Any birds that have shown up that have been particularly fun to watch or that are maybe your favorites?
STEPHANIE: I mentioned the chickadees because they seem to come as a group, and I really like watching them interact with each other. It's just kind of like bird TV, you know, it's not just a single bird. It's just watching these animals that are a collective do their thing. And I've been enjoying that a lot.
JOËL: Now I'm just imagining a reality TV but the Chickadee edition.
STEPHANIE: Oh yeah, definitely. I know some people put, like, cameras at their bird feeders to either live stream, which is funny because most of the time, there's nothing happening [laughs]. Usually, the birds are really in and out. Or they'll have, like, a really fancy camera to take, like, really beautiful up-close photos.
There's a blog that I discovered recently where someone posts about the birds that visit them at their place in Michigan. I'll link to it in the show notes, but it's really cool to see these, like, up-close and personal photos of basically the bird's mouth. Sometimes, they're open [laughs], so you can see right in them. I don't know; maybe there's a time where I'll get so into it that I'll create my own bird feeder blog.
JOËL: Well, if you do, you should definitely share it with the listeners on the podcast. Speaking of listeners on the podcast, we've recently had a listener question from Edward that I thought was a really interesting topic, and I wanted to take a whole episode to dig into.
And Edward asks about the concept of a spike. Sometimes, we're asked to investigate a complex new feature, and you might want to do some evaluation on the feasibility and complexity and build out just enough of it to make a well-informed opinion. And ideally, you're doing that in a way that reduces risk of spending too much time with unproven impact.
The problem is that in any reasonably complex codebase, that investigation work can be most of the work needed to build the feature. And Edward gives an example: if you're adding a system admin role, the core of the work is adding a new role with all of the abilities, but the real work is ensuring that it interacts with the entire system in the appropriate way. So, how do you manage making sure that you're doing spikes well?
And Edward asks if this is something that we've experienced a sort of feeling that we're doing 90% of the work in the spike. He also asks, does this say something about the codebase that you're working on? If it's hard to spike in it, does that say something about the underlying codebase, or are we just all doing spikes wrong? So yeah, I'm curious, Stephanie: do you occasionally spike things out in code on your projects?
STEPHANIE: Yeah, I do. I think one piece that was left a little bit unsaid is that I think spiking usually comes up when the team can't really estimate how long a task will take, you know, assuming that you use estimates on your team [chuckles]. That calls for a spike ticket, right? And someone will spend some time. And I think on some teams, this is usually time-boxed as well to maybe do a proof of concept or, yeah, do some of that initial exploration.
JOËL: Before we go too deep, I think it's probably useful to define spike in that I think it's a little bit easy and probably varies from team to team and even from a developer to developer. I think, for me, when I think of a spike, it's throwaway work. The code that I write will not get shipped, and this is not code that will just get improved later. It is entirely throwaway work. And the purpose of it is to learn something about the project that's being done.
Typically, it's in a sort of de-risking fashion, so to say, look, we've got a feature that's got a lot of unknowns in it. And if we commit to it right now or we start investing time into it, it could become a bit of a time pit. Let's try to answer some questions about it. Let's try to resolve some of those unknowns so that we can better make decisions around maybe estimation, but maybe even just prioritization. If this seems like something that would be really challenging to do, maybe we don't want to prioritize it this quarter. Is that similar to how you think of spikes, or do you have a different sort of definition of it?
STEPHANIE: Yeah, I am glad you mentioned that it's throwaway work. I think I was a little hesitant to commit to that definition with conviction because even based off of what Edward was saying, there's kind of, like, maybe different ideas about that or different expectations. But I sometimes think that, depending, spiking doesn't even necessarily need to lead to code. Like, it could just be answering questions. And so, at the same time, I think it is, I like what you said, work that helps you learn more about the system, whether or not there's some code written as, like, a potential path at the end of it.
JOËL: Interesting. So, you would put some things that don't involve code at all in the spike bucket.
STEPHANIE: I think there have been times where I've done a spike, and I've not coded out anything, but I've answered some questions, and I've left comments about unearthing some of the uncertainty that led us to want to explore the idea in the first place. Then, again, I also have gone down the path of, like, trying out a solution and maybe even multiple and then evaluating afterwards which ones I think were more suitable. So, it could mean both. I think that is actually something that's within the power of whoever is assigned this work to determine whatever is valuable to them in order to get enough information to figure out how you want to move forward.
JOËL: Another element of spikes that I think is often implied is that because this is throwaway work, you're not necessarily putting in all the work to make everything sort of clean, or well-structured, or reusable, or anything like that. So, it's quite possible that you would not even test this. You might not break this out into objects in the way that you would if this had to be reused. You might have duplication all over, and that's okay because the purpose of this code is not to be sort of production-grade; it's to answer some questions, and then you're going to throw it away and, using those answers, build something correctly.
STEPHANIE: Yeah, I think that's true. And it's kind of an interesting distinction from, you know, what you might consider your regular work in which the expectation is that it will be shipped [chuckles]. And there's also some amount of conflating the two, I think because if, you know, you and I are saying like, yeah, like, this exploration should be standalone, and it is not going to be used to be built on top of necessarily, there is some amount of revisiting. And you're not starting from scratch because you have an idea, but you are starting fresh if you will.
And so, you know, when you are doing that spiking, I think it allows you to move a little bit faster, but that doesn't mean that the work is, like, any X percent [laughs] done at the end of it.
JOËL: The work is still kind of, I guess, 0% done, again, because this is throwaway code, in our definition of a spike anyway. Would you distinguish between the terms spike and prototype?
STEPHANIE: Oh, interesting. My initial reaction is that a prototype would then be user-tested [laughs] in some way. Like, the point is to then show someone and then get them interacting with it, any initial reactions from that. Whereas a spike is really for the developer and maybe the team to discuss.
JOËL: I like that distinction. I definitely think that a spike, for me, is purely technical. We're not spiking out a feature by putting a thing live in production behind a feature flag, showing it to 10% of users, and seeing how they respond to that. That's not a spike. So, I think something a little bit more like that, or where you're showing things maybe to users, or you're wanting to do maybe some user testing with something. And it can be throwaway code still. I think now you're starting to get something more that you would call a prototype. So, I like that distinction of, is this sort of internal or external?
But in the way they're used, they can often be similar, and that oftentimes both will sort of...they're built to be as cheap as possible to answer the questions you're trying to get answered, whether that's from a user or just technical reasons. And so, the whole thing can be a little bit of smoke and mirrors, a little bit of duct tape and toothpicks, as long as you only have...like, the only solid parts you need are the parts that are going to help you answer your question. And so, any hack or cheat you can get to to bypass everything else is time you've saved, and that's a good thing.
STEPHANIE: Oh, I'm very curious about this idea of time saved because I think sometimes an underappreciated outcome of a spike is what not to do or is choosing not to do something. And it can feel not great to have spent hours or even days exploring a path just to realize that it's not worth it. I'm curious, like, when you know to stop and also, how you get other people kind of onboard that even just figuring out an initial idea was not a viable solution, how that could be a valuable insight to the rest of the team.
JOËL: Something that I think can be really useful is before you even start spiking out something, write a list of questions that you're trying to answer with this code, and then don't let yourself get distracted. Write the minimum amount of code that will allow you to answer those questions. So, maybe that is a question around, is it possible to connect this external API to our systems? There are some questions around, like, how credentials and things will work or how complex that will be.
It might be a question around, like, maybe there's even, like, a performance thing. We want to talk to an external system and, you know, the responses back need to be within a certain amount of time. Otherwise, this whole approach where we're going to try to fetch data live is not feasible. So, the answer we need there is, can we do it live, or do we need to consider some sort of background fetching, or caching approach, or something like that? So, write the minimum amount of code that it would take to do that.
And maybe the minimum amount of code, like you said, is not even really code. Maybe it's a script or even just trying out some curl commands and timing them at the command line. It could be a lot of things. But I think having a list of questions up front really helps you focus on the purpose of the spike.
And I think it helps me a little bit as well with emotional attachment in that success is not necessarily coming to a yes on all of those questions. It is having an answer, going from question mark to some answer. So, if I can answer that question, if I can find even a clever way to answer that question faster, that is success. I have done a good job with my spike.
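For example, a throwaway script answering the "is a live call fast enough?" question might be as small as this (the URL is a placeholder):

    require "benchmark"
    require "net/http"

    # Quick spike: time a single live request to the external API.
    uri = URI("https://api.example.com/v1/records")
    elapsed = Benchmark.realtime { Net::HTTP.get_response(uri) }
    puts format("response took %.0fms", elapsed * 1000)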
STEPHANIE: I like that a lot. I think some people might struggle with spikes because they're so ambiguous. And if it's just, like, explore this potential feature, or, like, maybe not even that, but even saying, like, we want to build this admin role, to use Edward's example. And to constrain it to how should we do that, it already kind of guides the spike in a certain direction that may or may not be exactly what you're looking for. And so, there's some value in figuring out what questions to ask with the product team, even to get alignment on what the purpose of this task is.
And, you know, this is true of regular feature work, too. When those decisions have kind of already been made about what we're working on without a lot of input from developers who will be working on it, it can be really hard to, like, go back and be like, "Oh, actually, that's not really possible." But if the questions are like, "Is this possible?" or like, what it costs to do this, I think it prevents some of that friction and misalignment that might be had when the outcome of a spike turns out to be maybe not what someone wants to hear.
JOËL: And I think the questions you ask don't necessarily have to be yes or no questions. They could be some sort of list, right? It could be, look, we're looking at two different implementations or two general approaches, families of solutions for our super admin role. What are the trade-offs of each?
And so, a spike might be exploring. Can we come up with a list of pros and cons for each approach? And maybe some of them we just know from experience at developing, but maybe some of them might involve actually doing a little bit of work to play out the pros and cons. Maybe that's in our app. Maybe that's even spinning up a little app on the side, right? If we're comparing maybe two gems or something like that, to see how we feel about them by throwing a few different scenarios at them and exploring edge cases. So, the questions don't need to be straight-up yes or no.
So, you mentioned earlier the idea that sometimes one developer might do the spike, and then another one might do the actual work, maybe inspired by the answers that were on that spike. And I think that can lead to some really interesting dynamics, especially if the developer who did the first spike has done kind of, like, what Edward describes, what feels like 90% of the feature.
It may be not so great code quality. And then this is a branch on GitHub, and they're like, "Okay, do the rest. Make it good. I've already explored the possibilities here," and then you're the developer who has to pick that up. Have you ever experienced that? And if so, how do you feel picking up a ticket like that?
STEPHANIE: Yeah, I have experienced it, and I think there is always something lost when that happens when you are not the person who did the research. And then having to just go from whatever was left in the notes or from the code and, you know, I don't know how feasible it is for whoever spiked to always be doing the implementing, but I certainly end up having a lot of questions, I think. Like, you can't document or even code out, like, every single thing you learned in that process, right? There's always from big to small decisions or alternatives considered that won't make it into however that communication or expression or knowledge transfer happens.
And I think the two choices that I have as a developer picking that up is either to just trust [laughs] that the work the other person did is taking me down a good path or to spend more time rebuilding some of that context and making some of my own evaluations along the way and deciding for myself whether I'm like, oh yeah, this is a good idea, or maybe, like, I might change something here. So, I think that there is some time lost, too. And I think that's a really good thing to point out when someone might think like, oh, this is mostly done. That's kind of my first reaction in terms of the context loss in an exchange like that.
JOËL: Do you feel like this is a situation where you would want to have the same developer do both the spike and the final implementation? Or is this maybe a situation where spikes aren't being done correctly, and maybe a branch with some code that's kind of half-written is maybe the wrong artifact to hand off from one developer to the other?
STEPHANIE: Oh, that's really interesting about if that's the wrong artifact to hand off because it could be misleading. Maybe it's not always, and maybe there's some really great code that comes out of it if someone builds on top of a work-in-progress branch or a spike branch.
Honestly, I think, and I haven't even really gotten to experience this all too much because maybe there is some perception that it's backtracking or, you know, it's more work or more time, but it would be really cool for whoever had spiked it to then bring someone along to pair on it and start fresh, like we mentioned, where they're kind of coming to each decision to be made with an idea, but it's not necessarily set in stone, right? There could be that discussion. It could be, like, a generative experience to either refine that code that had initially been spiked out or discover new things along the way. It's not like the outcome has already been decided because of the spike. It is information, and that's that.
JOËL: And we on this podcast are very pro-discovering new things along the way. I think sometimes as a developer, if I get sort of a, you know, maybe a 90% branch done that's get passed on to me from somebody else who did a spike, it feels a little bit like the finish the rest of the owl meme, except that now I'm not even, like, just trying to follow a tutorial. Just somebody did the first couple of circles and then is like, "Oh yeah, you finish the rest of the owl. I did the hard work. You just need to polish it up."
On the one hand, it's like, dude, if you're, like, doing 90%, you may as well finish it. I don't want to just be polishing somebody else's work. And, you know, oftentimes, it might feel like it's 90% done, but actually, like, there's a lot of edge cases and nuance that have not been handled. And, you know, a spike is meant to be throwaway work to start with. So, I feel like those sorts of handoffs often, I don't know, they don't sit with me well.
STEPHANIE: Yeah. You could also come in and be like, this doesn't even look like an owl at all [laughs].
JOËL: I feel like maybe in my ideal world, a branch with partly written code is, I guess, an intermediate artifact that might be useful to show. But what I really want from a spike is answers to questions that will allow me, when I build the thing from scratch to make intelligent decisions.
So, probably what I want out of a spike is something that's closer to documentation, a list of questions that we were asking, and then the answers we came to by doing the spike work. And that might be maybe a list of trade-offs, or maybe we didn't really know the correct endpoints from this undocumented API, and we tried some stuff, and we, like, figured out what endpoints we needed, or what the shape of the JSON payload needed to be, things like that.
Maybe we tried a couple of different implementations, or we did some exploration around, like, what gem we'd like to use, and we have a recommendation for a gem. Those are all, I think, very concrete outcomes from a spike that I can then use when I'm building it from scratch. And I'm not just, like, branching off your branch or having it open in another browser and copy-pasting snippets while trying to, like, add some testing and maybe modularizing it a little bit.
I think that leads to probably a better outcome for the person who's doing the spike because they have a tighter scope and also a better outcome for me, who's then trying to build that feature correctly from the ground up. I think that would be my sort of ideal workflow.
STEPHANIE: While you were saying that, I thought about how a lot of those points sounded like requirements for a feature. And that, I think, is also a good outcome when a spike then leads to more concrete requirements because those are all decisions that were thought through, right? And even better is if that also documents things that were tried and the trade-offs that came with them or, like, the reasons why they were less viable or not ideal for that added context because that is also work that happened [laughs] and should be captured so someone can know that that might not be time they need to spend on that.
One piece I'm really interested in that we haven't quite touched on is the complexity of the app and what it means when spiking becomes a challenge because of that complexity.
JOËL: Yeah. And I think sort of inherent in there is that maybe the idea that if you have a really complex app, it sort of forces you to go to the 90% of the work done in order to successfully answer the questions you wanted to answer with your spike in a way that maybe a better-structured app would not. Do you think that's true?
STEPHANIE: Well, I actually think that if the app is complex, you're actually seeing that affect all parts of feature development, not just spiking, where everything takes longer [laughs] because you maybe feel less confident. You're nervous about breaking something. Edward called the real work ensuring that it interacts with the entire system correctly, and that's true of, I think, just software development in general. And so, I wonder if, you know, spiking happens to be one way that it manifests, but if there are signals that it's affecting, you know, all parts of your workflow.
JOËL: There definitely is a cost, right? Complex software imposes costs everywhere. In some way, I think maybe spiking is attempting to get around some of those, in that there are some decisions that we can just say, you know what? We'll build the feature, and we'll just kind of figure it out as we go along, and we'll, like, build the thing.
Spiking attempts to say, look, let's not build the whole thing. Let's fake out a bunch of parts because, really, we have a big question that we want to answer about a thing that is three steps down, you know. And maybe the question is, look, we're trying to build the super admin role, and we know it's got all these, like, edge cases we need to deal with. Maybe we need a list of the edge cases, and maybe that's how we, like, try to drive them out.
But maybe this is a, hey, do we want to go with more of a, like, a role hierarchy inheritance-based approach, or do we want to go with some sort of escalating defaults? Or whatever the couple of different strategies you might want to do. And the spike might be trying to answer the question, how can we, as cheaply as possible while doing the minimum amount of work, sort of explore which of these implementations works best? And in a complex system, is it possible to get to the answer to those questions without building out 90% of the feature itself? I think, going to what you said, you might have to do more work if it's a complex system.
But I would also encourage everyone to go absolute minimalist, like, keep your goal in mind: what is the question you're trying to answer? And then ruthlessly cut everything you don't need to get to the point where you can answer that question. Do you need to hard code? Do you need to metaprogram? Do you need to do just, like, the worst, dirtiest code that you've ever written? That's okay because, like, the implementation does not matter. The fact that you're not exercising the full system does not matter, as long as the part that you're trying to exercise to answer your questions does get used.
STEPHANIE: Yeah, I like that a lot. And I wonder if the impulse to want to spike something is coming out of nervousness about how complicated the ask is. And it's like, well, I don't want to tell you that it's going to take a long time because this app is extremely complex, and everything takes a long time. You know, it's like not wanting to face that hard question of either we need to just set our expectations that things take longer, or we need to make some kind of change to make that easier to work with. And that is a lot of thought and effort.
And so, it's kind of an answer to be like, well, like, let me spike this out and then see [laughs]. And so it may be a way to appease someone making a request for a feature. I don't know; I'm perhaps projecting a little bit here [chuckles]. But it could also be an important question to ask yourself if you find your team, like, needing to lean on spikes a lot because you just don't know.
JOËL: That's really interesting because I think that maybe connects to a recent episode we did on breaking features down into smaller chunks. Spikes can often manifest, or the need for a spike can often manifest when you've got a larger, less well-defined feature that you want to do. So, sometimes, breaking things into smaller pieces will help you have something that's a little bit more well-defined that you feel confident jumping into without doing a spike.
Or maybe the act of trying to split this sort of large, undefined task into smaller pieces will reveal questions that need to be answered and say, look, I don't know where the seam should be, where to split this task because I don't know the answer to this one question. If I could know the answer to this one question, I would know where to split this feature. That's your spike right there.
Do the minimum amount of code to answer that one question, and then you can split your feature and confidently work on the two smaller pieces. And I think that's a win for everyone.
STEPHANIE: Yeah. And you can listen back to our vertical slice episode [laughs] for some inspiration on that.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie shares her task of retiring a small, internally-used link-shortening app. She describes the process as both celebratory and a bit mournful. Meanwhile, Joël discusses his deep dive into ActiveRecord, particularly in the context of debugging. He explores the complexities of ActiveRecord querying schemas and the additional latency this introduces.
Together, the hosts discuss the nuances of package management systems and their implications for developers. They touch upon the differences between system packages and language packages, sharing personal experiences with tools like Homebrew, RubyGems, and Docker.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, this week, I got to have some fun working on some internal thoughtbot work. And what I focused on was retiring one of our just, like, small internal self-hosted on Heroku apps in favor of going with a third-party service for this functionality. We basically had a tiny, little app that we used as a link-shortening service. So, if you've ever seen a tbot.io short link out in the world, we were using just, like, an in-house app to do that, you know, but for various reasons...it just wasn't worth maintaining anymore. So, we wanted to just use a purchased service.
But today, I got to just, like, do the little bit of, like, tidying up, you know, in preparation to archive a repo and kind of delete the app from Heroku, and I hadn't done that before. So, it felt a little bit celebratory and a little bit mournful even [laughs] to, you know, retire something like that. And I was pairing with another thoughtbot developer, and we used a pairing app called Tuple. And you can just send, like, fun reactions to each other. Like, you could send, like, a fire emoji [laughs] or something if that's what you're feeling.
And so, I sent some, like, confetti when we clicked the, "I understand what deleting this app means on GitHub." But I joked that "Actually, I feel like what I really needed was a, like, a salute kind of like thank you for your service [laughs] type of reaction."
JOËL: I love those moments when you're kind of you're hitting those kind of milestone-y moments, and then you get to send a reaction. I should do that more often in Tuple. Those are fun.
STEPHANIE: They are fun. There's also a, like, table flip reaction, too, which is one that I really enjoy [laughs], you know, you just have to manifest that energy somehow. And then, after we kind of sent out an email to the company saying like, "Oh yeah, we're not using our app anymore for link shortening," someone had a great suggestion to make our archived repo public instead of private. I kind of liked it as a way of, like, memorializing this application and letting community members see, you know, real code in a real...the application that we used here at thoughtbot. So, hopefully, if not me, then someone else will be able to do that and maybe publish a little blog post about that.
JOËL: That's exciting. So, it's not currently public, the repo, but it might be at some point in the future.
STEPHANIE: Yeah, that's right.
JOËL: We'll definitely have to mention it on a future episode if that happens so that people following along with the story can go check out the code.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been doing a deep dive into how ActiveRecord works. Particularly, I am debugging some pretty significant slowdowns in querying ActiveRecord models that are backed not by a regular Postgres database but instead a Snowflake data warehouse via an ODBC connection. So, there's a bunch of moving pieces going on here, and it would just take forever to make any queries.
And sure, the actual reported query time is longer than for a local Postgres database, but then there's this sort of mystery extra waiting time, and I couldn't figure out why is it taking so much longer than the actual sort of recorded query time. And I started digging into all of this, and it turns out that in addition to executing queries to pull actual data in, ActiveRecord needs to, at various points, query the schema of your data store to pull things like names of tables and what are the indexes and primary keys and things like that.
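For anyone who wants to see what that schema traffic looks like, here is a rough sketch using ActiveRecord's standard connection introspection methods; the "users" table is just a stand-in, and the exact queries the Snowflake adapter issues aren't spelled out in the episode.

```ruby
# In a Rails console, where a database connection is already established:
conn = ActiveRecord::Base.connection

conn.tables                # table names known to the data store
conn.columns("users")      # column metadata: names, SQL types, defaults
conn.indexes("users")      # index definitions for the table
conn.primary_key("users")  # the table's primary key column

# Each of these can mean a round trip to the data store. That's cheap against
# a local Postgres database, but it can add real latency over an ODBC
# connection to a remote warehouse.
```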
STEPHANIE: Wow. That sounds really cool and something that I have never needed to do before. I'm curious if you noticed...you said that it takes, I guess, longer to query Snowflake than it would a more common Postgres database. Were you noticing this performance slowness locally or on production?
JOËL: Both places. So, the nice thing is I can reproduce it locally, and locally, I mean running the Rails app locally. I'm still talking to a remote Snowflake data warehouse, which is fine. I can reproduce that slowness locally, which has made it much easier to experiment and try things. And so, from there, it's really just been a bit of a detective case trying to, I guess, narrow the possibility space and try to understand what are the parts that trigger slowness. So, I'm printing timestamps in different places. I've got different things that get measured.
I've not done, like, a profiling tool to generate a flame graph or anything like that. That might have been something cool to try. I just did old-school print statements in a couple of places where I, like, time before, time after, print the delta, and that's gotten me pretty far.
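A minimal sketch of that old-school print-statement timing, assuming a Rails console or scratch script; the Report model, the "reports" table, and the labels are placeholders rather than code from the client project.

```ruby
# Wrap any block and print how long it took, in milliseconds.
def with_timing(label)
  started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  result = yield
  elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
  puts "#{label}: #{(elapsed * 1000).round(1)}ms"
  result
end

# Compare the "hidden" schema lookup against the query you actually asked for.
with_timing("schema lookup") { ActiveRecord::Base.connection.columns("reports") }
with_timing("actual query")  { Report.where(year: 2024).to_a }
```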
STEPHANIE: That's pretty cool. What do you think will be an outcome of this? Because I remember you saying you're digging a little bit into ActiveRecord internals. So, based on, like, what you're exploring, what do you think you could do as a developer to increase some of the performance there?
JOËL: I think probably what this ends up being is finding that the Snowflake adapter that I'm using for ActiveRecord maybe has some sort of small bug in it or some implementation that's a little bit too naive that needs to be fine-tuned. And so, probably what ends up happening here is that this finishes as, like, an open-source pull request to the Snowflake Adapter gem.
STEPHANIE: Yeah, that's where I thought maybe that might go. And that's pretty cool, too, to, you know, just be investigating something in your app and being able to make a contribution that benefits the community.
JOËL: And that's what's so great about open source because not only am I able to get the source to go source diving through all of this, because I absolutely need to do that, but also, then if I make a fix, I can push that fix back out to the community, and everybody gets to benefit.
STEPHANIE: Cool. Well, that's another thing that I look forward to hearing more on the development of [laughs] later if it pans out that way.
JOËL: One thing that has been interesting with this Snowflake work is that there are a lot of moving parts and multiple different packages that I need to install to get this all to work. So, I mentioned that I might be doing a pull request against the Snowflake Adapter for ActiveRecord, but all of this talks through a sort of lower-level technology protocol called ODBC, which is a sort of generic protocol for speaking to data stores, and that actually has two different pieces. I had to install two different packages.
There is a sort of low-level executable that I had to install on my local dev machine and that I have to install on our servers. And on my Mac, I'm installing that via Homebrew, which is a system package. And then to get Ruby bindings for that, there is a Ruby gem that I install that allows Ruby code to talk to ODBC, and that's installed via RubyGems or Bundler.
And that got me thinking about sort of these two separate ecosystems that I tend to work with every day. We've got sort of the system packages and the, I don't know what you want to call them, language packages maybe, things like RubyGems, but that could also be NPM or whatever your language of choice is, and realizing that we kind of have things split into two different zones, and sometimes we need both and wondering a little bit about why is that difference necessary.
STEPHANIE: Yeah, I don't have an answer to that [laughs] question right now, but I can say that that was an area that really tripped me up, I think, when I was first a fledgling developer. And I was really confused about where all of these dependencies were coming from and going through, you know, setting up my first project and being, like, asked to install Postgres on my machine but then also Bundler, which then also installs more dependencies [laughs].
The lines between those ecosystems were not super clear to me. And, you know, even now, like, I find myself really just kind of, like, learning what I need to know to get by [laughs] with my day-to-day work. But I do like what you said about these are kind of the two main layers that you're working with in terms of package management. And it's really helpful to have that knowledge so you can troubleshoot when there is an issue at one or the other.
JOËL: And you mentioned Postgres. That's another one that's interesting because there are components in both of those ecosystems. Postgres itself is typically installed via a system package manager, so something like Homebrew on a Mac or apt-get on a Linux machine. But then, if you're interacting with Postgres in a Ruby app, you're probably also installing the pg gem, which are Ruby's bindings for Postgres to allow Ruby to talk to Postgres, and that lives in the package ecosystem on RubyGems.
STEPHANIE: Yeah, I've certainly been in the position of, you know, again, as consultants, we oftentimes are also setting up new laptops entirely [laughs] like client laptops and such and bundling and the pg gem is installed. And then at least I have, you know, I have to give thanks to the very clear error message that [laughs] tells me that I don't have Postgres installed on my machine. Because when I mentioned, you know, troubleshooting earlier, I've certainly been in positions where it was really unclear what was going on in terms of the interaction between what I guess we're calling the Ruby package ecosystem and our system level one.
JOËL: Especially for things like the pg gem, which need to compile against some existing libraries, those always get interesting where sometimes they'll fail to compile because there's a path to some C compiler that's not set correctly or something like that. For me, typically, that means I need to update the macOS command line tools or the Xcode command line tools; I forget what the name of that package is. And, usually, that does the trick. That might happen if I've upgraded my OS version recently and haven't downloaded the latest version of the command line tools.
STEPHANIE: Yeah. Speaking of OS versions, I have a bit of a story to share about using...I've never said this name out loud, but I am pretty sure that it would just be pronounced as wkhtmltopdf [laughs]. For some reason, whenever I see words like that in my brain, I want to, like, make it into a pronounceable thing [laughs].
JOËL: Right, just insert some vowels in there.
STEPHANIE: Yeah, wkhtmltopdf [laughs]. Anyway, that was being used in an app to generate PDF invoices or something. It's a pretty old tool. It's a CLI tool, and, as far as I can tell, it's been around for a long time but is no longer maintained. And so, as I was working on this app, I was running into a bug where that library was causing some issues with the PDF that was generated. So, I had to go down this route of actually finding a Ruby gem that would figure out which package binary to use, you know, based off of my system. And that worked great locally, and I was like, okay, cool, I fixed the issue.
And then, once I pushed my change, it turns out that it did not work on CI because CI was running on Ubuntu. And I guess the binary didn't work with the latest version of Ubuntu that was running on CI, so there was just so many incompatibilities there. And I was wanting to fix this bug. But the next step I took was looking into community-provided packages because there just simply weren't any, like, up-to-date binaries that would likely work with these new operating systems. And I kind of stopped at that point because I just wasn't really sure, like, how trustworthy were these community packages. That was an ecosystem I didn't know enough about.
In particular, I was having to install some using apt from, you know, just, like, some Linux community. But yeah, I think I normally have a little bit more experience and confidence in terms of the Ruby package ecosystem and can tell, like, what gems are popular, which ones are trustworthy. There are different heuristics I have for evaluating what dependency to pull in. But here I ended up just kind of bailing out of that endeavor because I just didn't have enough time to go down that rabbit hole.
JOËL: It is interesting that learning how to evaluate packages is a skill you have to learn that varies from package community to package community. I know that when I used to be very involved with Elm, we would often have people who would come to the Elm community from the JavaScript community who were used to evaluating NPM packages. And one of the metrics that was very popular in the JavaScript community is just stars on GitHub. That's a really important metric. And that wasn't really much of a thing in the Elm community.
And so, people would come and be like, "Wait, how do I know which package is good? I don't see any stars on GitHub." And then, it turns out that there are other metrics that people would use. And similarly, you know, in Ruby, there are different ways that you might use to evaluate Ruby gems that may or may not involve stars on GitHub. It might be something entirely different.
STEPHANIE: Yeah. Speaking of that, I wanted to plug a website that I have used before called the Ruby Toolbox, and that gives some suggestions for open-source Ruby libraries of various categories. So, if you're looking for, like, a JSON parser, it has some of the more popular ones. You know, it organizes them by category, and I think it is also based on things like stars and forks, so that's a good one to know.
JOËL: You could probably also look at something like download numbers to see what's popular, although sometimes it's sort of, like, an emergent gem that's more popular. Some of that almost you just need to be a little bit in the community, like, hearing, you know, maybe listening to podcasts like this one, subscribing to Ruby newsletters, going to conferences, things like that, and to realize, okay, maybe, you know, we had sort of an old staple for JSON parsing, but there's a new thing that's twice as fast. And this is sort of becoming the new standard, and the community is shifting towards that. You might not know that just by looking at raw stats. So, there's a human component to it as well.
STEPHANIE: Yeah, absolutely. I think an extension of knowing how to evaluate different package systems is this question of like, how much does an average developer need to know about package management? [laughs]
JOËL: Yeah, a little bit to a medium amount, and then if you're writing your own packages, you probably need to know a little bit more. But there are some things that are really maybe best left to the maintainers of package managers. Package managers are actually pretty complex pieces of software in terms of all of the dependency management and making sure that when you say, "Oh, I've got Rails, and this other gem, and this other gem, and it's going to find the exact versions of all those gems that play nicely together," that's non-trivial.
As a sort of working developer, you don't need to know all of the algorithms or the graph theory or any of that that underlies a package manager to be able to be productive in your career. And even as a package developer, you probably don't need to really know a whole lot of that.
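As a tiny illustration of the building block underneath that resolution work, RubyGems exposes version requirements directly; the full resolver is far more involved than this, so treat it only as a sketch of the core check.

```ruby
require "rubygems"

# "~> 7.0" means "at least 7.0, but below 8.0".
requirement = Gem::Requirement.new("~> 7.0")

requirement.satisfied_by?(Gem::Version.new("7.1.2")) # => true
requirement.satisfied_by?(Gem::Version.new("8.0.0")) # => false

# A package manager has to pick one version per gem such that every gem's
# requirements on every other gem are satisfied at the same time; that's the
# non-trivial part of dependency resolution.
```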
STEPHANIE: Yeah, that makes sense. I actually had referred to our internal guidelines at thoughtbot here, our kind of, like, expectations for skill levels for developers. And I would say for an average developer, we kind of just expect a basic understanding of these more complex parts of our toolchain, I think, specifically, like, command line tools and package management. And I think I'd mentioned earlier that, for me, it is a very need-to-know basis.
And so, yeah, when I was going down that little bit of exploration around why wkhtmltopdf [chuckles] wasn't working [chuckles], it was a bit of a twisty and turning journey where I, you know, wasn't really sure where to go. I was getting very obtuse error messages, and, you know, I had to dive deep into all these forums [laughs] for all the various platforms [laughs] about why libraries weren't working.
And I think what I did come away with was that like, oh, like, even though I'm mostly working on my local machine for development, there was some amount of knowledge I needed to have about the systems that my CI and, you know, production servers are running on. The project I was working on happened to have, like, a Dockerfile for those environments, and, you know, kind of knowing how to configure them to install the packages I needed to install and just knowing a little bit about the different ways of doing that on systems outside of my usual daily workflows.
JOËL: And I think that gets back to some of the interesting distinctions between what we might call language packages versus system packages is that language packages more or less work the same across all operating systems. They might have a build step that's slightly different or something like that, but system packages might be pretty different between different operating systems.
So, development, for me, is a Mac, and I'm probably installing system packages via something like Homebrew. If I then want that Rails app to run on CI or some Linux server somewhere, I can't use Homebrew to install things there. It's going to be a slightly different package ecosystem. And so, now I need to find something that will install Postgres for Linux, something that will install, I guess, wkhtmltopdf [laughs] for Linux.
And so, when I'm building that Dockerfile, that might be a little bit different for Mac versus for...or I guess when you run from a Dockerfile, you're running a containerized system. So, the goal there is to make this system the same everywhere for everyone. But when you're setting that up, typically, it's more of a Linux-like system. And so running inside the Docker container versus outside on the native Mac might involve a totally different set of packages and a different package tool. As opposed to something like Bundler: you've got your Gemfile; you bundle install. It doesn't matter if you're on Linux or macOS.
STEPHANIE: Yes, I think you're right. I think we kind of answered our own question at the top of the show [laughs] about differences and what do you need to know about them. And I also like how you pointed out, oh yeah, like, Docker is supposed to [laughs], you know, make sure that we're all developing in the same system, essentially. But, you know, sometimes you have different use cases for it.
And, yeah, when you were talking about installing an application on your native Mac and using Homebrew, but even, you know, not everyone even uses Homebrew, right? You can install manually [laughs] through whatever official installer that application might provide. So, there's just so many different ways of doing something. And I had the thought that it's too bad that we both [chuckles] develop on Mac because it could be really interesting to get a Linux user's perspective in here.
JOËL: You mentioned not installing via Homebrew. A kind of glaring example of that in my personal setup is that I use Postgres.app to manage Postgres on my machine rather than using Homebrew. I've just...over the years, the Homebrew version every time I upgrade my operating system or something, it's just such a pain to update, and I've lost too many hours to it, and Postgres.app just works, and so I've switched to that. Most other things, I'll use the Homebrew version, but Postgres it's now Postgres.app. It's not even a command line install, and it works fine for me.
STEPHANIE: Nice. Yeah. That's interesting. That's a good tip. I'll have to look into that next time because I have also certainly had to just install so many [laughs] various versions of Postgres and figure out what's going on with them every time I upgrade my OS. I'm with you, though; in the packages world, what I'm looking for is "it works" [laughs].
JOËL: So, you'd mentioned earlier that packages is sort of an area that's a bit of a need-to-know basis for you. Are there, like, particular moments in your career that you remember like, oh, that's the moment where I needed to, like, take some time and learn a little bit of the next level of packages?
STEPHANIE: That's a great question. I think it was the very beginning of understanding how package versions work when you have multiple projects on your machine; I just remember that being really confusing for me. When I started out, like, you know, as soon as I cloned my second repo [laughs], and was very confused about, like, I'm sure I went through the process of not installing gems using Bundler, and then just having so much chaos [laughs] wreaked in my development environment and, you know, having to ask someone, "I don't understand how this works. Like, why is it saying I have multiple versions of this library or whatever?"
JOËL: Have you ever sudo gem installed a gem?
STEPHANIE: Oh yeah, I definitely have. I can't [laughs], like, even give a good reason for why I have done it, but I probably was just, like, pulling my hair out, and that's what Stack Overflow told me to do. I don't know if I can recommend that, but it is [chuckles] one thing to do when you just are kind of totally stuck.
JOËL: There was a time where I think that that was in the READMEs for most projects.
STEPHANIE: Yeah, that's a really good point.
JOËL: So, that's probably why a lot of people end up doing that, but then it tends to install it for your system Ruby rather than for...because if you're using something like rbenv or RVM or asdf to manage multiple Ruby versions, or even Homebrew to manage your Ruby, those end up being what you're actually using. It wouldn't be installing it for those versions of Ruby. It would be installing it for the one that shipped with your Mac.
I actually...you know what? I don't even know if Mac still ships with Ruby. It used to. It used to ship with a really old version of Ruby, and so the advice was like, "Hey, every repo tells you to install it with sudo; don't do that. It will mess you up."
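One way to see which Ruby, and which gem directory, a plain gem install would actually target is to ask the running Ruby itself; these are standard RubyGems and RbConfig methods, and comparing the output with and without sudo or a version manager active makes the difference visible.

```ruby
require "rbconfig"

puts RbConfig.ruby   # full path to the Ruby executable currently running
puts Gem.dir         # default directory that `gem install` writes into
puts Gem.user_dir    # per-user gem directory (used by `gem install --user-install`)
```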
STEPHANIE: Huh. I think Mac still does ship with Ruby, but don't quote me on that [laughter]. And I think that's really funny that, like, yeah, people were just writing those instructions in READMEs. And I'm glad that we've collectively [laughs] figured out that difference and want to, hopefully, not let other developers fall into that trap [laughs]. Do you have a particular memory or experience when you had to kind of level up your knowledge about the package ecosystem?
JOËL: I think one sort of moment where I really had to level up is when I started really needing to understand how install paths worked, especially when you have, let's say, multiple versions of a gem installed because you have different projects. And you want to know, like, how does it know which one it's using? And then you see, oh, there are different paths that point to different directories with the installs.
Or when you might have an executable you've installed via Homebrew, and it's like, oh yeah, so I've got this, like, command that I run on my shell, but actually that points to a very particular path, you know, in my Homebrew directory. But maybe it could also point to some, like, pre-installed system binaries or some other custom things I've done. So, there was a time where I had to really learn about how the PATH shell variable worked on a machine in order to really understand how the packages I installed were sometimes showing up when I invoked a binary and sometimes not.
STEPHANIE: Yeah, that is another really great example that I have memories of [laughs] being really frustrated by, especially if...because, you know, we had talked earlier about all the different ways that you can install applications on your system, and you don't always know where they end up [laughs].
JOËL: And this particular memory is tied to debugging Postgres because, you know, you're installing Postgres, and some paths aren't working. Or maybe you try to update Postgres and now it's like, oh, but, like, I'm still loading the wrong one. And why does psql not do the thing that I think it does? And so, that forced me to learn a little bit about, like, under the hood, what happens when I type brew install postgresql? And how does that mesh with the way my shell interprets commands and things like that? So, it was maybe a little bit of a painful experience but eye-opening and definitely then led to me, I think, being able to debug my setup much more effectively in the future.
STEPHANIE: Yeah. I like that you also pointed out how it was interacting with your shell because that's, like, another can of worms, right? [laughs] In terms of just the complexity of how these things are talking to each other.
JOËL: And for those of our listeners who are not familiar with this, there is a shell command that you can use called which, W-H-I-C-H. And you can prefix that in front of another command, and it will tell you the path that it's using for that binary. So, in my case, if I'm looking like, why is this PSQL behaving weirdly or seems to be using the old version, I can type 'which space psql', and it'll say, "Oh, it's going to this path."
And I can look at it and be like, oh, it's using my system install of Postgres. It's not using the Homebrew one. Or, oh, maybe it's using the Homebrew install, not my Postgres.app version. I need to, like, tinker with the paths a little bit. So, that has definitely helped me debug my package system more than once.
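For the curious, the lookup that which performs is roughly this, sketched in Ruby purely to illustrate how PATH resolution works: walk the PATH entries in order and return the first matching executable. The "psql" argument is just an example.

```ruby
# A simplified reimplementation of the shell's `which`.
def which(command)
  ENV["PATH"].split(File::PATH_SEPARATOR).each do |dir|
    candidate = File.join(dir, command)
    return candidate if File.executable?(candidate) && !File.directory?(candidate)
  end
  nil
end

puts which("psql").inspect # e.g. a Homebrew path, a system path, or nil if not found
```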
STEPHANIE: Yeah, that's a really good tip. I can recall just totally uninstalling everything [laughs] and reinstalling and fingers crossed it would figure out a route to the right thing [laughs].
JOËL: You know what? That works. It's not the, like, most precise solution but resetting your environment when all else fails it's not a bad solution.
So, we've been talking a lot about what it's like to interact with a package ecosystem as developers, as users of packages, but what if you're a package developer? Sometimes, there's a very clear-cut place where to publish, and sometimes it's a little bit grayer. So, I could see, you know, if I'm developing a database and I want that available across operating systems, it probably should be a system-level package rather than a Ruby gem.
But what if I'm building some kind of command line tool, and I write it in Ruby because I like writing Ruby? Should I publish that as a gem, or should I publish that as some kind of system package that's installed via Homebrew? Any opinions or heuristics that you would use to choose where to publish on one side or the other?
STEPHANIE: As not a package developer [laughs], I can only answer from that point of view. That is interesting because if you publish on a, you know, like, a system repository, then yeah, like, you might get a lot more people using your tool out there because you're not just targeting a specific language's community.
But I don't know if I have always enjoyed downloading various things to my system's OS. I think that actually, like, is a bit complicated for me or, like, I try to avoid it if I can because if something can be categorized or, like, containerized in a way that, like, feels right for my mental model, you know, if it's written in Ruby or something really related to things I use Ruby in, it could be nice to have that installed in my, like, system's RubyGems. But I would be really interested to hear if other people have opinions about where they might want to publish a package and what kind of developers they're hoping to find to use their tool.
JOËL: I like the heuristic that you mentioned here, the idea of who the audience is because, yeah, as a Ruby developer who already has a Ruby setup, it might be easier for me to install something via a gem. But maybe the person who wants to use the package isn't a Ruby developer, and the package is a little bit more generic, you know, let's say, I don't know, it's some sort of command line tool for interacting with GitHub or something like that. And, like, it happens to be written in Ruby, but you don't particularly care about that as a user of this. Maybe you don't have Ruby installed and now you've got to, like, juggle, like, oh, what is RubyGems, and Bundler, and all this stuff?
And I've definitely felt that occasionally downloading packages sort of like, oh, this is a Python package. And you're going to need to, like, set up all this stuff. And it's maybe designed for a Python audience. And so, it's like, oh, you're going to set up a virtual environment and all these things. I'm like, I just want your command line tools. I don't want to install a whole language. And so, sometimes there can be some frustration there.
STEPHANIE: Yeah, that is very true. Before you even said that, I was like, oh, I've definitely wanted to download a command line tool and be like, first install [laughs] Python. And I'm like, nope, I'm bailing out of this.
JOËL: On the other hand, as a developer, it can be a lot harder to write something that's a bit more cross-platform and managing all that. And I've had to deal a little bit with this for thoughtbot's Parity tool, which is a command-line tool for working with Heroku. It allows you to basically run commands on either staging or production by giving you a staging command and a production command for common Heroku CLI tasks, which makes it really nice if you're working and you're having to do some local, some development, some staging, and some production things all from your command line.
It initially started as a gem, and we thought, you know what? This is mostly command line, and it's not just Rubyists who use Heroku. Let's try to put this on Homebrew. But then it depends on Ruby because it's written in Ruby. And now we had to make sure that we marked Ruby as a dependency in Homebrew, which meant that Homebrew would then also pull in Ruby as a dependency. And that got a little bit messy.
For a while, we even experimented with sort of briefly available technology called Traveling Ruby that allowed you to embed Ruby in your binary, and you could compile against that. That had some drawbacks. So, we ended up rolling that back as well. And eventually, just for maintenance ease, we went back to making this a Ruby gem and saying, "Look, you install it via RubyGems." It does mean that we're targeting more of the Ruby community. It's going to be a little bit harder for other people to install, but it is easier for us to maintain.
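Homebrew formulas happen to be written in Ruby themselves, so a hypothetical, heavily simplified formula can show why shipping a Ruby-written CLI through Homebrew drags Ruby in as a dependency. This is not the actual formula Parity used; the description wording, URL, and checksum are placeholders.

```ruby
# Hypothetical, simplified Homebrew formula; illustrative only.
class Parity < Formula
  desc "Shell commands for working with staging and production Heroku apps"
  homepage "https://github.com/thoughtbot/parity"
  url "https://example.com/parity.tar.gz" # placeholder
  sha256 "0000000000000000000000000000000000000000000000000000000000000000" # placeholder

  # Because the tool is written in Ruby, the formula declares Ruby as a
  # dependency, which is what pulls a Homebrew-managed Ruby onto the
  # user's machine alongside the CLI itself.
  depends_on "ruby"

  def install
    libexec.install Dir["*"]
    bin.install_symlink Dir[libexec/"bin/*"]
  end
end
```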
STEPHANIE: That's really interesting. I didn't know that history about Parity. It's a tool that I have used recently and really enjoyed. But yeah, I think I remember someone having some issues between installing it as a gem and installing it via Homebrew and some conflicts there as well. So, I can also see how trying to decide or maybe going down one path and then realizing, oh, like, maybe we want to try something else is certainly not trivial.
JOËL: I think, in me, I have a little bit of the idealist and the pragmatist that fight. The idealist says, "Hey, if it's not, like, aimed for Ruby developers as a, like, you can pull this into your codebase, if it's just command line tools and the fact that it's written in Ruby is an implementation detail, that should be a system package. Do not distribute binaries via RubyGems." That's the idealist in me. The pragmatist says, "Oh, that's a lot of work and not always worth it for both the maintainers and sometimes for the users, and so it's totally okay to ship binaries as RubyGems."
STEPHANIE: I was totally thinking that I'm sure that you've been in that position of being a user and trying to download a system package and then seeing it start to download, like, another language. And you're like, wait, what? [laughter] That's not what I want.
JOËL: So, you and I have shared some of our heuristics in the way we approach this problem. Now, I'm curious to hear from the audience. What are some heuristics that you use to decide whether your package is better shipped on RubyGems versus, let's say, Homebrew? Or maybe as a user, what do you prefer to consume?
STEPHANIE: Yes. And speaking of getting listener feedback, we're also looking for some listener questions. We're hoping to do a bit of a grab-bag episode where we answer your questions. So, if you have anything that you're wanting to hear me and Joël's thoughts on, write us at [email protected].
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël shares a unique, time-specific bug he encountered, which causes a page to crash only in January. This bug has been fixed in previous years, only to reemerge due to subsequent changes. Stephanie talks about her efforts to bring more structure to her work-from-home environment. She describes how setting up a bird feeder by her office window and keeping chocolates at her desk serve as incentives to spend more of her workday there.
Together, Stephanie and Joël take a deep dive into the challenges of breaking down software development tasks into smaller, more manageable chunks. They explore the concept of 'vertical slice' development, where features are implemented in thin, fully functional segments, contrasting it with the more traditional 'horizontal slice' approach. This discussion leads to insights on collaborative work, the importance of iterative development, and strategies for efficient and effective software engineering.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world in the year 2024?
JOËL: Yeah, it's 2024. New year, new me. Or, in this case, maybe new year, new bugs? I'm working on a project where I ran into a really interesting time-specific bug. This particular page on the site only crashes in the month of January. There's some date logic that has a weird boundary condition there, and if you load that page during the month of January, it will crash, but during the entire rest of the year, it's fine.
STEPHANIE: That's a fun New Year's tradition for this project [laughs], fixing this bug [laughs] every year.
JOËL: It's been interesting because I looked a little bit at the git history of this bug, and it looks like it's been fixed in past Januarys, but then the fix changes the behavior slightly. So, people bring the behavior back to what they want during the rest of the year, which also happens to reintroduce the bug in January, and now I'm back to fixing it in January. So, it is a little bit of a tradition.
STEPHANIE: Yeah, that is really funny. I was also recently debugging something, and we were having some flakiness with a test that we wrote. And we were trying to figure out because we had some date/time logic as well. And we were like, is there anything strange about this current time period that we are in that would potentially, you know, lead to a flaky test?
And we were looking at the clock and we're like, "I don't think it's like, you know, midnight UTC or anything [laughs] like that." But, I mean, I don't know. It's like, how could you possibly think of, like, all of the various weird edge cases, you know, related to that kind of thing? I don't think I would ever be like, huh, it's January, so, surely, that must [laughs] mean that that's this particular edge case I'm seeing.
JOËL: It's interesting because I feel like there's a couple of types of time-specific bugs that we see pretty frequently. If you're near the daylight savings boundary, let's say a week before sometimes, or whatever you're...if you're doing, like, a week from now logic or something like that, typically, I'll see failures in the test suite or maybe actual crashes in the code a week before springing forward and a week before falling back. And then, like you said, sometimes you see failures at the end of the day, Eastern time for me, when you approach that midnight UTC time boundary. I think this is the first time I've seen a failure in January due to the month being, like, a month boundary...or it's a year boundary really is what's happening.
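One common way to make those boundary cases reproducible instead of flaky is to pin the clock in the test with ActiveSupport's time helpers; the AnnualReport class and its to_h method here are made-up placeholders, not code from either host's project.

```ruby
require "test_helper"

class YearBoundaryReportTest < ActiveSupport::TestCase
  include ActiveSupport::Testing::TimeHelpers

  test "the report still renders in early January" do
    # Freeze the clock just past the year boundary instead of relying on
    # whatever date CI happens to run the suite on.
    travel_to Time.zone.local(2024, 1, 2, 9, 0, 0) do
      assert_nothing_raised { AnnualReport.new(date: Date.current).to_h }
    end
  end
end
```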
STEPHANIE: Yeah. That just sounds like another [laughs] thing you have to look out for. I'm curious: are you going to fix this bug for real or leave it for [laughs] 2025?
JOËL: I've got a fix that I think is for real and that, like, not only fixes the break in January but also during the rest of the year gives the desired behavior. I think part of what's really interesting about this bug is that there are some subtle behavioral changes between a few different use cases where this code is called, some of which depend on when in the year you're calling it and whether you want to see it for today's date versus specifying a date that you want to see the report for. And so, it turns out that there are a lot more edge cases than might be initially obvious.
So, this turned into effectively a product discussion, and realizing, wait a minute, the code isn't telling the full story. There's more at a product level we need to discuss. And actually, I think I learned a lot about the product there. So, while it was maybe a surprising and kind of humorous bug to come across, I think it was actually a really good experience.
STEPHANIE: Nice. That's awesome. That's a pretty good way to start the year, I would say.
JOËL: I'd say so. How about you? What's new in your world?
STEPHANIE: So, I don't know, I think towards the end of the year, last year, I was in a bit of a slump where I was in that work-from-couch phase of [laughs] the year, you know, like, things are slowing down and I, you know, winter was starting here. I wanted to be cozy, so I'd, you know, set up on the couch with a blanket.
And I realized that I really wasn't sitting at my desk at all, and I kind of wanted to bring a little bit of that structure back into my workday, so I [chuckles] added some incentives for me to sit at my desk, which include I recently got a bird feeder that attaches to the window in my office. So, when I sit at my desk, I can hopefully see some birds hanging out.
They are very flighty, so I've only seen birds when I'm, like, in the other room. And I'm like, oh, like, there's a bird at the bird feeder. Like, let me get up close to, like, get to admire them. And then as soon as I, like [laughs], get up close to the window, they fly away. So, I'm hoping that if I sit at my desk more, I'll spontaneously see more birds, and maybe they'll get used to, like, a presence closer to the window. And then my second incentive is I now have little chocolates at my desk [laughs].
JOËL: Nice.
STEPHANIE: I've just been enjoying, like, a little treat and trying to keep them as a...okay, I've worked at my desk for an hour, and now I get a little reward for that [laughs].
JOËL: I like that. Do you know what kind of species of birds have been coming to your feeder?
STEPHANIE: Ooh, yes. So, we got this birdseed mix called Cardinal and Friends [laughs].
JOËL: I love that.
STEPHANIE: So, I have seen, like, a really beautiful red male cardinal come by. We get some robins and some chickadees, I think. Part of what I'm excited for this winter is to learn more how to identify more bird species. And I usually like to be out in nature and stuff, and winter is a hard time to do that. So, this is kind of my way of [chuckles] bringing that more into my life during the season.
So, this is our first episode after a little bit of a break for the holidays. There actually has been some content of ours that has been published out in the world on the internet [laughs] during this time. And just wanted to point out in the few weeks that there weren't any Bike Shed episodes, I ended up doing a thoughtbot Rails development livestream with thoughtbot CEO Chad Pytel, and that was my first-time live streaming code [laughs].
And it was a really cool experience. I'm glad I had this podcast experience. So, I'm like, okay, well I have, you know, that, like, ability to do stuff kind of off script and present in the moment. But yeah, that was a really cool thing that I got to do, and I feel a little bit more confident about doing those kinds in the future.
JOËL: And for those who are not aware, Chad does...I think it's a weekly live stream on Fridays where he's doing various types of code. So, he's done some work on some internal projects. He did a series where he upgraded, I think, a Rails 2 app all the way to Rails 7, typically with a guest who's another teammate from thoughtbot working on a thing. So, for those of our listeners that might find that interesting, we'll put a link in the show notes where you can go see that. I think it's on YouTube and on Twitch.
STEPHANIE: Yes.
JOËL: What did you pair on? What kind of project were you doing for the livestream?
STEPHANIE: So, we were working on thoughtbot's internal application called Hub, which is where we have, like, our internal messaging features. It's where we do a lot of our business operations-y things [laughs]. So, all of the, like, agency work that we do, we use our in-house software for that, and so Chad and I were working on a feature to introduce something that would help out with how we staff team members on projects.
In other content news [laughs], Joël, I think you have something to share as well.
JOËL: Yeah. So, we've mentioned on past episodes that I gave a talk at RubyConf this past November all about what the concept of time actually means within a program and the different ways of representing it, and the fact that time isn't really a single thing but actually kind of multiple related quantities. And over the holiday break, the talks from that conference got published. I'm pretty excited that that is now out there. We'd mentioned that as a highlight in the previous episode, highlighting accomplishments for the year, but it just wasn't quite out yet. We couldn't link it there. So, I'll leave a link in the show notes for this episode for anyone who's interested in seeing that.
STEPHANIE: Sounds like that talk is also timely for a debug you --
JOËL: Ha ha ha!
STEPHANIE: Were also mentioning earlier in the episode. So, a few episodes ago, I believe we mentioned that we had recently had, like, our company internal hackathon type thing where we have two days to get together and work with team members who we might not normally work with and get some cool projects started or do some team bonding, that kind of thing.
And since I'm still, you know, unbooked on client work, I've been doing a lot of internal thoughtbot stuff, like continuing to work on the Hub app I mentioned just a bit ago. And from the hackathon, there was some work that was unfinished by, like, a project team that I decided to pick up this week as part of my internal work.
And as I was kind of trying to gauge how much progress was made and, like, what was left to accomplish to get it over the finish line so it could be shipped, I noticed that because there were a couple of different people working on it, they had broken up this feature which was basically introducing, like, a new report for one of our teams to get some data on how certain projects are going.
And there was, like, a UI portion and then some back-end portion, and then part of the back-end portion also involved a bit of a complex query that was pulled out as a separate ticket on its own. And so, all of those things were slightly, you know, were mostly done but just needed those, like, finishing touches, and then it also needed to come together.
And I ended up pairing on this with another thoughtboter, and we spent the same amount of time that the hackathon was, so two days. We spent those two days on that, like, aspect of putting it all together. And I think I was a bit surprised by how much work that was, you know, we had kind of assumed that like, oh, like, all these pieces are mostly finished, but then the bulk of what we spent our time doing was integrating the components together.
JOËL: Does this feel like a bit of a finish the rest of the OWL meme?
STEPHANIE: What is that meme? I'm not familiar with it, but now I really want to know [laughs].
JOËL: It's a meme kind of making fun of some of these drawing tutorials where they're like, oh; first you draw, like, three circles.
STEPHANIE: [laughs]
JOËL: And then just finish the rest of the owl.
STEPHANIE: [laughs]
JOËL: And I was thinking of this beautifully drawn picture.
STEPHANIE: Oh, that's so funny. Okay, yeah, I can see it in my head [laughs] now. It's like how to go from three circles, you know, to a recognizable [laughs] owl animal.
JOËL: So, especially when they're like, oh, you know, like, we put in all the core classes and everything. It's all just basically there. You just need to connect it all together, and it's basically done [laughs]. And then you spend a lot of time actually doing what feels like maybe the last 10 or 20% but takes maybe 80% of the time.
STEPHANIE: Yeah, that sounds about right. So, you know, kind of working on that got me thinking about the alternative, which is honestly something that I'm still working on getting better at doing in my day-to-day. But there is this idea of a vertical slice or a full-stack slice, and that, basically, involves splitting a large feature into those full-stack slices. So, you have, like, a fully implemented piece rather than breaking them apart by layers of the stack.
So, you know, I just see pretty frequently that, like, maybe you'll have a back-end ticket to do the database migration, to create your models, just whatever, maybe your controllers, or maybe that is even, like, another piece and then, like, the UI component. And those are worked on separately, maybe even by different people. But this vertical slice theory talks about how what you really want is to have a very thin piece of the feature that still delivers value but fully works.
JOËL: As opposed to what you might call a horizontal slice, which would be something like, oh, I've built three Rails models. They're there. They're in the code. They talk to tables in the database, but there's nothing else happening with them. So, you've done work, but it's also more or less dead code.
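As a rough sketch of the difference, here is what a thin vertical slice might look like in a Rails app. All of the names and fields here are hypothetical, made up for illustration rather than taken from the Hub feature discussed above; the point is that one small migration, model, controller action, and view ship together as something that works end to end.

# db/migrate/20240101000000_create_project_reports.rb
class CreateProjectReports < ActiveRecord::Migration[7.1]
  def change
    create_table :project_reports do |t|
      t.string :project_name, null: false
      t.integer :hours_logged, null: false, default: 0
      t.timestamps
    end
  end
end

# app/models/project_report.rb
class ProjectReport < ApplicationRecord
  validates :project_name, presence: true
end

# app/controllers/project_reports_controller.rb
class ProjectReportsController < ApplicationController
  def index
    # Just the list for now; filtering and searching can come in later slices.
    @project_reports = ProjectReport.order(:project_name)
  end
end

# app/views/project_reports/index.html.erb then renders those two columns and nothing
# more, so the slice is shippable on its own. A horizontal slice, by contrast, might
# ship the migration and model alone, with nothing calling them yet.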
STEPHANIE: Yeah, that's a good point. I have definitely seen a lot of unused code paths [laughs] when you kind of go about it that way and maybe, like, that UI ticket never gets completed.
JOËL: What are some tips for trying to do some of these narrower slices? Like, I have a ticket, and I have some work I need to do. And I want to break it down because I know it's going to be too big, and maybe the, like, intuitive way to do it is to split it by layers of your stack where I might do all the models, commit, ship that, deploy, then do some controllers, then do some view, or something like that, and you're suggesting instead going full stack. How do you break down the ticket more when all the pieces are interrelated?
STEPHANIE: Yeah, that's a great point. One easy way to visualize it, especially if you have designs or something for this feature, right? Oftentimes, you can start to parse out sections or components of the user interface to be shipped separately. Like, yes, you would want all of it to have that rich feature, but if it's a view of some cards or something, then, yeah, there's, like, the ability to filter by them. You can search by them. All of those bits can be broken up to be like, well, like, the very basic thing that a customer would want to see is just that list of cards, and you can start there.
JOËL: So, aggressively breaking down the card at, like, almost a product level. Instead of breaking it down by technical pieces, say, like, can we get even smaller amounts of behavior while still delivering value?
STEPHANIE: Yeah, yeah, exactly. I like that you said product level because I think another axis of that could also be complexity. So, oftentimes, you know, I'll get a feature, and we're like, oh, we want to support these X number of things that we've identified [laughs]. You know, if it's like an e-com app you're building, you know, you're like, "Do we have all these products that we want to make sure to support?" And, you know, one way to break that down into that vertical slice is to ask, like, what if we started with just supporting one before we add variants or something like that?
Teasing out, like, what would end up being the added complexity as you're developing, once you have to start considering multiple parameters, I think that is a good way to be able to start working more iteratively. And so, you don't have to hold all of that complexity in your head.
JOËL: It's almost a bit of like a YAGNI principle but applied to features rather than to code.
STEPHANIE: Yeah. Yeah. I like that. At first, I hesitated a little bit because I've certainly been in the position where someone has said like, "Well, we do really need this [laughs]."
JOËL: Uh-huh. And, sometimes, the answer is, yes, we do need that, but what if I gave you a smaller version of that today, and we can do the other thing tomorrow?
STEPHANIE: Right. Yeah, it's not like you're rejecting the idea that it's necessary but the way that you get about to that end result, right?
JOËL: So, you keep using the term vertical slice or full-stack slice. I think when I hear that term, I think of specifically an article written by former thoughtboter, German Velasco, on our blog. But I don't know if that's maybe a term that has broader use in the industry. Is that a term that you've heard elsewhere?
STEPHANIE: That's a good question. I think I mostly hear, you know, some form of like, "Can we break this ticket down further?" and not necessarily, like, if you think about how, right? I'm, like, kind of doing a motion with my hand [chuckles] of, like, slicing from top to bottom as opposed to, you know, horizontal.
Yeah, I think that it may not be as common as I wish it were. Even if there's still some amount of adapting or, like, persuading your team members to get on board with this idea, like, I would be interested in, like, introducing that concept or that vocabulary to get teams talking about, like, how do they break down tickets? You know, like, what are they considering? Like, what alternatives are there? Like, are horizontal slices working for them or not?
JOËL: A term that I've heard floating around and I haven't really pinned down is Elephant Carpaccio. Have you heard that before?
STEPHANIE: I have, only because I, like, discovered a, like, workshop facilitation guide to run an exercise that is basically, like, helping people learn how to identify, like, smaller and smaller full-stack slices. But with the Elephant Carpaccio analogy, it's kind of like you're imagining a feature as big as an elephant. And you can create, like, a really thin slice out of it, and you can have an almost infinite number of slices, but they still add up to that whole elephant. And I guess you still get the value of [chuckles] a little carpaccio, a delicious [laughs] appetizer of thinly sliced meat.
JOËL: I love a colorful metaphor. So, I'm curious: in your own practice, do you have any sort of guidelines or even heuristics that you like to use to help work in a more, I guess, iterative fashion by working with these smaller slices?
STEPHANIE: Yeah, one thought that I had about it is that it plays really well with Outside-In Test Driven Development.
JOËL: Hmmm.
STEPHANIE: Yeah. So, if, you know, you are starting with a feature test, you have to start somewhere and, you know, maybe starting with, like, the most valuable piece of the feature, right? And you are starting at that level of user interaction if you're using Capybara, for example. And then it kind of forces you to drop down deeper into those layers.
But once you go through that whole process of outside-in and then you arrive back to the top, you've created your full-stack feature [laughs], and that is shippable or, like, committable and, you know, potentially even shippable in and of itself. And you already have full test coverage with it. And that was a cool way that I saw some of those two concepts work well together.
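A minimal sketch of that outside-in starting point, assuming RSpec, Capybara, and FactoryBot, with a hypothetical reports page rather than the actual Hub code:

# spec/features/project_reports_spec.rb
require "rails_helper"

RSpec.describe "Project reports", type: :feature do
  scenario "a team member sees the list of project reports" do
    # Hypothetical FactoryBot factory assumed to exist for this sketch.
    create(:project_report, project_name: "Hub", hours_logged: 12)

    visit project_reports_path

    # Driving from the user's point of view pushes you down through the routing,
    # controller, model, and view until this passes; once it does, the slice is done.
    expect(page).to have_content("Hub")
    expect(page).to have_content("12")
  end
end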
JOËL: Yeah, there is something really fun about the sort of Red-Green-Refactor cycle that TDD forces on you and that you're typically writing the minimum code required to pass a test. And it really forces you out of that developer brain where you're just like, oh, I've got to cover my edge cases. I've got to engineer for some things. And then maybe you realize you've written code that wasn't necessary. And so, I've found that often when I do, like, actually TDD a feature, I end up with code that's a lot leaner than I would otherwise.
STEPHANIE: Yes, lean like a thin slice of Elephant Carpaccio.
[laughter]
JOËL: One thing you did mention that I wanted to highlight was the fact that when you do this outside-in approach for your tiny slice, at the end, it is shippable. And I think that is a core sort of tenet of this idea is that even though you're breaking things down into smaller and smaller slices, every slice is shippable to production. Like, it doesn't break the build. It doesn't break the website. And it provides some kind of value to the user.
STEPHANIE: Yeah, absolutely. I think one thing that I still kind of get hung up on sometimes, and I'm trying to, you know, revisit this assumption is that idea of, like, is this too small? Like, is this valuable enough? When I mentioned earlier that I was working on a report, I think there was a part of me that's like, could I just ship a report with two columns [laughs]? And the answer is yes, right? Like, I thought about it, and I was like, well, if that data is, like, not available anywhere else, then, yeah, like, that would be valuable to just get out there.
But I think the idea that, like, you know, originally, the hope was to have all of these things, these pieces of information, you know, available through this report, I think that, like, held me back a little bit from wanting to break it down. I held it a little bit too closely, wanting to be like, well, I really want to, like, you know, deliver something impressive. When you click on it, it's like, wow, like, look at all this data [laughs]. So, I'm trying to push back a little bit on my own preconceived notions that, like, there is such a thing as, like, too small of a demo.
JOËL: I've often worked with this at a commit level, trying to see, like, how small can I get a commit, and what is too small? And now you get into sort of the fraught question of what is a, you know, atomic commit? And I think, for me, where I've sort of come down is that a commit must pass CI. Like, I don't want a commit that's going to go into the main branch. I'm totally pro-work-in-progress commits on a branch; that's fine. But if it's going to get shipped into the main branch, it needs to be green. And it also cannot introduce dead code.
STEPHANIE: Ooh.
JOËL: So, if you're getting to the point where you're breaking either of those, you've got some sort of, like, partial commit that's maybe too small that needs more to be functional. Or you maybe need to restructure to say, look, instead of adding just ten models, can I add one model but also a little bit of a controller and a view? And now I've got a vertical slice.
STEPHANIE: Yeah, which might even be less code [laughs] in the end.
JOËL: Yes, it might be less code.
STEPHANIE: I really like that heuristic of not introducing dead code, that being a goal. I'm going to think about that a lot [laughs] and try to start introducing that into when I think something is ready.
JOËL: Another thing that I'll often do, I guess, that's almost like it doesn't quite fit in the slice metaphor, but it's trying to separate out any kind of refactor work into its own commit that, you know, still follows those rules: it does not introduce dead code; it does not break the build; it's independently shippable. But that might be something that I do that sets me up for success when I want to do that next slice.
So, maybe I'm trying to add a new feature, but just the way we built some of the internal models, they don't have the interface that I need right now, and that's fine because I don't want to build these models in anticipation of the future. I can change them in the future if I need. But now the future has come, and I need a slightly different shape. So, I start by refactoring, commit, maybe even ship that deploy. Maybe I then do my small feature afterwards. Maybe I come back next week and do the small feature, but there are two independent things, two different commits, maybe two different deploys.
I don't know that I would call that refactor a slice and that it maybe goes across the full stack; maybe it doesn't. It doesn't show to the user because a refactor, by definition, is just changing the implementation without changing behavior. But I do like to break that out and keep it separate. And I guess it helps keep my slices lean, but I'm not quite sure where refactors fit into this metaphor.
STEPHANIE: Yeah, that's interesting because, in my head, as I was listening to you talk about that, I was visualizing the owl again, the [laughs] owl meme. And I'm imagining, like, the refactoring making the slice richer, right? It's like you're adding details, and you're...it's like when you end up with the full animal, or the owl, the elephant, whatever, it's not just, like, a shoddy-looking drawing [laughs]. Like, ideally, you know, it has those details. Maybe it has some feathers. It's shaded in, and it is very fleshed out. That's just my weird, little brain trying [laughs] to stretch this metaphor to make it work.
Another thing that I want to kind of touch on a little bit more, when we're talking about how a lot of the time I was spending recently was on that glue work, you know, the putting the pieces together, I think there was some aspect of discovery involved that was missed the first time around when these tickets were broken up more horizontally. I think that one really important piece that I was doing was trying to reconcile the different mental models that each person had when they were working on their separate piece.
And so, maybe there's, like, an API, and then the frontend is expecting some sort of data, and, you know, you communicate it in a way that's, like, kind of hand-off-esque. And then when you put it together, it turns out that, oh, the pieces don't quite fit together, and how do you actually decide, like, what that mental model should be? Naming, especially, too, I've, you know, seen so many times when the name...like, an attribute on the frontend is named a little bit different than whatever is on the backend, and it takes a lot of work to unify that, like, to make that decision about, should they be the same? Should they be different? A lot of thought goes into putting those pieces together.
And I think the benefit of a full-stack slice is that that work doesn't get lost. Especially if you are doing stuff like estimating, you're kind of discovering that earlier on. And I think what I just talked about, honestly, is what prevents those features from getting shipped in the end if you were working in a more horizontal way.
JOËL: Yeah. It's so easy to have, like, big chunks of work in progress forever and never actually shipping. And one of the benefits of these narrower slices is that you're shipping more frequently. And that's, you know, interesting from a coding perspective, but it's kind of an agile methodology thing as well, the, like, ship smaller chunks more frequently. Even though you're maybe taking a little bit more overhead because you're having to, like, take the time to break down tasks, it will make your project move faster as a whole.
An aspect that's really interesting to me, though, is what you highlighted about collaboration and the fact that every teammate has a slightly different mental model. And I think if you take the full-stack slice and every member is able to use their mental model, and then close the loop and actually, like, do a complete thing and ship it, I think it allows every other member who's going to have a slightly different mental model of the problem to kind of, yes, and... the other person rather than all sort of independently doing their things and having to reconcile them at the end.
STEPHANIE: Yeah, I agree. I think I find, you know, a lot of work broken out into backend and frontend frequently because team members might have different specialties or different preferences about where they would like to be working. But that could also be, like, a really awesome opportunity for pairing [laughs]. Like, if you have someone who's more comfortable in the backend or someone more comfortable in the frontend to work on that full-stack piece together, like, even outside of the in-the-weeds coding aspects of it, it's like you're, at the very least, making sure that those two folks have that same mental model.
Or I like what you said about yes, and... because it gets further refined when you have people who are maybe more familiar with, like, something about the app, and they're like, "Oh, like, don't forget about we should consider this." I think that, like, diversity of experience, too, ends up being really valuable in getting that abstraction to be more accurate so that it best represents what you're trying to build.
JOËL: Early on, when I was pretty new working at thoughtbot, somebody else at the company had given me the advice that if I wanted to be more effective and work faster on projects, I needed to start breaking my work down into smaller chunks, and this is, you know, fairly junior developer at the time. The advice sounds solid, and everything we've talked about today sounds really solid. Doing it in practice is hard, and it's taken me, you know, a decade, and I'm still working on getting better at it.
And I wrote an article about working iteratively that covers a lot of different elements where I've kind of pulled on threads and found out ways where you can get better at this. But I do want to acknowledge that this is not something that's easy and that just like the code that we're working on iteratively, our technique for breaking things down is something that we improve on iteratively. And it's a journey we're all on together.
STEPHANIE: I'm really glad that you brought up how hard it is because as I was thinking about this topic, I was considering barriers to working in that vertical slice way, barriers that I personally experience, as well as ones I have seen on other teams. I had alluded to some earlier about, like, the perception of if I ship this small thing, is it impressive enough, or is it valuable enough? And I think I realized that, like, I was getting caught up in, like, the perception part, right? And maybe it doesn't matter [chuckles], and I just need to kind of shift the way I'm thinking about it.
And then, there are more real barriers or, like, concrete barriers that are tough. Long feedback loops is one that I've encountered on a team where it's just really hard to ship frequently because PR reviews aren't happening fast enough or your CI or deployment process is just so long that you're like, I want to stuff everything into [chuckles] this one PR so that at least I won't have to sit and wait [laughs].
And that can be really hard to work against, but it could also be a really interesting signal about whether your processes are working for you. It could be an opportunity to be like, "I would like to work this way, but here are the things that are preventing me from really embracing it. And is there any improvement I can make in those areas?"
JOËL: Yeah. There's a bit of a, like, vicious cycle that happens there sometimes, especially around PR review, where when it takes a long time to get reviews, you tend to decide, well, I'm going to not make a bunch of PRs; I'm going to make one big one. But then big PRs are very, like, time intensive and require you to commit a lot of, like, focus and energy to them, which means that when you ask me for a review, I'm going to wait longer before I review it, which is going to incentivize you to build bigger PRs, which is going to incentivize me to wait longer, and now we just...it's a vicious cycle.
So, I know I've definitely been on projects where a question the team has had is, "How can we improve our process? We want faster code review." And there's some aspect of that that's like, look, everybody just needs to be more disciplined or more alert and try to review things more frequently. But there's also an element of if you do make things smaller, you make it much easier for people to review your code in between other things.
STEPHANIE: Yeah, I really liked you mentioning incentives because I think that could be a really good place to start if you or your team are interested in making a change like this, you know, making an effort to look at your team processes and being like, what is incentivized here, and what does our system encourage or discourage? And if you want to be making that shift, like, that could be a good place to start in identifying places for improvement.
JOËL: And that happens on a broader system level as well. If you look at what does it take to go from a problem that is going to turn into a ticket to in-production in front of a client, how long is that loop? How complex are the steps to get there? The longer that loop is, the slower you're iterating. And the easier it is for things to just get hung up or for you to waste time, the harder it is for you to change course.
And so, oftentimes, I've come on to projects with clients and sort of seen something like that, and sort of seen other pain points that the team has, and sort of found that many of them trace back to that root cause, so the answer becomes, "Look, we need to tighten that feedback loop, and that's going to improve all these other things that kind of constellate around it."
STEPHANIE: Agreed. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie is hosting a holiday cookie swap. Joël talks about participating in thoughtbot's end-of-the-year hackathon, Ralphapalooza.
We had a great year on the show! The hosts wrap up the year and discuss their favorite episodes, the articles, books, and blog posts they’ve read and loved, and other highlights of 2023 (projects, conferences, etc).
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I am so excited to talk about this. I'm, like, literally smiling [chuckles] because I'm so pumped. Sometimes, you know, we get on to record, and I'm like, oh, I got to think of something that's new, like, my life is so boring. I have nothing to share. But today, I am excited to tell you about [chuckles] the holiday cookie swap that I'm hosting this Sunday [laughs] that I haven't been able to stop thinking about or just thinking about all the cookies that I'm going to get to eat.
It's going to be my first time throwing this kind of shindig, and I'm so pleased with myself because it's such a great idea. You know, it's like, you get to share cookies, and you get to have all different types of cookies, and then people get to take them home. And I get to see all my friends. And I'm really [chuckles] looking forward to it.
JOËL: I don't think I've ever been to a cookie swap event. How does that work? Everybody shows up with cookies, and then you leave with what you want?
STEPHANIE: That's kind of the plan. I think it's not really a...there's no rules [laughs]. You can make it whatever you want it to be. But I'm asking everyone to bring, like, two dozen cookies. And, you know, I'm hoping for a lot of fun variety. Myself I'm planning on making these pistachio olive oil cookies with a lemon glaze and also, maybe, like, a chewy ginger cookie. I haven't decided if I'm going to go so extra to make two types, but we'll see.
And yeah, we'll, you know, probably have some drinks and be playing Christmas music, and yeah, we'll just hang out. And I'm hoping that everyone can kind of, like, take home a little goodie bag of cookies as well because I don't think we'll be going through all of them.
JOËL: Hearing you talk about this gave me an absolutely terrible idea.
STEPHANIE: Terrible or terribly awesome? [laughs]
JOËL: So, imagine you have the equivalent of, let's say, a LAN party. You all show up with your laptops.
STEPHANIE: [laughs]
JOËL: You're on a network, and then you swap browser cookies randomly.
STEPHANIE: [laughs] Oh no. That would be really funny. That's a developer's take on a cookie party [laughs] if I've ever heard one.
JOËL: Slightly terrifying. Now I'm just browsing, and all of a sudden, I guess I'm logged into your Facebook or something. Maybe you only swap the tracking cookies. So, I'm not actually logged into your Facebook, but I just get to see the different ad networks it would typically show you, and you would see my ads. That's maybe kind of fun or maybe terrifying, depending on what kind of ads you normally see.
STEPHANIE: That's really funny. I'm thinking about how it would just be probably very misleading and confusing for those [laughs] analytics spenders, but that's totally fine, too. Might I suggest also having real cookies to munch on as well while you are enjoying [laughs] this browser cookie-swapping party?
JOËL: I 100% agree.
STEPHANIE: [laughs]
JOËL: I'm curious: where do you stand on raisins in oatmeal cookies?
STEPHANIE: Ooh.
JOËL: This is a divisive question.
STEPHANIE: They're fine. I'll let other people eat them. And occasionally, I will also eat an oatmeal cookie with raisins, but I much prefer if the raisins are chocolate chips [chuckles].
JOËL: That is the correct answer.
STEPHANIE: [laughs] Thank you. You know, I understand that people like them. They're not for me [laughs].
JOËL: It's okay. Fans can send us hate mail about why we're wrong about oatmeal cookies.
STEPHANIE: Yeah, honestly, that's something that I'm okay with being wrong about on the internet [laughs]. So, Joël, what's new in your world?
JOËL: So, as of this recording, we've just recently done thoughtbot's end-of-the-year hackathon, what we call Ralphapalooza. And this is sort of a time where you kind of get to do pretty much any sort of company or programming-related activity that you want as long as...you have to pitch it and get at least two other colleagues to join you on the project, and then you've got two days to work on it. And then you can share back to the team what you've done.
I was on a project where we were trying to write a lot of blog posts for the thoughtbot blog. And so, we were just kind of getting together and pitching ideas, reviewing each other's articles, writing things at a pretty intense rate for a couple of days, trying to flood the blog with articles for the next few weeks. So, if you're following the blog and, around the time this episode gets released, you're like, "Wow, there's been a lot of articles from the thoughtbot blog recently," that's why.
STEPHANIE: Yes, that's awesome. I love how much energy the blog-post-writing party garnered. Like, I was just kind of observing from afar, but it sounds like, you know, people who maybe had started posts, like, throughout the year had dedicated time and a good reason to revisit them, even if they had been, you know, kind of just, like, sitting in a draft for a while. And I think what also seemed really nice was people were just around to support, to review, and were able to make that a priority. And it was really cool to see all the blog posts that are queued up for December as a result.
JOËL: People wrote some great stuff. So, I'm excited to see all of those come out. I think we've got pretty much a blog post every day coming out through almost the end of December. So, it's exciting to see that much content created.
STEPHANIE: Yeah. If our listeners want more thoughtbot content, check out our blog.
JOËL: So, as mentioned, we're recording this at the end of the year. And I thought it might be fun to do a bit of a retrospective on what this year has been like for you and I, Stephanie, both in terms of different work that we've done, the learnings we've had, but maybe also look back a little bit on 2023 for The Bike Shed and what that looked like.
STEPHANIE: Yes. I really enjoyed thinking about my year and kind of just reveling and having been doing this podcast for over a year now. And yeah, I'm excited to look back a little bit on both things we have mentioned on the show before and things maybe we haven't. To start, I'm wondering if you want to talk a little bit about some of our favorite episodes.
JOËL: Favorite episodes, yes. So, I've got a couple that are among my favorites. We did a lot of good episodes this year. I really liked them. But I really appreciated the episode we did on heuristics, that's Episode 398, where we got to talk a little bit about what goes into a good heuristic, how we tend to come up with them. A lot of those, like, guidelines and best practices that you hear people talk about in the software world and how to make your own but then also how to deal with the ones you hear from others in the software community. So, I think that was an episode that the idea, on the surface, seemed really basic, and then we went pretty deep with it. And that was really fun.
I think a second one that I really enjoyed was also the one that I did with Sara Jackson as a guest, talking about discrete math and its relevance to the day-to-day work that we do. That's Episode 374. We just had a lot of fun with that. I think that's a topic that more developers, more web developers, would benefit from just getting a little bit more discrete math in their lives. And also, there's a clip in there where Sara reinterprets a classic marketing jingle with some discrete math terms in there instead. It was a lot of fun. So, we'd recommend people checking that one out.
STEPHANIE: Nice. Yes. I also loved those episodes. The heuristics one was really great. I'm glad you mentioned it because one of my favorite episodes is kind of along a similar vein. It's one of the more recent ones that we did. It's Episode 405, where we did a bit of a retro on Sandi Metz' Rules For Developers. And those essentially are heuristics, right? And we got to kind of be like, hey, these are someone else's heuristics. How do we feel about them? Have we embodied them ourselves? Do we follow them? What parts do we take or leave? And I just remember having a really enjoyable conversation with you about that.
You and I have kind of treated this podcast a little bit like our own two-person book club [laughs]. So, it felt a little bit like that, right? Where we were kind of responding to, you know, something that we both have read up on, or tried, or whatever. So, that was a good one.
Another one of my favorite episodes was Episode 391: Learn with APPL [laughs], in which we basically developed our own learning framework, or actually, credit goes to former Bike Shed host, Steph Viccari, who came up with this fun, little acronym to talk about different things that we all kind of need in our work lives to be fulfilled. Our APPL stands for Adventure, Passion, Profit, and Low risk.
And that one was really fun just because it was, like, the opposite of what I just described, where we're not discussing someone else's work but discovering our own thing out of, you know, these conversations that we have on the show, conversations we have with our co-workers. And yeah, I'm trying to make it a thing, so I'm plugging it again [laughs].
JOËL: I did really like that episode. One, I think, you know, this APPL framework is a little bit playful, which makes it fun. But also, I think digging into it really gives some insight on the different aspects that are relevant when planning out further growth or where you want to invest your sort of professional development time. And so, breaking down those four elements led to some really insightful conversation around where do I want to invest time learning in the next year?
STEPHANIE: Yeah, absolutely.
JOËL: By the way, we're mentioning a bunch of our favorite things, some past episodes, and we'll be talking about a lot of other types of resources. We will be linking all of these in the show notes. So, for any of our listeners who are like, "Oh, I wonder what is that thing they mentioned," there's going to be a giant list that you can check out.
STEPHANIE: Yeah. I love whenever we are able to put out an episode with a long list of things [laughs].
JOËL: It's one of the fun things that we get to do is like, oh yeah, we referenced all these things. And there is this sort of, like, further reading, more threads to pull on for people who might be interested.
So, you'd mentioned, Stephanie, that, you know, sometimes we kind of treat this as our own little mini, like, two-person book club. I know that you're a voracious reader, and you've mentioned so many books over the course of the year. Do you have maybe one or two books that have been kind of your favorites or that have stood out to you over 2023?
STEPHANIE: I do. I went back through my reading list in preparation for this episode and wanted to call out the couple of books that I finished. And I think I have, you know, I mentioned I was reading them along the way. But now I get to kind of see how having read them influenced my work life this past year, which is pretty cool.
So, one of them is Engineering Management for the Rest of Us by Sarah Drasner. And that's actually one that really stuck with me, even though I'm not a manager; I don't have any plans to become a manager. But one thing that she talks about early on is this idea of having a shared value system. And you can have that at the company level, right? You have your kind of corporate values. You can have that at the team level with this smaller group of people that you get to know better and kind of form relationships with. And then also, part of that is, like, knowing your individual values.
And having alignment in all three of those tiers is really important in being a functioning and fulfilled team, I think. And that is something that I don't think was really spelled out very explicitly for me before, but it was helpful in framing, like, past work experiences, where maybe I, like, didn't have that alignment and now identify why. And it has helped me this year as I think about my client work, too, and kind of where I sit from that perspective and helps me realize like, oh, like, this is why I'm feeling this way, and this is why it's not quite working. And, like, what do I do about it now? So, I really enjoyed that.
JOËL: Would you recommend this book to others who are maybe not considering a management path?
STEPHANIE: Yeah.
JOËL: So, even if you're staying in the IC track, at least for now, you think that's a really powerful book for other people.
STEPHANIE: Yeah, I would say so. You know, maybe not, like, all of it, but there's definitely parts that, you know, she's writing for the rest of us, like, all of us maybe not necessarily natural born leaders who knew that that's kind of what we wanted. And so, I can see how people, you know, who are uncertain or maybe even, like, really clearly, like, "I don't think that's for me," being able to get something out of, like, either those lessons in leadership or just to feel a bit, like, validated [laughs] about the type of work that they aren't interested in.
Another book that I want to plug real quick is Confident Ruby by Avdi Grimm. That one was one I referenced a lot this year, working with newer developers especially. And it actually provided a good heuristic [laughs] for me to talk about areas that we could improve code during code review. I think that wasn't really vocabulary that I'd used, you know, saying, like, "Hey, how confident is this code? How confident is this method and what it will receive and what it's returning?" And I remember, like, several conversations that I ended up having on my teams about, like, return types as a result and them having learned, like, a new way to view their code, and I thought that was really cool.
JOËL: I mean, learning to deal with uncertainty and nil in Ruby or maybe even, like, error states is just such a core part of writing software. I feel like this is something that I almost wish everyone was sort of assigned maybe, like, a year into their programming career because, you know, I think the first year there's just so many things you've got to learn, right? Like basic programming and, like, all these things.
But, like, you're looking at, maybe I can start going a little bit deeper into some topic. I think that some topic, like, pretty high up, would be building a mental model for how to deal with uncertainty because it's such a source of bugs. And Avdi Grimm's book, Confident Ruby, is...I would put that, yeah, definitely on a recommended reading list for everybody.
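A tiny sketch of the kind of confidence being described, with hypothetical names:

# Timid: sometimes a Discount, sometimes nil, so every caller has to remember to check.
def discount_for(code)
  Discount.find_by(code: code)
end

# More confident: always hand back something with the same interface. NoDiscount here
# is a hypothetical null object that responds to the same messages as a Discount.
def discount_for(code)
  Discount.find_by(code: code) || NoDiscount.new
end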
STEPHANIE: Yeah, I agree. And I think that's why I found myself, you know, then recommending it to other people on my team and kind of having something I can point to. And that was really helpful in the kind of mentorship that I wanted to offer.
JOËL: I did a deep dive into uncertainty and edge cases in programs several years back when I was getting into Elm. And I was giving a talk at Elm Europe about how Elm handles uncertainty, which is a little bit different than how Ruby does it. But a lot of the underlying concepts are very similar in terms of quarantining uncertainty and pushing it to the edges and things like that. Trying to write code that is more confident is definitely a phrase that I used. And so, Confident Ruby ended up being a little bit of an inspiration for my own journey there, and then, eventually, the talk that I gave that summarized my learnings there.
STEPHANIE: Nice. Do you have any reading recommendations or books that stood out to you this year?
JOËL: So, I've been reading two technical books kind of in tandem this year. I have not finished either of them, but I have been enjoying them. One is Sustainable Rails by David Bryant Copeland. We had an episode at the beginning of this year where we talked a little bit about our initial impressions from, I think, the first chapter of the book. But I really love that vocabulary of writing Ruby and Rails code, in particular, in a way that is sustainable for a team. And that premise, I think, just gives a really powerful mindset to approach structuring Rails apps.
And the other book that I've been reading is Domain Modeling Made Functional, so kind of looking at some domain-driven design ideas. But most of the literature is typically written to an object-oriented audience, so taking a look at it from more of a functional programming perspective has been really interesting. And then I've been, weirdly enough, taking some of those ideas and translating back into the object-oriented world to apply to code I'm writing in Ruby. I think that has been a very useful exercise.
STEPHANIE: That's awesome. And it's weird and cool how all those things end up converging, right? And exploring different paradigms really just lets you develop more insight into wherever you're working.
JOËL: Sometimes the sort of conversion step that you have to do, that translation, can be a good tool for kind of solidifying learnings or better understanding. So, I'm doing this sort of deep learning thing where I'm taking notes as I go along. And those notes are typically around, what other concepts can I connect ideas in the book?
So, I'll be reading and say, okay, on page 150, he mentioned this concept. This reminds me of this idea from TDD. I could see this applying in a different way in an object-oriented world. And interestingly, if you apply this, it sort of converges on maybe single responsibility or whatever other OO principle. And that's a really interesting connection. I always love it when you do see sort of two or three different angles converging together on the same idea.
STEPHANIE: Yeah, absolutely.
JOËL: I've written a blog post, I think, two years ago around how some theory from functional programming sort of OO best practices and then TDD all kind of converge on sort of the same approach to designing software. So, you can sort of go from either direction, and you kind of end in the same place or sort of end up rediscovering principles from the other two. We'll link that in the show notes. But that's something that I found was really exciting. It didn't directly come from this book because, again, I wrote this a couple of years ago. But it is always fun when you're exploring two or three different paradigms, and you find a convergence. It really deepens your understanding of what's happening.
STEPHANIE: Yeah, absolutely. I like what you said about how this book is different because it is making that connection between things that maybe seem less related on the surface. Like you're saying, there's other literature written about how domain modeling and object-oriented programming make more sense a little bit more together. But it is that, like, bringing in of different schools of thought that can lead to a lot of really interesting discovery about those foundational concepts.
JOËL: I feel like dabbling in other paradigms and in other languages has made me a better Ruby developer and a better OO programmer, a lot of the work I've done in Elm. This book that I'm reading is written in F#. And all these things I can kind of bring back, and I think, have made me a better Ruby developer. Have you had any experiences like that?
STEPHANIE: Yeah. I think I've talked a little bit about it on the show before, but I can't exactly recall. There were times when my exploration in static typing ended up giving me that different mindset in terms of the next time I was coding in Ruby after being in TypeScript for a while, I was, like, thinking in types a lot more, and I think maybe swung a little bit towards, like, not wanting to metaprogram as much [laughs]. But I think that it was a useful, like you said, exercise sometimes, too, and just, like, doing that conversion or translating in your head to see more options available to you, and then deciding where to go from there.
So, we've talked a bit about technical books that we've read. And now I kind of want to get into some in-person highlights for the year because you and I are both on the conference circuit and had some fun trips this year.
JOËL: Yeah. So, I spoke at RailsConf this spring. I gave a talk on discrete math and how it is relevant in day-to-day work for developers, actually inspired by that Bike Shed episode that I mentioned earlier. So, that was kind of fun, turning a Bike Shed episode into a conference talk.
And then just recently, I was at RubyConf in San Diego, and I gave a talk there around time. We often talk about time as a single quantity, but there's some subtle distinctions, so the difference between a moment in time versus a duration and some of the math that happens around that. And I gave a few sort of visual mental models to help people keep track of that. As of this recording, the talk is not out yet, so we're not going to be able to link to it. But if you're listening to this later in 2024, you can probably just Google RubyConf "Which Time Is It?" That's the name of the talk. And you'll be able to find it.
STEPHANIE: Awesome. So, as someone who is giving talks and attending conferences every year, I'm wondering, was this year particularly different in any way? Was there something that you've, like, experienced or felt differently community-wise in 2023?
JOËL: Conferences still feel a little bit smaller than they were pre-COVID. I think they are still bouncing back. But there's definitely an energy that's there that's nice to have on the conference scene. I don't know, have you experienced something similar?
STEPHANIE: I think I know what you're talking about where, you know, there was that time when we weren't really meeting in person. And so, now we're still kind of riding that wave of, like, getting together again and being able to celebrate and have fun in that way. I, this year, got to speak at Blue Ridge Ruby in June. And that was a first-time regional conference. And so, that was, I think, something I had noticed, too, is the emergence of regional conferences as being more viable options after not having conferences for a few years.
And as a regional conference, it was even smaller than the bigger national Ruby Central conferences. I really enjoyed the intimacy of that, where it was just a single track. So, everyone was watching talks together and then was on breaks together, so you could mingle. There was no FOMO of like, oh, like, I can't make this talk because I want to watch this other one. And that was kind of nice because I could, like, ask anyone, "What did you think of, like, X talk or like the one that we just kind of came out of and had that shared experience?" That was really great.
And I got to go tubing for the first time [laughs] in Asheville. That's a memory, but I am still thinking about that as we get into winter. I'm like, oh yeah, the glorious days of summer [laughs] when I was getting to float down a lazy river.
JOËL: Nice. I wasn't sure if this was floating down a lazy river on an inner tube or if this was someone takes you out on a lake with a speed boat, and you're getting pulled.
STEPHANIE: [laughs] That's true. As a person who likes to relax [laughs], I definitely prefer that kind of tubing over a speed boat [laughs].
JOËL: What was the topic of your talk?
STEPHANIE: So, I got to give my talk about nonviolent communication in pair programming for a second time. And that was also my first time giving a talk for a second time [laughs]. That was cool, too, because I got to revisit something and go deeper and kind of integrate even more experiences I had. I just kind of realized that even if you produce content once, like, there's always ways to deepen it or shape it a little better, kind of, you know, just continually improving it as you learn more, as you get more experience, and as you change.
JOËL: Yeah. I've never given a talk twice, and now you've got me wondering if that's something I should do. Because making a bespoke talk for every conference is a lot of work, and it might be nice to be able to use it more than once. Especially I think for some of the regional conferences, there might be some value there in people who might not be able to go to a big national conference but would still like to see your talk live. Having a mix of maybe original content and then content that is sort of being reshared is probably a great combo for a regional conference.
STEPHANIE: Yeah, definitely. That's actually a really good idea, yeah, to just be able to have more people see that content and access it. I like that a lot. And I think it could be really cool for you because we were just talking about all the ways that our mental models evolve the more stuff that we read and consume. And I think there's a lot of value there.
One other conference that I went to this year that I just want to highlight because it was really cool that I got to do this: I went to RubyKaigi in Japan [laughs] back in the spring. And I had never gone to an international conference before, and now I'm itching to do more of that. So, it would be remiss not to mention it [laughs]. I'm definitely inspired to maybe check out some of the conferences outside of the U.S. in 2024.
I think I had always been a little intimidated. I was like, oh, like, it's so far [laughs]. Do I really have, like, that good of a reason to make a trip out there? But being able to meet Rubyists from different countries and seeing how it's being used in other parts of the world, I think, made me realize that like, oh yeah, like, beyond my little bubble, there's so many cool things happening and people out there who, again, like, have that shared love of Ruby. And connecting with them was, yeah, just so new and something that I would want to do more of.
So, another thing that we haven't yet gotten into is our actual work-work or our client work [laughs] that we do at thoughtbot for this year. Joël, I'm wondering, was there anything especially fun or anything that really stood out to you in terms of client work that you had to do this year?
JOËL: So, two things come to mind that were novel for me. One is I did a Rails integration against Snowflake, the data warehouse, using an ODBC connection. We're not going through an API; we're going through this DB connection. And I never had to do that before. I also got to work with the new-ish Rails multi-database support, which actually worked quite nice. That was, I think, a great learning experience.
Definitely ran into some weird edge cases, or some days, I was really frustrated. Some days, I was actually, like, digging into the source code of the C bindings of the ODBC gem. Those were not the best days. But definitely, I think, that kind of integration and then Snowflake as a technology was really interesting to explore.
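For a sense of what the multi-database support looks like, here is a rough sketch with hypothetical names standing in for the client setup; the actual adapter and connection details depend on the ODBC driver and gem in use:

# config/database.yml would define a second database, say a "warehouse" entry
# alongside "primary", pointing at Snowflake through the ODBC connection.

# app/models/warehouse_record.rb
class WarehouseRecord < ApplicationRecord
  self.abstract_class = true

  # Rails' built-in multi-database support: subclasses of this abstract class
  # read from and write to the "warehouse" database instead of the primary one.
  connects_to database: { writing: :warehouse, reading: :warehouse }
end

# app/models/warehouse_event.rb
class WarehouseEvent < WarehouseRecord
end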
The other one that's been really interesting, I think, has been going much deeper into the single sign-on world. I've been doing an integration against a kind of enterprise SAML server that wants to initiate sign-in requests from their portal. And this is a bit of an alphabet soup, but the term here is IdP-initiated SSO.
And so, I've been working with...it's a combination of this third-party kind of corporate SAML system, our application, which is a Rails app, and then Auth0 kind of sitting in the middle and getting all of them to talk to each other. There's a ridiculous number of redirects because we're talking SAML on one side and OIDC on the other and getting everything to line up correctly. But that's been a really fun, new set of things to learn.
STEPHANIE: Yeah, that does sound complicated [laughs] just based on what you shared with me, but very cool. And I was excited to hear that you had had a good experience with the Rails multi-database part because that was another thing that I remember being...it had piqued my interest when it first came out. I hope I get to, you know, utilize that feature on a project soon because that sounds really fun.
JOËL: One thing I've had to do for this SSO project is lean a lot on sequence diagrams, which are those diagrams that sort of show you, like, being redirected from different places, and, like, okay, server one talks to server two talks, to the browser. And so, when I've got so many different actors and sort of controllers being passed around everywhere, it's been hard to keep track of it in my head. And so, I've been doing a lot of these diagrams, both for myself to help understand it during development, and then also as documentation to share back with the team.
And I found that Mermaid.js supports sequence diagrams as a diagram type. Long-term listeners of the show will know that I am a sucker for a good diagram. I love using Mermaid for a lot of things because it's supported. You can embed it in a lot of places, including in GitHub comments, pull requests. You can use it in various note systems like Notion or Obsidian. And you can also just generate your own on mermaid.live.
And so, that's been really helpful to communicate with the rest of the team, like, "Hey, we've got this whole process where we've got 14 redirects across four different servers. Here's what it looks like. And here, like, we're getting a bug on, you know, redirect number 8 of 14. I wonder why," and then you can start a conversation around debugging that.
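As a small illustration of the Mermaid sequence diagram syntax (the actors and steps here are generic placeholders, not the actual client flow):

sequenceDiagram
    participant Browser
    participant IdP as Corporate SAML IdP
    participant Auth0
    participant App as Rails app
    Browser->>IdP: Sign in from the corporate portal
    IdP->>Auth0: SAML response (IdP-initiated)
    Auth0->>Browser: Redirect to the app with an OIDC authorization code
    Browser->>App: Callback with the authorization code
    App->>Auth0: Exchange the code for tokens
    App-->>Browser: Signed-in session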
STEPHANIE: Cool. I was just about to ask what tool you're using to generate your sequence diagrams. I didn't know that Mermaid supported them. So, that's really neat.
JOËL: So, last year, when we kind of looked back over 2022, one thing that was really interesting that we did is we talked about what are articles that you find yourself linking to a lot that are just kind of things that maybe were on your mind or that were a big part of conversations that happened over the year? So, maybe for you, Stephanie, in 2023, what are one or two articles that you find yourself sort of constantly linking to other people?
STEPHANIE: Yes. I'm excited you asked about this. One of them is an article by a person named Cat Hicks, who has a PhD in experimental psychology. She's a data scientist and social scientist. And lately, she's been doing a lot of research into the sense of belonging on software teams. And I think that's a theme that I am personally really interested in, and I think has kind of been something more people are talking about in the last few years. And she is kind of taking that maybe more squishy idea and getting numbers for it and getting statistics, and I think that's really cool.
She points out belonging as, like, a different experience from just, like, happiness and fulfillment, and that it really has an impact on how well a team is functioning. I got to share this with a few people who were, you know, just in that same boat of, like, trying to figure out, what are the behaviors kind of on my team that make me feel supported or not supported?
And there were a lot of interesting discussions that came out of sharing this article and kind of talking about, especially in software, where we can be a little bit dogmatic. And we've kind of actually joked about it on the podcast [chuckles] before about, like, we TDD or don't TDD, or, you know, we use X tool, and that's just like what we have to do here. She writes a little bit about how that can end up, you know, not encouraging people to offer, like, differing opinions and being able to feel like they have a say in kind of, like, the team's direction. And yeah, I just really enjoyed a different way of thinking about it. Joël, what about you? What are some articles you got bookmarked? [chuckles]
JOËL: This year, I started using a bookmark manager, Raindrop.io. That's been nice because, for this episode, I could just look back on, what are some of my bookmarks this year? And be like, oh yeah, this is the thing that I have been using a lot.
So, an article that I've been linking is an article called Preemptive Pluralization is (Probably) Not Evil. And it kind of talks a little bit about how going from code that works over a collection of two items to a collection of, you know, 20 items is very easy. But sometimes, going from one to two can be really challenging. And when are the times where you might want to preemptively make something more than one item? So, maybe using a has_many association rather than a has_one, or making an attribute a collection rather than a single item.
Controversial is not the word for it, but I think it challenges a little bit of the way people typically like to write code. But across this year, I've run into multiple projects where they have been transitioning from one to many. That's been an interesting article to surface as part of those conversations. Whether your team wants to do this preemptively or whether they want to put it off and say, in classic YAGNI (You Aren't Gonna Need It) form, "We'll make it single for now, and then we'll go plural," that's a conversation for your team. But I think this article is a great way to maybe frame the conversation.
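A tiny sketch of the trade-off the article is getting at, with hypothetical models:

# Singular today:
class Order < ApplicationRecord
  has_one :shipping_address
end

# If a second address ever becomes possible, moving from has_one to has_many touches
# the schema, the model, and every caller. Pluralizing preemptively means callers work
# over a collection from day one, even while it only ever holds one item:
class Order < ApplicationRecord
  has_many :shipping_addresses
end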
STEPHANIE: Cool. Yeah, I really like that, almost, like, a counterpoint to YAGNI [laughs], which I don't think I've ever heard anyone say out loud [laughs] before. But as soon as you said preemptive pluralization is not evil, I thought about all the times that I've had to, like, write code in which a thing, a variable, could be either one or many [laughs] things. And I was like, ooh, maybe this will solve that problem for me [laughs].
JOËL: Speaking of pluralization, I'm sure you've been linking to more than just one article this year. Do you have another one that you find yourself coming up in conversations where you've always kind of like, "Hey, dropping this link," where it's almost like your thing?
STEPHANIE: Yes. And that is basically everything written by Mandy Brown [laughs], who is a work coach that I actually started working with this year. And one of the articles that really inspired me or really has been a topic of conversation among my friends and co-workers is she has a blog post called Digging Through the Ashes. And it's kind of a meditation on, like, post burnout or, like, what's next, and how we have used this word as kind of a catch-all to describe, you know, this collective sense of being just really tired or demoralized or just, like, in need of a break.
And what she offers in that post is kind of, like, some suggestions about, like, how can we be more specific here and really, you know, identify what it is that you're needing so that you can change how you engage with work? Because burnout can mean just that you are bored. It can mean that you are overworked. It can mean a lot of things for different people, right?
And so, I definitely don't think I'm alone [laughs] in kind of having to realize that, like, oh, these are the ways that my work is or isn't changing and, like, where do I want to go next so that I might feel more sustainable? I know that's, like, a keyword that we talked about earlier, too. And that, on one hand, is both personal but also technical, right? It, like, informs the kinds of decisions that we make around our codebase and what we are optimizing for. And yeah, it is both technical and cultural. And it's been a big theme for me this year [laughs].
JOËL: Yeah. Would you say it's safe to say that sustainability would be, if you want to, like, put a single word on your theme for the year? Would that be a fair word to put there?
STEPHANIE: Yeah, I think so. Definitely discovering what that means for me and helping other people discover what that means for them, too.
JOËL: I feel like we kicked off the year 2023 by having that discussion of Sustainable Rails and how different technical practices can make the work there feel sustainable. So, I think that seems to have really carried through as a theme through the year for you. So, that's really cool to have seen that. And I'm sure listeners throughout the year have heard you mention these different books and articles. Maybe you've also been able to pick up a little bit on that.
So, I'm glad that we do this show because you get a little bit of, like, all the bits and pieces in the day-to-day, and then we aggregate it over a year, and you can look back. You can be like, "Oh yeah, I definitely see that theme in your work."
STEPHANIE: Yeah, I'm glad you pointed that out. It is actually really interesting to see how something that we had talked about early, early on just had that thread throughout the year. And speaking of sustainability, we are taking a little break from the show to enjoy the holidays. We'll be off for a few weeks, and we will be back with a new Bike Shed in January.
JOËL: Cheers to a new year.
STEPHANIE: Yeah, cheers to a new year. Wrapping up 2023. And we will see you all in 2024.
JOËL: On that note, shall we wrap up the whole year?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël shares his experiences with handling JSON in a Postgres database. He talks about his challenges with ActiveRecord and JSONB columns, particularly the unexpected behavior of storing and retrieving JSON data. Stephanie shares her recent discovery of bookmarklets and highlights a bookmarklet named "Check This Out," which streamlines searching for books on Libby, an ebook and audiobook lending app.
The conversation shifts to using constants in code as a form of documentation. Stephanie and Joël discuss how constants might not always accurately reflect current system behavior or logic, leading to potential misunderstandings and the importance of maintaining accurate documentation.
Transcript
STEPHANIE: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: What's new in my world is JSON and how to deal with it in a Postgres database. So, I'm dealing with a situation where I have an ActiveRecord model, and one of the columns is a JSONB column. And, you know, ActiveRecord is really nice. You can just throw a bunch of different data at it, and it knows the column type, and it will do some conversions for you automatically.
So, if I'm submitting a form and, you know, form values might come in as strings because, you know, I typed in a number in a text field, but ActiveRecord will automatically parse that into an integer because it knows we're saving that to an integer column. So, I don't need to do all these, like, manual conversions.
Well, I have a form that has a string of JSON in it that I'm trying to save in a JSONB column. And I expected ActiveRecord to just parse that into a hash and store it in Postgres. That is not what happens. It just stores a raw string, so when I pull it out again, I don't have a hash. I have a raw string that I need to deal with. And I can't query it because, again, it is a raw string. So, that was a bit of an unexpected behavior that I saw there.
STEPHANIE: Yeah, that is unexpected. So, is this a field that has been used for a while now? I'm kind of surprised that there hasn't been already some implementations for, like, deserializing it.
JOËL: So, here's the thing: I don't think you can have an automatic deserialization there because there's no way of knowing whether or not you should be deserializing. The reason is that JSON is not just objects or, in Ruby parlance, hashes. You can also have arrays. But just raw numbers not wrapped in hashes are also valid JSON as are raw strings.
And if I just give you a string and say, put this in a JSON field, you have no way of knowing, is this some serialized JSON that you need to deserialize and then save? Or is it just a string that you should save because strings are already JSON? So, that's kind of on you as the programmer to make that distinction because you can't tell at runtime which one of these it is.
STEPHANIE: Yeah, you're right. I just realized it's [laughs] kind of, like, an anything goes [laughs] situation, not anything but strings are JSON, are valid JSON, yep [laughs]. That sounds like one of those things that's, like, not what you think about immediately when dealing with that kind of data structure, but...
JOËL: Right. So, the idea that strings are valid JSON values, but also all JSON values can get serialized as strings. And so, you never know: are you dealing with an unserialized string that's just a JSON value, or are you dealing with some JSON blob that got serialized into a string? And only in one of those do you want to then serialize before writing into the database.
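A small console illustration of the ambiguity Joël describes here, in plain Ruby (nothing specific to the client code):

```ruby
require "json"

"hello".to_json        # => "\"hello\"" (a Ruby string is itself a valid JSON value)
JSON.parse('"hello"')  # => "hello"
JSON.parse('{"a":1}')  # => {"a"=>1}

# Given the Ruby string '{"a":1}', the column can't tell whether you meant
# "store this literal text" or "store the object this text encodes";
# both are legitimate JSON values, so the caller has to decide.
```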
STEPHANIE: So, have you come to a solution or a way to make your problem work?
JOËL: So, the solution that I did is just calling a JSON parse before setting that attribute on my model because this value is coming in from a form. I believe I'm doing this when I'm defining the strong parameters for that particular form. I'm also transforming that string by parsing it into a hash with the JSON dot parse, which then gets passed to the model. And then I'm not sure what JSONB serializes as under the hood. When you give it a hash, it might store it as a string, but it might also have some kind of binary format or some internal AST that it uses for storage. I'm not sure what the implementation is.
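A sketch of what that might look like in a controller, assuming a hypothetical model with a JSONB `metadata` column; all names here are invented for illustration:

```ruby
class WidgetsController < ApplicationController
  def create
    @widget = Widget.new(widget_params)
    # ...
  end

  private

  def widget_params
    permitted = params.require(:widget).permit(:name, :metadata)
    raw = permitted[:metadata]
    # The form submits metadata as a JSON string; parse it into a hash so
    # ActiveRecord stores structured JSON rather than a quoted string.
    permitted[:metadata] = JSON.parse(raw) if raw.is_a?(String)
    permitted
  end
end
```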
STEPHANIE: Are the values in the JSONB something that can be variable or dynamic? I've seen some people, you know, put that in getter so that it's just kind of done for you for anyone who needs to access that field.
JOËL: Right now, there is a sort of semi-consistent schema to that. I think it will probably evolve to where I'll pull some of these out to be columns on the table. But it is right now kind of an everything else sort of dumping ground from an API.
STEPHANIE: Yeah, that's okay, too, sometimes [laughs].
JOËL: Yeah. So, interesting journey into some of the fun edge cases of dealing with a format whose serialized form is also a valid instance of that format. What's been new in your world?
STEPHANIE: So, I discovered something new that has been around on the internet for a while, but I just haven't been aware of it. Do you know what a bookmarklet is?
JOËL: Oh, like a JavaScript code that runs in a bookmark?
STEPHANIE: Yeah, exactly. So, you know, in your little browser bookmark where you might normally put a URL, you can actually stick some JavaScript in there. And it will run whenever you click your bookmark in your browser [chuckles]. So, that was a fun little internet tidbit that I just found out about. And the reason is because I stumbled upon a bookmarklet made by someone. It's called Check This Out.
And what it does is there's another app/website called Libby that is used to check out ebooks and audiobooks for free from your local public library. And what this Check This Out bookmarklet does is you can kind of select any just, like, text on a web page, and then when you click the bookmarklet, it then just kind of sticks it into the query params for Libby's search engine. And it takes you straight to the results for that book or that author, and it saves you a few extra manual steps to go from finding out about a book to checking it out.
So, that was really neat and cute. And I was really surprised that you could do that. I was like, whoa [laughs]. At first, I was like, is this okay? [laughs] If you, like, you can't read, you know, you don't know what the JavaScript is doing, I can see it being a little sketchy. But --
JOËL: Be careful of executing arbitrary JavaScript.
STEPHANIE: Yeah, yeah. When I did look up bookmarklets, though, I kind of saw that it was, you know, just kind of a fun thing for people who might be learning to code for the first time to play around with. And some fun ideas they had for what you could do with it was turning all the font on a web page to Comic Sans [laughs]. So yeah, I thought that was really cute.
JOËL: Has that inspired you to write your own?
STEPHANIE: Well, we did an episode a while ago on productivity tricks. And I was thinking like, oh yeah, there's definitely some things that I could do to, you know, just stick some automated tasks that I have into a bookmarklet. And that could be a really fun kind of, like, old-school way of doing it, as opposed to, you know, coding my little snippets or getting into a new, like, Omnibar app [laughs].
JOËL: So, something that is maybe a little bit less effort than building yourself a browser extension or something like that.
STEPHANIE: Yeah, exactly.
JOËL: I had a client project once that involved a...I think it was, like, a five-step wizard or something like that. It was really tedious to step through it all to manually test things. And so, I wrote a bookmarklet that would just go through and fill out all the fields and hit submit on, like, five pages worth of these things. And if anything didn't work, it would just pause there, and then you could see it. In some way, it was moving towards the direction of, like, an automated like Capybara style test. But this was something that was helping for manual QA. So, that was a really fun use of a bookmarklet.
STEPHANIE: Yeah, I like that. Like, just an in-between thing you could try to speed up that manual testing without getting into, like you said, an automated test framework for your browser.
JOËL: The nice thing about that is that this could be used without having to set up pretty much anything, right? You paste a bit of JavaScript into your bookmark bar, and then you just click the button. That's all you need to do. No need to make sure that you've got Ruby installed on your machine or any of these other things that you would need for some kind of testing framework. You don't need Selenium. You don't need ChromeDriver. It just...it works.
So, I was working...this was a greenfield startup project. So, I was working with a non-technical founder who didn't have all these things, you know, dev tooling on his machine. So, he wanted to try out things but not spend his days filling out forms. And so, having just a button he could click was a really nice shortcut.
STEPHANIE: That's really cool. I like that a lot. I wasn't even thinking about how I might be able to bring that in more into just my daily work, as opposed to just something kind of fun. But that's an awesome idea. And I hope that maybe I'll have a good use for one in the future.
JOËL: It feels like the thing that has a lot of potential, and yet I have not since written...I don't think I've written any bookmarklets for myself. It feels like it's the kind of thing where I should be able to do this for all sorts of fun tooling and just automate my life away. Somehow, I haven't done that.
STEPHANIE: Bring back the bookmarklet [laughs]. That's what I have to say.
JOËL: So, I mentioned earlier that I was working with a JSONB column and storing JSON on an ActiveRecord model. And then I wanted to interact with it, but the problem is that this JSON is somewhat arbitrary, and there are a lot of magic strings in there. All of the key names might change. And I was really concerned that if the schema of that JSON ever changed, if we changed some of the key names or something like that, we might accidentally break code in multiple parts of the app.
So, I was very careful while building that model to quarantine any references to any raw strings only within that model, which meant that I leaned really heavily on constants. And, in some way, those constants end up kind of documenting what we think the schema of that JSON should be. And that got me thinking; you were telling me recently about a scenario where some code you were working with relied heavily on constants as a form of documentation, and that documentation kind of lied to you.
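Something along these lines is a plausible shape for what Joël describes; the model, column, and key names are invented for the example:

```ruby
class ImportedRecord < ApplicationRecord
  # The only place the raw JSON key names appear. If the upstream schema
  # renames a key, this constant (and the readers below) are the one spot to update.
  PAYLOAD_KEYS = {
    source: "source_system",
    imported_at: "imported_at",
  }.freeze

  # `payload` is the JSONB column holding the external data.
  def source_system
    payload[PAYLOAD_KEYS[:source]]
  end

  def imported_at
    payload[PAYLOAD_KEYS[:imported_at]]
  end
end
```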
STEPHANIE: Yeah, it did. And I think you mentioned something that I wanted to point out, which is that the magic strings that you think might change, and you wanted to pull that out into a constant, you know, so at least it's kind of defined in one place. And if it ever does change, you know, you don't have to change it in all of those places.
And I do think that, normally, you know, if there's opportunities to extract those magic strings and give a name to them, that is beneficial. But I was griping a little bit about when constants become, I guess, like, unwieldy, or there's just kind of, like, too much of a dependency on them as the things documenting how the app should work when it's constantly changing. I realized that I just used constant and constantly [laughs].
JOËL: The only constant is that it is not constant.
STEPHANIE: Right. And so, the situation that I found myself in—this was on a client project a little bit ago—was that the constants became, like, gatekeepers of that logic where dev had to change it if the app's behavior changed, and maybe we wanted to change the value of it. And also, one thing that I noticed a lot was that we, as developers, were getting questions about, "Hey, like, how does this actually work?"
Like, we were using the constants for things like pricing of products, for things like what is a compatible version for this feature. And because that was only documented in the code, other people who didn't have access to it actually were left in the dark. And because those were changing with somewhat frequency, I was just kind of realizing how that was no longer working for us.
JOËL: Would you say that some of these values that we stored as constants were almost more like config rather than constants or maybe they're just straight-up application data? I can imagine something like price of an item you probably want that to be a value in the database that can be updated by an admin. And some of these other things maybe are more like config that you change through some kind of environment variable or something like that.
STEPHANIE: Yeah, that's a good point. I do think that they evolved to become things that needed to be configured, right? I suppose maybe there wasn't as much information or foresight at the beginning of like, oh, this is something that we expect to change. But, you know, kind of when you're doing that first pass and you're told, like, hey, like, this value should be the price of something, or, like, the duration of something, or whatever that may be. It gets codified [chuckles]. And there is some amount of lift to change it from something that is, at first, just really just documenting what that decision was at the time to something that ends up evolving.
JOËL: How would you draw a distinction between something that should be a constant versus something that maybe would be considered config or some other kind of value? Because it's pretty easy, right? As developers, we see magic numbers. We see magic strings. And our first thought is, oh, we've seen this problem before—constant. Do you have maybe a personal heuristic for when to reach for a constant versus when to reach for something else?
STEPHANIE: Yeah, that's a good question. I think when I started to see it a lot was especially when the constants were arrays or hashes [laughs]. And I guess that is actually kind of a signal, right? You will likely be adding more stuff [laughs] into that data structure [laughs]. And, again, like, maybe it's okay, like, the first couple of times. But once you're seeing that request happen more frequently, that could be a good way to advocate for storing it in the database or, like, building a lightweight admin kind of thing so that people outside of the dev team can make those configuration changes.
I think also just asking, right? Hey, like, how often do we suspect this will change? Or what's on the horizon for the product or the team where we might want to introduce a way to make the implementation a bit more flexible to something that, you know, we think we know now, but we might want to adjust for?
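As a rough before-and-after of the heuristic Stephanie describes (hypothetical names, and the admin UI itself is left out):

```ruby
# Before: pricing lives in a constant, so every change needs a developer
# and a deploy, and nobody outside the codebase can see the current values.
PLAN_PRICES = {
  "basic" => 10_00,
  "pro"   => 25_00,
}.freeze

# After: the same data as rows in a table, editable from a lightweight admin.
# (Assumes a plans table with name:string and price_cents:integer columns.)
class Plan < ApplicationRecord
  validates :name, presence: true, uniqueness: true
  validates :price_cents, numericality: { only_integer: true }
end

Plan.find_by(name: "pro")&.price_cents
```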
JOËL: So, it's really about change and how much we think this might change in the future.
STEPHANIE: Speaking of change, this actually kind of gets into the broader topic of documentation and how to document a changing and evolving entity [chuckles], you know, that being, like, the codebase or the way that decisions are made that impact how an application works. And you had shared, in preparation for this topic, an article that I read and enjoyed called Hierarchy of Documentation.
And one thing that I liked about it is that it kind of presented all of the places that you could put information from, you know, straight in the code, to in your commit messages, to your issue management system, and to even wikis for your repo or your team. And I think that's actually something that we would want to share with new developers, you know, who might be wondering, like, where do I find or even put information? I really liked how it was kind of, like, laid out and gave, like, different reasons for where you might want to put something or not.
JOËL: We think a lot about documentation as code writers. I'm curious what your experience is as a code reader. How do you tend to try to read code and understand documentation about how code works? And, apparently, the answer is, don't read the constants because these constants lie.
STEPHANIE: I think you are onto something, though, because I was just thinking about how distrustful I've become of certain types of documentation. Like, when I think of code comments, on one hand, they should be a signal, right? They should kind of draw your attention to something maybe weird or just, like, something to note about the code that it's commenting on, or where it's kind of located in a file.
But I sometimes tune them out, I'm not going to lie. When I see a really big block of code [chuckles] comment, I'm like, ugh, like, do I really have to read all of this? I'm also not positive that it's still relevant to the code below it, right? Like, I don't always have git blame, like, visually enabled in my editor. But oftentimes, when I do a little bit of digging, that comment is left over from maybe when that code was initially introduced. But, man, there have been lots of commits [chuckles] in the corresponding, you know, like, function sense, and I'm not really sure how relevant it is anymore.
Do you struggle with the signal versus noise issue with code comments? How much do you trust them, and how much do you kind of, like, give credence to them?
JOËL: I think I do tend to trust them with maybe some slight skepticism. It really depends on the codebase. Some codebases have really bad, sort of, comment hygiene and just the types of comments that they put in there, and then others are pretty good at it. The ones that I tend to particularly appreciate are where you have maybe some, like, weird function and you're like, what is going on here? And then you've got a nice, little paragraph up top explaining what's going on there, or maybe an explanation of ways you might be tempted to modify that piece of code and, like, why it is the way it is.
So, like, hey, you might be wanting to add an extra branch here to cover this edge case. Don't do that. We tried it, and it causes problems for XY reasons. And sometimes it might be, like, a performance thing where you say, look, the code quality person in you is going to look at this and say, hey, this is hard to read. It would be better if we did this more kind of normalized form. Know that we've particularly written this in a way that's hard to read because it is more performant, and here are the numbers. This is why we want it in this way. Here's a link to maybe the issue, or the commit, or whatever where this happened.
And then if you want to start that discussion up again and say, "Hey, do we really need performance here at the cost of readability?", you can start it up again. But at least you're not going to just be like, oh, while I'm here, I'm going to clean up this messy code and accidentally cause a regression.
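The kind of comment Joël has in mind might look something like this; the code and wording are entirely invented as an illustration:

```ruby
# PERF: this is deliberately a single hand-rolled loop rather than the more
# readable map/select chain. The readable version allocated intermediate
# arrays on a hot path; see the benchmarks in the linked issue before
# "cleaning this up", and feel free to reopen that conversation there.
def visible_totals(rows)
  totals = Hash.new(0)
  rows.each do |row|
    next unless row.visible?

    totals[row.category] += row.amount
  end
  totals
end
```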
STEPHANIE: Yeah. I like what you said about comment hygiene being definitely just kind of, like, variable depending on the culture and the codebase.
JOËL: I feel like, for myself, I used to be pretty far on the spectrum of no comments. If I feel the need to write a comment, that's a smell. I should find other ways to communicate that information. And I think I went pretty far down that extreme, and then I've been slowly kind of coming back. And I've probably kind of passed the center, where now I'm, like, slightly leaning towards comments are actually nice sometimes. And they are now a part of my toolkit. So, we'll see if I keep going there.
Maybe I'll hit some point where I realize that I'm putting too much work into comments or comments are not being helpful, and I need to come back towards the center again and focus on other ways of communicating. But right now, I'm in that phase of doing more comments than I used to. How about you? Where do you stand on that sort of spectrum of all information should be communicated in code tokens versus comments?
STEPHANIE: Yeah, I think I'm also somewhere in the middle. I think I have developed an intuition of when it feels useful, right? In my gut, I'm like, oh, I'm doing something weird. I wish I didn't have to do this [chuckles]. I think it's another kind of intuition that I have now. I might leave a comment about why, and I think that is more of that signal, right?
Though I also recently have been using them more as just, like, personal notes for myself as I'm, you know, in my normal development workflow, and then I will end up cleaning them up later. I was working on a codebase where there was a soft delete functionality. And that was just, like, a concern that was included in some of the models. And I didn't realize that that's what was going on. So, when I, you know, I was calling destroy, I thought it was actually being deleted, and it turns out it wasn't. And so, that was when I left a little comment for myself that was like, "Hey, like, this is soft deleted."
And some of those things I do end up leaving if I'm like, yes, other people won't have the same context as me. And then if it's something that, like, well, people who work in this app should know that they have soft delete, so then I'll go ahead and clean that up, even though it had been useful for me at the time.
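A sketch of the sort of concern Stephanie is describing; the module, column, and model names are illustrative, not the client's actual code:

```ruby
module SoftDeletable
  extend ActiveSupport::Concern

  included do
    # Soft-deleted rows are hidden from normal queries.
    default_scope { where(deleted_at: nil) }

    scope :only_deleted, -> { unscope(where: :deleted_at).where.not(deleted_at: nil) }
  end

  # Overrides destroy: the row is kept and only flagged, which is exactly
  # the surprise worth a comment (or a note in shared docs) at the call site.
  def destroy
    update(deleted_at: Time.current)
  end
end

class Invoice < ApplicationRecord
  include SoftDeletable
end
```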
JOËL: Do you capture that information and then put it somewhere else then? Or is it just it was useful for you as a stepping-stone on the journey but then you don't need it at the end and nobody else needs to care about it?
STEPHANIE: Oh, you know what? That's actually a really great point. I don't think I had considered saving that information. I had only thought about it as, you know, just stuff for me in this particular moment in time. But that would be really great information to pull out and put somewhere else [chuckles], perhaps in something like a wiki, or like a README, or somewhere that documents things about the system as a whole. Yeah, should we get into how to document kind of, like, bigger-picture stuff?
JOËL: How do you feel about wikis? Because I feel like I've got a bit of a love-hate relationship with them.
STEPHANIE: I've seen a couple of different flavors of them, right? Sometimes you have your GitHub wiki. Sometimes you have your Confluence ecosystem [laughs]. I have found that they work better if they're smaller [laughs], where you can actually, like, navigate them pretty well, and you have a sense of what is in there, as opposed to it just being this huge knowledge base that ends up actually, I think, working against you a little bit [laughs].
Because so much information gets duplicated if it's hard to find and people start contributing to it maybe without keeping in mind, like, the audience, right? I've seen a lot of people putting in, like, their own personal little scripts [laughs] in a wiki, and it works for them but then doesn't end up working for really anyone else. What's your love-hate relationship to them?
JOËL: I think it's similar to what you were saying, a little bit of structure is nice. When they've just become dumping grounds of information that is maybe not up to date because over the course of several years, you end up with a lot of maybe conflicting articles, and you don't know which one is the right thing to do, it becomes hard to find things.
So, when it just becomes a dumping ground for random information related to the company or the app, sometimes it becomes really challenging to find the information I need and to find information that's relevant, to the point where oftentimes looking something up in the wiki is my last resort. Like, I'm hoping I will find the answer to my question elsewhere and only fallback to the wiki if I can't.
STEPHANIE: Yeah, that's, like, the sign that the wiki is really not trustworthy. And it kind of is diminishing returns from there a bit. I think I fell into this experience on my last project where it was a really, really big wiki for a really big codebase for a lot of developers. And there was kind of a bit of a tragedy of the commons situation, where on one hand, there were some things that were so manual that the steps needed to be very explicitly documented, but then they didn't work a lot of the time [laughs].
But it was hard to tell if they weren't working for you or because it was genuinely something wrong with, like, the way the documentation laid out the steps. And it was kind of like, well, I'm going to fix it for myself, but I don't know how to fix it for everyone else. So, I don't feel confident in updating this information.
JOËL: I think that's what's really nice about the article that you mentioned about the hierarchy of documentation. It's that all of these different forms—code, comments, commit messages, pull requests, wikis—they don't have to be mutually exclusive. But sometimes they work sort of in addition to each other sort of each adding more context.
But also, sometimes it's you sort of choose the one that's the highest up on that list that makes sense for what you're trying to do, so something like documenting a series of steps to do something maybe a wiki is a good place for that. But maybe it's better to have that be executable. Could that be a script somewhere? And then maybe that can be a thing that is almost, like, living documentation, but also where you don't need to maybe even think about the individual steps anymore because the script is running, you know, 10 different things.
And I think that's something that I really appreciated from the book Sustainable Rails is there's a whole section there talking about the value of setup scripts and how people who are getting started on your app don't want to have to care about all the different things to set it up, just run a script. And also, that becomes living documentation for what the app needs, as opposed to maybe having a bulleted list with 10 elements in it in your project README.
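For reference, a minimal setup script in that spirit might look like the following; this is a generic sketch, not the book's exact example, and the steps would vary per app:

```ruby
#!/usr/bin/env ruby
# bin/setup: each step below is something a new developer would otherwise
# have to discover from a bulleted list in the README.
require "fileutils"

APP_ROOT = File.expand_path("..", __dir__)

def system!(*args)
  system(*args, exception: true)
end

FileUtils.chdir APP_ROOT do
  puts "== Installing dependencies =="
  system!("bundle install")

  puts "== Preparing database =="
  system!("bin/rails db:prepare")

  puts "== Removing old logs and tempfiles =="
  system!("bin/rails log:clear tmp:clear")
end
```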
STEPHANIE: Yeah, absolutely. In the vein of living documentation, I think one thing that wikis can be kind of nice for is for putting visual supplements. So, I've seen them have, like, really great graphs. But at the same time, you could use a gem like Rails-ERD that generates the entity relationship diagram as the schema of your database changes, right? So, it's always up to date.
I've seen that work well, too, when you want to have, like I said, those, like, system-level documentation that sometimes they do change frequently and, you know, sometimes they don't. But that's definitely worth keeping in mind when you choose, like, how you want to have that exist as information.
JOËL: How do you feel about deleting documentation? Because I feel like we put so much work into writing documentation, kind of like we do when writing tests. It feels like more is always better. Do you ever go back and maybe sort of prune some of your docs, or try to delete some things that you think might no longer be relevant or helpful?
STEPHANIE: I was also thinking of tests when you first posed that question. I don't know if I have it in my practice to, like, set aside time and be like, hmm, like, what looks outdated these days? I am starting to feel more confident in deleting things as I come across them if I'm like, I just completely ignored this or, like, this was just straight up wrong [laughs]. You know, that can be scary at first when you aren't sure if you can make that determination.
But rather than thrust that, you know, someone else going through that same process of spending time, you know, trying to think about if this information was useful or not, you can just delete it [laughs]. You can just delete tests that have been skipped for months because they don't work. Like, you can delete information that's just no longer relevant and, in some ways, causing you more pain because they are cluttering up your wiki ecosystem so that no one [laughs] feels that any of that information is relevant anymore.
JOËL: I'll be honest, I don't think I've ever deleted a wiki article that was out of date or no longer relevant. I think probably the most I've done is go to Slack and complain about how an out-of-date wiki page led me down the wrong path, which is probably not the most productive way to channel those feelings. So, maybe I should have just gone back and deleted the wiki page.
STEPHANIE: I do like to give a heads up, I think. It's like, "Hey, I want to delete this thing. Are there any qualms?" And if no one on your team can see a reason to keep it and you feel good about that it's not really, like, serving its purpose, I don't know, maybe consider just doing it.
JOËL: To kind of wrap up this topic, I've got a spicy question for you.
STEPHANIE: Okay, I'm ready.
JOËL: Do you think that AI is going to radically change the way that we interact with documentation? Imagine you have an LLM that you train on maybe not just your code but the Git history. It has all the Git comments and maybe your wiki. And then, you can just ask it, "Why does function foo do this thing?" And it will reference a commit message or find the correct wiki article. Do you think that's the future of understanding codebases?
STEPHANIE: I don't know. I'm aware that some people kind of can see that as a use case for LLMs, but I think I'm still a little bit nervous about the not knowing how they got there kind of part of it where, you know, yes, like I am doing this manual labor of trying to sort out, like, is this information good or trustworthy or not? But at least that is something I'm determining for myself. So, that is where my skepticism comes in a little bit. But I also haven't really seen what it can do yet or seen the outcomes of it. So, that's kind of where I'm at right now.
JOËL: So, you think, for you, the sort of the journey of trying to find and understand the documentation is a sort of necessary part of building the understanding of what the code is doing.
STEPHANIE: I think it can be. Also, I don't know, maybe my life would be better by having all that cut out for me, or I could be burned by it because it turns out that it was bad information [laughs]. So, I can't say for sure.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie recommends "Blue Eye Samurai" and a new ceramic pot (donabe) for cooking. Joël talks about the joy of holding a warm beverage in a unique mug.
Stephanie discusses her shift to a part-time support and maintenance role at thoughtbot, contrasting it with her full-time development work. She highlights the importance of communication, documentation, and workplace flexibility in this role. Stephanie appreciates the professional growth opportunities and aligns this flexible work style with her long-term career goals.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I have a TV show recommendation this week. I think this is my first time having TV or movies to recommend, so this will be fun. My partner and I just finished watching Blue Eye Samurai on Netflix, which is an animated historical Samurai drama.
But the really cool thing about it is that the protagonist is a woman who is disguising herself as a man, and she is half Japanese and half White. The show takes place in Edo-period Japan, and so that was a time when Japan was locked down, and there were no outsiders allowed in the country. And so, to be mixed race like that was to be, like, kind of, like, demonized and to be really excluded and shamed. And so, the main character is on, like, a revenge mission.
And it was such a cool show. I was kind of, like, on the edge of my seat the whole time. And it's very beautifully animated. There were just a lot of really awesome things about it. And I think it's very different from what I've been seeing on TV these days.
JOËL: Is this a single-season show?
STEPHANIE: So far, there's just one season. I think it's pretty new, yeah. It's very watchable in a couple of weekends.
[laughter]
JOËL: Dangerously so.
STEPHANIE: Yeah, exactly [laughs].
JOËL: How do you feel about the way they end the arc in season one? Do they kind of leave you on a cliffhanger, or does it feel like a pretty satisfying place?
STEPHANIE: Ooh, I think both, which is the sweet spot, in my opinion, where it's not, like, cliffhanger for the sake of, like, ugh, now I feel like I have to just watch the next part to see what happens because I was left unsatisfied. I like when seasons are kind of like chapters of the story, right? And the characters are also well written, too, and really fleshed out even, you know, some of the side characters. They all have their arcs that are really satisfying. And, again, I just was left very impressed.
JOËL: I guess that's the power of good storytelling.
STEPHANIE: Yeah. I was reading a review of the show. And that was kind of the theme of–it was just that, like, this is really good storytelling, and I would have to agree. Yeah, I highly recommend checking it out. It was very fun. It was very bloody, but [chuckles], for me, it being animated actually made it a little more palatable for me [laughs]. The fight scenes, the action scenes were really cool.
I think the way that it's been described is kind of, like, you know, if you like historical dramas, or if you like things like Game of Thrones, there's kind of something for everyone. I recommend checking it out. Joël, what's new in your world?
JOËL: Listeners of the show don't know this, but you and I are on a video call while we're recording this. And you'd commented earlier that I was holding a cool mug. It's got a rock climbing hold as a handle, which is pretty fun. I enjoy a lot of bouldering. That makes it a fun mug.
But I was recently thinking about just how much pleasure I get from holding a mug with a warm beverage. It's such a small thing, but it makes me so happy. And that got me thinking more broadly about what are things in life that are kind of like that. They're small things that have, like, an outsized impact on your happiness. Do you have anything like that?
STEPHANIE: Oh yes, absolutely. You were talking about the warmth of a hot beverage in your hands. And I was thinking about something similar, too, because I'm pretty sure this time of year last year, I talked about something that was new in my world that was just, like, a thing that I got to make winter more tolerable for me here in Chicago, and I think it was, like, a heated blanket [laughs].
And I am similarly in that space this year of like, what can I do or get to make this winter better than last winter? So, this year, what I got that I'm really excited to use—it actually just came in the mail—is this ceramic pot called the donabe that's kind of mainly used for Japanese cooking, especially, like, hot pot. And so, it will be a huge improvement to my soup game this year [laughs].
Similarly, it's kind of, like, one of those small things where you can take it from the stovetop where you're cooking straight to the table, and I'm so looking forward to that. It's kind of like your hot beverage in your hand but, like, three times the size [laughs].
JOËL: Right. The family-style version of it.
STEPHANIE: Yeah, exactly. So, that's what I'm really looking forward to this year as something that is just, like, I don't know, a little small upgrade to my regular soup routine. But I think I will get a lot of pleasure [laughs] out of it.
JOËL: What do you normally cook in that style of pot? Is it typically you do a hot pot in there, or is it meant for soups?
STEPHANIE: Yeah, it holds heat really well, so I think that's why it's used for soup a lot. And the one that I got specifically has a little ceramic steamer plate as well. And so, I'm looking forward to having, like, this setup that's made for steaming, where you don't have to have any, like, too many extra bits. And, again, it can go from stove to table, and that's one less thing I [chuckles] need to wash.
JOËL: I love it. So, something else that is kind of new in your world is you'd mentioned on a recent episode you'd wrapped up with your current client. And you've rotated on to not exactly a new client but a new almost line of business. You're doing a rotation with our support and maintenance team. Can you tell us a little bit about what that is like?
STEPHANIE: Yeah. I'm excited to share more about it because this is my first time on this team doing this work. And it's pretty new for thoughtbot, too. I think it's only, like, a year old that we have had this sub-team of the one that you and I are on, Boost. In the sub-team, support and maintenance is focused on providing flexible part-time work for clients who are just needing some dedicated hours, not necessarily for, you know, a lot of, like, intense new feature work, but making sure that things are running smoothly.
A lot of the clients, you know, have had Rails apps that are several years old, that are chugging along [chuckles], just need that, like, attention every now and then to make sure that upgrades are happening, fix any bugs, kind of as the app just continues to work and provide value. And then, occasionally, there is a little bit of feature work.
But the interesting thing about being on this team is that instead of being on one client full-time, you are working on a lot of different clients at the same time, and a lot of them are on retainers. So, they maybe have, like, 20 hours a month of work that gets filled with kind of whatever tasks need to be done during that time. So yeah, I recently joined a few days ago and have been very surprised by kind of this style of work. It's different from what I'm used to.
JOËL: That seems pretty different than the sort of traditional thoughtbot client engagement. Typically, if I'm a client and I'm hiring a team from thoughtbot, as a client, I get sort of a dedicated team. And they're probably either building some things for me or maybe working with my team and sort of full-time building features.
Whereas if I hire the support and maintenance team, it sounds like it's a bit more ad hoc. And it's things I assume it's like, oh, we probably need to upgrade our Rails version since a new release came out last month. Can you do that? Here's a small bug that was reported. Can somebody fix that? Things along those lines. Is that pretty approximate of what the experience is for a client?
STEPHANIE: Yeah, I would say so. I think the other surprising thing has been there have been a little bit of more DevOps type of tasks as well mixed in there. Because oftentimes, these are smaller clients who maybe have, like, a few developers actively working on new features and that type of stuff. But there is, like, so much of the connecting work that needs to happen when you have an application. And if you don't have a full in-house team for that, that often gets put on developers' plates. But it's kind of nice to have this flexible support and maintenance team, again, to, like, do the work as it comes up.
A lot of it is not necessarily, like, stuff that can be planned in advance. It's kind of like, oh, we're hitting, like, our usage limit for this Heroku add-on. Let's evaluate if this is still working for us, if this is a good tier to be on. Like, should we upgrade? Are there other levers we could pull or adjustments we can make?
So, that's actually been some of the stuff that I've been working on, too, which is, again, a little bit different from normal development work but also still very much related. And it's all kind of part of the job. And, you know, a lot of the skills are transferable. And to know how to do development in a framework then sets you up, I think, really well to, like, be able to make those kinds of evaluations.
JOËL: So, it sounds like you almost, in a sense, provide a bit of a velocity cushion for clients so that if something does come up where they would maybe normally need to pull a dev off of feature work to do some side thing for a couple of days, you can come in and handle that so that their dev team stays focused on shipping features.
STEPHANIE: Yeah, I like that phrase you used: velocity cushion. That's cool. I like it. The other surprising thing that I have kind of quite enjoyed, at least for now, is because we bill a little bit differently on this work; we have to track our hours more explicitly. And that has actually helped me focus a lot more on what I'm doing and if I should continue to be doing what I'm doing. I'm timeboxing things a lot more because I know that if there is a ceiling on the number of hours, I want to make sure that that time is spent in the most valuable way.
And I also really enjoy, like, the boundaries of timeboxing, yes, but also, like, the tasks are usually scoped pretty narrowly so that they are things that you can accomplish, definitely in the week, because you don't know if you'll kind of still be working for this client next week but even more so, like, within a few days.
And that is nice because I can kind of, like, you know, track my hours, finish the task, and then feel a little bit more free to go do something else without being, like, okay, like, what's the next thing that I need to be doing? There's a little bit more freedom, I think, when you're kind of, like, optimizing towards, like, finishing each item.
JOËL: Do the stories of the work that you have to do does it typically come kind of pre-scoped? Are you involved in making sure that it has, like, very aggressive scoping?
STEPHANIE: Yeah. So far, I've not been involved in doing the scoping work, and it has come pre-scoped, which has been nice. This was also, again, just different. Because I was on a client team previously, a lot of the work to be done was the disambiguating, the, like, figuring out what to be doing. Whereas here, because, again, we're kind of optimized for people coming in and out, if there is uncertainty or lack of clarity, it's pointed out early, and someone is like, "Okay, I will take care of this. Like, I'll take the lead on this so that it can be handed off."
One client that I'm working on is using Basecamp's Shape Up methodology, which I actually hadn't worked with in a very explicit way before. And that has been interesting to learn about a little bit, too. One thing that I have enjoyed about it is instead of sprints, they're called cycles. And I like that a lot because, you know, sprints kind of have the connotation of, like, you're running as fast as you can but also, like, you can't run that way forever [laughs]. And so, even that, like, little bit of rewording change is really nice. The variable part is scope, right? It's that we're focused on delivering something completely and very intentionally cutting scope as kind of the main lever.
JOËL: How do you maintain sort of focus and flow if you're jumping across multiple clients? Because you said, you work with multiple clients as part of this team. And I feel like I can get a little bit frustrated sometimes, even just jumping between, like, tickets within one project. And so, I could imagine that jumping between different clients during the week or even the day might be really disruptive. Have you found techniques to help you stay in the flow?
STEPHANIE: Yeah, that is a tough one because, also, every client has their different application; you then have to start up on [laughs] your local machine, and that is kind of annoying. You know, I do still tend to kind of, like, bundle similar work together. If, like, there's a few things I can do for a client on one day, I'll make sure to focus on that.
But what I mentioned earlier about, like, seeing something to completion has been really, I want to say, fun even. Because it then kind of, like, frees up that mental space of, like, okay, I don't have to, like, have this thing that I'm working on lingering in my head about, like, oh, did I forget to do something? Or, you know, have, like, shower thoughts of like, oh, I just thought of a new way to implement this [laughs] feature because it doesn't spill over as much as maybe larger initiatives anyway.
And so, I am context-switching, but it's only kind of after I've gotten something to a good place where I've left all of the notes. And that's another thing that I'm now kind of compelled to do a little more actively. It's like, every single day, I'm kind of making sure that the work that I've done has been reported on, one, because I have to track my hours, so, you know, and I sometimes leave notes about what that time was spent on doing.
And also, when the expectation is that someone else will be picking up, then there's no, like, oh, like, let me hold on to this, and only when I know that I have to hand off something that's when I'll do the, like, dedicated knowledge dumping. It's kind of just built into the process a little more frequently.
JOËL: So, you're setting up for, like, an imminent vacation factor.
STEPHANIE: Yeah. Which I kind of like because then I can take a vacation [laughs] whenever I want and not have to worry too much about, oh, did I do everything I needed to do before I leave?
JOËL: So, you know, these practices that you're doing are specifically adapted for the style of work that you have. Are there any that you think you would bring to your own practice if you ever rotated back on to a dedicated client project, anything that you would do there that you would want to include from your practice here?
STEPHANIE: Yeah. It does sound kind of weird because part of what's nice about being on a full-time team is that there is less, oh, if I don't get something done today, I have tomorrow to do it [laughs]. And it seems like that would be like, oh, like, kind of take the pressure off a little bit.
But I would be really curious to continue having, like, such an intense awareness about how I'm spending my time. Because I've certainly gotten a little bit lax on, like, full-time development work when you just go down a rabbit hole [laughs] and you come out, like, three hours later, and you're like, "What did I just do?" [laughs] And, you know, maybe that's what needed to be done, and that's fine.
But if you have the information that it took you three hours, you can at least make a better-informed decision about, like, oh, maybe I should have stopped a little earlier or, like, yeah, it took about three hours, and that's okay. I think that would be an interesting area to incorporate and to be able to report more frequently. And I also like to know how other people spend their time, too. So, just, like, that sharing of information would also be really beneficial even to, like, a team.
JOËL: What about the more aggressive documentation? Is that something that...because that can be really time-consuming, I imagine, as well. Is that something that you would value in a kind of, more focused full-time project context?
STEPHANIE: Yeah. One part I've enjoyed about it is that I'm documenting, like, decision-making a lot more actively where, you know, I'm kind of, like, surfacing to be like, hey, here's the outcomes of, like, my research. We're not as, you know, embedded in the business, and we don't have as much of that, like, context and knowledge about what the best solutions are all the time. I'm documenting all of that, you know, usually, for the client stakeholder to be like, "Hey, here's my recommendations, like, how do you want to...what do you think is the best way to go?"
On one hand, it's kind of nice not to have to, like, be solely responsible for making that decision, right? And I'm kind of, like, leaning on, like, hey, like, you're the expert of your application and your product, you know, here's what I've learned. And now I've, like, put this all, like, for you and presented it to you.
And I think that, for me, has gotten lost sometimes when I end up being the same person of, like, doing the research and then deciding, and it just kind of ends up being held in my head. And that, I think, is something really important to document, even if it's just for other people to, like, see how that process might work or, like, what things I already considered or didn't try. That exercise, I think, can be really important.
So, so far, the documentation has not necessarily been, like, code level, but more, like, for each task, it's, like, showing your work, right? And not in a, like, you're being monitored [laughs] sort of way but in a way that supports it getting done with a lot of that turnover.
JOËL: It's almost like a mini report that you're doing. So, you'd mentioned, for example, an application running into memory problems on Heroku. It sounds like you would then go maybe investigate that and then make some recommendations on whether they need to increase some dynos or maybe make some internal changes. It sounds like you may or may not be the one to execute those changes. But you would write up some, like, a mini report and submit that to the client, and then they can make their own execution choices.
STEPHANIE: Yeah, exactly. And they can execute it themselves or then create a new ticket for the next person rotating on to support and maintenance to tackle it in a different cycle.
JOËL: So, support and maintenance doesn't just do the investigation. Your team might do the execution as well. It's just that the sort of more research-y stuff and the execution stuff gets split out into different tickets because it's so tightly scoped.
STEPHANIE: Yeah, that sounds right.
JOËL: I like that.
STEPHANIE: One area that I wasn't sure that I was going to like so much about this kind of work is, you know, when you're not kind of embedded on a team, I was thinking that I might not feel as connected, or I would miss a bit of that getting to know people and just, like, seeing people face to face on a daily basis.
I'm still evaluating how that would go so far because it has definitely been, like, mostly asynchronous communication, you know, which is what works well for this type of the style of team or project. But I think what has been helpful is realizing that, like, oh yeah, like, I can also get that elsewhere, you know, with thoughtbot folks like with you doing this podcast every week.
And right now, there are, like, two Boost members who are doing support and maintenance full time, and folks who are unbooked kind of come in and out. And I can see that there's still a team. So, it's not nearly as kind of, like, isolating as what I had thought it would be.
JOËL: There's something that's really curious to me, I think, sitting at the intersection of the idea of fostering more team interactions and the sort of, like, mini reports that you write. And that's that I would love to see more sharing among all of us at thoughtbot about different interesting problems that we've had to solve or that we're tackling on different client work. Because I think in that case, it's a situation where we all just learn something, you know, maybe I've never had to deal with a memory leak or might not even have an idea of, like, how to approach memory issues on Heroku.
So, seeing your little mini report, if you'd maybe share that, and, you know, maybe it can be anonymized in some way if needs to, I think would be really nice, at the very least, something that could be done, like, internally. So, I almost wonder if, like, building that practice of, you know, maybe not for every ticket that I do because, you know, I don't want to just be dumping my tickets in the thoughtbot Slack. But I run into something interesting and be like, oh, let me tell a little story about this and do a little write-up. That might be something that's good for the whole team and not just for folks who are on support and maintenance.
STEPHANIE: Yeah, absolutely. As you were saying that, I was thinking about how it does kind of encourage me to find support outside of my, like, immediate team, right? Because I don't necessarily have one with the client and to, I don't know, I'm imagining, like, these roots growing in terms of different communities I'm a part of and bringing those problems just outside of my internal world, and kind of getting that outside feedback because by necessity a little bit, right? But also, with the added benefit of, you know, I think that's also how a lot of people end up writing content that gets shared with the world.
So, I had the misconception that I would be kind of just, like, on my own off doing things like just tickets and being a little coding robot, but I've been surprised by how fresh and new it feels. So, I think, I guess, I was needing a little bit of that [laughs].
JOËL: I was having a conversation with another thoughtboter recently about how valuable sometimes change can be for its own sake and how that can sort of refresh. You want it just at the rate where you have a chance to build some stability. You don't want chaos. But sometimes change can sort of take you out of a rut, give you energy, maybe sort of restart some good habits that you had sort of let atrophy. And that finding, like, just that right level of shaking things up can really help a team, you know, get their effectiveness to the next level.
STEPHANIE: Yeah. I like what you said about good habits, for sure. A couple of other random, little things that I just thought of about what I've liked is, I don't know, maybe this is a little silly. But we, you know, use shared credentials for logging into different services and applications or third parties that clients are using. And that has actually been something that has been so easy [laughs] and very low friction compared to, you know, joining a new project and manually be added as, like, your individual account to all of the different things. And things inevitably get forgotten, and then you have to rely on someone else to do it. And sometimes they don't get back to you [laughs] for a while.
The self-serviceness of this work has been cool, too. And I just, yeah, wanted to say that I really appreciated the thought that went into making it as easy as possible to be like, yeah, I can find the credentials here. It is, you know, a bit more anonymized because I'm just using, like, a shared account.
JOËL: Like a generic thoughtbot account on a client system rather than stephanie@thoughtbot.
STEPHANIE: Yeah, exactly. But I think I saved so much time [laughs] this week just being able to do all of that myself and, you know, knowing where to look first before having to ask.
JOËL: I guess you'd need something like that, right? If you're only jumping in on a project for the first time, for a couple of hours or something like that, you don't want to go through a whole onboarding process because that might then, like, easily double. You know, instead of doing two hours on this project, you're now doing four.
STEPHANIE: Yeah, exactly. I guess the other takeaway, for me, was like, oh, definitely, if I were to have to set up accounts [laughs] for an application, you know, I've obviously seen where it was like, very clearly, like, the founder having created all these personal accounts for this services, and people are still using their credentials many years later [laughs], even though they probably, like, maybe may not even work for the company anymore.
But yeah, the shared credentials and using that generic account that anyone can kind of get into when needed has really lowered the barrier to jump into doing that work, right? And especially because, like you said, it reduces that time. And we're, you know, billing by the hour anyway. So, it's kind of a win-win situation.
JOËL: And I totally understand why you would not want something like that for a longer engagement. But for something like support and maintenance, it sounds like it was the right choice.
STEPHANIE: Yeah, yeah. Again, I just mentioned it because it's just different. And so, maybe if this sparks any ideas for our listeners about how processes could be different or, like, the styles or ways of working can be different, I think that would be cool.
JOËL: And just to be clear here, it sounds like what you're doing is, for sort of each client, you create a separate set of credentials that are for that client but that are about thoughtbot generically. You don't have, like, one thoughtbot email and password that we reuse for every client.
STEPHANIE: [laughs] Oh yes. That would be not so good [laughs] if we got hacked and suddenly, now they have access to everything.
JOËL: So, every client gets its own unique email password combo. We're using security best practices here. And then, since you do have to share them through a team, are you doing some sort of, like, shared 1Password vault or something along those lines?
STEPHANIE: Yeah, we are using a shared 1Password vault. That is definitely what I meant [laughs] the first time when I was mentioning the shared credentials, where that was basically the only thing I had to get onboarded to, the vault, for support and maintenance to be able to hit the ground running.
JOËL: So, this sounds like a pretty exciting new style of project for you. Is this something that you would see yourself preferring to do longer term, to sort of focus on this style of project? Or do you think that you'd like to come back to more classic project work in the near future?
STEPHANIE: I'm not sure yet, but I'm also hoping to have an answer to that question. And it definitely does feel like an experiment for me personally. I can see liking it, and that also fitting well with some of my longer-term goals of being able to, like, step back from work. Maybe working fewer days a week is something that I've, like, thought about in terms of, like, a long-term goal of mine because I'm not as needed [laughs] on a team.
Which I think, in the past, I also had a bit of a misconception that, like, in order to be a good developer, I had to have all the domain knowledge, and be indispensable, and, like, be the go-to person to answer all the questions. But now I'm at a point where I don't want to [laughs] necessarily have to answer, like, every question because that creates, like, a dependency on me. And if I need to step away from work, then that could be tough, right? The vacation factor that you mentioned.
So, this style of work is very interesting in terms of whether it might provide me a little bit more of that, not exactly work-life balance, but just kind of being closer to my goals in terms of what I want out of work and my time. And, hopefully, I'm going to be doing this next week, but I don't know because that's the nature of it [laughs]. But if I am, then I'll definitely have more to say about it. Probably.
JOËL: Well, it definitely sounds like we'll have to check in again on what's, I guess, not so new in your world on a future episode. On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël recaps his time at RubyConf! He shares insights from his talk about different aspects of time in software development, emphasizing the interaction with the audience and the importance of post-talk discussions. Stephanie talks about wrapping up a long-term client project, the benefits of change and variety in consulting, and maintaining a balance between project engagement and avoiding burnout.
They also discuss strategies for maintaining work-life balance, such as physical separation and device management, particularly in a remote work environment.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Well, as of this recording, I have just gotten back from spending the week in San Diego for RubyConf.
STEPHANIE: Yay, so fun.
JOËL: It's always so much fun to connect with the community over there, talk to other people from different companies who work in Ruby, to be inspired by the talks. This year, I was speaking, so I gave a talk on time and how it's not a single thing but multiple different quantities. In particular, I distinguish between a moment in time, like a point; a duration, an amount of time; and then a time of day, which is a time unconnected to a particular day, and how those all connect together in the software that we write.
STEPHANIE: Awesome. How did it go? How was it received?
JOËL: It was very well received. I got a lot of people come up to me afterwards and make a variety of time puns, which those are so easy to make. I had to hold myself back not to put too many in the talk itself. I think I kept it pretty clean. There were definitely a couple of time puns in the description of the talk, though.
STEPHANIE: Yeah, absolutely. You have to keep some in there. But I hear you that you don't want it to become too punny [laughs]. What I really love about conferences, and we've talked a little bit about this before, is the, you know, like, engagement and being able to connect with people. And you give a talk, but then that ends up leading to a lot of, like, discussions about it and related topics afterwards in the hallway or sitting together over a meal.
JOËL: I like to, in my talks, give little kind of hooks for people who want to have those conversations in the hallway. You know, sometimes it's intimidating to just go up to a speaker and be like, oh, I want to, like, dig into their talk a little bit. But I don't have anything to say other than just, like, "I liked your talk." So, if there's any sort of side trails I had to cut for the talk, I might give a shout-out to it and say, "Hey, if you want to learn more about this aspect, come talk to me afterwards."
So, one thing that I put in this particular talk was like, "Hey, we're looking at these different graphical ways to think about time. These are similar to but not the same as thinking of time as a one-dimensional vector and applying vector math to it, which is a whole other side topic. If you want to nerd out about that, come find me in the hallway afterwards, and I'd love to go deeper on it." And yeah, some people did.
STEPHANIE: That's really smart. I like that a lot. You're inviting more conversation about it, which I know, like, you also really enjoy just, like, taking it further or, like, caring about other people's experiences or their thoughts about vector math [laughs].
JOËL: I think it serves two purposes, right? It allows people to connect with me as a speaker. And it also allows me to feel better about pruning certain parts of my talk and saying, look, this didn't make sense to keep in the talk, but it's cool material. I'd love to have a continuing conversation about this. So, here's a path we could have taken. I'm choosing not to, as a speaker, but if you want to take that branch with me, let's have that afterwards in the hallway.
STEPHANIE: Yeah. Or even as, like, new content for yourself or for someone else to take with them if they want to explore that further because, you know, there's always something more to explore [chuckles].
JOËL: I've absolutely done that with past talks. I've taken a thing I had to prune and turned it into a blog post. A recent example of that was when I gave a talk at RailsConf Portland, which I guess is not so recent. I was talking about ways to deal with a test suite that's making too many database requests. And talking about how sometimes misusing let in your RSpec tests can lead to more database requests than you expect.
And I had a whole section about how to better understand what database requests will actually be made by a series of let expressions and dealing with the eager versus lazy and all of that. I had to cut it. But I was then able to make a blog post about it and then talk about this really cool technique involving dependency graphs. And that was really fun. So, that was a thing where I was able to say, look, here's some content that didn't make it into the talk because I needed to focus on other things. But as its own little, like, side piece of content, it absolutely works, and here's a blog post.
STEPHANIE: Yeah. And then I think it turned into a Bike Shed episode, too [laughs].
JOËL: I think it did, yes. I think, in many ways, creativity begets creativity. It's hard to get started writing or producing content or whatever, but once you do, every idea you have kind of spawns new ideas. And then, pretty soon, you have a backlog that you can't go through.
STEPHANIE: That's awesome. Any other highlights from the conference you want to shout out?
JOËL: I'd love to give a shout-out to a couple of talks that I went to, Aji Slater's talk on the Enigma machine as a German code machine from World War II and how we can sort of implement our own in Ruby and an exploration of object-oriented programming was fantastic. Aji is just a masterful storyteller. So, that was really great.
And then Alan Ridlehoover's talk on dealing with flaky tests that one, I think, was particularly useful because I think it's one of the talks that is going to be immediately relevant on Monday morning for, like, every developer that was in that room and is going back to their regular day job. And they can immediately use all of those principles that Alan talked about to deal with the flaky tests in their test suite.
And there's, in particular, at the end of his presentation, Alan has this summary slide. He kind of broke down flakiness across three different categories and then talked about different strategies for identifying and then fixing tests that were flaky because of those reasons.
And he has this table where he sort of summarizes basically the entire talk. And I feel like that's the kind of thing that I'm going to save as a cheat sheet. And that can be, like, I'm going to link to this and share it all over because it's really useful. Alan has already put his slides up online. We'll link to that particular slide in the show notes because I think that all of you would benefit from seeing that.
The talks themselves are recorded, but they're not going to be out for a couple of weeks. I'm sure when they come out, we're going to go through and watch some and probably comment on some of the talks as well.
So, Stephanie, what is new in your world?
STEPHANIE: Yeah. So, I'm celebrating wrapping up a client project after a nine-month engagement.
JOËL: Whoa, that's a pretty long project.
STEPHANIE: Yeah, that's definitely on the longer side for thoughtbot. And I'm, I don't know, just, like, feeling really excited for a change, feeling really, you know, proud of kind of, like, all of the work that we had done. You know, we had been working with this client for a long time and had been, you know, continuing to deliver value to them to want to keep working with us for that long. But I'm, yeah, just looking forward to a refresh.
And I think that's one of my favorite things about consulting is that, you know, you can inject something new into your work life at a kind of regular cadence. And, at least for me, that's really important in reducing or, like, preventing the burnout. So, this time around, I kind of started to notice, and other people, too, like my manager, that I was maybe losing a bit of steam on this client project because I had been working on it for so long.
And part of, you know, what success at thoughtbot means is that, like, we as employees are also feeling fulfilled, right? And, you know, what are the different ways that we can try to make sure that that remains the case? And kind of rotating folks on different projects and kind of making sure that things do feel fresh and exciting is really important.
And so, I feel very grateful that other people were able to point that out for me, too, when I wasn't even fully realizing it. You know, I had people checking in on me and being like, "Hey, like, you've been on this for a while now. Kind of what I've been hearing is that, like, maybe you do need something new." I'm just excited to get that change.
JOËL: How do you find the balance between sort of feeling fulfilled and maybe, you know, finding that point where maybe you're feeling you're running out of steam, versus, you know, some projects are really complex, take a while to ramp up; you want to feel productive; you want to feel like you have contributed in a significant way to a project? How do you navigate that balance?
STEPHANIE: Yeah. So, the flip side is, like, I also don't think I would enjoy having to be changing projects all the time like every couple of months. That maybe is a little too much for me because I do like to...on our team, Boost, we embed on our team. We get to know our teammates. We are, like, building relationships with them, and supporting them, and teaching them. And all of that is really also fulfilling for me, but you can't really do that as much if you're on more shorter-term engagements.
And then all of that, like, becomes worthwhile once you're kind of in that, like, maybe four or five six month period where you're like, you've finally gotten your groove. And you're like, I'm contributing. I know how this team works. I can start to see patterns or, like, maybe opportunities or gaps. And that is all really cool, and I think also another part of what I really like about being on Boost.
But yeah, I think what I...that losing steam feeling, I started to identify, like, I didn't have as much energy or excitement to push forward change. When you kind of get a little bit too comfortable or start to get that feeling of, well, these things are the way they are [laughs], --
JOËL: Right. Right.
STEPHANIE: I've now identified that that is kind of, like, a signal, right?
JOËL: Maybe time for a new project.
STEPHANIE: Right. Like starting to feel a little bit less motivated or, like, less excited to push myself and push the team a little bit in areas that it needs to be pushed. And so, that might be a good time for someone else at thoughtbot to, like, rotate in or maybe kind of close the chapter on what we've been able to do for a client.
JOËL: It's hard to be at 100% all the time and sort of always have that motivation to push things to the max, and yeah, variety definitely helps with that. How do you feel about finding signals that maybe you need a break, maybe not from the project but just in general? The idea of taking PTO or having kind of a rest day.
STEPHANIE: Oh yeah. I, this year, have tried out taking time off but not going anywhere just, like, being at home but being on vacation. And that was really great because then it was kind of, like, less about, like, oh, I want to take this trip in this time of year to this place and more like, oh, I need some rest or, like, I just need a little break. And that can be at home, right? Maybe during the day, I'm able to do stuff that I keep putting off or trying out new things that I just can't seem to find the time to do [chuckles] during my normal work schedule. So, that has been fun.
JOËL: I think, yeah, sometimes, for me, I will sort of hit that moment where I feel like I don't have the ability to give 100%. And sometimes that can be a signal to be like, hey, have you taken any time off recently? Maybe you should schedule something. Because being able to refresh, even short-term, can sort of give an extra boost of energy in a way where...maybe it's not time for a rotation yet, but just taking a little bit of a break in there can sort of, I guess, extend the time where I feel like I'm contributing at the level that I want to be.
STEPHANIE: Yeah. And I actually want to point out that a lot of that can also be, like, investing in your life outside of work, too, so that you can come to work with a different approach. I've mentioned the month that I spent in the Hudson Valley in New York and, like, when I was there, I felt, like, so different. I was, you know, just, like, so much more excited about all the, like, novel things that I was experiencing that I could show up to work and be like, oh yeah, like, I'm feeling good today. So, I have all this, you know, energy to bring to the tasks that I have at work.
And yeah, so even though it wasn't necessarily time off, it was investing in other things in my life that then brought that refresh at work, even though nothing at work really changed [laughs].
JOËL: I think there's something to be said for the sort of energy boost you get from novelty and change, and some of that you get it from maybe rotating to a different project. But like you were saying, you can change your environment, and that can happen as well. And, you know, sometimes it's going halfway across the country to live in a place for a month.
I sometimes do that in a smaller way by saying, oh, I'm going to work this morning from a coffee shop or something like that. And just say, look, by changing the environment, I can maybe get some focus or some energy that I wouldn't have if I were just doing same old, same old.
STEPHANIE: Yeah, that's a good point.
So, one particularly surprising refresh that I experienced in offboarding from my client work is coming back to my thoughtbot, like, internal company laptop, which had been sitting gathering dust [laughs] a little bit because I had a client-issued laptop that I was working on most of the time. And yeah, I didn't realize how different it would feel.
I had, you know, gotten everything set up on my, you know, my thoughtbot computer just the way that I liked it, stuff that I'd never kind of bothered to set up on my other client-issued laptop. And then I came back to it, and then it ended up being a little bit surprising. I was like, oh, the icons are smaller on this [laughs] computer than the other computer.
But it definitely did feel like returning to home, I think, instead of, like, being a guest in someone else's house that you haven't quite, like, put all your clothes in the closet or in the drawers. You're still maybe, like, living out of a suitcase a little bit [laughs]. So yeah, I was kind of very excited to be in my own space on my computer again.
JOËL: I love the metaphor of coming home, and yeah, being in your own space, sleeping in your own bed. There's definitely some of that that I feel, I think, when I come back to my thoughtbot laptop as well.
Do you feel like you get a different sense of connection with the rest of our thoughtbot colleagues when you're working on the thoughtbot-issued laptop versus a client-issued one?
STEPHANIE: Yeah. Even though on my client-issued computer I had the thoughtbot Slack, like, open on there so I could be checking in, I wasn't necessarily in, like, other thoughtbot digital spaces as much, right? So, our, like, project management tools and our, like, internal company web app, those were things that I was on less of naturally because, like, the majority of my work was client work, and I was all in their digital spaces.
But coming back and checking in on, like, all the GitHub discussions that have been happening while I haven't had enough time to catch up on them, just realizing that things were happening [laughs] even when I was doing something else, that is both cool and also like, oh wow, like, kind of sad that I [chuckles] missed out on some of this as it was going on.
JOËL: That's pretty similar to my experience. For me, it almost feels a little bit like the difference between back when we used to be in person because thoughtbot is now fully remote. I would go, usually, depending on the client, maybe a couple of days a week working from their offices if they had an office. Versus some clients, they would come to our office, and we would work all week out of the thoughtbot offices, particularly if it was like a startup founder or something, and they might not already have office space.
And that difference and feeling the connection that I would have from the rest of the thoughtbot team if I were, let's say, four days a week out of a client office versus two or four days a week out of the thoughtbot office feels kind of similar to what it's like working on a client-issued laptop versus on a thoughtbot-issued one.
STEPHANIE: Another thing that I guess I forgot about or, like, wasn't expecting to do was all the cleanup, just the updating of things on my laptop as I kind of had it been sitting. And it reminded me to, I guess, extend that, like, coming home metaphor a little bit more. In the game Animal Crossing, if you haven't played the game in a while because it tracks, like, real-time, so it knows if you haven't, you know, played the game in a few months, when you wake up in your home, there's a bunch of cockroaches running around [laughs], and you have to go and chase and, like, squash them to clean it up.
JOËL: Oh no.
STEPHANIE: And it kind of felt like that opening my computer. I was like, oh, like, my, like, you know, OS is out of date. My browsers are out of date. I decided to get an internal company project running in my local development again, and I had to update so many things, you know, like, install the new Ruby version that the app had, you know, been upgraded to and upgrade, like, OpenSSL and all of that stuff on my machine to, yeah, get the app running again.
And like I mentioned earlier, just the idea of like, oh yeah, this has evolved and changed, like, without me [laughs] was just, you know, interesting to see. And catching myself up to speed on that was not trivial work. So yeah, like, all that maintenance stuff still got to do it. It's, like, the digital cleanup, right?
JOËL: Exactly. So, you mentioned that on the client machine, you still had the thoughtbot Slack. So, you were able to keep up at least some messages there on one device. I'm curious about the experience, maybe going the other way. How much does thoughtbot stuff bleed into your personal devices, if at all?
STEPHANIE: Barely. I am very strict about that, I think. I used to have Slack on my phone, I don't know, just, like, in an earlier time in my career. But now I have it a rule to keep it off. I think the only thing that I have is my calendar, so no email either. Like, that is something that I, like, don't like to check on my personal time. Yeah, so it really just is calendar just in case I'm, like, out in the morning and need to be, like, oh, when is my first meeting?
But [laughs] I will say that the one kind of silly thing is that I also refuse to sign into my Google account for work. So, I just have the calendar, like, added to my personal calendar but all the events are private. So, I can't actually see what the events are [laughs]. I just know that I have something going on at, like, 10:00 a.m. So, I got to make sure I'm back home by then [laughs], which is not so ideal. But at the risk of being signed in and having other things bleed into my personal devices, I'm just living with that for now [laughs].
JOËL: What I'm hearing is that I could put some mystery events on your calendar, and you would have a fun surprise in the morning because you wouldn't know what it is.
STEPHANIE: Yeah, that is true [laughs]. If you put, like, a meeting at, like, 8:00 a.m., [laughs] then I'm like, oh no, what's this? And then I arrive, and it's just, like [laughs], a fun prank meeting.
So, you know, you were talking about how you were at the conference this week. And I'm wondering, how connected were you to work life?
JOËL: Uh, not very. I tried to be very present in the moment at the conference. So, I'm, you know, connected to all the other thoughtboters who were there and connecting with the attendees. I do have Slack on my phone in case I do need to check it for something. There was a little bit of communication that was going on for different things regarding the conference, so I did check in for that. But otherwise, I tried to really stay focused on the in-person things that were happening.
I'm not doing any client work during those days that I'm at RubyConf, and so I don't need to deal with anything there. I had my thoughtbot laptop with me because that's what I used to give my presentation. But once the presentation was done, I closed that laptop and didn't open it again, and, honestly, that felt kind of good.
STEPHANIE: Yeah, that is really nice. I'm the same way, where I try to be pretty connected at conferences, and, like, I will actually redownload Slack sometimes just for, like, coordinating purposes with other folks who are there. But I think I make it pretty clear that I'm, like, away. You know, like, I'm not actually...like, even though I'm on work time, I'm not doing any other work besides just being present there.
JOËL: So, you mentioned the idea of work time. Do you have, like, a pretty strict boundary between personal time and work time and, like, try not to allow either to bleed into each other?
STEPHANIE: Yeah. I can't remember if I've mentioned this on the show. I think I have, but I'm going to again because one of my favorite things that I picked up from The Bike Shed back when Chris Toomey and Steph Viccari were hosting the show is Chris had, like, a little ritual that he would do every day to signal that he was done with work. He would close his laptop and say, "Schedule shutdown complete," I think.
And I've started adopting it because then it helps me be like, I'm not going to reopen my laptop after this because I have said the words. And even if I think of something that I maybe need to add to my to-do list, I will, instead of opening my computer and adding to my, like, whatever digital to-do list, I will, like, write it down on a piece of paper instead for the sake of, you know, not risking getting sucked back into, you know, whatever might be going on after the time that I've, like, decided that I need to be done.
JOËL: So, you have a very strict divisioning between work time and personal time.
STEPHANIE: Yeah, I would say so. I think it's important for me because even when I take time off, you know, sometimes folks might work a half day or something, right? I really struggle with having even a half day feel like, once I'm done with work, having that feel like okay, like, now I'm back in my personal time. I'd much prefer not working the entire day at all because that is kind of the only way that I can feel like I've totally reclaimed that time.
Otherwise, it's like, once I start thinking about work stuff, it's like I need a mental boundary, right? Because if I'm thinking about a work problem, or, like, an interaction or, like, just anything, it's frustrating because it doesn't feel like time in my own brain [laughs] is my own.
What do work and personal time boundaries look like for you?
JOËL: I think it's evolved over time. Device usage is definitely a little bit more blurry for me. One thing that I have started doing since we've gone fully remote as the pandemic has been winding down and, you know, you can do things, but we're still working from home, is that more days than not, I work from home during the day, and then I leave my home during the evening. I do a variety of social activities. And because I like to be sort of present in the moment, that means that by being physically gone, I have totally disconnected because I'm not checking emails or anything like that.
Even though I do have thoughtbot email on my phone, Gmail allows me to, like, log into my personal account and my thoughtbot account. I have to, like, switch between the two accounts, and so, that's, like, more work than I would want. I don't have any notifications come in for the thoughtbot account. So, unless I'm, like, really wanting to see if a particular email I'm waiting for has come in, I don't even look at it, ever. It's mostly just there in case I need to see something.
And then, by being focused in the moment doing social things with other people, I don't find too much of a temptation to, like, let work life bleed into personal life. So, there's a bit of a physical disconnect that ends up happening by moving out of the space I work in into leaving my home.
STEPHANIE: Yeah. And I'm sure it's different for everyone. As you were saying that, I was reminded of a funny meme that I saw a long time ago. I don't think I could find it if I tried to search for it. But basically, it's this guy who is, you know, sitting on one side of the couch, clearly working. And he's kind of hunched over and, like, typing and looking very serious.
And then he, like, closes his laptop, moves over, like, just slides to the other side of the couch, opens his laptop. And then you see him, like, lay back, like, legs up on the coffee table. And it's, like, work computer, personal computer, but it's the same computer [laughs]. It's just the, like, how you've decided like, oh, it's time for, you know, legs up, Netflix watching [laughs].
JOËL: Yeah. Yeah. I'm curious: do you use your thoughtbot computer for any personal things? Or is it just you shut that down; you do the closing ritual, and then you do things on a separate device?
STEPHANIE: Yeah, I do things on a separate device. I think the only thing there might be some overlap for are, like, career-related extracurriculars or just, like, development stuff that I'm interested in doing, like, separate from what I am paid to do. But that, you know, kind of overlaps a little bit because of, like, the tools and the stuff I have installed on my computer. And, you know, with our investment time, too, that ends up having a bit of a crossover.
JOËL: I think I'm similar in that I'll tend to do development things on my thoughtbot machine, even though they're not necessarily thoughtbot-related, although they could be things that might slot into something like investment time.
STEPHANIE: Yeah, yeah. And it's because you have all your stuff set up for it. Like, you're not [laughs] trying to install the latest Ruby version on two different machines, probably [laughs].
JOËL: Yeah. Also, my personal device is a Windows machine. And I've not wanted to bother learning how to set that up or use the Windows Subsystem for Linux or any of those tools, which, you know, may be good professional learning activities. But that's not where I've decided to invest my time.
STEPHANIE: That makes sense. I had an interesting conversation with someone else today, actually, about devices because I had mentioned that, you know, sometimes I still need to incorporate my personal devices into work stuff, especially, like, two-factor authentication. And specifically on my last client project...I have a very old iPhone [laughs]. I need to start out by saying it's an iPhone 8 that I've had for, like, six or seven years. And so, it's old.
Like, one time I went to the Apple store, and I was like, "Oh, I'm looking for a screen protector for this." And they're like, "Oh, it's an iPhone 8. Yikes." [laughs] This was, you know, like, not too long ago [laughs]. And the multi-factor authentication policy for my client was that, you know, we had to use this specific app. And it also had, like, security checks. Like, there's a security policy that it needed to be updated to the latest iOS. So, even if I personally didn't want to update my iOS [laughs], I felt compelled to because, otherwise, I would be locked out of the things that I needed to do at work [laughs].
JOËL: Yeah, that can be a challenge sometimes when you're adding work things to personal devices, maybe not because it's convenient and you want to, but because you don't have a choice for things like two-factor auth.
STEPHANIE: Yeah, yeah. And then the person I was talking to actually suggested something I hadn't even thought about, which is like, "Oh, you know, if you really can't make it work, then, like, consider having that company issue another device for you to do the things that they're, like, requiring of you." And I hadn't even thought of that, so... And I'm not quite at the point where I'm like, everything has to be, like, completely separate [laughs], including two-factor auth. But, I don't know, something to consider, like, maybe that might be a place I get to if I'm feeling like I really want to keep those boundaries strict.
JOËL: And I think it's interesting because, you know, when you think of the kind of work that we do, it's like, oh, we work with computers, but there are so many subfields within it. And device management and, just maybe, corporate IT, in general, is a whole subfield that is separate and almost a little bit alien to me.
I feel like, as a software developer, I'm just aware of it a little bit...like, I've read a couple of articles around it...and this was, you know, years ago, when the trend called Bring Your Own Device was starting. So, people who want to say, "Hey, I want to use my phone. I want to have my work email on my phone." But then does that mean that potentially you're leaking company memos and things? So, how do you secure that kind of thing? And everything that IT had to think through in order to allow that, the pros and cons.
So, I think we're just kind of, as users of that system, touching the surface of it. But there's a lot of thought and discussion that, as an industry, the kind of corporate IT folks have gone through to struggle with how to balance a lot of those things.
STEPHANIE: Yeah, yeah. I bet there's a lot of complexity or nuance there. I mean, we're just talking about, like, ways that we do or don't mix work and personal life. And for that kind of work, you know, that's, like, the job is to think really thoroughly about how people use their devices and what should and shouldn't be permissible.
The last thing that I wanted to kind of ask about in terms of device management or, like, work and personal intermixing is the idea of being on call and your device being a way for work to reach you and that being a requirement, right? I feel very lucky to obviously not really be in that position. As consultants, like, we're not usually so embedded into a team that we're then brought into, like, an on-call rotation, and I think that's good for me. Like, I don't think that that is something I'd be interested in doing anytime soon. Do you have any experience with that?
JOËL: I have not been on a project where I've had to be on call, and I think that's generally true for most of us at thoughtbot who are doing software development. I know those who are doing more kind of platformy SRE-type things are on call. And, in fact, we have specifically hired people in different regions around the world so that we can provide 24-hour coverage for that kind of thing.
STEPHANIE: Yeah. And I imagine kind of like what we're talking about with work device management looks even different for that kind of role, where maybe you do need a lot more access to things, like, wherever you might be.
JOËL: And maybe the answer there is you get issued a work-specific device and a work phone or something like that, or an old-school work pager.
STEPHANIE: [laughs]
JOËL: PagerDuty is not just a metaphoric thing. Back in the day, they used actual pagers.
STEPHANIE: Yeah, that would be very funny.
JOËL: So yeah, I can't speak to it from personal experience, but I could imagine that maybe some of the dynamics there might be a little bit different. And, you know, for some people, maybe it's fine to just have an app on your phone that pings you when something happens, and you have to be on call. And you're able to be present while waiting, like, in case you get pinged, but also let it go while you're on call. I can imagine that's, like, a really weird kind of, like, shadow, like, working, not working experience that I can't really speak to because I have not been in that position.
STEPHANIE: Yeah. As you were saying that, I also had the thought that, like, our ability to step away from work and our devices is also very much dependent on, like, a company culture and those types of factors, right? Where, you know, it is okay for me to not be able to look at that stuff and just come back to it Monday morning, and I am very grateful [laughs] for that. Because I recognize that, like, not everyone is in that position where there might be a lot more pressure or urgency to be on top of that. But right now, for this time in my life, like, that's kind of how I like to work.
JOËL: I think it kind of sits at the intersection of a few different things, right? There's sort of where you are personally. It might be a combination, like, personality and maybe, like, mental health, things like that, how you respond to how sharp or blurry those lines between work and personal life can be.
Like you said, it's also an element of company culture. If there's a company culture that's really pushing to get into your personal life, maybe you need firmer boundaries. And then, finally, what we spent most of this episode talking about: technical solutions, whether that's, like, physically separating everything such that there are two devices. And you close down your laptop, and you're done for the day. And whether or not you allow any apps on your personal phone to carry with you after you leave for the day.
So, I think at the intersection of those three is sort of how you're going to experience that, and every person is going to be a little bit different. Because those three...I guess I'm thinking of a Venn diagram. Those three circles are going to be different for everyone.
STEPHANIE: Yeah, that makes complete sense.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.
Stephanie interviews Edward Loveall, a former thoughtbotter, now software developer at Relevant Healthcare.
Part of their discussion centers around Edward's blog post on the tech industry's over-reliance on GitHub. He argues for the importance of exploring alternatives to avoid dependency on a single platform and encourages readers to make informed technological choices. The conversation broadens to include how to form opinions on technology, the balance between personal preferences and team decisions, and the importance of empathy and nuance in professional interactions. Both Stephanie and Edward highlight the value of considering various perspectives and tools in software development, advocating for a flexible, open-minded approach to technology and problem-solving in the tech industry.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. And today, I'm joined by a very special guest, a friend of the pod and former thoughtboter, Edward Loveall.
EDWARD: Hello, thanks for having me.
STEPHANIE: Edward, would you share a little bit about yourself and what you're doing these days?
EDWARD: Yes, I am a software developer at a company called Relevant Healthcare. We do a lot of things, but maybe the high-level summary is we take very complicated medical data and help federally funded health centers actually understand that data and improve their population's health, which is really fun and really great.
STEPHANIE: Awesome. So, Edward, what is new in your world?
EDWARD: Let's see, this weekend...I live in a dense city. I live in Cambridge, Massachusetts, and it's pretty dense there. And a lot of houses are very tightly packed. And delivery drivers struggle to find the numbers on the houses sometimes because A, they're old and B, there are many of them.
And so, we put up house numbers because I live in, like, a three-story kind of building, but there are two different addresses in the same three stories, which is very weird. And so [laughs], delivery drivers are like, "Where is number 10 or 15?" or whatever. And so, there's two different numbers. And so, we finally put up numbers after living here for, like, four years [chuckles]. So, now, hopefully, delivery drivers in the holiday busy season will be able to find our house [laughs].
STEPHANIE: That's great. Yeah, I have kind of a similar problem where, a lot of the time, delivery folks will think that my house is the big building next door. And the worst is when they drop off my packages at the building next door, inside the little, like, entryway that is locked for people who don't live there. And so, I will see my package in the window and, you know, it has my name on it. It has, like, my address on it.
And [laughs] some strategies that I've used is leaving a note on the door [laughter] that is, like, "Please redeliver my package over there," and, like, I'll draw an arrow to the direction of my house. Or sometimes I've been that person to just, like, buzz random [laughter] units and just hope that they, like, let me in, and then I'll grab my package. And, you know, if I know the neighbors, I'll, like, try to apologize the next time I see them. But sometimes I'll just be like, I just need to get my package [laughs].
EDWARD: You're writing documentation for those people working out in the streets.
STEPHANIE: Yeah. But I'm glad you got that sorted.
EDWARD: Yeah. What about you? What's new in your world?
STEPHANIE: Well, I wanted to talk a little bit about a thing that you and I have been doing lately that I have been enjoying a lot. First of all, are you familiar with the group chat trend these days? Do you know what I'm talking about?
EDWARD: No.
STEPHANIE: Okay. It's basically this idea that, like, everyone is just connecting with their friends via a group chat now as opposed to social media. But as a person who is not a big group chat person, I can't, like, keep up with [chuckles], like, chatting with multiple people [laughter] at once. I much prefer, like, one-on-one interaction.
And, like, a month ago, I asked you if you would be willing to try having a shared note, like, a shared iOS note that we have for items that we want to discuss with each other, you know, the next time we talk on the phone, or, I don't know, things that are, like, less urgent than a text message would communicate but, like, stuff that we don't want to forget.
EDWARD: Yeah. You're, like, putting a little message in my inbox and vice versa. And yeah, we get to just kind of, whenever we want, respond to it, or think about it, or use it as a topic for a conversation later.
STEPHANIE: Yeah. And I think it is kind of a playbook from, like, a one-on-one with a manager. I know that that's, like, a strategy that some folks use. But I think it works well in the context of our friendship because it's just gotten, like, richer over time. You know, maybe in the beginning, we're like, oh, like, I don't know, here are some random things that I've thought about. But now we're having, like, whole discussions in the note [laughter]. Like, we will respond to each other, like, with sub-bullets [laughs]. And then we end up not even needing to talk about it on the phone because we've already had a whole conversation about it in the note.
EDWARD: Which is good because neither of us are particularly brief when talking on the phone. And [laughs] we only dedicate, like, half an hour every two weeks. It sort of helps clear the decks a little.
STEPHANIE: Yeah, yeah. So, that's what I recommend. Try a shared note for [laughs] your next friendship hangout.
EDWARD: Yeah, it's great. I heartily recommend it.
STEPHANIE: So, one of the things that we end up talking about a lot is various things that we've been reading about tech on the web [laughs]. And we share with each other a lot of, like, blog posts, or articles, various links, and recently, something of yours kind of resurfaced. You wrote a blog post about GitHub a little while ago about how, you know, as an industry, we should make sure that GitHub doesn't become our only option.
EDWARD: Yeah, this was a post I wrote, I think, back in May, or at least earlier this year, and it got a bunch of traction. And it's a somewhat, I would say, controversial article or take. GitHub just had their developer conference, and it resurfaced again.
And I don't have a habit of writing particularly controversial articles, I don't think. Most of my writing history has been technical posts like tutorials. Like, I wrote a whole tutorial on how to write SQL, or I did write one about how to communicate online. But I wasn't, like, so much responding to, like, a particular person's communication or a company's communication.
And this is the first big post I've written that has been a lot more very heavily opinionated, very, like, targeted at a particular thing or entity, I guess you'd say. It's been received well, I think, mostly, and I'm proud of it. But it's a different little world for me, and it's a little scary, honestly.
STEPHANIE: Yeah, I hear that, having an opinion [laughs], a very strong and maybe, like, a less popular opinion, and publishing that for the world. Could you recap what the thesis of it is for our listeners?
EDWARD: Yeah, and I think you did a great job of it, too. I see GitHub or really any singular piece of technology that we have in...I'll say our stack with air quotes, but it's, you know, all the tools that we use and all the things that we use. It's a risk if you only have one of those things, let's say GitHub. Like, if the only way you know how to contribute to a code repository with, you know, 17 people all committing to that repository, if the only way you know how to do that is a pull request and GitHub goes away, and you don't have pull requests anymore, how are you going to contribute to code?
It's not that you couldn't figure it out, or there aren't multiple ways or even other pull request equivalents on other sites. But it is a risk to rely on one company to provide all of the things that you potentially need, or even many of the things that you potentially need, without any alternatives.
So, I wanted to try to lay out A: those risks, and B: encourage people to try alternatives, to say that GitHub is not necessarily bad, although they may not actually fit what you need for various reasons, or someone else for various reasons. But you should have an alternative in your back pocket so that in case something changes, or you get locked out, or they go away, or they decide to cancel that feature, or any number of other scenarios, you have greatly diminished that risk. So, that's the main thrust of the post.
STEPHANIE: Yeah, I really appreciated it because, you know, I think a lot of us probably take GitHub for granted [laughs]. And, you know, every new thing that they kind of add to the platform is like, oh, like, cool, like, I can now do this. In the post, you kind of lay out all of the different features that GitHub has rolled out over the last, you know, couple of years. And when you see it all like that, you know, like, in addition to being, like, a code repository, you now have, like, GitHub Actions for CI/CD, you know, you can deploy static pages with it.
It now has, like, an in-browser editor, and then, you know, Copilot, which, like, the more things that they [laughs] roll out, the more it's becoming, like, the one-stop shop, right? That, like, do all of your work here. And I appreciated kind of, like, seeing that and being like, oh, like, is this what I want?
EDWARD: Right. Yeah, exactly. Yeah. And you mentioned a bunch. There's also issues and discussions. You mentioned their in-browser editor. But so many people use VS Code, which, while it was technically made by Microsoft, it's based on Electron, which was developed at GitHub. And GitHub even, like, took away their other Electron-based editor, Atom. And then now officially recommends VS Code.
And everything from deploying all the way down to, like, thinking about and prioritizing features and editing the code and all of that pretty much could happen on GitHub. I think maybe the only thing they don't currently do is host non-static sites, maybe [laughs]. That's maybe about it. And who knows? Maybe they're working on that; as far as I know, they are, so...
STEPHANIE: Yeah, absolutely. You also mentioned one thing that I really liked about the content in the post was that you talked about alternatives to GitHub, even, like, alternatives to all of the different features that we mentioned. I guess I'm wondering, like, what were you hoping that a reader from your blog post, like, what they would get out of reading and, like, what they would take away from kind of sharing your opinion?
EDWARD: I wanted to try to meet people where I think they might be because I think a lot of people do use GitHub, and they do take it for granted. And they do sort of see it as this thing that they must use, or they want to use even, and that's fine. That's not necessarily a bad thing. I want them to see those alternatives and have at least some idea that there is something else out there, that GitHub doesn't become just not only the default, but, like, the only thing.
I mean, to just [chuckles] re-paraphrase the title of the post, I want to make sure GitHub does not become the only option, right? I want people to realize that there are other options out there and be encouraged to try them. And I have found, for me, at least, the better way to do that is not to only focus on, like, hey, don't use GitHub. Like, I hope people did not come away with only that message or even that message at all. But that it is more, hey, maybe try something else out and to encourage you to try something out.
I'm going to A: share the risks with you and B: give you some actual things to try. So, I talk about the things I'm using and some other platforms and different paradigms to think about and use. So, I hope they take those. We'll see what happens in the next, you know, months or years. And I'll probably never know if it was actually just from me or from many other conversations, and thoughts, and articles, and all that kind of stuff. But that's what it takes, so...
STEPHANIE: Yeah. I think the other fun thing about kind of the, like, meta-conversation we're having about having an opinion and, like, sharing it with the world is that you don't even really say like, "This is better than GitHub," or, like, kind of make a statement about, like, you shouldn't use...you don't even say, "You shouldn't use GitHub," right? The message is, like, here are some options: try it out, and, like, decide for yourself.
EDWARD: Yeah, exactly. I want to empower people to do that. I don't think it would have been useful if I'd just go and say, "Hey, don't do this." It's very frustrating to me to see posts that are only negatives. And, honestly, I've probably written those posts, like, I'm not above them necessarily.
But I have found that trying to help people do what you want them to do, as silly and maybe obvious as that sounds, is a more effective way to get them to do what you want them to do [laughs], as opposed to say, "Hey, stop doing the thing I don't want you to do," or attack their identity, or their job, or some other aspect of their life. Human behavior does not respond well to that generally, at least in my experience.
Like, having your identity tied up in a tool or a platform is, unfortunately, pretty common in, like, a tech space. Like, oh, like, Ruby on Rails is the best piece of software or something like that. And it's like, well, you might like it, and that might be the best thing for you. And personally, I really like Ruby on Rails. I think it does a great job of what it does. But as an example, I would not use Ruby on Rails to maybe build an iOS app. I could; I think that's possible, but I don't think that's maybe the best tool for that job. And so, trying to, again, meet people where they are.
STEPHANIE: I guess it kind of goes back to what you're saying. It's like, you want to help people do what they are trying to do.
EDWARD: Yeah. Maybe there's a little paternalistic thinking, too, of, like, what's good for the industry, even if it feels bad for you right now. I don't love that sort of paternalistic thinking. But if it's a real risk, it seems worth at least addressing or pointing out and letting people make that decision for themselves.
STEPHANIE: Yeah, absolutely. I am actually kind of curious about how do you, like, decide something for yourself? You know, like, how do you form your own opinion about technology? I think, yeah, like, a lot of people take GitHub for granted. They use it because that's just what's used, and that may or may not be a good reason for doing so.
But that was a position I was in for a long time, right? You know, especially when you're newer to the industry, you're like, oh, well, this is what the company uses, or this is what, like, the industry uses. But, like, how do you start to figure out for yourself, like, do I actually like this? Does this help me meet my goals and needs? Is it doing what I want it to be doing? Do you have any thoughts about that?
EDWARD: Yeah. I imagine most people listening to this have tried lots of different pieces of software and found them great, or terrible, or somewhere in between. And I don't think there's necessarily one way to do this. But I think my way has been to try lots of things, unsurprisingly, and evaluate them based on the thing that I'm trying to do.
Sometimes I'll go into a new field, or a new area, or a new product, or whatever, and you just sort of use what's there, or what people have told you about, or what you heard about last, and that's fine. That's a great place to start, right? And then you start seeing maybe where it falls down, or where it is frustrating or doesn't quite meet those needs. And it takes a bit of stepping back.
Again, I don't think I'm, like, going to blow anyone's mind here by this amazing secretive technique that I have for, like, discovering good software. But it's, like, sitting there and going through this iterative loop of try it, evaluate it. Be honest with, is it meeting or not meeting some particular needs? And then try something else. Or now you have a little more info to arm yourself to get to the next piece that is potentially good.
As you go on in your career and you've tried many, many, many pieces of things, you start to see patterns, right? And you know, like, oh, it's not like, oh, this is how I make websites. It's like, ah, I understand that websites are made with a combination of HTML, and CSS, and JavaScript and sometimes use frameworks. And there's a database layer with an ORM. And you start to understand all the different parts. And now that you have those keywords and those pieces a little more under your control or you have more experience with them, you can use all that experience to then seek out particular pieces.
I'm looking for an ORM that's built with Rust because that's the thing I need to do it for; that's the platform I need to work with. And I needed to make sure that it supports MySQL and Postgres, right? Like, it's a very targeted thing that you wouldn't know when you're starting out. But over years of experience, you understand the difference and the reasons why you might need something like that.
And sometimes it's about kind of evaluating options and maybe making little test projects to play around with those things or side projects. That's why something like investment time or 20% time is so helpful and useful for that if you're the kind of person who, you know, enjoys programming on your own in your own free time like I am. And that's also a great time to do it, although it's certainly not required. And so, that's kind of how I go through and evaluate whatever tool it is that I need.
For something maybe more professional or higher stakes, there's a little more evaluation upfront, right? You want to make sure you make the right choice before you spend thousands of hours using it and potentially regretting [laughs] it and having to roll it back, causing even more thousands of hours of time. So, there's obviously some scrutiny there. But, again, that also takes experience and understanding the kind of need that you have.
So, yeah, it's kind of a trade-off of, like, your time, and your energy, and your experience, and your interest. You will have many different inputs from colleagues, from websites, from posts on the internet, from Twitter, or fediverse-type kind of blogging and everything in between, right? So, you take all that in, and you try a bunch of stuff, and you come out on the other side, and then you do it again.
STEPHANIE: Yeah, it sounds like you really like to just experiment, and I think that's really great. And I actually have to say that I am not someone who likes to do that [laughs]. Like, it's not where I focus a lot of my time. And it's why I'm, like, glad I'm friends with you, first of all.
EDWARD: [laughs]
STEPHANIE: But also, I've realized I'm much more of, like, a gatherer in terms of information and opinions. Like, I like hearing about other people's experience to then, like, help inform an opinion that I might develop myself. And, you know, it's not to say that, like, I am, like, oh yeah, like, so and so said this, and so, therefore, yeah, I completely believe what they have to say.
But as someone who does not particularly want to spend a ton of my time trying out things, it is really helpful to know people who do like to do that, know people who I do trust, right? And then kind of like you had mentioned, just, like, having all these different inputs.
And one thing that has changed for me with more experience is, previously, a lot of, like, the basis of what I thought was the quote, unquote, "right way" to develop software was, like, asking, like, other people and, you know, their opinions becoming my own. And, you know, at some point, though, that, like, has shifted, right? Where it's like, oh, like, you know, I remember learning this from so and so, and, like, actually, I think I disagree now.
Or maybe it's like, I will take one part of it and be like, yeah, I really like test-driven development in this particular way that I have figured out how I do it, but it is different still from, like, who I learned it from. And even though, like, that was kind of what I thought previously as, like, oh yeah, like, this is the way that I've adopted without room for adjustment.
I think that has been a growth, I guess, that I can point to and be like, oh yeah, like, I once was in a position where maybe opinions weren't necessarily my own. But now I spend a lot more time thinking about, like, oh, like, how do I feel about this? And I think there is, like, some amount of self-reflection required, right? A lot, honestly. Like, you try things, and then you think about, like, did I like that? [laughs] One without the other doesn't necessarily fully informed opinion make.
EDWARD: Yeah, absolutely. I mean, I'm really glad you brought up that, like, you've heard an opinion, or a suggestion, or an idea from somebody, and you kind of adopt it as your own for a little bit. I like to think of it as trying on ideas like you try on clothing. Or something like, let me try on this jacket. Does this fit? And maybe you like it a little bit. Or maybe you look ridiculous, and it's [laughs] not quite for you. And you don't feel like it's for you. But you have to try. You have to, like, actually do it.
And that is a completely valid way to, like, kick-start some of those opinions: getting input from friends or colleagues, or just the world around you. And, like, hearing those things and trying them is 100% valid. And I'm glad you mentioned that because if I mentioned it, I think I kind of skipped over it or went through it very quickly. So, absolutely. And you're talking about how you just take, like, one part of it maybe. That nuance is, I think, really critical to that whole thought, too.
Everything works differently for different people. And every tool is good for other, like, different jobs. Like, it will be like saying a hammer is the best tool, and it's, like, well, it's a good tool for the right thing. But, like, I wouldn't use a hammer to, like, I don't know, level the new house numbers I put on my house, right? But I might use them to, like, hit the nail to get them in. So, it's a silly analogy, but, like, there is always nuance and different ways to apply these different tools and opinions.
STEPHANIE: I like that analogy. I think it would be really funny if there was someone out there who claimed that the hammer is the best tool ever invented [laughs].
EDWARD: Oh, I'm sure. I'm sure there is, you know. I'm not going to use a drill to paint my house, though [laughs].
STEPHANIE: That's a fair point, and you don't have to [chuckles].
EDWARD: Thank you [laughs].
STEPHANIE: But, I guess, to extend this thought further, I completely and wholeheartedly agree that, like, yeah, everyone gets to decide for themselves what works for them. But also, we work in relation with others. And I'm very interested in the balance of having your own ideas and opinions about tooling, software practices, like, whatever, and then how to bring that back into, like, working on a team or, like, working with others.
EDWARD: Yeah. Well, I don't know if this is exactly what you're asking, but it makes me think of: you've gone off; you've discovered a whole bunch of stuff that you think works really well for you. And then you go to work, or you go to a community that is using a very different way of working, or different tools, or different technologies.
That can be a piece of friction sometimes of, like, "Oh my gosh, I love Ruby on Rails. It's the best." And someone else is like, "I really, really don't like Ruby on Rails for reasons XYZ. And we don't use it here." And that can be really tough and, honestly, sometimes even disheartening, depending on how strongly you feel about that tool and how strongly they feel about their tools.
And as a young developer many years ago, I definitely had a lot more of my identity wrapped up in the tools and technologies that I used. And that has been very useful to try to separate those two. I don't claim to be perfect at it or done with that work yet. But the more I can step away and say, you know, like, this is only a tool. It is not the tool. It is not the best tool. It is a tool that can be very effective at certain things. And I've found, at least right now, the more useful thing is to get to the root of the problem you're trying to solve and make sure you agree with everybody on that premise.
So, yes, you may have come from a world where fast iteration and a really fluent language interface like Ruby has and a really fast iteration cycle like Rails has, is, like, the most important need to be solved because other things have been solved. You understand what you're doing for your product, or maybe you need to iterate quickly on that product. You've figured out an audience. You're getting payroll. You're meeting all that as a business.
But then you go into a business that's potentially, like, let's say, much less funded. Or they have their market fit, and now they're working on, like, extreme performance optimization, or they're working on getting, like, government compliance, or something like that. And maybe Rails is still great. This is maybe a...the analogy may fall apart here. But let's pretend it isn't for some reason. You have to agree that, hey, like, yes, we've solved problem X that Rails really helps you solve. And now we're moving on to problem Y, and Rails may not help you solve that, or whatever technology you're using may not help you solve that.
And I've found it to be much more useful to stop worrying about the means, and the tools, the things in between, and worry about the ends, worry about the goal, worry about the problems you're actually trying to solve. And then you can feel really invested in trying to solve that problem together as a group, as a team, as a community. I've found that to be very helpful.
And I would also like to say it is extremely difficult to let some of that stuff go. It takes a lot of work. I see you nodding along. Like, it's really, really hard. And, like I said, I'm not totally done with it either. But that's, I think, it's something I'm really working on now and something I feel really strongly about.
STEPHANIE: Yeah. You mentioned the friction of, like, working in an environment where there are different opinions, which is, you know, I don't know, just, like, reality, I guess [laughs].
EDWARD: Human nature.
STEPHANIE: Yeah, exactly. And one thing I was thinking about recently was, like, okay, like, so someone else maybe made a decision about using a type of technology or, like, made a decision about architecture before my time or, like, above me, or whatever, right? Like, I wasn't there, and that is okay. But also, like, how do I maintain what I believe in and hold fast to, like, my opinions based on my value system, at least, without complaining? [laughs]
Because I've only seen that a little bit before, right? When it just becomes, like, venting, right? It's like, ugh, like, you know, I have seen people who are coming from maybe, like, microservices or more of a JavaScript world, and they're like, ugh, like, what is going on with Rails? Like, this sucks [laughs].
And one thing I've been trying lately is just, like, communicating when I don't agree that something's a great idea. But also, like, acknowledging that, like, yeah, but this is how it is for this team, and I'm also not in a position to change it. Or, like, I don't feel so strongly about it that I'm like, "Hey, we should totally rethink using this, like, background job [laughs] platform."
But I will be like, "Hey, like, I don't like this particular thing about it. And, you know, maybe here are some things that I did to mitigate whatever thing I'm not super into," or, like, "If I had more time, this is what I would do," and just putting it out there. Sometimes, I don't get, like, engagement on it. But it's a good practice for me to be, like, this is how I can still have opinions about things, even if I'm not, at least in this particular moment, in a position to change anything.
EDWARD: It sounds to me like you in, at least at the lowest level, like, you want to be acknowledged, and you want to, like, be heard. You want to be part of a process. And yes, it doesn't always go with Stephanie's initial thought, or even final thought, or Edward's final thought. But it is very helpful to know that you are heard and you are respected. And it isn't someone just, like, completely disregarding any feeling that you have.
As much as we like to say programming is this very, like, I don't know, value neutral, zero emotion kind of job, like, there's tons of emotion in this job. We want to do good things for the world. We want our technology to serve the people, ultimately, at least I do, and I know you do. But we sometimes disagree on the way to do that.
And so, you want to make sure you're heard. And if you can't get that at work, like, and I know you do this, but I would encourage anyone listening out there to, like, get a buddy that you can vent to or get somebody that you can express, and they will hear you. That is so valuable just as a release, in some ways, to kind of get through what you need to get through sometimes. Because it is a job, and you aren't always the person that's going to make the decisions.
And, honestly, like, you do still have one decision left, which is you can go work somewhere else if it really is that bad. And, like, it's useful to know that you are staying where you are because you appreciate the trade-offs that you have: a steady paycheck, or the colleagues that you work with, or whatever. And that's fine. That's an okay trade-off. And at some point, you might want to make a different trade-off, and that's also fine. We're getting real managery and real here. But I think it's useful. Like you said, this can be a very emotional career, and it's worth acknowledging that.
STEPHANIE: Yeah, you just, you know, raised a bunch of, like, very excellent points. Yeah, at the end of the day, like, you know, you can do your best to, like, propose changes or, like, introduce new tooling and, like, see how other people feel about it. But, like, yeah, if you fundamentally do not enjoy working with a critical tool that, you know, a lot of the foundation of the work that you're doing day to day is built off of, then maybe there is a place where, like, another company that's using tools that you do feel excited or, like, passionate or, like, are a better alignment with what you hope to be doing.
Kind of just going back to that theme that we were talking about earlier, like, everyone gets to decide for themselves, right? Like, the tools to help them do what they want to be doing.
EDWARD: And you could even, like, reframe it for yourself, where instead of it being about the tools, maybe it's about the problem. Like, you start being more invested in, like, the problem that you're solving and, okay, maybe you don't want to use microservices or whatever, but, like, maybe you can get behind that if you realign yourself. The thing you're trying to solve is not the tool. The thing you're trying to solve is the problem. And that can be a useful, like, way to mitigate that or to, like, help yourself feel okay about the thing, whatever that is.
STEPHANIE: Yeah. Now, how do I have this conversation with everyone [laughter] who claims on the internet that X is the solution to all their problems or the silver bullet, [laughs] or whatever?
EDWARD: Yeah, that's tough because there are some very strong opinions on the internet, as I'm sure [laughs] you've observed. I don't know if I have the answer [laughs]. Once again, nuance and indecisions.
I have been currently approaching it from kind of a meta-perspective of, like, if someone says, "X is the best tool," you know, "A hammer is the best tool," right? I'm not going to go write the post that's like, "No, hammer is, in fact, not the best tool. Don't use hammers." I would maybe instead write a post that's like, "Consider what makes the best tool." I've effectively, like, raised up one level of abstraction from, we're no longer talking about is X, or Y, or Z, the best tool? We're talking about how do we even decide that? How do we even think about that?
One post...I'm now just promoting my blog posts, so get ready. But one thing I wrote was this post called And Not But. And I tried to make the case that instead of saying the word but in a sentence, so, like, yeah, yeah, we might want to use hammers, but we have to use drills or whatever. I'm trying to make the case that you can use and instead. So yeah, hammers are really good, and drills are really good in these other scenarios.
And trying to get that nuance in there, like, really, really putting that in there and getting people to, like, feel that better, I think, has been really helpful, for me, certainly to get through. And part of the best thing about writing a blog post is just getting your own thoughts...I mean, it's another way to vent, right? It's getting your own thoughts out somewhere.
And sometimes people respond to them. You'd be surprised who just reaches out and been like, "Hey, yeah, like, I really appreciated that post. That was really great." You weren't trying to reach that person, but now you have another connection. So, a side benefit for writing blog posts [inaudible 30:17] do it, or just even getting your thoughts out via a podcast, via a video, whatever. So, I've kind of addressed that.
I also wrote a post when I worked at thoughtbot called Empathy Online. And that came out of, like, frustration with seeing people being too divisive or, in my opinion, unempathetic or inconsiderate. And instead of, again, trying to just say, "Stop it, don't do that," [laughs] but trying to, like, help use what I have learned when communicating in a medium that is kind of inherently difficult to get across emotion and empathy.
And so, again, it's, in some ways, unsatisfying because what you really want to do is go talk to that person that says, "Hammer is the best tool," and say, "No, stop it [laughs]," and, like, slap them on the head or whatever, politely. But I think that probably will not get you very far. And so, if your goal, really, is to change the way people think about these things, I find it way more effective to, like, zoom out and talk about that on that sort of more meta-level and that higher level.
STEPHANIE: Yeah. I liked how you called it, like, a higher level of abstraction. And, honestly, the other thing I was thinking about as you were talking about the, like, divisiveness that opinions can create, there's also some aspect of it, as a reader, realizing that one person sharing their opinion does not take away your ability to have a differing opinion [laughs].
And sometimes it's tough when someone's like, "Tailwind sucks [laughs], and it is a backward step in, you know, how we write CSS," or whatever. Yes, like, sometimes that can be kind of, like, inflammatory. But if you, like, kind of are translating it or, like, reading between the lines, they're just writing about their perspective from the things that they value. And it is okay for you to value different things and, for that reason, have a different perspective on the same thing.
And, I don't know, that has helped me sometimes avoid getting into that, like, headspace of wanting to argue with someone [laughs] on the internet. Or they'll be like, "This is why I am right." [laughs] Now I have to write something and share it on the internet in response [laughs].
EDWARD: There's this idea of the narcissism of minor differences. And I believe the idea is this, like, you know, you're more likely to argue with someone who, like, 90% agrees with you. But you're just, like, quibbling over that last 10%. I mean, one might call it bikeshedding. I don't know if you've heard that phrase.
But the thing that I have often found, too, is that, like the GitHub post, I will get people arguing with me, like, there's the kind of stuff I expected, where it's like, "Oh, but GitHub is really good," and XYZ and that's fine. And we can have that conversation. But it's kind of surprising, and I should have expected it, that people will sometimes be like, "Hey, you didn't go far enough. You should tell people to, like, completely delete their GitHub or, like, you know, go protest in the street." And, like, maybe that's true. I'm not saying it is or isn't.
But I think one thing I try to think about is, in any post, in any argument you're trying to make to convince someone, you're potentially moving someone one step forward, even if there are ten steps to go. But they're never going to make those ten steps if they don't make the first one. And so, you can kind of help them get there. And someone else's post can absolutely take them from step 5 to 6, or 6 to 7, or 7 to 8. And you won't accomplish it all at once, and it's kind of a silly thing to try, and your efforts are probably lost [laughs].
Unfortunately, it's a little bit of preaching to the choir because, like, yeah, the people that are going to respond to, like, the extreme, the end are, like, the people that already get it. And the people that you're trying to convince and move along are not going to get that thing. I do want to say that I could see this being perceived as, like, a very privileged position of, like, if there's some, like, genuine atrocity happening in the world, like, it is appropriate to go to extremes many times and sometimes, and that's fine, and people are allowed to be there. I don't want to invalidate that. It's a really tricky balance.
And I'm trying to say that if your goal is to vent, that's fine. And if your goal is to move people from step 3 to 4, you have to meet people at step 3. And all that's valid and okay to try to help people move in that way. But it is very tricky. And I don't want to invalidate someone who's extremely frustrated because they're at step 10, and no one else is seeing the harm in everybody else not being at step 10. Like, that's an incredibly reasonable place to be and an okay place to be.
STEPHANIE: Yeah, yeah. The other thing you just sparked, for me, is also the, like, power of, yeah, being able to say like, "Yeah, I agree with this 50%, or 60%, or, like, 90%." And also, there's this 10% that I'm like, oh, like, I wish were different, or I wish they'd gone further, or I wish they didn't say that. Or, you know, I just straight up disagree with this step 1 sentence, but the rest of the article, you know, I really related to.
And, like, teasing that apart has been very useful for me, right? Because then I'm no longer like being like, oh, was this post good or bad? Do I agree with it or don't agree with it? It's like, there's room for [laughs] all of it.
EDWARD: Yeah, that's that nuance that, you know, I liked this post, and I did not agree with these two parts of it, or whatever. It's so useful.
STEPHANIE: Well, thanks, Edward, so much for coming on the show and bringing that nuance to this conversation. I feel really excited about kind of what we talked about, and hopefully, it resonates with some of our listeners.
EDWARD: Yeah, I hope so too. I hope I can take them from step 2 to step 3 [laughs].
STEPHANIE: On that note, shall we wrap up?
EDWARD: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël got to do some pretty fancy single sign-on work. And when it came time to commit, he documented the ridiculous number of redirects to give people a sense of what was happening. Stephanie has been exploring Rails callbacks and Ruby debugging tools, using methods like save_callbacks and Kernel.caller, and creating a function call graph to better understand and manage complex code dependencies.
Stephanie is also engaged in an independent project and seeking strategies to navigate the challenges of solo work. She and Joël explore how to find external support and combat isolation, consider ways to stimulate creativity, and obtain feedback on her work without a direct team. Additionally, they ponder succession planning to ensure project continuity after her involvement ends. They also reflect on the unique benefits of solo work, such as personal growth and flexibility. Stephanie's focus is on balancing the demands of working independently while maintaining a connected and sustainable professional approach.
ASCII Sequence Diagram Creator
Callback debugging methods
Kernel.caller
Method.source_location
Building web apps by your lonesome by Jeremy Smith
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I got to do something really fun this week, where I was doing some pretty fancy single sign-on work. And when it came time to commit, I wanted to document the kind of ridiculous number of redirects that happen and give people a sense of what was going on.
And for my own self, what I had been doing is, I had done a sequence diagram that sort of shows, like, three different services that are all talking to each other and where they redirect to each other as they all go through the sequence to sign someone in. And I was like, how could I embed that in the commit message? Because I think it would be really useful context for someone trying to get an overview of what this commit is doing.
And the answer, for me, was, can I get this sequence diagram in ASCII form somewhere? And I found a website that allows me to do this in ASCII art. It's the textart.io/sequence. And that allows me to create a sequence diagram that gets generated as ASCII art. I can copy-paste that into a commit message. And now anybody else who is like, "What is it that Joël is trying to do here?" can look at that and be like, "Oh, oh okay, so, we got these, like, four different places that are all talking to each other in this order. Now I see what's happening."
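For listeners who haven't seen one, a minimal sketch of the kind of ASCII sequence diagram a tool like textart.io/sequence produces might look something like this (the services and steps are made up for illustration, not the ones from Joël's actual commit):

    +-----+            +--------------+            +-----+
    | App |            | Auth Service |            | IdP |
    +-----+            +--------------+            +-----+
       |   GET /login         |                       |
       |--------------------->|                       |
       |                      |   redirect to IdP     |
       |                      |---------------------->|
       |        302 back to App with session token    |
       |<----------------------------------------------|

Because it is plain text, a diagram like this can live directly in a commit message and travels anywhere the commit does.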
STEPHANIE: That's super neat. I love the idea of having it directly in your commit message just because, you know, you don't have to go and find a graph elsewhere if you want to understand what's going on. It's right there for you, for future commit explorers [laughs] trying to understand what was going on in this snippet of time.
JOËL: I try as much as possible to include those sorts of things directly in the commit message because you never know who's reading the commit. They might not have access to some sort of linked resource. So, if I were like, "Hey, go to our wiki and see this link," like, sure, that would be helpful, but maybe the person reading it doesn't have access to the wiki. Maybe they do have access, but they're not on the internet right now, and so they don't have access to the wiki. Maybe the wiki no longer exists, and that's a dead link. So, as much as possible, I try to embed context directly in my commit messages.
STEPHANIE: That's really cool. And just another shout out to ASCII art, you know [laughs], persevering through all the times with our fancy tools. It's still going strong [laughs].
JOËL: Something about text, right?
STEPHANIE: Exactly. I actually also have a diagram graph thing to share about what's new in my world that is kind of in a similar vein. Another thoughtboter and former guest on the show, Sara Jackson, shared in our dev channel about this really cool mural graph that she made to figure out what was going on with callbacks because she was working on, you know, understanding the lifecycle of this model and was running into, like, a lot of complex behavior.
And she linked to a really neat blog post by Andy Croll, too, that included a little snippet sharing a few callback debugging methods that are provided by ActiveRecord. So, basically, you can have your model and just call double underscore callbacks. And it returns a list of all the callbacks that are defined for that model, and I thought that was really neat. So, I played around with it and copy-pasta'd [laughs] the snippet into my Rails console to figure out what's going on with, basically, like, the god object of the app that I work in.
And the first issue I ran into was that it was undefined because it turns out that my application was on an older [laughs] version of Rails than that method was provided on. But, there are more specific methods for the types of callbacks. So, if you are looking specifically for all the callbacks related to a save or a destroy, I think it's save underscore callbacks, right? And that was available on the Rails version I was on, which was, I think, 4. But that was a lot of fun to play around with.
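As a rough sketch of the kind of Rails console session Stephanie describes (the User model and the callback names here are hypothetical, and which helpers are available depends on your Rails version):

    # Newer Rails versions: list every callback chain defined on a model.
    User.__callbacks.keys
    # => [:validate, :save, :create, :update, :destroy, ...]

    # Older versions still expose the per-type chains. Each entry knows whether
    # it's a before/after/around callback and which method (or block) it runs.
    User._save_callbacks.map { |callback| [callback.kind, callback.filter] }
    # => [[:before, :normalize_email], [:after, :enqueue_welcome_email]]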
And then, I ended up chatting with Sara afterwards about her process for creating the diagram after, you know, getting a list of all these methods. And I actually really liked this hybrid approach she took where, you know, she automated some parts but then also manually, like, went through and stepped through the code and, like, annotated notes for the methods as she was traversing them.
And, you know, sometimes I think about, like, wow, like, it would be so cool if this graph just generated automatically, but I also think there is some value to actually creating it yourself. And there's some amount of, like, mental processing that happens when you do that, as opposed to, like, looking at a thing that was just, you know, generated afterwards, I think.
JOËL: Do you know what kind of graph Sara generated? Was it some kind of, like, function call graph, or was it some other way of visualizing the callbacks?
STEPHANIE: I think it was a function call graph, essentially. It even kind of showed a lot of the dependencies, too, because some of the callback functions were quite complicated and then would call other classes. So, there was a lot of, I think, hidden dependencies there that were unexpected, you know, when you think you're just going to create a regular old [laughs] record.
JOËL: Yeah, I've been burned by unexpected callbacks or callbacks that do things that you wouldn't want in a particular context and then creating bad data or firing off external services that you really didn't want, and that can be an unpleasant surprise. I appreciate it when the framework offers debugging tools and methods kind of built-in, so these helpers, which I was not aware of. It's really cool because they allow you to kind of introspect and understand the code that you're going through. Do you have any others like that from Rails or Ruby that you find yourself using from time to time to help better understand the code?
STEPHANIE: I think one I discovered recently was Kernel.caller, which gives you the stack trace wherever you are when executing. And that was really helpful when you're not raising an exception in certain places, and you need to figure out the flow of the code. I think that was definitely a later discovery. And I'm glad to have it in my back pocket now as something I can use in any kind of Ruby code.
JOËL: That can, yeah, definitely be a really useful context to have even just in, like, an interactive console. You're like, wait a minute, where's this coming from? What is the call stack right now?
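A minimal sketch of the Kernel#caller trick Stephanie mentions, dropped into a hypothetical Order model purely for illustration:

    class Order < ApplicationRecord
      def finalize!
        # caller returns the current call stack as "file:line:in `method'"
        # strings, so you can see how execution reached this point without
        # raising an exception. Printing only the nearest frames keeps it readable.
        puts caller.first(5)
        # ... the actual work ...
      end
    end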
STEPHANIE: Do you have any debugging tools or methods that you like to use that maybe are under the radar a little bit?
JOËL: One that I really appreciate that's built into Ruby is the source location method on the method object, so Ruby has a method object. And so, when you're dealing with some sort of method and, like, maybe it got generated programmatically through metaprogramming, or maybe it's coming from a gem or something like that, and you're just like, where is this defined? I'm trying to find it.
If you're in your editor and you're doing stuff, maybe you could run some sort of search, or maybe it has some sort of keyword lookup where you can just find the definition of what's under your cursor. But if you're in an interactive console, you can create a method object for that method name and then call dot source location on it. And it will tell you, here's where it's defined. So, very handy in the right circumstances.
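And a sketch of the Method#source_location lookup Joël describes, again with hypothetical names; the path in the comment is only illustrative:

    # Wrap the method in a Method (or UnboundMethod) object and ask where it
    # was defined -- handy for gem code or metaprogrammed methods that are
    # hard to grep for.
    User.instance_method(:save).source_location
    # => ["/path/to/some/gem/lib/some_file.rb", 42]

    # Methods implemented in C return nil:
    "hello".method(:upcase).source_location
    # => nil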
STEPHANIE: Awesome. That's a great tip.
JOËL: Of course, one of the most effective debugging tools is having a pair, having somebody else work with you, but that's not always something that you have. And you and I were talking recently about what it's like to work solo on a project. Because you're currently on a project, you're solo, at least from the thoughtbot side of things. You're embedding with a team, with a client. Are you working on kind of, like, a solo subtask within that, or are you still kind of embedding and interacting with the other teammates on a regular basis?
STEPHANIE: Yeah. So, the past couple of weeks, I am working on more of a solo initiative. The other members of my client team are kind of ramping up on some other projects for this next quarter. And since my engagement is ending soon, I'm kind of left working on some more residual tasks by myself. And this is new for me, actually. I've not really worked in a super siloed by-myself kind of way before. I usually have at least one other dev who I'm, like, kind of partnering up with on a project, or an epic, or something like that.
And so, I've had a very quiet week where no one is, you know, kind of, like, reaching out to me and asking me to review their code, or kind of checking in, or, you know, asking me to check in with them. And yeah, it's just a little bit different than how I think I like to normally work. I do like to work with other people. So, this week has been interesting in terms of just kind of being a more different experience where I'm not as actively collaborating with others.
JOËL: What do you think are some of the biggest challenges of being kind of a little bit out in your own world?
STEPHANIE: I think the challenges for me can definitely be the isolation [laughs], and also, what kind of goes hand in hand with that is when you need help, you know, who can you turn to? There's not as much of an obvious person on your team to reach out to, especially if they're, like, involved with other work, right? And that can be kind of tough.
Some of the other ones that I've been thinking about have been, you know, on one hand, like, I get to make all of the decisions that I want [laughs], but sometimes you kind of get, like, really in your own head about it. And you're not in that space of, like, evaluating different solutions that you maybe might not think of. And I've been trying to figure out how to, like, mitigate some of that risk.
JOËL: What are some of the strategies that you use to try to balance, like making good decisions when you're a bit more solo? Do you try to pull in someone from another team to talk ideas through? Do you have some sort of internal framework that you use to try to figure out things on your own? What does that look like?
STEPHANIE: Yeah, luckily, the feature I'm working on is not a huge project. Well, if it were, I think then I wouldn't be alone on it. But, you know, sometimes you find yourself kind of tasked with one big thing for a while, and you are responsible for from start to finish, like all of the architectural decisions to implementation. But, at least for me, the scope is a little more narrow. And so, I don't feel as much of a need to get a lot of heads together because I at least feel somewhat confident in what I'm doing [laughs].
But I have found myself being a bit more compelled to kind of just verbalize what I'm doing more frequently, even to, like, myself in Slack sometimes. It's just like, I don't know who's reading this, but I'm just going to put it out there because maybe someone will see this and jump in and say, "Oh, like, interesting. Here's some other context that I have that maybe might steer you away from that," or even validating what I have to say, right? Like, "That sounds like a good idea," or, you know, just giving me an emoji reaction [laughs] is sometimes all I need.
So, either in Slack or when we give our daily sync updates, I am, I think, offering a little more details than I might if I already was working with someone who I was more in touch with in an organic way.
JOËL: And I think that's really powerful because it benefits you. Sort of by having to verbalize that or type it out, you, you know, gain a little bit of self-awareness about what you're trying to do, what the struggles are. But also, it allows anybody else who has potentially helpful information to jump in. I think that's not my natural tendency. When I'm on something solo, I tend to kind of, like, zoom in and focus in on something and, like, ignore a little bit of the world around me. Like, that's almost the time when I should look at overcommunicating.
So, I think most times I've been on something solo, I sort of keep relearning this lesson of, like, you know, it's really important to constantly be talking out about the things that you're doing so that other people who are in a broader orbit around you can jump in where necessary.
STEPHANIE: Yeah, I think you actually kind of touched on one of the unexpected positives, at least for me. Something I wasn't expecting was how much time I would have to just be with my thoughts. You know, as I'm implementing or just in my head, I'm mulling over a problem. I have less frequent, not distractions necessarily, but interruptions. And sometimes, that has been a blessing because I am not in a spot where I have a lot of meetings right now. And so, I didn't realize how much generative thought happens when you are just kind of, like, doing your own thing for a little bit.
I'm curious, for you, is that, like, a space that you enjoy being when you're working by yourself? And I guess, you know, you were saying that it's not your natural state to kind of, like, share what's going on until maybe you've fully formed an idea.
JOËL: I think I often will regret not having shared out before everything is done. The times that I have done it, I've been like, that was a really positive experience; I should do that more. I think it's easy to sort of wait too long before sharing something out. And with so many things, it feels like there's only one more small task before it's done. Like, I just need to get this one test to go green, and then I can just put up a PR, and then we'll have a conversation about it. But then, oh, this other test broke, or this dependency isn't installing correctly.
And before you know it, you've spent a whole day chasing down these things and still haven't talked. And so, I think if some of those things were discussed earlier, it would help both to help me feel more plugged in, but also, I think everybody else feels like they're getting a chance to participate as well.
STEPHANIE: So, you mentioned, you know, obviously, there's, like, the time spent just arriving at the solution before sharing it out for feedback. But have you ever been in a position where there is no one to give you feedback and, like, not even a person to review your code?
JOËL: That's really challenging. So, occasionally, if I'm working on a project, maybe it would be, like, very early-stage startup that maybe just has, like, a founder, and then I'm, like, the only technical person on the team, generally, what I'll try to do is to have some kind of review buddy within thoughtbot, so some other developer who's not staffed on my project but who has access to the code such that I can ask them to say, "Hey, can you just take a look at this and give me a code review?" That's the ideal situation.
You know, some companies tend to lock things down a lot more if you're dealing with something like healthcare or something like that, where there might be some concerns around personal information, that kind of thing. But generally, in those cases, you can find somebody else within the company who will have some technical knowledge who can take a look at your code; at least, that's been my experience.
STEPHANIE: Nice. I don't think I've quite been in that position before; again, I've really mostly worked within a team. But there was a conference talk I watched a little bit ago from Jeremy Smith, and it was called Building Web Apps by Your Lonesome. And he is a, like, one-man agency. And he talked about, you know, what it's like to be in that position where you pretty much don't have other people to collaborate with, to review your code. And one thing that he said that I really liked was shifting between writer and editor mode.
If you are the person who has to kind of just decide when your code is good enough to merge, I like that transition between, like, okay, I just spent however many hours putting together the solution, and now I'm going to look at it with a critical eye. And sometimes I think that might require stepping away for a little bit or, like, revisiting it even the next day. That might be able to help see things that you weren't able to notice when you were in that writing mode. But I have found that distinction of roles really helpful because it does feel different when you're looking at it from those two lenses.
JOËL: I've definitely done that for some, like, personal solo projects, where I'm participating in a game jam or something, and then I am the only person to review my code. And so, I will definitely, at that point, do a sort of, like, personal code review where I'll look at it. Maybe I'm doing PRs on GitHub, and I'm just merging. Maybe I'm just doing a git diff and looking at a commit in the command line on my own machine.
But it is useful, even for myself, to sort of switch into that editor mode and just kind of look at everything there and say, "Is it in a good place?" Ideally, I think I do that before putting it out for a co-worker's review, so you kind of get both. But on a solo project, that has worked actually pretty well for me as well.
STEPHANIE: One thing that you and I have talked about before in a different context, I think, when we have chatted about writing conference talks, is you are really great about focusing on the audience. And I was thinking about this in relation to working solo because even when you are working by yourself on a project, you're not writing the code for yourself, even though you might feel like [laughs] it in the moment.
And I also kind of like the idea of asking, like, who are you building for? You know, can you ask the stakeholder or whoever has hired you, like, "Who will maintain this project in the future?" Because likely, it won't be you. Hopefully, it won't be you unless that's what you want to be doing.
There's also what my friend coined the circus factor as opposed to the bus factor, which is, like, if you ran away to the circus tomorrow [laughs], you know, what is the impact that would have? And yeah, I think working solo, you know, some people might think, like, oh, that gives me free rein to just write the code exactly how I want to, how I want to read it. But I think there is something to be said about thinking about the future of who will be [inaudible 18:10] what you just happen to be working on right now.
JOËL: And keep in mind that that person might be future you who might be coming back and be like, "What is going on here?" So, yeah, audience, I think, is a really important thing to keep in mind. I like to ask the question, if somebody else were looking at this code, and somebody else might be future me, what parts would they be confused by? If I was walking somebody else through the code for the first time, where would they kind of stop me through the walkthrough and be like, "Hey, why is this happening? What's the connection between these two things? I can see they're calling each other, but I don't know why."
And that's where maybe you put in a comment. Maybe you find a better method or a class name to better explain what happens. Maybe you need to put more context in a commit message. There's all sorts of tools that we can use to better increase documentation. But having that pause and asking, "What will confuse someone?" is, I think, one of the more powerful techniques I do when I'm doing self-review.
STEPHANIE: That's really cool. I'm glad you mentioned that, you know, it could also be future you. Because another thing that Jeremy says in this talk that I was just thinking about is the idea of optimizing for autonomy. And there's a lot to be said there because autonomy is like, yeah, like, you end up being the person who has to deal with problems [laughs], you know, if you run into something that you can't figure out, and, ideally, you'll have set yourself up for success.
But I think working solo doesn't mean that you are in your own universe by yourself completely. And thinking about future, you, too, is kind of, like, part of the idea that the person in this moment writing code will change [laughs]. You'll get new information. Maybe, like, you'll find out about, like, who might be working on this in the future. And it is kind of a fine balance between making sure that you're set up to handle problems, but at the same time, maybe it's that, like, you set anyone up to be able to take it away from where you left it.
JOËL: I want to take a few moments to sort of talk a little bit about what it means to be solo because I think there are sort of multiple different solo experiences that can be very different but also kind of converge on some similar themes. Maybe some of our listeners are listening to us talking and being like, "Well, I'm not at a consultancy, so this never happens to me." But you might find yourself in that position.
And I think one that we mentioned was maybe you are embedded on a team, but you're kind of on a bit of a larger project where you're staffed solo. So, even though you are part of a larger team, you do feel like the initiative that you're on is siloed to you a little bit. Are there any others that you'd like to highlight?
STEPHANIE: I think we also mentioned, you know, if you're a single developer working on an application because you might be the first technical hire, or a one-person agency, or something, that is different still, right? Because then your community is not even your company, but you have to kind of seek out external communities on social networks, or Slack groups, or whatever.
I've also really been interested in the idea of developers kind of being able to be rotated with some kind of frequency where you don't end up being the one person who knows everything about a system and kind of becomes this dependency, right? But how can we make projects so, like, well functioning that, like, anyone can step in to do some work and then move on? If that's just for a couple of weeks, for a couple of months. Do you have any thoughts about working solo in that kind of situation where you're just stepping into something, maybe even to help someone out who's, you know, on vacation, or kind of had to take an unexpected leave?
JOËL: Yeah, that can be challenging. And I think, ideally, as a team, if you want to make that easier, you have to set up some things both on a, like, social level and on a tactical level, so all the classic code quality things that you want in place, well structured, encapsulated code, good documentation, things like that. To a certain extent, even breaking down tasks into smaller sort of self-sufficient stories. I talk a lot about working incrementally.
But it's a lot easier to say, "Hey, we've got this larger story. It was broken down into 20 smaller pieces that can all be shipped independently, and a colleague got three of them done and then had to go on leave for some reason. Can you step in and do stories 4 through 20?" As opposed to, "Hey, we have this big, amorphous story, and your colleague did some work, and it kind of is done. There's a branch with some code on it. They left a few notes or maybe sent us an email. But they had to go on leave unexpectedly. Can you figure it out and get it done?" The second scenario is going to be much more challenging.
STEPHANIE: Yeah, I was just thinking about basically what you described, right? Where you might be working on your own, and you're like, well, I have this one ticket, and it's capturing everything, and I know all that's going on [laughs], even though it's not quite documented in the ticket. But it's, you know, maybe on my branch, or in my head, or, worst of all, on my local machine [laughs] without being pushed up.
JOËL: I think maybe that's an anti-pattern of working solo, right? A lot of these disciplines that you build when you're working in a team, such as breaking up tickets into smaller pieces, it's easy to kind of get a little bit lazy with them when you're working solo and let your tickets inflate a little bit, or just have stuff thrown together in branches on your local machine, which then makes it harder if somebody does need to come in to either collaborate with you or take over from you if you ever need to step aside.
STEPHANIE: Right. I have definitely seen some people, even just for their personal projects, use, like, a Trello board or some other project management tool. And I think that's really neat because then, you know, obviously, it's maybe just for their own, like, self-organization needs, but it's, like, that recognition that it's still a complicated project. And just because they're working by themselves doesn't mean that they can't utilize a tool for project management that is meant for teams or not even teams [laughs], you know, people use them for their own personal stuff all the time.
But I really like that you can choose different levels of how much you're documenting for your future self or for anyone else. You had mentioned earlier kind of the difference between opening up a PR for you...you have to merge your branch into main or whatever versus just committing to main. And that distinction might seem, like, if you were just working on a personal project, like, oh, you know, why go through the extra step? But that can be really valuable in terms of just seeing, like, that history, right?
JOËL: I think on solo projects, it can really depend on the way you tend to treat your commit history. I'm very careful with the history on the main branch where I want it to tell a sort of, like, cohesive story. Each commit is kind of, like, crafted a little bit. So, even when I'm working solo and I'm committing directly to master or to the main branch, I'm not just, like, throwing random things there. Ideally, every commit is green and builds and is, like, self-contained.
If you don't have that discipline, then it might be particularly valuable to go through the, like, a branching system or a PR system. Or if you just want, like, a place to experiment, just throw a bunch of code together, a bunch of things break; nothing is cohesive, that's fine. It's all a work in progress until you finally get to your endpoint, and then you squash it down, or you merge it, or whatever your workflow is, and then it goes back into the main branch.
So, I think that for myself, I have found that, oftentimes, I get not really a whole lot of extra value by going through a branching and PR system when it's, like, a truly solo project, you know, I'm building a side project, something like that. But that's not necessarily true for everyone.
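A sketch of the squash-style flow Joël is describing, assuming a scratch branch named experiment:

    git checkout -b experiment      # hack freely; messy work-in-progress commits are fine here
    # ...experiment until it works...
    git checkout main
    git merge --squash experiment   # stage the combined change without committing
    git commit                      # write one cohesive commit for the main branch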
STEPHANIE: I think one thing I've seen in other people's solo projects is using a PR description and, you know, having the branching strategy, even just to jot down future improvements or future ideas that they might take with the work, especially if you haven't kind of, like, taken the next step of having that project management system that we talked about. But there is, like, a little more room for some extra context or to, like, leave yourself little notes that you might not want necessarily in your commit history but is maybe more related to this project being, like, a work in progress where it could go in a lot of different directions, and you're figuring that out by yourself.
JOËL: Yeah, I mean, definitely something like a draft PR can be a great place to have work in progress and experiment and things like that. Something you were saying got me wondering what distinction you typically have between what you would put in a commit message versus something that you would put in a PR description, particularly given that if you've got, like, a single commit PR, GitHub will automatically make the commit message your PR message as well.
STEPHANIE: This has actually evolved for me over time, where I used to be a lot more reliant on PR descriptions holding a lot of the context in terms of the decision-making. I think that was because I thought that, like, that was the most accessible place of information for reviewers to find out, you know, like, why certain decisions were made. And we were using, you know, PR templates and stuff like that.
But now the team that I'm working on uses commit message templates that kind of contain the information I would have put in a PR, including, like, motivation for the change, any risks, even deployment steps. So, I have enjoyed that because I think it kind of shortens the feedback loop, too, right? You know, you might be committing more frequently but not, you know, opening a PR until later. And then you have to revisit your commits to figure out, like, okay, what did I do here? But if you are putting that thought as soon as you have to commit, that can save you a little bit of work down the line.
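A minimal sketch of a commit message template along the lines Stephanie describes; the sections are just one possibility, not her client's actual template:

    # One-line summary of the change

    Motivation:
    - Why this change is needed

    Risks:
    - What could go wrong and how it's mitigated

    Deployment steps:
    - Migrations, feature flags, or manual steps, if any

Teams typically point git at a shared file like this with git config commit.template so it pre-fills every commit.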
What you said about GitHub just pulling your commit message into the PR description has been really nice because then I could just, like, open a thing [laughs]. And that has been nice.
I think one aspect that I really like about the PR is leaving myself or reviewers, like, notes via comments, like, annotating things that should not necessarily live in a more permanent form. But maybe I will link to documentation for a method that I'm using that's a little less common or just add some more information about why I made this decision over another at a more granular level.
JOËL: Yeah, I think that's probably one of the main things that I tend to put in a PR message rather than the commit message is any sort of extra information that will be helpful at review time. So, maybe it's a comment that says, "Hey, there is a lot of churn in this PR. You will probably have a better experience if you review this in split view versus unified view," things like that.
So, kind of, like, meta comments about how you might want to approach reviewing this PR, as opposed to something that, let's say somebody is reviewing the history or is, like, browsing the code later, that wouldn't be relevant to them because they're not in a code review mindset. They're in a, like, code reading, code understanding mindset or looking at the message to say, "Why did you make the changes? I saw this weird method. Why did you introduce that?" So, hopefully, all of that context is in the commit message.
STEPHANIE: Yeah, you reminded me of something else that I do, which is leave notes to my future self to revisit something if I'm like, oh, like, this was the first idea I had for the, you know, the way to solve this problem but, you know, note to self to look at this again tomorrow, just in case I have another idea or even to, like, you know, do some more research or ask someone about it and see if they have any other ideas for how to implement what I was aiming for.
And I think that is the editor mode that we were talking about earlier that can be really valuable when you're working by yourself to spend a little extra time doing. You know, you are essentially optimizing for autonomy by being your own reviewer or your own critic in a healthy and positive way [laughs], hopefully.
JOËL: Exactly.
STEPHANIE: So, at the beginning of this episode, I mentioned that this is a new experience for me, and I'm not sure that I would love to do it all of the time. But I'm wondering, Joël, if there are any, you know, benefits or positives to working solo that you enjoy and find that you like to do just at least for a short or temporary amount of time.
JOËL: I think one that I appreciate that's maybe a classic developer response is the heads-down time, the focus, being able to just sit down with a problem and a code editor and trying to figure it out. There are times where you really need to break out of that. You need somebody else to challenge you to get through a problem. But there are also just amazing times where you're in that flow state, and you're getting things done. And that can be really nice when you're solo.
STEPHANIE: Yeah, I agree. I have been enjoying that, too. But I also definitely am looking forward to working with others on a team, so it's kind of fun having to get to experience both ways of operating.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeeeee!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie discovered a new book: The Staff Engineer's Path! Joël's got some D&D goodness.
Together, they revisit a decade-old blog post initially published in 2013, which discussed the application of Sandi Metz's coding guidelines and whether these rules remain relevant and practiced among developers today.
The Manager’s Path
The Staff Engineer’s Path
Not Another D&D Podcast
Sandi Metz rules for developers
Bike Shed episode on heuristics
In Relentless Pursuit of REST
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I picked up a new book from the library [laughs], which that in itself is not very new. That is [laughs] a common occurrence in my world. But it was kind of a fun coincidence that I was just walking around the aisles of what's new in nonfiction, and staring me right in the face was an O'Reilly book, The Staff Engineer's Path. And I think in the past, I've plugged The Manager's Path by Camille Fournier on the show.
And in recent years, this one was published, and it's by Tanya Reilly. And it is kind of, like, the other half of a career path for software engineers moving up in seniority at those higher levels. And it has been a really interesting companion to The Manager's Path, which I had read even though I wasn't really sure I wanted to be manager [laughs].
And now I think I get that, like, accompaniment of like, okay, like, instead of walking that path, like, what does a staff plus engineer look like? And kind of learning a little bit more about that because I know it can be really vague or ambiguous or look very different at a lot of different companies. And that has been really helpful for me, kind of looking ahead a bit. I'm not too far into it yet. But I'm looking forward to reading more and bringing back some of those learnings to the show.
JOËL: I feel like at the end of the year, Stephanie, you and I are probably going to have to sit down and talk through maybe your reading list for the year and, you know, maybe shout out some favorites. I think your reading list is probably significantly longer than mine. But you're constantly referencing cool books. I think that would probably be a fun, either end-of-year episode or a beginning-of-year episode for 2024.
One thing that's really interesting, though, about the contrast of these two particular books you're talking about is how it really lines up with this, like, fork in the road that a lot of us have in our careers as we get more senior. You either move into more of a management role, which can be a pretty significant departure from what you have to do as a developer, or you kind of go into this, like, ultra-senior individual contributor path. But how that looks day to day can be very different from your sort of just traditional sitting down and banging out tickets. So, it's really cool there's two books looking at both of those paths.
STEPHANIE: Yeah, absolutely. And I think the mission that they were going for with these books was to kind of illuminate a little bit more about that fork and that decision because, you know, it can be easy for people to maybe just default into one or the other based on what their organization wants for them without, like, fully knowing what that means. And the more senior you get, the more vague and, like, figure it out yourself [laughs] the work becomes.
And it can be very daunting to kind of just be thrown into that and be like, well, I'm in this leadership position now. People are looking to me, and I have all this responsibility, but, like, what do I do? Yeah, so I'm kind of enjoying this book, that is...it's not a technical book, which is actually kind of what I like about it. It's actually more of a leadership book, which is really important for that kind of role. Even though, you know, they are still in that IC track, but it does come with a lot of leadership responsibility.
JOËL: Yes, leadership in a very different way than management. But—and this may be counterintuitive for some people, especially earlier on in their careers—going further up that individual contributor track doesn't just mean getting more intense technically. It often means you've got to focus on things more like leadership, like being a bit more strategic, aligning technical initiatives with strategic goals.
STEPHANIE: Yeah, and having a bigger impact and being a force multiplier, even in both the manager and, like, the staff plus role, like, that, you know, is the thing that ties those rising levels together.
JOËL: Yeah, in many ways, maybe the individual contributor track is slightly misnamed because while, yes, you're not managing a sort of sub-organization within the company, it's still about being a force multiplier.
STEPHANIE: Yeah, that's a really great point [laughs]. Maybe we'll be able to come up with a better [laughs] name for that.
JOËL: I've mentioned several times on this podcast that I've been enjoying playing Dungeons & Dragons, D&D, with some friends and some colleagues. And something that was particularly fun that some friends and I did this summer is we hired a professional DM to run one shot for us. And that was just an absolutely lovely experience. Well, as a result of that, I am now subscribed to this guy's newsletter. And he'll do, like, various D&D events at different times.
One thing that was really cool that I found out recently...as we're recording this, it's the week before election day in the U.S. And because a lot of voting happens in schools, typically, schools have the day off. And so, this guy sent out an email saying he was offering to run a, like, all day...effectively, a little mini-D&D camp for school-age kids on election day so that you can do your work. You can go vote, and you don't have to...basically, he'll watch your kids for you and, like, get them introduced to playing D&D, which I think is just a really cool thing to do.
STEPHANIE: I love that. It's so heartwarming [laughs]. And it's such a great idea because, oftentimes, people are still working, and so they need childcare, like, on those kinds of days. And yeah, I think D&D is such a fun thing for kids to get into, too. You know, it requires so much, like, imagination, and I can imagine it's such a blast.
JOËL: I got that email, and I was like, that is such a perfect idea. I love it so much.
STEPHANIE: I wanted to plug my D&D recommendation. I'm pretty sure I have mentioned it on the show before. But there is a podcast that I listen to called Not Another D&D Podcast, which is, you know, a live play Dungeons & Dragons podcast campaign that's hosted by these comedians, formerly of CollegeHumor, and it's very fun. I always laugh.
They have this, like, a kind of offshoot of the main show that they do called D&D Court, which is very fun. Because, as you were saying, like, you know, you hired a DM to run your game. And I know that...I'm sure lots of people have fun stories about their home games and, like, the drama that happens [laughs] with their friends.
JOËL: Absolutely. Absolutely.
STEPHANIE: And so, with D&D Court, listeners can write in with their drama or their conflicts and get an official ruling from the hosts about who was right [laughs] in the situation that they write in about.
JOËL: So, you get to bring your best rules lawyering to the D&D Court.
STEPHANIE: Yeah, exactly [laughs].
JOËL: That sounds kind of amazing.
Recently, I had someone reach out to me asking about an older blog post that we'd written about the Sandi Metz Rules. This blog post was initially published in 2013, so ten years ago, and was talking about some guidelines that Sandi Metz at the time was talking about that she was using in some of her code. And we talked about how our experience was applying those to some of our work as well. And so, the question was, you know, ten years later, is that still something that thoughtbot developers like to follow in their code?
We'll link to the article in the show notes. But I'll just read out the rules here real quick. So, there's four of them. The first one is a class can be no longer than 100 lines of code. The second is a method can be no longer than five lines of code. The third is pass no more than four parameters into a method, and hash options count, so no getting clever with those. And then, finally, controllers can only instantiate one object. You only get one instance variable. And views can only talk to that one instance variable.
Had you or are you familiar with these rules? Is that something that you think about or use in your daily writing of code?
STEPHANIE: Yeah. So, when you proposed this topic, I had to revisit these rules. And I can't recall if I had seen them before. They seemed familiar. And I've read, you know, a couple of Sandi Metz's books, so maybe those were places where she had mentioned them.
But the one thing that really struck me when I was first reading the rules was how declarative they were in terms of, like, kind of just telling you what the results should be without really saying how. So, for example, the one where you said, you know, a method should not be more than five lines [laughs], I had the silly thought of, like, well, you could just, you know, stuff everything into a single line [laughs] and just completely disregard line limit if you wanted, and it would technically still follow the rule.
JOËL: If they didn't want us to do that, they wouldn't give us semicolons in Ruby.
STEPHANIE: Exactly [laughs]. So, that is kind of what struck me at first. Is that something you noticed?
JOËL: I think what is interesting with them is that there's not always a ton of rationale given behind them. Our article talks a little bit about some of the why that might be helpful and how that might look like in practice. I'm not sure what Sandi's original...I don't know if it was one of her books or maybe on a...it might have been on a podcast appearance she talked about them, so she might go more in-depth there. But yeah, they are a little bit declarative. It's just like, hey, here's...it's almost basically the kind of thing that can be enforced by a linter, which is perhaps the point.
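(Concretely, three of the four rules do map pretty directly onto RuboCop's Metrics cops; a minimal .rubocop.yml sketch with thresholds chosen to mirror the rules might look like the following. The one-instance-variable-per-controller rule is the odd one out, since it doesn't correspond to a standard cop, so it stays a team convention.)

    # .rubocop.yml -- thresholds chosen to match the rules above
    Metrics/ClassLength:
      Max: 100

    Metrics/MethodLength:
      Max: 5

    Metrics/ParameterLists:
      Max: 4
      CountKeywordArgs: true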
STEPHANIE: Ooh, that's really interesting. It's like, on one hand, I like how simple they are, right? It's like, they're very obvious. If you're not following them, you can tell. But on the other hand, they seem to be more of a supplement to the gained knowledge and experience that you kind of get from knowing how to implement those rules. I think you and I will both agree that we don't want to stuff everything [laughs] into a single line with semicolons. But if someone who maybe is newer to development and is coming to these rules, I think they might be wondering, like, how do I do this?
JOËL: Do you follow these rules in your own code?
STEPHANIE: I think the ones that are easier to follow, for me, and that I think I've come to do more instinctually, are the rules about class line length and method line length just because I'm kind of looking out for opportunities to pull out a method or, you know, make sure that this class is just doing one thing. And if it's starting to seem to cover a couple of different responsibilities, I'm a little bit more on the lookout. But I do like these rules as like, you know, like, hey, once you hit, you know, 100 lines in a class, like, maybe that's your cue to start thinking about opportunities for extraction.
JOËL: Do you sort of consciously follow these rules or have them maybe even encoded in a linter? Or is it more you're following other things, and somehow, it just lines up with these principles?
STEPHANIE: I would say that, like, I'm not thinking about them very actively. But that could be a very interesting exercise, and I think, you know, that's what folks did in the blog post. They were like, hey, we took these rules, and we really kept them in mind as we were developing. But I think kind of what we were talking about earlier about, like, what we've learned or the strategies we've learned to implement kind of converge on these rules. And the rules actually are more of the result of other ideas or heuristics that we follow.
JOËL: I mean, you dropped the keyword heuristics there. And I think that brings me back to an earlier episode we did where we talked about heuristics. And one of the things that came up on that episode was the idea that, oftentimes, we use heuristics as a way to sort of flatten a lot of experience and knowledge into sort of one, like, short rule, or short phrase, or something, one guideline, even though it's sort of trying to just summarize a mountain of wisdom.
And so, oftentimes, you can look at something like these rules and be like, okay, well, what's the point? Or maybe you even just follow it to the letter without really thinking about the why behind it, and that can sometimes be problematic. And on the other hand, you might know all of the ideas that go behind them. And without necessarily knowing the rule itself, you just kind of happen to follow it because you're intimately familiar with all of these other software principles that converge on those same ideas.
STEPHANIE: Yeah, agreed. I think that the more interesting ones to me are the no more than four method arguments and only one instance variable per controller.
JOËL: Interesting.
STEPHANIE: I'm curious if those are sparking anything for you [laughs].
JOËL: I think the no more than four method arguments, to me, is probably the least controversial. It's generally accepted that having many arguments to a method is a code smell. And there's a few different code smells that are related to that. There are forms of coupling, like connascence; there are data clumps, things like that.
I've often heard a sort of rule of three. And so, if you're going more than three, then you might want to revisit the structure of what you're building. Four is a bit of an arbitrary cut-off, I'll agree. Most of these are arbitrary cut-offs. But I think the idea to maybe keep your method to fewer arguments is generally a good thing to do.
STEPHANIE: I liked that the rule points out that hash options count because I think that's maybe where people get a little more hand-wavy, or you have your opts hash [laughs] that can be just a catch-all for anything. You know, it's like, once you start putting stuff in there, I don't know, I feel like it's, like, a law of the universe. It's like, oh, people will just stuff more things in there [laughs]. And it takes obviously more effort or, like, specific energy to, like, think through what those things might represent, or some alternative ways of handling those arguments.
We definitely do have, I think, a couple of episodes on value objects. But that's something that we have talked a lot about before in terms of, you know, how can we take some kind of primitive data, hashes included, and turn them into a richer object that can then be passed on its own?
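(A tiny, hypothetical sketch of that idea: the method and field names below are made up, but the move is the one discussed in those value object episodes, promoting a catch-all opts hash into a small object with a name and an identity of its own.)

    # Before: the last argument is a grab-bag options hash
    def send_invoice(customer, amount, due_on, opts = {})
      late_fee = opts.fetch(:late_fee, 0)
      # ...
    end

    # After: those options become a value object that can be validated and passed around
    InvoiceTerms = Struct.new(:currency, :late_fee, :memo, keyword_init: true)

    def send_invoice(customer, amount, due_on, terms)
      raise ArgumentError, "late fee can't be negative" if terms.late_fee.negative?
      # ...
    end

    terms = InvoiceTerms.new(currency: "USD", late_fee: 0, memo: "October invoice")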
JOËL: Right. And an options hash is generally...it's too much of a catch-all to really have an identity as its own sort of value object. It doesn't really represent any single thing. It's just an everything-else bag of data. One thing that's interesting that the article notes is that a lot of the helpers in Rails take a lot of arguments and that it is absolutely not worth trying to fight the framework to try to follow these rules. So, the article does take a very pragmatic approach, I think, to the idea of these rules.
STEPHANIE: Yeah. And I think there is even a caveat to the rules where it's like, you can break them if you have a good reason, or if you're working with someone else and they give you the thumbs up [laughs], which I really like a lot because it almost kind of compels you to stop and be like, do I have a good reason of doing this? Just making sure, or I'll run it by a friend. And shifting that, I guess, that focus from kind of just coding from, like, your default mode of thinking to a more active one.
JOËL: Right. There is a rule zero, which says you can break any of the other rules as long as you convince either your pair or your reviewer to give you a thumbs up on breaking the rule.
So, you'd mentioned the fourth rule about a single instance variable in a controller kind of was one of the ones that stood out to you. What is particularly striking about that rule?
STEPHANIE: I think this one is hard to follow, and I think the blog post mentions that as well. Because at least I've seen our web applications grow more and more complex. And it can be really challenging to just be like, what is this page doing? Like, what, you know, data does it need to know? And have that be the single thing. Because really, a lot of our web apps have a lot of things [laughs] that they're doing, and sometimes it feels like you have to capture more than one object or at least, like, a responsibility in this way.
I think that's the one that I, you know, in my ideal world, I'm like, yeah, like, we have all these, like, perfectly RESTful routes. And, you know, we're only dealing with, you know, a single resource. But once you start to have some more complexity, I think that can be a little more challenging.
JOËL: I think it's interesting that you mentioned RESTful routing because I think that is maybe one of the bigger things that does trigger having more instance variables in your controller actions. If you're following sort of the traditional Rails RESTful routes, every page is generally focused on a singular resource. Now, that may not necessarily line up with a table in your database, and that's fine. But you're dealing with a singular thing or perhaps, you know, in the case of an index page, a singular collection of things, which can be represented with a single instance variable.
Once you start adding custom routes that may not be necessarily tied to a particular resource, now you can very easily kind of have a proliferation of all sorts of different things that interact with each other because you're no longer centered on a single thing.
STEPHANIE: Yeah, that's true, which actually reminds me of something we've talked about before, too, when we were both reading Sustainable Rails. The author talks about custom routes and actually advocates for making all routes RESTful. And if you need a vanity URL or something like that, you can always alias it. That I liked, right? It's like, okay, even if, you know, your resource is not something that's like, ActiveRecord-backed, is there some abstraction or concept of a resource in there?
And I actually did really like the example in the blog post; that is one that I've used before, too. They were dealing with this idea of a dashboard, which I would, you know, say is pretty common in a lot of web applications these days. And it's funny because a dashboard can hold so much data, right? It's really, like, a composite of a lot of different things, you know, what is most, like, useful for the user to see in one place. And in the blog post (again, this is kind of something that I've done before), they were able to capture that with the idea of, like, a dashboard as an object, and that being codified using a presenter or a facade.
JOËL: Right. So, instead of having a group, and a status, and a user, and all these, like, separate things that your page that you're showing is a sort of collection of all these different types of objects, you wrap them together in a dashboard object that's kind of a facade. And I guess that really does line up with the idea of RESTful routing because you're likely going to have a dashboard's controller show action that's showing the user's dashboard. So, it makes sense, you know, that show page is rendering a dashboard object.
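(A minimal sketch of that shape, with hypothetical model names and associations: the facade is a plain Ruby object, and the controller stays within the one-instance-variable rule even though several ActiveRecord models are involved.)

    # app/models/dashboard.rb -- a plain Ruby facade, not backed by a table
    class Dashboard
      def initialize(user)
        @user = user
      end

      # These associations (orders, messages) are illustrative, not from the article.
      def recent_orders
        user.orders.order(created_at: :desc).limit(5)
      end

      def unread_message_count
        user.messages.where(read_at: nil).count
      end

      private

      attr_reader :user
    end

    # app/controllers/dashboards_controller.rb
    class DashboardsController < ApplicationController
      def show
        @dashboard = Dashboard.new(current_user)
      end
    end

The view then only ever talks to @dashboard, which is also what lines it up with RESTful routing: DashboardsController#show renders a dashboard.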
STEPHANIE: Can we talk a little bit about things not to do, or maybe things that might be a little more questionable [laughs], and if you've seen them and how you felt about them?
JOËL: I think it is sometimes tricky to define your boundaries right in that sometimes you create a facade object that really is just...it doesn't really represent anything. It's just there to wrap around some other things. And sometimes that can be awkward. I think the dashboard works partly because it lines up so neatly with the sort of RESTful routing and thinking in terms of resources that you want to do at the controller layer.
But drawing boundaries incorrectly or just trying to throw everything in some kind of grab bag object can...it's not a magic bullet, right? You've got to put some thought into the data modeling, even when you are pulling the facade pattern.
STEPHANIE: Yeah, I think there are other things that I've seen before that could theoretically follow this rule, maybe [laughs], you know, I'd love to hear your thoughts about it. When you start, you're like, oh, like, my controller action method does just, you know, set one instance variable. But it turns out that there's all these other instance variables that get set either through a hook or, like, in the parent controller, or even in the view, which I've seen before, too [laughs]. And I'm just kind of curious if that kind of raises your eyebrow at all or if you've seen any good reasons for doing so.
JOËL: I think setting instance variables in a view would absolutely cause me to raise an eyebrow.
STEPHANIE: [laughs] Agreed.
JOËL: Generally, don't put logic in the view. I think that we definitely have in parent controllers; we'll set other instance variables for things like maybe a current user, right? That's how we store that state. And I think that is totally fine to have around. Typically, we don't access that instance variable directly. We're referencing some kind of helper method. But yeah, I would not consider that a violation of the rule.
I think another common one that might come up is when you have some kind of nested resource. And so, in your URL, you might have a nested resource where you're saying, "Oh, I'm looking at specifically this comment under this article or something like that." And then, you want to have access to both objects in the controller. So, I think that's a pretty common scenario where you might want to have both instance variables.
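(For example, with hypothetical Article and Comment models, the standard nested-resource shape looks like this, and the second instance variable reads as a reasonable exception rather than a smell.)

    # config/routes.rb
    resources :articles do
      resources :comments, only: [:show]
    end

    # app/controllers/comments_controller.rb
    class CommentsController < ApplicationController
      def show
        @article = Article.find(params[:article_id])
        @comment = @article.comments.find(params[:id])
      end
    end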
Something that I'm thinking about...this is not a fully formed thought, so I'm curious about your opinions here. Is there an interesting distinction between variables in code that you want to use within a controller versus variables that you want to be accessible from a view? Because instance variables in a controller are kind of overloaded. They're a way of having state in a controller, but they're also a way of passing data into a view. And so, that sort of dual purpose there maybe causes them to be a little bit trickier to reason about than instance variables in a random Ruby object. What do you think?
STEPHANIE: Yeah, I was actually having the same thought as you were going there, which is that it is kind of interesting that the view, you know, is that level of what you want to display to your user. But it can have access to, like, whatever you put in the controller [laughs], and that is...and, you know, in some ways, it's like, that connection needs to happen somewhere, right? And it's here. But I think that can definitely be abused sometimes, too.
So, this, you know, fourth rule that we're talking about really has to do with a more traditional Rails app. But, again, with the complexity of web apps in 2023 [chuckles], you know, we also see Rails used just as an API a lot with a separate front-end framework. And your controller is rendering some JSON, which I think has that harder boundary between what is the data that the server is involved with and what we want to send to our client. And I'm curious if you have any thoughts about how this rule applies in that situation.
JOËL: I think I tend to see not really any difference there. If I'm building an API, typically, I'm trying to do so in a pretty RESTful manner. Maybe I'm doing a GraphQL API, and things might be different for that. But for a traditional REST API, yeah, typically, you're fetching one resource or some sort of compound resource, in which case, you're representing that with a facade object.
And yep, you can generally get away, I think, with a single instance variable with, you know, a few exceptions around maybe some extra context about maybe something like the current user, or a parent object, or something like that. I guess the view is really you're using a different mechanism for rendering JSON, and there are a bunch out there that the community uses. I think I don't really see a difference between rendering to HTML versus rendering to JSON, or XML, or whatever. How about you?
STEPHANIE: That's a good point. I think I'm with you where the rule still applies. But I have also seen things get really loosey-goosey [laughs] when we decide we're rendering JSON, and now we're suddenly putting the instance variables into a hash along with other stuff.
But what you said was interesting about, like, sometimes you do need that extra context, right? And, like, figuring out what the best way to package that requires a bit of, like, sustained thought, I think, because it can, you know, be really easy to be like, oh well, this is the one interface that I have to get data from the server. So, if I just sneak this in here [laughs], what's the matter? But yeah, I think, you know, that's probably why rules like this exist [laughs] to help provide some guardrails and make us think a little deeper about it.
JOËL: I think sometimes, as a community, we maybe exaggerate the differences between, like, a RESTful HTML view and a RESTful JSON API. I tend to think of them as more or less the same. We just have, you know, a different representation at the V layer of our MVC framework. Everything else still kind of lines up.
STEPHANIE: Yeah, that's a really good point. I actually hadn't thought about it that way. Because I think maybe I have been influenced by the world of GraphQL [laughs] a little bit, or it's kind of hard to have a foot in both worlds, where you maybe have to context switch a little bit about, like, the paradigms, and then you find them influencing you in different ways. Because I have seen sometimes, like, what maybe initially were meant to be traditional more, like, RESTful JSON APIs kind of start to turn into that, like, how do we get what we need from this endpoint?
JOËL: I'm curious how you feel in general about the facade pattern. Is that something that you've used, something that you like?
STEPHANIE: I think I would say that I don't actually reach for it, like, upfront, right? Usually, I'm still trying to maybe put some things in my models [laughs]. But I have used it before once; it kind of became clear that, like, a lot of the methods on the model had to do with more really server-side concerns. And I was, like, wanting to just pull out some presentational pieces. I think the hardest part with the facade pattern is naming. I have really struggled sometimes to think of, like, it's not quite the component that makes it up. So, what is it instead?
JOËL: Right. Right. I think, for me, sometimes the naming goes the other way around in that I'll start more to kind of, like, routing our resource level and try to think about, okay, this particular view of the data that I want to have, or this particular operation that I want to do, what am I actually dealing with? What is the resource here? So, maybe I'm viewing a dashboard. Or maybe what I'm doing is creating or destroying a subscription, even though those are not necessarily tables in the database. And once I have that underlying concept, then I can start creating an object that represents that, which might be a combination of multiple ActiveRecord models that represent tables.
STEPHANIE: Yeah. You're actually pointing out, like, a really great use case that we see a lot, I think, is when you start to have to reach for resources, you know, that are different ActiveRecord classes. And how do you combine them together to represent the idea that you want, you know, for your feature?
JOËL: I think it's more of, like, an outside-facing perspective rather than an inside-facing perspective. So, instead of looking at, hey, these are the set of ActiveRecord classes I have because these are my database tables, how can I, like, tack on to them to make this operation work? I'll sort of start almost from, like, a zoomed-out perspective, blank slate to say, "Hey, this is the kind of operation that I'm trying to do. What sort of resource am I dealing with ideally?"
And, you know, maybe the idea is, okay, I'm dealing with a dashboard. I'm trying to subscribe to something...a newsletter, so the idea is I'm creating a subscription. Then, from there, I can start looking at, okay, do I have the concept of a subscription in this application? Oh, I don't. There is no subscriptions table because that's not a thing that we track in our data model. That's fine. But I probably need at least some kind of in-memory object to track the idea of a subscription, and then maybe from there, that grows. So, I'm kind of working from the problem towards the database rather than from the database out.
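(A rough sketch of working from the problem toward the database: a subscriptions resource with no subscriptions table behind it, just an in-memory ActiveModel object. The names and the mailer here are made up for illustration.)

    # app/models/subscription.rb -- no database table; ActiveModel::Model provides
    # validations and the interface that forms and controllers expect.
    class Subscription
      include ActiveModel::Model

      attr_accessor :email, :newsletter_id

      validates :email, presence: true

      def save
        return false unless valid?

        # Hypothetical side effect standing in for whatever actually records the signup.
        NewsletterMailer.welcome(email).deliver_later
        true
      end
    end

    # app/controllers/subscriptions_controller.rb
    class SubscriptionsController < ApplicationController
      def create
        @subscription = Subscription.new(subscription_params)

        if @subscription.save
          redirect_to root_path, notice: "You're subscribed!"
        else
          render :new, status: :unprocessable_entity
        end
      end

      private

      def subscription_params
        params.require(:subscription).permit(:email, :newsletter_id)
      end
    end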
STEPHANIE: Yeah, I like that a lot. The outside-in phrase that you used really triggered something for me, which is being product engineers, right? Like, having a seat at the table when the feature is in that, like, ideation phase, I think is also really important because that's where you really learn what that like, abstraction is at the user level. And also, it could be a really good place to give your input if the feature is being designed in a way that doesn't really support the, you know, kind of quality of code and, like, separation that you would like. That's the part that I'm still working on and still learning how to do.
But sometimes, you know, it's, like, really critical to the job to, like, be in that room and be like, these designs; what are some places that we could extract it at that level even? And kind of, like, separate things out from there rather than having to deal with it [laughs] deep in your codebase.
JOËL: I think what I'm really kind of hearing and emphasizing in what you just said is the importance of not just writing code but being involved in the product and how that really enriches you because you know the problem domain. And that allows you to then write the code that you need at the different levels of the app to best model the situation you're working with.
So, we've kind of gone through all the rules and talked about them. I'm curious, though, for you, are these rules that you follow in your code? How closely do you adhere to this set of rules? Is this still something that's relevant to you in 2023 as much as it was to the authors of that blog post in 2013?
STEPHANIE: I have to say they're not ones that I have thought about on a daily basis, but after this conversation, maybe they will be. And I am kind of excited to maybe, like, bring this up to other people on my team and be like, "What do you think about these rules?" Just, like, revisiting them as a group or just, like, having that conversation. Because I think that's, you know, where I am most interested in is, like, is wondering how other people incorporate them into their work and hearing different opinions from the team. And I think there's a lot of, like, generative discussion that ultimately leads to better code as a result.
JOËL: I think for myself, I'm not following the rules directly. But a lot of my code ends up approximating those rules anyway because of other principles that I follow. So, in practice, while my code doesn't strictly follow those rules, it does look pretty close to that anyway.
STEPHANIE: I almost think this could be a great, you know, discussion for your team, too, like, if any listeners want to...not quite a book club but kind of an article club, if you will [laughs], and see how other people on your team feel about it. Because I think that's kind of where there is, like, a really sweet spot in terms of learning and development.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at tbot.io/referral. Or you can email us at [email protected] with any questions.
Joël was selected to speak at RubyConf in San Diego! After spending a month testing out living in Upstate New York, Stephanie is back in Chicago.
Stephanie reflects on a recent experience where she had to provide an estimate for a project, even though she didn't have enough information to do so accurately. In this episode, Stephanie and Joël explore the challenges of providing estimates, the importance of acknowledging uncertainty, and the need for clear communication and transparency when dealing with project timelines and scope.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Big piece of news in my world: I recently got accepted to speak at RubyConf in San Diego next month in November. I'm really excited. I'm going to be talking about the concept of time and how that's actually multiple different things and the types of interactions that do and do not make sense when working with time.
STEPHANIE: Yay. That's so exciting. Congratulations. I am very excited about this topic. I'm wondering, is this something that you've been thinking about doing for a while now, or was it just an idea that was sparked recently?
JOËL: It's definitely a topic I've been thinking about for a long time.
STEPHANIE: Time? [laughs]
JOËL: Haha.
STEPHANIE: Sorry, that was an easy one [laughs].
JOËL: The idea that we often use the English word time to refer to a bunch of, like, fundamentally different quantities and that, oftentimes, that can sort of blur into one another. So, the idea that a particular point in time might be different from a duration, might be different from a time of day, might be different to various other quantities that we refer to generically as time is something that's been in the back of my mind for quite a while. But I think turning that into a conference talk was a more recent idea.
STEPHANIE: Yeah, I'm curious, I guess, like, what was it that made you feel like, oh, like, this would be beneficial for other people? Did everything just come together, and you're like, oh, I finally have figured out time [laughs]; now I have this very clear mental model of it that I want to share with the world?
JOËL: I think it was sparked by a conversation I had with another member of the thoughtbot team. And it was just one of those where it's like, hey, I just had this really interesting conversation pulling on this idea that's been in the back of my mind for years. You know, it's conference season, and why not make that into a talk proposal?
As often, you know, the best talk proposals are, at least for me, I don't always think ahead of time, oh, this would be a great topic. But then, all of a sudden, it comes up in a conversation with a colleague or a client, or it becomes really relevant in some work that I'm doing. It happens to be conference season, and like, oh, that's something I want to talk about now.
STEPHANIE: Yeah, I like that a lot. I was just thinking about something I read recently. It was about creativity and art and how long a piece of work takes. And someone basically said it really just takes your whole life sometimes, right? It's like all of your experiences accumulated together that becomes whatever the body of work is. Like, all of that time spent maybe turning the idea in your head or just kind of, like, sitting with it or having those conversations, all the bugs you've probably encountered [laughs] involving date times, and all of that coalescing into something you want to create.
JOËL: And you build this sort of big web of ideas, not all of which makes sense to talk about in a conference talk. So, one of the classic sources of bugs when dealing with time are time zones and daylight savings. I've chosen not to include those as part of this talk. I think other people have talked about them. I think it's less interesting or less connected to the core idea that I have that, like, there are different types of time. Let's dig into what that means for us. So, I purposefully left that out. But there's definitely a lot that could be said for those.
STEPHANIE: Awesome. Well, I really look forward to watching your talk when it is released to the public.
JOËL: So, our listeners won't be able to tell, but we're on a video call right now. And I can see from your background that you are back at home in Chicago. It's been a few weeks since we've recorded together. And, in the last episode we did, you were trying out living somewhere in Upstate New York. How was that experience? And what has the transition back to Chicago been for you?
STEPHANIE: Yeah, thanks for asking. I was in Upstate New York for the whole month of September. And then I took the last two weeks off of work to, you know, just really enjoy being there and make sure I got to do everything that I wanted to do out there before I came home to, you know, figure out, like, is this a place where I want to move? And yeah, this is my first, like, real full week back at work, back at home.
And I have to say it's kind of bittersweet. I think we really enjoyed our time out there, my partner and I. And coming back home, especially, you know, when you're in a stage of life where you're wanting to make a change, it can be a little tough to be like, uh, okay, like, now I have to go back [laughs] to what my life was like before.
But we've been very intentional about trying to bring back some of the things that we enjoyed being out there, like, back into just our regular day-to-day lives. So, over the weekend, we were making sure that we wanted to spend some time in nature because that's something that we were able to do a lot of during our time in New York. And, yeah, I think just bringing a bit of that, like, vacation energy into day-to-day life so the grind of kind of work doesn't become too much.
JOËL: Anything in particular that you've tried to bring back from that experience to your daily life in Chicago?
STEPHANIE: Yeah. I think, you know, when you're in a new place, everything is very exciting and, like, novel. Before work or, like, during my breaks, I would go out into the world and take a little walk and, like, look at the houses on the street that I was staying at. Or there's just a sense of wonder, I suppose, where everywhere you look, you're like, oh, like, this is all new. And I felt very, like, present when I was doing that.
And over time, when you've been somewhere for a long time, you lose a little bit of that sense of, like, willingness to be open to new things, or just, like, yeah, that sense of like, oh, like, curiosity, because you feel like you know somewhere and, like, you kind of start to expect oh, like, this street will be exactly like how I've walked a million times [laughs].
But trying to look around a little more, right? Like, be a little aware and be like, oh, like, Halloween is coming around the corner. And so, enjoying that as, like, the thing that I notice around me, even if I am still on the same block, you know, in my same neighborhood, and, yeah, wanting to really appreciate, like, my time here before we leave. Like, I don't want to just spend it kind of waiting for the next thing to happen. Because I'm sure there will also be a time where I miss [laughs] Chicago here once we do decide to move.
JOËL: I don't know about you, but I feel like a sense of change, even if it is cyclical, is really helpful for me to kind of maintain a little bit of that wonder, even though I've lived in one place for a decade. So, I live in New England in the Northeast U.S. We have pretty marked seasons that change. And so, seeing that happen, you know, kind of a warm summer, and now we're transitioning into fall, and the weather is getting colder. The trees are turning all these colors.
So, there's always kind of within, like, a few weeks or a few months something to look forward to, something that's changing. Life never feels stagnant, even though it is cyclical. And I don't know if that's been a similar experience for you.
STEPHANIE: Yeah, I like that a lot because I think one of the issues around feeling kind of stuck here in Chicago was that things were starting to feel stagnant, right? Like, we're wanting to make a big change in our life. That's still on the table, and that's still our plan. But noticing change, even when you think like, ugh, like, this again? [laughs] I think that could really shift your perspective a little bit or at least change how you feel about being somewhere. And that's definitely what I'm trying to do, kind of even when I am in a place of, like, waiting to figure out what the next step is.
Speaking of change, I had a recent lesson learned or, I suppose, a story to share with you about a new insight or perspective I had about how I show up at work that I'd like to share.
JOËL: What is this new perspective?
STEPHANIE: Well, I guess, [chuckles], first of all, I'm curious to get your reaction on this. Have you ever heard anyone tell you estimates are lies?
JOËL: Yes, a lot. It's maybe cynical, but there are a lot of cynics in our industry.
STEPHANIE: That's true. Part of this story is me giving an estimate that was a lie. So, in some ways, there is a grain of truth to it [laughs]. But I wanted to share with you this experience I had a few weeks ago where I was in kind of a like, project status update meeting. And I was coming to this meeting for the first time actually. And so, it was with a group of people who I hadn't really met before. It was kind of a large meeting. So, there were a handful of people that I wasn't super familiar with. And I was coming in to share with this bigger group, like, how the work I had been doing was going.
And during that time, we had gotten some new information that was kind of making us reassess a few things about the work, trying to figure out, like, where to go next and how to meet our ultimate goal for delivering this feature. With that new information in mind, one of the project managers was asking me how long I thought it might take. And I did not have enough information to feel particularly confident about an answer, you know, I just didn't know.
And I mentioned that this was kind of my first time in this meeting. There were a lot of people I didn't know, including the person who was asking the question. And they were saying, "Oh, well, you can just guess or, like, you know, it doesn't have to be perfect. But could you give us a date?" And I felt really hard-pressed to give them an answer in that moment because, you know, I kind of was stalling a little bit. And there was still this, like, air of expectancy.
I eventually, I have to say, I made something up [laughs]. I was like, "Well, I don't know, like, three weeks," you know, just really pulling it out of thin air. And, you know, that's what they put down on the spreadsheet, and then they moved on [laughs] to the next item. And then, I sat there in the rest of the meeting.
And afterwards, I felt really bad. I, like, really regretted it, I think, because I knew that the answer I gave was mostly BS, right? Like, I can't even say how I came up with that. Just that I, like, wanted to maybe give us some extra time, in case the task ends up being complicated, or, you know, there are all these unknowns. But yeah, it really didn't feel good.
JOËL: I'm curious why that felt bad. Was it the uncertainty around that number or the fact that the number maybe you felt like you'd given, like, a ridiculously large number? Typically, I feel like when estimates are for a story, it's, like, in the order of a few days, not a few weeks. Or is it something else, the fact that you felt like you made it up?
STEPHANIE: I think both, where it was such a big task. The larger and higher level the task is, the harder it is to come up with an answer, let alone an accurate one. But it was knowing that, like, I didn't have the information. And even though I was doing as they asked of me [chuckles], it was almost like I lost a little bit of my own integrity, right? In terms of kind of based on my experience doing software development, like, I know when I don't know, and I wasn't able to say it. At least in that moment, didn't feel comfortable saying it.
JOËL: Because they're not taking no for an answer.
STEPHANIE: Yeah, yeah, or at least that was my interpretation of the conversation. But the insight or the learning that I took away from it was that I actually don't want to feel that way again [laughs], that I don't want to give a lie as an estimate because it didn't feel good for me. And the experience that I have knowing that I don't have an answer now, but there are, like, ways to get the answer, right?
What I wish I had said in that meeting was that I didn't know, but I could find out, or, like, I would let them know as soon as I did have more information. Or, like, here is the information that I do need to come up with something that is more useful to them, honestly, and could make it, like, a win for all of us. But yeah, I've been reflecting on [chuckles] that a lot. Because, in a sense, like, I really needed to trust myself and, like, trust my gut to have been able to do my best work.
JOËL: I wonder if there's maybe also a sense in which, you know, generally, you're a very kind of earnest person. And maybe by giving a ridiculous number there just to, like, check a box, it felt like you gave way to a certain level of cynicism that wasn't, like, true to who you are as a person.
STEPHANIE: Yeah, yeah, that feels real [laughs].
JOËL: Have you ever done estimation sessions where you put error bars on your number? So, you say, hey, this is my estimate, but plus or minus. And, like, the more uncertainty there is around a number, the larger those plus or minus values are to the point where I could imagine something as ridiculous as like, oh, this is going to take three weeks, plus or minus three weeks.
STEPHANIE: I like that. That's funny. No, I have not ever done that before or even heard of that. That is a really interesting technique because that seems just more real to me, where, again, people have different opinions about estimation and how effective or useful it is. But for organizations where, like, it is somewhat valuable, or it is just part of the process, I like the idea of at least acknowledging the uncertainty or the ambiguity or, like, the level of confidence, right? That seems like an important piece of context to that information.
JOËL: And that can probably lead to some really interesting conversations as well because just getting a big number by itself, you might have a pretty high certainty. I mean, three weeks is big enough that you might say, okay, there's definitely going to be some fuzziness around that. But getting a sense of the certainty can, in certain contexts, I find, drive really interesting conversations about why things are uncertain.
And then, that can lead to some really good conversations around scoping about, okay, so we have this larger story. What are the elements of it that are uncertain that you might even talk in terms of risk? What are the risky elements of this story or maybe even a project? And how do we de-risk those? Is there a way that we could remove maybe a small part of the story and then, all of a sudden, those error bars of plus or minus three weeks drop down to plus or minus three days? Because that might be possible by having that conversation.
STEPHANIE: Yeah, I like what you said about scope because the way that it was presented as this really big chunk of work that was very critical to this deadline, there was no room to do scope, right? Because we weren't even talking about what makes up this feature task. We hadn't really broken it down.
In some ways, I think it was very, like, wishful, right? To be like, oh yeah, we want this feature. We're not going to talk too much about, like, the specific details [laughs], as opposed to the idea of it, right? And that, I think, is, you know, was part of what led to that ambiguity of, like, I can't even begin to estimate this because, like, it could mean so many different things.
JOËL: Right. And software problems, often, a slight change in the scope can make a massive change in complexity. I always think of a classic xkcd comic where two people are talking about a task, and somebody starts by describing something that kind of sounds complex. But the person implementing it is like, "Oh yeah, no, that's, you know, it's super easy. I can do that in half a day." And then, like, the person making the ask is like, "Oh, and, by the way, one small detail," and they add, like, one small thing that seems inconsequential, and the person is just like, "Okay, sorry, I'm going to need a research team and a couple of PhDs. And it's going to take us five years."
STEPHANIE: That's really funny. I haven't seen this comic before, but I need to [laughs] because I feel that so much where it's like, you just have different expectations about how long things will take. And I think maybe that is where, like, I felt really disappointed afterwards. Because in my inability to, like, just really speak up and say, like, "In my experience, like, this is kind of what happens when we don't have this information or when we aren't sure," yeah, I just wasn't able to bring that to the table in that, you know, meeting.
And I really am glad we're having this conversation now because I've been thinking about, like, okay, when I find myself in this position again, how would I like to respond differently? And even just that comic feels really validating [laughs] in terms of like, oh yeah, like, other people have experienced this before, where when we don't have that shared understanding or, like, if we're not being super transparent about how long does a thing really take, and why does it make it complex, or, like, what is challenging about it, it can be, like, speaking in [chuckles] two different languages sometimes.
JOËL: I think what I'm hearing almost is that in a situation like what you found yourself in, you're almost sort of wishing that you'd picked one extreme or the other, either sort of, like, standing up to—I assume this is a project manager or someone...to say, "Look, there's no number I can give you that's going to make sense. I'm not going to play this game. I have no number I can give you," and kind of ending it there.
Or, on the other hand, leaning into, say, "Okay, let's have a nuanced conversation, and we'll try to understand this. And we'll try to maybe scope it and maybe put some error bars on this or something and try to come up with a number that's a little bit more realistic." But by kind of, like, trying to maybe do a middle path where you just kind of give a ridiculously large number that's meaningless, maybe everybody feels unfulfilled, and that feels, like, maybe the worst of the paths you could have taken.
STEPHANIE: Yeah, I agree. I like that everyone [laughs] feels slightly unfulfilled point. Because, you know, my estimate is likely wrong. And, like, what impact will that have on other folks and, you know, their work?
While you were saying, like, oh yeah, here were the kind of two different options I could have chosen, I was thinking about the idea of, like, yeah, like, there are different strategies depending on the audience and depending who you're working with. And that is something I want to keep in mind, too, of, like, is this the right group to even have the, like, okay, let's figure this out conversation? Because it's not always the case, right? And sometimes you do need to just really stand firm and say, like, "I can't give you an answer. And I will go and find the people [laughs] who I can work this out with so that I can come back with what you need."
JOËL: And sometimes there may be a place for some sort of, like, placeholder data that is obviously wrong, but you need to put a value there, as long as everybody's clear that that's more or less what's happening. I had to do something kind of like that today. I'm connecting with a third party over SAML for authentication, using the service Auth0. And this third party I'm talking to...so there's data that they need from me, and there's data that I need from them. They're not going to give me data until I give them our data first, so this is, like, you know, callback URLs, and entity IDs, and things like that that you need to pass.
In order to have those, I need to stand up a SAML connection on the Auth0 side of things. In order to create that, Auth0 has a bunch of required fields, including some of the data that the third party would have given me. So, we've got a weird thing where, like, I need to give them data so they can stand up their end. But I can't really stand up my end until they give me some data.
STEPHANIE: Sounds like a circular dependency, if I've ever heard one [laughs].
JOËL: It kind of is, right? And so, I wanted to get this rolling. I put in a bunch of fake values for these callback URLs and things like that in the places where it would not affect the data that I'm giving to the third party. And so, there's a metadata file that gets generated and stuff like that. And so, I was able to get that data and send it over. But I did have to put a callback URL whose domain may or may not be example.com.
STEPHANIE: [laughs] Right.
JOËL: So, it is a placeholder. I have to remember to go and change it later on. But that was a situation where I felt better about doing that than about asking the third party, "Hey, can I get your information first?"
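A rough sketch, in Ruby, of the kind of placeholder setup Joël describes; the setting names and values here are hypothetical illustrations, not Auth0's actual configuration fields:

# Hypothetical SAML settings while waiting on the third party's real values.
# The callback URL is an obviously fake placeholder so it can't be mistaken
# for real configuration and is easy to find and replace later.
saml_settings = {
  entity_id: "urn:example:our-app",              # our side; safe to share now
  callback_url: "https://example.com/saml/TODO"  # PLACEHOLDER until the third party sends theirs
}

warn "SAML callback_url is still a placeholder" if saml_settings[:callback_url].include?("example.com")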
STEPHANIE: Yeah, I like that as sometimes, like, you recognize that in order to move forward, you need to put something or fill in that gap. And I think that, you know, there was always an opportunity afterwards to fix it or, like, to reassess and revisit it.
JOËL: With the caveat that, in software, there's nothing quite so permanent as a temporary fix.
STEPHANIE: Oof, yeah [laughs]. That's real.
JOËL: So, you know, caution advised, but yes. Don't always feel bad about placeholders if it allows you to unblock yourself.
STEPHANIE: So, I'm really glad I got to bring up this topic and tell you this story because it really got me thinking about what estimates mean to me. I'm curious if any of our listeners if you all have any input. Do you love estimates? Do you hate them? Did our conversation make you think about them differently? Feel free to write to us at [email protected].
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeeeee!!!!!
AD:
Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us.
More info on our website at tbot.io/referral. Or you can email us at [email protected] with any questions.
Stephanie is engrossed in Kent Beck's Substack newsletter, which she appreciates for its "working thoughts" format. Unlike traditional media that undergo rigorous editing, Kent's content is more of a work-in-progress, focusing on thought processes and evolving ideas. Joël has been putting a lot of thought into various tools and techniques and realized that they all fall under one umbrella term: analysis.
From there, Stephanie and Joël discuss all the productivity tricks they like to use in their daily workflows. Do you have some keyboard shortcuts you like? Are you an Alfred wizard? What are some tools or mindsets around productivity that make YOUR life better?
Transcript:
AD:
Ruby developers, The Rocky Mountain Ruby Conference returns to Boulder, Colorado, on October 5th and 6th. Join us for two days of insightful talks from experienced Ruby developers and plenty of opportunities to connect with your Ruby community.
But that's not all. Nestled on the edge of the breathtaking Rocky Mountains, Boulder is a haven for outdoor lovers of all stripes. Take a break from coding. Come learn and enjoy at the conference and explore the charm of Downtown Boulder: eclectic shops, first-class restaurants and bars, and incredible street art everywhere. Immerse yourself in the vibrant culture and the many microbrew pubs that Boulder has to offer.
Grab your tickets now at rockymtnruby.dev and be a part of the 2023 Rocky Mountain Ruby Conference. That's rockymtnruby.dev, October 5th and 6th in Boulder. See you there.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I have a new piece of content that I'm consuming lately. That is Kent Beck's Substack [chuckles], Kent Beck of Agile Manifesto and Extreme Programming notoriety. I have been really enjoying this trend of independent content creation in the newsletter format lately, and I subscribe to a lot of newsletters for things outside of work as well. I've been using an RSS feed to like, keep track of all of the dispatches I'm following in that way so that it also kind of keeps out of my inbox. And it's purely just for when I'm in an internet-reading kind of mood.
But I subscribed to Kent's Substack. Most of his content is behind a subscription. And I've been really enjoying it because he treats it as a place for a lot of his working thoughts, kind of a space that he uses to explore topics that could be whole books. But he is still in the phase of kind of, like, thinking them through and, like, integrating, you know, different things he's learning, and acknowledging that, like, yeah, like, not all of these ideas are fully fleshed out, but they are still worth publishing for people who might be interested in kind of his thought process or where his head is at.
And I think that is really cool and very different from just, like, other types of content I consume, where there has been, like, a lot of, especially more traditional media, where there has been, like, more editing involved and a lot of time and effort to reach a final product. And I'm curious about this, like I mentioned, trend towards a little less polished and people just publishing things as they're working through them and acknowledging that the way they're thinking about things can change over time.
JOËL: It sounds like this is kind of halfway between a book which has gone through a lot of editing and, you know, a tweet thread, which is pure stream of consciousness.
STEPHANIE: Yeah, that's a really great insight, actually. And I think that might be my sweet spot in terms of things I enjoy consuming or reading because I like that room for change and that there is a bit of a, you know, community aspect to Substack where you can comment on posts. But, at least in my experience, it has seemed, like, relatively healthy because, you know, you're kind of with a community of people who are at least invested or willing to pay [chuckles] for the content. So, there is some amount of good faith involved.
His newsletter title itself it's called "Tidy First?" And so, that almost implies that it's, like, something he's still exploring or experimenting with, which I think is really cool. It's not like a I have discovered, like, the perfect way to do things, and, you know, you must always tidy first before you do your software development. He's kind of in the position of, this is what I think works, and this is my space for continuing to refine this idea.
JOËL: I'm curious: are there any sort of articles that you've read or just thoughts in general that you've seen from Kent that are particularly impactful or memorable to you?
STEPHANIE: Yeah. One I read today during my investment time is called Accountability in Software Development. And it was a very interesting take on the idea of accountability, not necessarily, like, when it's forced by others or external forces like a manager or, you know, your organization, but when it comes from yourself. And he describes it as a way to feel comfortable and confident in the work that he's doing and also building trust in himself and in his work but also in his teams.
By being transparent and literally accounting for the things that he's doing and sharing them, communicating them publicly, that almost ends up diminishing any kind of, like, distrust, or shame, or any of those weird kind of squishy things that can happen when you hide those things or, like, hide what you're doing. It becomes a way to foster the good parts of working with other people but not in a necessarily like, resentful way or in a hierarchical way. I was really interested in the idea of accountability, ultimately, like, for yourself, and then that ends up just propagating to the team.
JOËL: That's a really interesting topic because I think it sort of sits at the intersection of the personal and the technical.
STEPHANIE: Yeah, absolutely. He mentions more technical strategies or tasks that kind of do the same thing. You know, he mentions test-driven development, as well as, like, a way of holding yourself accountable to writing software that, you know, doesn't have bugs in it. So, I think that it can be applied to, you know, exactly both of those, like, interpersonal stuff and also technical aspects too, anyway, that's what's new in my world. Joël, what about you?
JOËL: So, this year, I've been putting a lot of thought into a variety of tools and processes. And I think I've come to the realization that they all really fall under one kind of umbrella term, and that would be analysis. It's a common step in some definitions of the traditional software development lifecycle. And it's where, after you've kind of gathered the requirements, you try to break them down and understand what exactly that means from a technical perspective, what needs to happen. And so, a lot of the things that have been really fascinating to me this year have been different techniques that I can use to become better at that sort of phase.
STEPHANIE: Wow. That's very powerful, I think. And honestly, the first thing that comes to mind is, how do you make time for it?
JOËL: I think we all do it to a certain extent. You know, you pick up a ticket, and there is a prose description of some work to be done, hopefully not telling you directly, like, just go make a change to this class, but here's a business problem to be solved. And then you have to sort of figure out how to break it down. So, this can be as simple as, oh, what objects, what classes do I need to introduce for this change? But it might be more subtle in terms of thinking, okay, well, what are the edge cases I need to think about? Where are things that could fail, and how am I going to handle failure?
So, there's a variety of techniques that you can use to get better at all of these. You can use them kind of at the micro level when thinking about just a ticket. You can use them when working on a larger epic, a larger initiative, a whole project because I think analysis fits into kind of all of these levels. And so, I think those are the techniques that have been most exciting to me this year and that have really connected.
STEPHANIE: That is very exciting. It's triggering a lot of thoughts for me about how I incorporate analysis into my work and how that has actually evolved; where I think before, earlier in my career, I assumed that the analysis had been done by someone else who knew better than me or who knew more than me. And that by the time that you know, a piece of work kind of landed in my lap, I was like, okay, well, I just want to know what to do, right? Like, I want someone else to tell me what to do [laughs]. But now I think I have taken it upon myself to do more of that and, like, have realized that it's part of my role.
And sometimes it will now be kind of a flag or, like, a signal to me when that hasn't been done. And I can tell when I receive a ticket, and it's, like, maybe missing the business problem or doesn't have enough information. And determining whether that is information that I need to go and find out, or if there's someone else who I can work together with to do that analysis with, or having a better understanding of, like, what is within my realm of analysis to do, and what I need to encourage other people to do analysis for before the work is ready for me.
JOËL: I think there is an interesting distinction between more traditional requirements gathering and analysis, where traditional requirements gathering is getting all that business problem information from product people, from customers, things like that. The analysis step is often a little bit more about breaking down a business problem into, like, what are the technical ramifications of that?
But there can be a little of a synergy there where sometimes, once you start exploring the technical side of it, it might bring up a lot of edge cases that have impacts on the product side, on the business side. And then you have to go back to the businesspeople and say, "Hey, we only talked about sort of the happy path. What happens if payment is declined? What do we want to do there?" And now we're back in sort of that requirements gathering phase a little bit more rather than purely analysis.
But it can come out of an analysis phase where you've done maybe some state machine diagramming to try to better understand how things flow from one phase to another. Or maybe you were building out a truth table for some complex logic and realized, wait a minute, there's an edge case I didn't handle. It's not a strictly linear process. The two kind of feed into each other and, honestly, into the implementation side as well.
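As a made-up illustration of the truth-table idea, enumerating every combination of a couple of hypothetical flags is a quick way to spot the case nobody talked about:

# Enumerate every combination of two made-up flags to surface unhandled cases.
[true, false].product([true, false]).each do |paid, shipped|
  status =
    case [paid, shipped]
    when [true, true]   then "complete"
    when [true, false]  then "awaiting shipment"
    when [false, false] then "awaiting payment"
    else "unhandled" # [false, true]: shipped but never paid -- the edge case the table exposes
    end
  puts "paid=#{paid} shipped=#{shipped} => #{status}"
end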
STEPHANIE: Yeah, I'm with you there. I'm thinking about a piece of work that I've been working on, where we were thinking of doing a database migration and adding some new columns to a table. But the more I dug into it, the more I realized that that was the first idea or the immediate idea that came from a need that I had limited information about. And what was nice was I was able to sit on it for a little bit, get some input from others. And I realized that there were all of these things that I couldn't answer yet.
And someone, I think literally asked in a code review if you've already done this analysis, between knowing that these columns will be the kind of extent of what you need versus, you know, will the data end up needing more columns? And should the data model be a little more flexible to that potential change? And they said, "If you had already done this analysis, then, like, otherwise, it looks good to me." And I was like, "Oh, I didn't." [laughs]
And that encouraged me to go back to some cross-functional members of the team and ask more questions. And that has taken more time. That was another challenge that I had to encounter was saying like, "Yeah, we started this, and we made some progress. But actually, we need to revisit a few things, like a few parts of the premise, before continuing on."
JOËL: Are there any techniques or approaches that you particularly enjoy when it comes to doing an analysis or that maybe are go-to's for you?
STEPHANIE: Reminding myself to revisit my assumptions [laughs], or at least even starting by being really clear about what I'm assuming, right? Because I think that has to happen first before you can even revisit them is having an awareness of what assumptions you're making. And I actually think this is where collaboration has been really helpful, where I've been working on this task with another developer on my team. And when we've been talking about it, I found myself saying, "Oh, I'm assuming this," right? Or, like, I'm assuming that the stakeholder knows what they need [laughs]. And that's why we're going to do it this way, where we were kind of given the pieces of data that we should be persisting.
And the more that we had that conversation, the more I realized, like, actually, like, I'm not convinced that they have that full picture of, like, what they need in the future. And because we're making this decision now, like, we are turning, you know, literally from, like, the abstract into, like, a concrete change [chuckles] in the database, now seems like...now that we're faced with that decision, it seems like a good time to revisit the assumption that I was making.
And that has proved helpful in making ultimately, like, a more informed decision about, like, which way to go technically. But I personally have found a lot of value in verbally processing it with someone else. It's a lot harder for me to identify them, I think, when I'm in my own head.
JOËL: That's really interesting that you keyed in on the idea of assumptions. I typically think of assumptions being, like, so important mostly in debugging rather than analysis. In fact, I wrote a whole blog post about why listing your assumptions is so important as part of your debugging process.
Now, like, my mind is spinning a little bit. I'm like, oh, I wonder if I could use some of those, like, debugging techniques as part of more of my analysis step. And could that make me better? So, I think you've put me on a whole, like, thought track of, like, oh, how many of these debugging techniques can I use to make my analysis better? So, that's really cool.
STEPHANIE: Yeah, and vice versa. So, a few minutes ago, I'd asked you how you make time for that analysis. Because I was thinking that, you know, in my day-to-day work, I'm juggling so many things. I often find myself running out of time and not able to do all of it. And that, I think, leads us really well into our topic for this episode, which is productivity tricks and ways that we make the most use out of our limited time.
JOËL: I think I may have a maybe a bit of a controversial opinion on productivity tricks. I feel like a lot of productivity tricks don't actually make me that much faster. Like, maybe I save a couple of minutes a day, maybe 5 or 10 a day with productivity tricks. And, sure, that adds up over the course of a year. But there are other things I could do in terms of, like, maybe better habits, better managing of my schedule that probably have a much more significant impact.
Where I think they are incredibly valuable, though, is not directly making me better with my time management but managing my focus, allowing me to kind of keep in the flow and get things done without getting sidetracked. Or just kind of giving me the things that I need in the moment that I need them so that I'm not getting on to a subtask that I don't really need to be doing.
STEPHANIE: Yeah. I really like that reframing of what helps you focus because as I was brainstorming ways that I stay on track for my work, I think I ended up discovering a similar theme where it wasn't so much, like, little snippets and tools for me, as opposed to how I structure all of the noise, I guess, in my day-to-day work and being able to see what it is that I need to care about the most right now.
JOËL: I think one of the things that I've tried to do for myself is to make it easy to have access to the information and the tools that I need. Probably one of the most useful bits of that is a combination of the documentation viewer Dash and the...I'm not sure what it would be called...launcher, productivity manager tool for Mac, Alfred. With a CMD + Space, it brings up this bar I can type into. And then you can trigger all sorts of things from there.
And so I can type the name of a language or some kind of keyword that I have set up and the name of a method. And then, all of a sudden, it'll show me everything like, you know, top five results. And I can hit Enter, and it will bring up the documentation for that.
So, if I want to say, oh yeah, what is the order of the arguments for Enumerable's inject method (which I constantly forget)? You know, it's a few keyboard shortcuts, you know, CMD + Space Ruby Enumerable inject. It's fuzzy finding, so I probably don't even need to type all of that. Hit Enter, and I have the documentation right in front of me. So, that makes it so that I can get access to that with very little amount of context shifting.
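For reference, the detail he keeps looking up: inject's block takes the accumulator first and the current element second.

# Enumerable#inject: block arguments are (accumulator, element).
[1, 2, 3, 4].inject(0) { |sum, n| sum + n }   # => 10
# Or pass a symbol instead of a block:
[1, 2, 3, 4].inject(:+)                       # => 10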
STEPHANIE: Yeah. I like what you said about how the tools are really helping you, like, narrow down, like, the views of, like, what is most important for you in that moment, and it's doing a little bit of that work for you. I think the couple of tools and apps that I actually did want to share are kind of similar.
One MacOS app I really like is called Rectangle for windows management, which is really crucial for me because I don't enjoy like, swiping and tabbing between applications. I would much prefer just seeing, usually, just two things. I try to keep my screen limited to two different windows at once because once it gets more than that, I'm already just, like, overwhelmed [laughs].
And as I'm trying to focus a little bit more on just having, like, one thing be the focus of my attention at a time, Rectangle has been really nice in just really quickly being able to do my windows resizing. So, I usually have, like, either things split between my screen half and half. Like, right now, I have your face on my screen as we record this podcast, and then my notes editing software for taking notes about what we talk about.
During my development workflow, it's usually, you know, just my editor, my terminal, and then maybe my browser ends up being, like, the thing that I tab into. But I'm able to just, like, set that all up, and as I need those windows to change depending on what my focus has been shifted to, to kind of make more space for whatever I'm reading, or looking at, or processing visually. The keyboard shortcuts that Rectangle...that I have now, you know, ingrained into my fingers [laughs] has been really helpful. It's like, I'm not fussing with just, like, too many things open.
JOËL: I have yet to, like, dive into a window manager. I'm still in the clunky world of CMD tabbing. But maybe I should give that a try.
STEPHANIE: For me, it has helped even just, like, identify the things that I need to give more space to on my screen and aggressively, like, cut everything else [laughs]. So, that's a really great MacOS app.
And then, the other one is actually kind of a similar vein. It's called Meeter, M-E-E-T-E-R. And it has been really helpful for managing my meetings, especially my video call meetings where the video call software that's being used for the meeting may be variable. And also, when I have multiple email addresses that meetings are being sent to, you're able to sign into all of your calendar accounts. And it provides a really nice view of all of your meetings.
It has a really, like, minimal, I guess, design in your toolbar, where it shows you how many minutes until your next meeting. And from that toolbar button, you can click to go to the video conferencing software directly for whatever meeting is up next. And you don't have to, you know, scramble to open Google Meet, or Zoom, or Webex, or whatever it is. And that's [chuckles] been nice, again, just kind of, like, cutting down on the amount of stuff that I need to remember and shift through to get to my destination.
JOËL: I think I'm hearing kind of two themes emerge out of some of the things that we've shared. And I'd like to maybe explore them a little bit; one is the power of keyboard shortcuts. And I think that's maybe what a lot of us think of when we think of productivity apps, at least developers, right? We love keyboard shortcuts.
And then, secondly, I think I'm hearing automation, right? So, you don't have to go through and, like, find that email or calendar link to find the Zoom link or whatever. It shows up in your toolbar. So, maybe we can dig into a little bit of the idea of keyboard shortcuts. Are you a person who like customizes a lot of keyboard shortcuts? And is that a part of your kind of productivity setup?
STEPHANIE: Well, a while ago, we had talked about not keyboard shortcuts in the context of productivity, but I think I had mentioned that I was trying to use my mouse less [chuckles] because I was getting a little bit of wrist pain. And I think that actually has rolled into a little bit of, you know, just, like, more efficient navigation on my computer.
I think my keyboard shortcut usage is mostly around window management, like I mentioned. I do feel like I have, like, a medium amount of efficiency in my editor. Sometimes, when I'm pairing with other people who use Vim, I'm, like, shook by how fast they're moving. And I have figured out what works for me in VS Code, and I don't think I need to get any faster. You know, I've just accepted that [laughs].
In fact, it's almost, like, the amount of speed and friction that I have, in my experience, is actually a little more beneficial for the speed that my mind works [laughs]. It kind of helps me slow down when I need to think about what I'm doing as opposed to just, like, being able to, like, do anything at my fingertips, and kind of my brain is just not able to think that fast.
And then navigating Slack, which is where I also spend a lot of my time on my computer. Now, using Slack with my keyboard shortcuts has been really helpful because, again, I'm not, like, mindlessly browsing or clicking around. I'm just looking at my unread messages. One non-keyboard shortcut I really like with Slack is Command + K, which is the jump-to feature. And so, I'm using that to go to a specific channel that I know I'm looking for or my own personal DMs, where I keep a lot of notes as well. And, honestly, I think that's, like, the extent of my keyboard shortcut usage. I'm curious what your setup is in regards to that, though.
JOËL: I think I'm similar to you in that I have not kind of maxed out the productivity around keyboard shortcuts. You'd mentioned the jump to in Slack. Several pieces of software have something kind of like that. It might be some sort of omnibar, or a command palette, or something like that, where you really just need to know...CMD + K, or CMD + P, CTRL + P are common ones. Then you can sort of, like, type a few characters to just describe the thing you want to do, or a search you want to make, or something like that.
Just knowing that one keyboard shortcut for your one piece of software gets you, I don't know, 80% of the productivity that you want. It's kind of amazing. I love the idea of an omnibar.
STEPHANIE: Yeah, I hadn't heard of omnibar as a phrase before, but that feels very accurate. I like that a lot, too, where it's, like, oftentimes, I don't do whatever particular thing enough necessarily for it to justify a keyboard shortcut, for me at least. I'm still able to be fast enough to get to, like I said, that final destination or the action that I want to take with a more universal shortcut like that.
JOËL: In my editor...so I use Vim, and I got used to Vim's keyboard-based navigation. And that is something that I deeply appreciate, maybe not so much for speed but being able to almost kind of feel one with the machine. And the cursor moves around, and I don't have to, like, think about moving it. It's really a magical sort of feeling. And it's become so much muscle memory now that I can just sort of...the cursor jumps around, things change out. And I'm not, like, constantly thinking about it to the point where now, if I'm in any other editor, I really want to get those shortcuts or, I guess, maybe not shortcuts but a Vim-style navigation, keyboard-based navigation.
STEPHANIE: Yeah, it sounds like it's not so much the time savings but the power that you have or the control that you have over your tools.
JOËL: Yes. And I think, again, the idea of focus. Navigation has stopped becoming a thing where I have to actively think about it. And I feel like I really do just sort of think my fingers are on the keyboard. I'm not having to, like, do a physical motion where I switch my hands. Like, I'm typing, and I'm writing code, then I have to switch my hand away to a mouse to shift around or, like, move my hand off the home row to, like, find the arrow keys and, like, move around. I just kind of think, and the cursor jumps up. It's great.
Maybe I'd be the same if I'd put a lot of time into getting really good at, you know, maybe arrow-based navigation. I still think the mouse you have to move your hand off. It breaks just in the tiniest little way the flow. So, for me, I really appreciate being fully keyboard-based when I'm writing code.
STEPHANIE: Right. Being one with the keyboard. As you were talking about that, I very viscerally felt, you know, when you encounter a new piece of technology, and you're trying to navigate it for the first time, and you're like, wow, like, that takes so much mental overhead that it's, you know, just completely disruptive to the goal that you're trying to achieve with the software itself.
JOËL: Yeah, it is a steep learning curve.
So, we've talked about custom keyboard shortcuts in the editor. But it's common for people to augment their editor with plugins, maybe even some kind of, like, snippet manager to maybe expand snippets or to paste common pieces in. Is that something that you've done in your editor setup? I think you said you use VS Code as your sort of daily editor.
STEPHANIE: Yeah, that's right. I actually think I almost forgot about some of my little bits of automation because they are just so ingrained for me [laughs] that I don't have to think about them. But you prompting me just now reminded me that there are a few that I'd like to shout out.
Snippets-wise, I mostly use them for when I'm writing tests and just having the it blocks or the context blocks expand out for me so I don't have to do any of that typing of the setup there. And since I do use a terminal outside of my editor...I know that some people really like kind of having that integrated and being able to run tests even faster without having to switch to a different application, but I like having them separate.
There is a really great plugin called Go to Spec where you can be in any, you know, application code file, and it will pull up the spec file for you. I've been really enjoying that, and that is what helps my test writing be a little more automated, even though I'm having it in separate applications.
JOËL: That is really useful. So, as a Vim user, I also have a plugin that does something similar, where I can switch to what's considered the alternate for a particular file, which is typically the spec, or if I'm in the spec, it'll switch to the source file that the spec is testing.
STEPHANIE: And then, I do have one really silly one, which is that I got so sick and tired of not remembering how to, you know, type the symbols for string interpolation in Ruby that has also become a snippet where the hash key and the [inaudible 28:48] brackets can [laughs] populate it for me.
JOËL: I love it.
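Concretely, the expansions Stephanie describes look roughly like this in Ruby; the exact triggers and placeholders depend on her VS Code snippet definitions:

require "rspec/autorun"

RSpec.describe "snippet expansions" do
  # Roughly what an "it block" snippet expands into:
  it "does something" do
    name = "Stephanie"
    # And the string interpolation syntax the other snippet fills in:
    expect("Hello, #{name}!").to eq("Hello, Stephanie!")
  end
end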
So, Stephanie, I'd like to go back to something you were talking about earlier in the show. When you were sharing about what was new in your world and, you mentioned that you subscribe to the Substack and that you subscribe to, actually, a lot of newsletters, and you said something that really caught my attention. You were saying that you don't want these all cluttering up your email inbox. And instead, you send all of these to an RSS reader application. What kind of application do you like to use?
STEPHANIE: I use Feedbin for this. And I actually think that this was recommended by Chris Toomey back in the day on a previous Bike Shed episode before you and I hosted the show. But that has been really awesome. It has a just, like, randomly generated email address you can use when you sign up for newsletters. You use that instead.
And I really like having that distinction because I honestly treat my email inbox as a bit of a to-do list, where I am archiving or deleting a lot of stuff. And then the things that remain in my inbox are things that I need to either respond to, or do, or get back to in some way. And then yeah, when I've completed it, then that's when I archive or delete.
But now that we do have all this great content back in email form, I needed a separate space for that, where I similarly kind of treat it as, like, a to-read list. And yeah, like, I look at my unreads in the newsletter RSS reader that I'm using and go through that when I'm in a blog-reading kind of mood.
JOËL: I really like that separation because I'm kind of like you. I treat my inbox as a to-do list. And it's hard to have newsletters come in and, like, I'm not ready to read them. But I don't want them in my to-do, or, like, they'll just kind of sit there and get mixed in and maybe, like, filtered down to the bottom. So, having that explicit separation to say, hey, here's the place I go to when I am in a reading mood, then I can read things.
I think there's also I've sort of trained myself to only check my email during certain times. So, for example, I will not check my work email outside of working hours. But if I'm on the subway going somewhere and I've got some time where I could do some reading, it would probably be a good thing to be going through some kind of newsletter or something like that. So, I either have to remember to go back to it, or what I tend to do is just scroll Twitter and hope that someone has shared that link, and then I read it there, which is not a particularly effective way of doing things. So, I might try the RSS feed reader tool. What was it called?
STEPHANIE: Feedbin.
JOËL: Feedbin. All right, I might try to get into that.
STEPHANIE: Yeah, I look forward to hearing if that ends up working for you because I agree, having the two separate spaces has been really helpful because I don't want to get distracted by my email/to-do list inbox if I'm just wanting to do a bit of reading, enjoy some content.
So, one more theme around productivity that I don't think we've quite mentioned yet, but maybe we've talked a little bit around, is the idea that, at least for me, it's a product of time and energy. So, even if you have all the time in the world, you know, you can just stare into space or, like, stare at a line of code and not get [laughs] anything done.
JOËL: I know the feeling.
STEPHANIE: Right? I am kind of curious how or if you have any techniques for managing that aspect. When your focus is low like, how can you kind of get that back so that you can get back to doing your tasks or getting what you need to do done?
JOËL: If I have the time, taking a break is a really powerful thing, particularly taking a break and doing something physical. So, if I can go outside and take a walk around the block, that's really helpful. And if I need a shorter thing that can be done in, like, five minutes or something, I have a pull-up bar set up in my place. So, I'll just go up and do a few sets there and get a little bit of the heart rate slightly up, do a little bit of blood pumping. And that sometimes can help reset a little bit.
STEPHANIE: Nice. Yes, I'm all for doing something else [chuckles]. Even when you know that this is a priority or is kind of urgent or whatever, but you just can't get yourself to do it, I've found that asking myself the question, "What would make this task easier for me right now?" has been helpful during those moments. And, for me, that might be grabbing a friend, like, maybe I'm blocked because I'm really just unmotivated. But having someone along can kind of inject some of that energy for me.
And then, there's a really great blog post by a woman named Mandy Brown. It's called Energy Makes Time. And she talks about how doing the things that fill our cup, actually, you know, even though it seems like how could we possibly have time to be creative, or, like you said, maybe do something physical, those seem, like, lower on the priority list.
But when you kind of get to the point where you just feel so overwhelmed and can't do anything else, and you just go do those things that you know feel good for you, you kind of come back with a renewed perspective on your to-do list. And you can see, like, what things actually aren't that critical and can be taken off. Or you just find that you have the capacity or the energy to get the things that you are really dreading out of the way.
So, that has been really helpful when I just am feeling blocked. Instead of, like, feeling bad about how unproductive [chuckles] I'm being, I take that as a sign of an opportunity to do something else that might set me up for success later.
JOËL: Yeah. I think oftentimes, it's easy to think of productivity in terms of, like, how can I maybe eliminate some tasks that are not high value through clever automation, or keyboard shortcuts, or things like that? But oftentimes, it can be more about just sort of managing your focus, managing your energy. And by doing that, you might have a much higher impact on both how productive you feel—because that's an important thing as well, in terms of motivation—and, you know, how productive you actually are at getting things done.
STEPHANIE: Right. At least for me, like, not all TDM is bad and needs to be automated away, but, like, my ability to, like, handle it in the moment. Whereas yeah, sometimes maybe I've just run the same few lines that should be just a script [chuckles], that should just be, you know, one command, enough times that I'm like, oh, like, I can't even do this anymore because of just, like, other things going on. But other times, like, it's really not a big deal for me to just, you know, run a few extra commands. And, like, that is perfectly fine.
JOËL: I love writing a good Vim macro. Yeah. So, it's important to think beyond just the fun tools and the code that we can write. Kind of think a little bit more at that energy and that mental level.
That said, there are a ton of great tools out there. We've named-dropped a bunch of them in this episode. For our listeners who are wondering or who weren't, like, necessarily taking notes, we've linked all of them in the show notes: bikeshed.fm. You can find them there.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël describes an old-school object orientation exercise that involves circling nouns in a business problem description. The purpose is determining which nouns could become entities or objects in a system. Stephanie shares she's working from the Hudson Valley in New York as a trial run for potentially relocating there. She enjoys the rail trails for biking and contrasts it with urban biking in Chicago.
The conversation between Joël and Stephanie revolves around mentorship, both one-on-one and within a group setting. They introduce a new initiative at thoughtbot where team members pair up with principal developers for weekly sessions, emphasizing sharing perspectives and experiences.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I was recently having a conversation with a colleague about some old-school object orientation exercises that people used to do when trying to do more of the analysis phase of software, ones that I haven't seen come up a lot in the past, you know, 5, 10 years.
The particular one that I'm thinking about is an exercise where you write out the sort of business problem, and then you go through and you circle all of the nouns in that paragraph. And then, from there, you have a conversation around which one of these are kind of the same thing and are just synonyms? Which ones might be slight variations on an idea? And which ones should become entities in your system? Because, likely, these things are then going to be objects in the system that you're creating.
STEPHANIE: Wow, that sounds really cool. I'm surprised that it's considered old school or, I guess, I haven't heard of it before. So, it's not something in my toolbox these days. But I really like that idea. I guess, you know if you're doing it on pen and paper, it's obviously kind of timeless to me.
JOËL: And you could easily do it, you know, in a Google Doc and underline, or highlight, or whatever you want to do. But it's not an exercise that I see people really doing, even at the larger scale, but even at the smaller scale, where you have maybe a ticket in your ticketing system, and it has a paragraph there kind of describing what needs to be done. We tend to just kind of jump into, oh, we're going to build a story and do the work, and maybe not always think about what are the entities that need to come out of that.
STEPHANIE: I think the other thing that I really like about this idea is the aligning on shared vocabulary. So, if you find yourself using different words for the same idea, is that an opportunity to pick the vocabulary that best represents what this means? Rather than a situation that I often find myself in, where we're all talking about the same things but using different words and sometimes causing a little bit more confusion than I think is necessary.
JOËL: Definitely. It can also be a good opportunity to connect with the product or businesspeople around; hey, here are two words that sound like they're probably meaning the same thing. Is there a distinction in your business? And then, you realize, wait a minute, a shopping cart and an order actually do have some slight differences. And now you can go into those. And that probably sparks some really valuable learning about the problem that you're trying to solve that might not come up otherwise, or maybe that only comes up at code review time, or maybe even during the QA phase rather than during the analysis phase.
STEPHANIE: Yeah. I think that's really important for us as developers because, as we know, naming is often the hardest part of writing code, right? And, you know, at that point, you are making that decision or that distinction between maybe a couple of different terms that you're using to describe an idea and putting that down then will continue on to be read. And just propagating that down the line of, is this name actually what we mean? Or maybe we are using words that, at this lower level, make more sense, but when interacting or communicating with business stakeholders or product folks, they are using a different term.
And I really like the idea of that activity being a cross-functional one where you can kind of agree on how to move forward there. Because lately, I've been finding myself oftentimes using both words where the product folks are describing it this way, and then we've, on the engineering side, have decided that, okay, we're actually going to call our database table this other thing, and now having to type out both [chuckles] meanings each time because I know that my audience is in both camps.
JOËL: Yeah. There's, I think, a lot of value in using the business terms where possible. If you don't use them, there has to be a good reason. There's a slight distinction for the technical term. We're using it to say, hey, it's different from the business idea in interesting ways that only matter to the dev team.
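As a made-up illustration of where the noun-circling exercise can land: "shopping cart" and "order" sound like synonyms until the business distinction surfaces, at which point they become separate entities rather than one class:

# Hypothetical outcome of circling nouns in a business problem description:
# a cart is still editable, while an order is a snapshot taken at checkout.
class ShoppingCart
  attr_reader :line_items

  def initialize
    @line_items = []
  end

  def add(item)
    @line_items << item
  end
end

class Order
  attr_reader :line_items, :placed_at

  def initialize(line_items:, placed_at:)
    @line_items = line_items.freeze   # no longer editable after checkout
    @placed_at = placed_at
  end
end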
STEPHANIE: Is there a name for this activity?
JOËL: I don't know, just circling nouns or underlining nouns.
STEPHANIE: Cool. Maybe we can come up with something. [laughs] Or someone else can tell us if they know what this kind of exercise is called.
JOËL: Gotta name the naming activity. So, how about you, Stephanie, what's new in your world?
STEPHANIE: So, I have a pretty exciting life development to share. I am currently working from a different location than my home in Chicago. I'm in the Hudson Valley in New York for the next month because my partner and I are considering moving out here. And we are just kind of looking for a different pace of life a little bit. And we are taking this month as a trial run to see if we want to, you know, be out here permanently. And I've been having a great time so far.
One thing that I've really enjoyed is all of the rail trails out here. So, a lot of old railroad tracks have been repurposed for outdoor recreation, and they are great for biking, or running, or even just walking. And I've been able to hop on my bike, you know, and bike a few minutes, and then I'm on the trail and just kind of surrounded by trees and forests. And that's been really nice because I missed having access to nature kind of, like, right outside my door.
JOËL: So, you used to do quite a bit of urban biking in Chicago. But it sounds like now you're getting a chance to do more kind of nature biking.
STEPHANIE: Yeah, it's a big difference for me because urban biking was always pretty rough or a little scary just, you know, having to bike with traffic. And I got a lot better at it. But now I'm, you know, biking completely off the roads. And I don't have to worry too much about cars. And I can, you know, just enjoy the fresh air around me and just be a lot more relaxed, I think, than I was able to when I was commuting in the city.
JOËL: So, here's the real question. At this new location that you're staying at, do you have a bike shed?
STEPHANIE: Not yet. But we now could have a bike shed because there's a lot more space out here, too. So, I could theoretically have my bike shed in my nice, big yard right next to my garden. And these are all [laughs] the hopes and dreams I have for my future life.
JOËL: Before you build the bike shed, you can have six months of discussion about what color you want to paint it.
STEPHANIE: Yeah, that's why I have this podcast, actually. [laughter] So, look out for that in what's new in my world is considering paint colors for a theoretical future bike shed in a place where I yet don't live.
JOËL: You're going to become an expert in the Pantone color palettes.
STEPHANIE: I hope so. That would be a great addition to my title. So, another thing that's new in both of our shared worlds is a new initiative on Boost, the team we're on, that you have been involved in. It's pairing sessions with the principal developer.
JOËL: Yes. So, myself and another principal developer at thoughtbot have been doing weekly pairing sessions, where we take Tuesday afternoons and pair with one of the other members of the team on their client project, doing whatever. So, it's not a, like, pull someone in when you need help, or anything that's more kind of targeted in that way. It's more of a you sign up for this ahead of time. And you just know that on this week, you get someone to pair with you who can hopefully bring in a different perspective, a lot of experience, and pair with you on your particular project.
STEPHANIE: Yeah. I'm so excited about this initiative because I've not been staffed on a project with you before or the other principal developer who's involved. And I have really wanted to work with you all and be able to learn from you. And I think this is a really cool way to make that expertise more accessible if you just don't happen to be working on a project together.
JOËL: Yeah. One of the challenges, I think, of the principal role is that we want it to be a role that has a high impact on the team as a whole. But also, we are people who can be staffed on pretty much any client project that gets thrown at us and can easily be staffed on projects that require solo work. Whereas there are some teammates, I think it's the developer position, where we guarantee they're never staffed solo.
And so, that can often mean that our principals get staffed on to the really technically challenging problems or the solo problems, but then there's maybe not as much room to have interactions with the rest of the team on a day-to-day basis.
STEPHANIE: Yeah. I think the key word you said that had me nodding my head was impact. And I'm curious what your hopes are for this effort and what kind of impact you want to be having for our team.
JOËL: I think it's impact on a few different levels, definitely some form of knowledge sharing. Myself and the other principal developer have decade plus experience each in the field, have deep knowledge in a lot of different things like test-driven development, object modeling, security, things like that that build on top of kind of more basic developer skills that we all have. And those are all, I think, great ways that we can support our team if there's any interest in those particular skills or if they come up on a particular project.
And knowledge sharing works both ways, right? I think anytime you're pairing with someone else, there's an opportunity to learn on both sides. And so, I think a really important thing when you're pairing with someone, even if you're kind of maybe more explicitly the mentor figure, is to kind of keep that open mind and look for not only what can I give, how can I teach, but what can I learn from this other person?
STEPHANIE: Yeah, absolutely. I guess I'm wondering...and I know this is a pretty new programming so far, but is there anything you've learned or anything that surprised you that you weren't expecting when you, you know, first conceived of the idea based on how it's been going?
JOËL: Something that really surprised me, there's some feedback I got after one of the pairing sessions, where this colleague who we'd paired together...and I felt like I hadn't contributed a ton, like, this colleague just really had it and was just kind of going through and doing things. So, I was kind of, like, leaving that pairing session being like, oh, I don't know if I added a ton of value here.
And then, this colleague reached out to me and said, "Oh, you know, I felt, like, this huge boost of confidence because we were pairing together, and you were just kind of nodding along and basically saying yes to all of my choices." And I hadn't really considered that that can be a really valuable aspect of this sort of pairing. Sometimes you know the right thing to do, like, you've got it. But it's really easy to second-guess yourself. And just having someone along to, you know, give you that thumbs of like, yeah, this is the thing to do, can give you that confidence boost and kind of keep you moving in a way that feels really positive.
STEPHANIE: Wow. I love that. That's really powerful, and I get that. Because, you know, obviously, it's very valuable to have your colleagues help generate different ideas that you might not have considered. But that validation can be really useful. And, you know, that's just not something you get when with a rubber duck. [laughs] The rubber duck can't respond, and [laughs] nod along.
So, I think that's really cool that you were able to provide some of that confidence. And, in fact, I think that is contributing to their growth, right? In terms of helping identify, you know, those aspects that they're already really strong at, as well as developing that relationship so they know you're available to them next time if they do need someone to either do that invalidating or validating of an idea.
JOËL: Yeah, there's a lot of power, I think, in kind of calling out people's strength and providing validation in a way that can really help someone get to the next level in their career. And it feels like such a simple thing. But yeah, sometimes you can have the biggest impact not by kind of going in and helping but just kind of maybe, like, standing back a little bit and giving someone a thumbs up. So, definitely one of the biggest surprises or, I think, one of the biggest lessons learned for me in the past few weeks of doing this.
STEPHANIE: That's very cool.
JOËL: So, Stephanie, you've also been doing some pairing or some mentoring from what I hear.
STEPHANIE: Yeah. So, on my current client work, I have been pairing with a new hire on my client team who recently graduated from college. And this is his first job in software development. And I have been thinking and learning a lot through this experience because one of my goals was to get better at coaching, specifically the idea of asking guiding questions to help someone, you know, arrive at their own solution instead of, you know, making the suggestions myself or kind of dictating where to go.
And this has kind of been a progression for me of kind of starting from, well, you know, I have the way that I want to do it. And the person I'm working with who maybe has less experience, like, they might not know where to go. So, we're just going to go along with my idea. And then the next step was offering a few different ideas, like a menu of options and kind of having that discussion about which way to go.
And now, I'm really wanting to practice letting someone else lead entirely and helping them start thinking about the right things but ultimately not giving them the answer. But hopefully, like, the questions I've been asking mean that they are able to get to a well-informed answer where they've thought through some of the things that I would think about if I were in the position of making the decision or figuring out how to implement.
JOËL: Is this mostly asking questions to get them to think about edge cases, or is this, like, a Socratic approach to teaching?
STEPHANIE: Could you describe Socratic approach for me?
JOËL: So, the Socratic approach is a teaching approach that is question-based, where you kind of help the student come to the conclusions themselves by answering questions rather than by telling them the answer.
STEPHANIE: Oh, interesting. I think a little bit of both. Where it's true, I am able to see some edge cases that folks with less experience might not consider because they just haven't had to run into them before or fight the fires when [laughs] their code in production ends up being a big issue or causes a bug.
But I think that's just part of the work where there is kind of, like, a default dynamic that might be fallen into when two people are working together, and their experience levels differ, where the person who has less experience is wanting to lean on the more senior person to tell them where to go, or to expect to be in that position of just learning from them and not necessarily doing as much of the active thinking. But I was really interested in flipping that and doing a bit of a role reversal because I think it can be really impactful and, you know, help folks earlier in their career, like, really level up even more quickly than just watching, but actually doing.
And so, the questions I've been asking have been a lot more open-ended in terms of, like, asking, "What do you think about this code that we're looking at?" Or, like, "Where do you want to go next?" And based on, you know, their answers, digging in a little more, and, at the end, maybe, like, giving that validation that we were talking about earlier. I was like, "Great. Like, I think that's a great path forward," or, "I think that's a good idea to spend our time on right now."
But the open-ended questions, I think, are also ones that I also would have liked when I was in that position of learning, where having someone trust that I could draw on my past experience but, like, also knowing that they were there to support and maybe orient me if I ended up straying too far off the path.
JOËL: How have you navigated situations where maybe you're asking a question about "What do you want to do next?" and they pick something that maybe would work but is not your sort of preferred approach, or maybe something that seems like it would work well enough but, you know, there's maybe a better approach? How do you navigate that? Do you let them take their approach and maybe kind of let them run into some of the edge cases and problems and then say, "Hey, let me show you something new"? Do you probe a little bit earlier? Or do you say, "Hey, that's good, but why don't we try my way"? How do you navigate that kind of situation?
STEPHANIE: That is so hard. It's really challenging. Because if you kind of know that there's maybe a more effective way, or a cleaner way, or whatever, and you're seeing your pair or your mentee kind of go down a different path, you know, it's so easy to just kind of jump in and be like, "Oh, actually, like, let me save you some time, and effort, and pain and just kind of tell you that there's something else we could try."
But I think I've been trying to sit on my hands a little bit and let them go down that path or at least let them finish explaining kind of what their thought process is and giving them the opportunity to do that act of thinking to see it through without interrupting them because I think it's really important to, you know, just honor the process that they're going through.
I will say, though, that I also try to keep an eye on the time. And I am also, like, holding in my head a bit of a higher level, like, the project status, any deadlines, what's on our plate for the sprint. And so, if I'm seeing that maybe the path they want to go down might end up taking a while or we don't quite have enough time for that, to then come back and revisit and adjust and reiterate on, like, their first solution. Then that is usually an opportunity where I might offer them another way or say, like, "Hey, like, this is what I'm thinking," because of those things I mentioned before with deadlines or something I'm considering.
But I generally try not to impose any of that as, like, this is what we will do so much as saying, "This is what I think we should do." Because I really want to hone in on the idea that, like, everyone just has opinions [laughs] about how they want to do things. And I'm not claiming mine is the perfect way or even the best way, but just what I'm thinking in this moment.
JOËL: Yeah, time permitting, I've really appreciated scenarios where you give people a chance to do the non-optimal solution and run into edge cases that kind of show why that solution is not optimal and then backtrack out of it and then go to the optimal path. I think that's a lesson that really sticks much longer. So, I've even done that in scenarios where I'm building some training material. And I'll kind of purposely have the group go down the sort of obvious path, but that turns out to be non-optimal.
And then, you hit a wall where things don't work, and then you have to backtrack. And it's like, okay, so that's why we don't do it that way that may have seemed obvious. Because then everybody remembers as opposed to...I mean, you could just go down this other path, and somebody asks you a question, "Why don't we go down this thing?" And then, they just...maybe they have to remember it, or it becomes a thing where it's like, oh, but, like, we were told that's a bad way to do it. And now you have this sort of, like, weird, like, absolutism about, like, oh, but, you know, Joël said that was bad. So, we just got to remember that's the bad thing.
And it's not about the morality of that choice that I think can come through when you're kind of declaring a path good and a path bad, but instead, having experienced, hey, we went down this path. There were some drawbacks to it, which is why we prefer this other path. And I think that tends to stick a lot more with students.
STEPHANIE: Yeah. I really like what you said about not wanting to inject that, like, morality argument or even kind of deny them the opportunity to decide for themselves how they thought that path went or, like, how they thought the solution was. If you just tell them like, "No, don't go there," you're kind of closing the door on it. And, yeah, they might spend a lot of time afterwards thinking that, like, that will always be a bad option without really forming an opinion for themselves, which I think is really important. Because, you know, once you do get more experience, that is pretty much, like, the work [laughs] that we're doing all of the time.
But another thing that I think is also such a skill is assessing your own work, like, after you go down the path or, like, once you have something working, being able to come back to it and look at it and be like, oh, like, can this be better, right? And I think that can only happen once you have something to look at, once you have, like, a first draft, if you will, or do the less optimal implementation or naive implementation.
JOËL: So, when you're trying to prompt someone to kind of build that skill of self-review or self-reflection on some of the work that they've done, how do you as a pair or a mentor help stimulate that?
STEPHANIE: Yeah. I think with early career folks, one thing that is an easy way to start the conversation is asking, "Are there any places that could be more readable?" Because that's, I think, an aspect that often gets forgotten because they're trying to hold so much in their heads that they are really just getting the code to work. And I think readability is something that we all kind of understand. It doesn't include any jargon about design patterns that they might not have learned yet. You know, even asking about extracting or refactoring might be not where they are at yet.
And so, starting with readability, for example, often gets you some of those techniques that we've learned that have, you know, specialized vocabulary. But I have found that it helps meet them where they're at. And then, in time, when they do learn about those things, they can kind of apply what they've already been doing when kind of prompted with that question as, like, oh, it turns out that I was already kind of considering this in just a different form.
JOËL: And I think one thing that you gain with experience is that you have kind of a live compiler or interpreter of the language in your head. And so, sometimes for more complex code, I, as an experienced developer, can look at it and immediately be like, oh yeah, here's some edge cases where this code isn't going to work that someone newer to the language would not have thought of.
And so, sometimes the way I like to approach that is either ask about, "Oh, what happens in this scenario?" Or sometimes it's something along the lines of, "Hey, now that we've kind of done the main workflow, there's a couple of edge cases that I want to make sure also work. Let's write out a couple of test cases." So, I'll write a couple of unit tests for edge cases that I know will break the code.
But even when we write the unit tests, my pair might assume that these tests will pass. And so, we'll write them; we'll run them and be like, "Oh no, look at that. They're red. I wonder why." And, you know, you don't want to do it in a patronizing way. But there's a way to do that that is, I think, really helpful. And then you can talk about, okay, well, why are these things failing? And what do we need to change about the code to make sure that we correctly handle those edge cases?
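A minimal sketch of that technique, with an invented PriceCalculator and made-up edge cases (none of this is from a real client project): the happy-path spec passes, and running the edge-case specs shows the nil quantity blowing up, which is exactly the conversation starter Joël describes.

    # Invented example: a naive calculator plus the edge-case specs a pair
    # might add once the happy path works.
    class PriceCalculator
      def initialize(items)
        @items = items
      end

      def total
        @items.sum { |item| item[:price] * item[:quantity] }
      end
    end

    RSpec.describe PriceCalculator do
      it "computes the happy-path total" do
        expect(PriceCalculator.new([{ price: 5, quantity: 2 }]).total).to eq(10)
      end

      # Edge cases written before changing any code -- run them and see which fail.
      it "handles an empty cart" do
        expect(PriceCalculator.new([]).total).to eq(0)
      end

      it "handles a nil quantity" do
        expect(PriceCalculator.new([{ price: 5, quantity: nil }]).total).to eq(0)
      end
    end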
STEPHANIE: Yeah, that's really great. And now, they also have learned a technique for figuring out how to move forward when they think there might be some edge cases. They're like, oh, I could write a test, and they end up [laughs] maybe learning how to do TDD along the way. But yeah, offering that strategy, I think, as a supplement to having supported them in their workflow, I think, is a really cool way to both help them learn a different strategy or tactic while also not asking them to, like, completely change the way that they do their development.
JOËL: So, we've talked about ways that we can coach and mentor in a more of a one-on-one setting. But it can also happen in more of a group setting. And an initiative that I've been involved in recently is, once a quarter, the principals on thoughtbot's Boost team are running a training session on a topic that we choose.
And we chose this month to make it really interactive. We created an exercise. We talked a little bit about it, had people break out into breakout rooms for a pretty short time—it was like 20 minutes—and come up with a solution. And then brought it back to the big group to talk through some of the solutions. All of that within 45 minutes, so it's a very kind of dense-packed thing. And I think it went really well.
STEPHANIE: Yeah. So, hearing that makes me think that the group wasn't actually going to get to a solution in necessarily that short amount of time. But I'm wondering if that was maybe intentional. Like it was never really about coming to the optimal solution but just the act of thinking about it or practicing how you would do that problem-solving without as much of a focus on the outcome.
JOËL: So, yes and no. I think, as you said, the discussion, the journey is more important than the outcome. But also, because we wanted people to have a realistic chance at coming up with some kind of solution, we specifically said, "We don't want code. Don't write a code solution to this." Instead, we suggested people come up with some kind of diagram.
So, the problem was, we have some sort of business process where you start by...you have an endpoint that needs to receive some kind of shopping cart JSON and then goes through a few different steps. You have to validate it. You have to attempt to charge their card, and then eventually, it has to be sent off to a warehouse to be fulfilled.
And so, we're asking them to diagram this while thinking a little bit about data modeling and a little bit about potential edge cases and errors. People came up with some really interesting diagrams for this because there's multiple different lenses from which you could approach that problem.
STEPHANIE: That's cool. I really like that you left it up to the groups to figure out, you know, what kind of tools they wanted to use and the how. You mentioned different lenses. So, I'm taking it that you didn't necessarily share what the steps of starting to consider the data modeling would be. Did you prompt the group in any way? How did you set them up before they broke out?
JOËL: So, we had a document that had a problem definition; part of this involved talking to a few external services, so things like attempting to charge their card. I think there was a user service they needed to call to pull some user information. And then, there's that fulfillment endpoint where we submit the completed order to the warehouse. And so, we had sample JSONs for all of these. Again, the goal is not for them to write any code that deals with it but more to think about: okay, we need information from this payload to plug into this one. And then, if they want to add any sort of intermediate steps, they can do that.
And I think sort of two common lenses that you could look at this is from more of an action standpoint, so to say, okay, well, first, we receive this payload, and then we make a call to this endpoint, and we try to do a thing and then success or failure, and then kind of go down this path and success or failure, and kind of keep going down that path until you finally reach that fulfillment endpoint.
So, it's almost like a control flow diagram. But you could also take more of a data-centered approach and talk about how the data evolves as it goes through this process. And so, you start with, like, a raw JSON payload. And maybe that gets parsed into a shopping cart object, which then gets turned into a temporary order, which then gets turned into a validated order, which then is combined with a credit card charge to create a fulfillment order, which can then be sent off to the warehouse. And that perspective will completely change the way you think about what the code actually needs to be when you create it.
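As a rough illustration of that second, data-centered lens (class and method names invented here; the actual exercise asked for diagrams, not code), each step takes one shape of data and returns the next rather than mutating a single object along the way:

    require "json"

    RawPayload       = Struct.new(:json)
    ShoppingCart     = Struct.new(:items, :user_id)
    ValidatedOrder   = Struct.new(:items, :user_id, :total)
    FulfillmentOrder = Struct.new(:order, :charge_id)

    def parse_cart(raw)
      data = JSON.parse(raw.json)
      ShoppingCart.new(data.fetch("items"), data.fetch("user_id"))
    end

    def validate_order(cart)
      total = cart.items.sum { |item| item.fetch("price") * item.fetch("quantity") }
      ValidatedOrder.new(cart.items, cart.user_id, total)
    end

    def charge_card(order)
      FulfillmentOrder.new(order, "ch_123") # stand-in for the payment service call
    end

    payload = RawPayload.new('{"user_id": 1, "items": [{"price": 5, "quantity": 2}]}')
    charge_card(validate_order(parse_cart(payload))) # raw JSON -> cart -> validated order -> fulfillment order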
STEPHANIE: Got it. That's cool. So, I'm curious, you know, what went into figuring out what the prompt would look like? I guess, like, where did you start? Did you already know that there would be these two different ways of thinking about or lenses to data modeling that you're, like, oh, like, maybe these groups will go down this route? Or was it, I guess, a bit of a surprise that when you came together, you found out kind of the different approaches?
JOËL: We already knew that there would be multiple approaches, and we chose not to specify which one to take. I think now this is getting into almost like curriculum design and more kind of the pedagogy side of things, which I'm, you know, excited and passionate about. I don't know, is that something that you've done at all for some of your projects or areas where you've been coaching people?
STEPHANIE: It's not been. But I actually do think it's a bit of a goal of mine to lead a workshop at some point at a conference because I really like the hands-on stuff that I get to do day-to-day, you know, working one-on-one with people. And, you know, I also am on the conference circuit. [laughs] And I was thinking that maybe workshops could be a really cool way to bring together those two things of like, well, I am enjoying that experience of working one-on-one, but it is, oftentimes, you know, just on our regular day-to-day work. And so, I would be really curious about how to develop that kind of curriculum for teaching purposes.
Do you find yourself starting with problems you see on client work and kind of stripping that down into something maybe a little more general, or do these problems kind of just come up spontaneously? [chuckles]
JOËL: So, workshop design is, I think, its own really fascinating topic, and honestly, we could probably do a whole episode on it. But the short of it is I typically work backwards from an end goal. So, just like when I'm writing a blog post, I have one big thing I want people to learn from a workshop, and then everything works backwards from there. Anything that is part of the workshop has to be building towards that big goal, that one thing I want people to learn. Otherwise, I strip it out.
So, it's an exercise in ruthlessly cutting to make sure that I'm not overwhelming people and, you know, that we can fit in the time that we have because there's always not enough time in a workshop. And people can very easily get sidetracked or overwhelmed. So, as much as possible, have everything focusing in towards one goal.
Circling back to the mentoring side of things, I'm curious what you see is maybe some of the biggest challenges as a mentor or a coach.
STEPHANIE: Well, I think, for me, it was, in some ways, like, seeing myself in that role as mentor. Like, oftentimes, that was decided for me by someone else as, like, "Oh, hey. We have a new hire, and, like, would you be their onboarding buddy?" Or, you know, a manager kind of identifying, like, oh, like, Stephanie has been in this role for, you know, a few years now. She's surely ready to mentor [laughs] new folks or people joining the team.
And that was really hard for me because I was like, well, I still have so much to learn [laughs], you know, like, how could I possibly be in that position now? You know, I am still learning from all these other people who are mentors to me.
So, one thing that took me a long time was learning that I did have things that I knew that other people didn't. And I started to think of it more as this, like, ring of overlapping circles where, you know, we all probably do share some common knowledge. But we all are also experts in different things, and everyone always has something to teach. Even if you're just, like, a few months or, like, a year ahead of someone else, that is actually a really powerful spot to cultivate peer mentorship, and where I think learning can really thrive.
There's a really great talk about this by Adam Cuppy called Mentorship in Three Acts, where he talks about that peer mentorship, where someone just knows, like, a little more than someone else. That can be really powerful and can be a good entry point for people who are interested in getting into mentorship but are kind of worried that, like, oh, they are, you know, not a senior yet.
You know, when you're at a similar experience level as who you're working with, there is a little bit less of what we were describing earlier of, like, that dynamic of knowing what to do but kind of wanting to hold back and let them discover for themselves. In that peer mentorship dynamic, you know, both people are, like, really deep in it, kind of trying things out, experimenting, learning, and that ends up being really fruitful time for both of them.
JOËL: Based on your experience, would you say that maybe that's the best place to start for someone who's looking to get into mentorship, so kind of pursue more of a peer mentorship scenario?
STEPHANIE: Yeah. I would definitely say that it has helped me a lot. I've had a lot of peer mentorship relationships in the past, where maybe there just wasn't someone on the team who could mentor me at the time. Or maybe I was wanting to collaborate a little bit more and feeling like I did have some ideas and opinions that I wanted to talk about, or share, or get some feedback on. Reaching across my level was really helpful in starting to create that space.
Yeah, I was really surprised by all the things that I was learning and all the things that the other person was learning from me that I think was a good wealth of experience for me to then bring to the next step when I found myself kind of in that position of supporting others who were more junior.
JOËL: I'd like to also shout out Exercism.io as a great place to get started with mentoring. For those who are not aware, Exercism is a platform where they have a bunch of exercises that you can go through to learn a language. And you can go through them on your own, but you can also go through them with a mentor. Somebody will basically give you a little mini code review on your exercise or maybe help you out if you're stuck. And this all happens asynchronously.
And it's volunteer-run. So, they just have people from the community who volunteer to be mentors on there to help coach people through the exercises. We'll put a link in the show notes to the page they have, kind of explaining how the mentorship works and how to sign up. But I did that for a while. And it was a really rewarding experience for me. I thought that I'd be mostly helping and teaching, but honestly, I learned so much as part of the process.
So, I would strongly recommend that to anybody who wants to maybe dip their toe a little bit in the mentoring coaching world but maybe feels like they're not quite ready for it. I think it's a great way to start.
STEPHANIE: Ooh, that sounds really cool. Yeah, I know that, especially for folks who maybe are working a little bit more independently, or are a bit isolated, or don't have a lot of people on a team that they're able to access; that sounds like a really great solution for folks who are looking for that kind of support outside of their immediate circle.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie has another debugging mystery to share. Earlier this year, Joël mentioned that he was experimenting with a bookmark manager to keep track of helpful and interesting articles. He's happy to report that it's working very well for him!
Together, they discuss tactics to ensure the easiest route also upholds app health and aids fellow developers. They explore streamlining test fixes over mere re-runs and how to motivate desired actions across teams and individuals.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I have another debugging mystery to share with the group. I was working on a bug fix and was trying to figure out what went wrong with some plain text that we're rendering from a controller with ERB. And I was looking at the ERB file, and I was like, great, like, I see the method in question that I need to go, you know, figure out why it's not returning what we think it's supposed to return. I went down to go check out that method. I read through it, ran the tests.
Things were looking all fine and dandy, but I did know that the bug was specific to a particular, I guess, type of the class that the method was being called on, where this type was configured via a column in the database. You know, if it was set to true or not, then that signaled that this was a special thing. You know, I made sure that the test case for that specific type of object was returning what we thought it was supposed to. But strangely, the output in the plain text was different from what our method was returning.
And I was really confused for a while because I thought, surely, it must be the method that is the problem here [laughs]. But it turns out, in our controller, we were actually doing a side effect on that particular type of object if it were the case. So, after it was set to an instance variable, we called another method that essentially overwrote all of its associations and really changed the way that you would interact with that method, right? And that was the source of the bug: we were expecting the associations to return what we, you know, thought it would, but this side effect was very subtly changing that behavior.
JOËL: Would it be fair to call this a classic mutation bug?
STEPHANIE: Yeah. I didn't know there was a way to describe that, but that sounds exactly right. It was a classic mutation bug. And I think the assumption that I was making was that the controller code was, hopefully, just pretty straightforward, right? I was thinking, oh, it's just rendering plain text, so not [laughs] much stuff could really be happening in there, and that it must be the method in question that was causing the issues.
But, you know, once I had to revisit that assumption and took a look at the controller code, I was like, oh, that is clearly incorrect. And from there, I was able to spot some, you know, suspicious-looking lines that led me to that line that did the mutation and, ultimately, the answer to our mystery.
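A small, self-contained Ruby sketch of that kind of mutation bug (names and data invented, not the client's code): the method under suspicion is fine on a fresh object, but a side effect applied elsewhere quietly changes what it returns.

    class Order
      attr_accessor :line_items

      def initialize(line_items)
        @line_items = line_items
      end

      # The unit-tested method that looked correct in isolation.
      def plain_text_summary
        line_items.join("\n")
      end
    end

    # The side effect: for the "special" type of record, another method
    # rewrites the object's associations in place after it has been assigned.
    def apply_special_handling!(order)
      order.line_items = order.line_items.map(&:upcase)
    end

    order = Order.new(["2x widget", "1x gadget"])
    apply_special_handling!(order)  # happens quietly in the controller
    puts order.plain_text_summary   # no longer what a freshly loaded Order would return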
JOËL: Was the mutation happening directly in the controller? Or was this a situation where you're passing this object to a method somewhere, and that method, you know, in another object or some other file is doing the mutation on the record that you passed in?
STEPHANIE: Yeah, that's a great question. I think that could have been a very likely situation. But, in this particular case, it was a little more obvious just in the controller code, which was nice, right? Because then I didn't have to go digging into all of these other functions that may or may not be the ones that are doing the mutation. But the thing that was really interesting is that it seems like that method that does this mutating is pretty key to the type of object that we're working with, as in most places it's used, that mutation happens. And yet, it's kind of separated from the construction of that object.
And I was, I think, a little bit surprised that it wasn't super obvious that throughout the application, this is the way that we treat this special object. And I had wished that maybe that was a bit more cohesive or that it was kind of clear that this is how we use this in our domain. And it was, like, lacking that bit of clarity around how things are used in practice as opposed to trying to keep those in isolation.
JOËL: Yeah. Because now you have sort of two different diverging use cases for using this object that are incompatible, one that's trying to use the object as is and the other that's depending on this mutation. Now you can't have both.
STEPHANIE: Right. As far as I can tell, in most cases, you know, we're using it with a mutation, and maybe there is a good reason for those ideas to be separate. But it certainly did not make my life easier trying to solve this particular bug.
JOËL: I'm hearing you mention the idea of ideas being separate; definitely kind of triggers some pathways in my modeling brain, where I'm already thinking, oh, maybe this should be a decorator, or maybe this is just a straight-up transformation. We just have a completely different kind of object rather than mutating the underlying record.
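A standalone sketch of the decorator direction Joël mentions (again with invented names): wrap the record instead of mutating it, so the special presentation is explicit and the underlying object stays untouched.

    require "delegate"

    Order = Struct.new(:line_items)

    # A decorator: overrides how line items are presented without touching
    # the wrapped record.
    class SpecialOrder < SimpleDelegator
      def line_items
        super.map(&:upcase)
      end
    end

    order   = Order.new(["2x widget", "1x gadget"])
    special = SpecialOrder.new(order)

    special.line_items # => ["2X WIDGET", "1X GADGET"]
    order.line_items   # => ["2x widget", "1x gadget"], original data intact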
STEPHANIE: Yeah, definitely. I think there could have been some alternative paths taken. At the time, that was kind of decided as the way forward for how to treat this particular domain object. So, that's my fun, little mystery where I got to, you know, play the detective role for a bit. Joël, what's new in your world?
JOËL: So, earlier this year, on an episode of The Bike Shed, I mentioned that I was experimenting with a bookmark manager to kind of keep track of articles that I find are helpful that I might want to reference later on. I wasn't sure if it was going to be worthwhile and mentioned that I'd report back. I'm happy to report that it is working very well for me.
So, the tool is Raindrop.io. They have an app. They have a website. And I have been sort of slowly filling it with some of my most commonly referenced articles. I had pulled some in initially, and then kind of over time, when I find myself referencing an article using my old-fashioned approach, which was just remember the keywords in the title and Google them, now I'll do that, link the article to someone else, and then add it to Raindrop so that the next time I'm starting to look for things, I have these resources available.
And I try to curate a little bit by doing things like tagging them and categorizing them so that when I need references for something, I don't just have to go through my personal memory and be like, oh yeah, what articles do I have on data modeling that I think might be a good fit here? Instead, I can just go to the data modeling section of Raindrop and be like, oh yeah, these five are, like, my favorites that I link to all the time.
STEPHANIE: Wow. We did a whole episode on how to search recently. And we totally forgot to mention things like bookmark managers or curating your own little catalog of go-to articles. In some ways, now you have to search within your bookmark manager [laughs].
JOËL: That's true. Sometimes, it's searching if I'm looking for a particular article, and sometimes, it's more browsing where I'm looking at a category, or a tag, or something like that, which maybe would have been another interesting distinction to explore on the How to Search episode.
I do want to give a shout-out to the most recent article that I looked up in my bookmark manager here, Railway-Oriented Programming. It's an article on how to deal with pieces of code that can error and how to sort of compose those sorts of methods. So, you now have a whole sort of chain of different functions that can error or not in different sorts of ways.
And it uses this really powerful metaphor of railway with different types of junctions and how you might try to, like, fit them all together so that everything connects nicely. And it's just a really beautiful metaphor. And I was doing some work on error handling, in particular, and I wanted to reference something and that was a great resource for that piece of work.
STEPHANIE: Very cool. I'm really intrigued. I love a good metaphor. I am curious: is this programming language and framework agnostic and not about Rails?
JOËL: Actually, so this is written on an F# blog. So, the code is all in F#. And it leans a little bit into some functional programming concepts, but the metaphor is more generic. So, it's a really fun way to think about when you're programming, and you're not just going through the happy path. But what are all the side branches that you might have to deal with, and how do those side branches come back into the flow of your program?
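A tiny Ruby rendering of the railway idea (names invented here; the article's own examples are in F#): each step returns a success or a failure, and composition switches onto the failure track as soon as any step errors.

    Success = Struct.new(:value)
    Failure = Struct.new(:error)

    # Run the block only on the success track; failures pass straight through.
    def bind(result)
      result.is_a?(Success) ? yield(result.value) : result
    end

    def parse(input)
      input.is_a?(Hash) ? Success.new(input) : Failure.new("not a hash")
    end

    def validate(data)
      data[:email] ? Success.new(data) : Failure.new("email is required")
    end

    def save(data)
      Success.new("saved #{data[:email]}")
    end

    def run(input)
      [:parse, :validate, :save].reduce(Success.new(input)) do |result, step|
        bind(result) { |value| send(step, value) }
      end
    end

    run({ email: "dev@example.com" }) # stays on the success track: "saved dev@example.com"
    run({ email: nil })               # switches to the failure track: "email is required"
    run("not even a hash")            # fails at the first step: "not a hash"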
STEPHANIE: Very cool. I can also see some really excellent visuals here if you were to use this metaphor as a way of understanding complexity.
JOËL: Absolutely. In fact, this article has some pretty amazing visuals, so strong recommend. We'll link it in the show notes.
STEPHANIE: So, a few episodes ago, we talked about code ownership at scale because my client project that I'm working on is for a company with hundreds of developers. So, it's quite a big codebase, quite a big team. One of the main issues that I, at least, struggle with on a day to day is flaky tests in CI. When I, you know, I'm wanting to merge a change, I often have to run the test suite a few times to get to green and be able to merge and deploy.
And this is an interesting topic to me because when you're really trying to just get your changes through and mark that ticket as done, it's very tempting to just hit that, you know, retry button and let it sit and just hope for the best, as opposed to maybe investigate a little further about why that test was flaking and see if there's something that you could do about it.
So, I wanted to talk to you about the idea of making the right thing easy or how, at both a team level and an individual level, we can set ourselves and our team up for success rather than shoulder this burden [laughs] and just assume that things are the way they are.
JOËL: That's a really powerful question. Because I think by default, oftentimes, the less helpful thing is the path of least resistance, so, in this case, hitting that rerun button on a test suite, which I've absolutely done. But there's a lot of other situations in our work where, just sort of by default, the path of least resistance is the thing that's maybe less helpful for the team.
STEPHANIE: Ooh, I noticed that you kind of reframed what I said. I was using the term, you know, the right thing, but you then reworded it into the helpful thing. And that actually gets me thinking about these words are kind of subjective, right? What is helpful to someone could be different to what's helpful to someone else. And I'm kind of curious about your definition of the helpful thing.
JOËL: Yeah, I mean, sometimes it's very easy to sort of bring absolutes and [inaudible 11:26] judgments to code, you know, when we talk about writing good code, and being good programmers, being good at our jobs, not doing the bad things. And I think that sort of absolutism sometimes can, like, be very restrictive, and kind of takes us down paths that are not optimal for ourselves, for our teams, for our products. So, I'd like to think a little bit more relativistically have a little bit more of elasticity in the way that we formulate some of these ideas.
STEPHANIE: Yeah. I really like that reframing, and I appreciate the nuance there. I think for me, when I think of doing the helpful thing, I'm hoping to ease the day-to-day workflow for other developers because that also includes myself, right? Like, I've certainly been there feeling frustrated or just kind of tired of retrying [laughs] the test suite over and over again.
I'm also thinking about helpful, as in what will be helpful for future developers regarding the product? And can we make it robust now so that we're not dealing with bug reports later for things that we maybe we're trying to throw under the rug or just kind of glance over? Do you have any other guiding principles around what is helpful and what's not?
JOËL: I think that the time horizon you mentioned is really interesting because you have to balance sometimes short-term value versus kind of long-term. And is it worth it to maybe not fix that flaky test today so that we can ship as soon as possible? Or is it worth investing a little bit of time today so that tomorrow or next week is better? If you're a solo developer on a tiny project, that might be of personal benefit only. But on a larger team, you know, that might benefit not just you but a larger amount of people.
STEPHANIE: I just imagined the trolley problem [laughs] a little bit about, you know, the future developers and whose lives, not lives but whose happiness, the developer happiness you'll save [laughs] when you're on that track and making the decision about benefit now versus benefit later.
JOËL: No connection to railway-oriented programming, by the way.
STEPHANIE: I am really interested in also talking about barriers to doing the helpful thing or taking that extra step, right? Because I think identifying those barriers is really important to then, hopefully, break them down so that we are creating that path of least resistance.
JOËL: That's interesting that you mentioned these as barriers because I think, in my mind, I was thinking about the same idea but from, like, a completely mirror perspective, the idea of incentives. Why do incentives push you in one way versus the other?
STEPHANIE: I like that a lot. I think maybe there are, like, two different levers, right? Or maybe they are two sides of the same coin, where you do have incentive, and then you also have things that disincentivize you.
JOËL: This reminds me a little bit of the idea of the tragedy of the commons. So, in the case of flaky tests, everybody as a whole on your project has a worse experience. And project velocity slows when there are flaky tests. But you, as an individual developer, are incentivized to ship features quickly and efficiently. And the fastest way you can get your individual feature to production is by hitting that rerun button. Even though, collectively as a whole, every time we do that, instead of fixing the flakiness, we're adding a tiny, little bit of extra slowness that will accrete over time.
STEPHANIE: Yeah. It's kind of difficult to imagine really the negative impact that it's having collectively, right? You're kind of like, oh, I'm feeling this pain, but you're not always, like, really hearing about it from others, and we might just be silently suffering together [laughs]. I do think that once it's been identified as like, oh, like, we're all actually, like, really impacted by this, okay, great, let's make it a priority.
And so, now, let's say I am slightly incentivized to go and investigate the flakiness of a test rather than hit the rerun button this time around. I am wanting to talk about those barriers I was referring to a little bit because I've been in this position where I'm like, okay, like, I have some extra time today. So, why don't I look into this?
But then I go down that path, and now I'm looking at a test written by someone many years ago, you know, I don't know this person, and I don't know this domain. I don't know who to talk to to figure out even where to start. I may or may not feel equipped with the right tools to be able to address it.
And then, I think the biggest challenge is not feeling like it matters, right? Once I'm hitting this barrier, I'm like, is it worth the effort? At that point, maybe a little bit demoralized because, well, this is just one, and we have so many other flaky tests, like, what's one more?
JOËL: That's really interesting that you mentioned that sort of morale factor because it's absolutely the case on every flaky test suite I've seen. I think that kind of points to almost, like, an exponential cost to ignoring the problem. If someone fixed it early, yeah, it's slightly annoying, but you get it done. You fix it, and then you move on. When it feels like there's now this insurmountable pile of these and that any work you do here doesn't bring you any closer to the goal because it's effectively infinite, yeah, now, there is no incentive at all to do that work.
STEPHANIE: So, to avoid that, we talked about incentives, right? And I'm kind of curious: what ways have you seen or experienced that did make you feel motivated to take that extra step or at least try to avoid that point of thinking that nothing matters? [laughs]
JOËL: So, I'm going to start with, I think, what's maybe a classic developer answer, and I'm curious to get your thoughts on it because I think I have very mixed feelings about this. The idea of programmer discipline—we just need to kind of take more pride in our craft and pursue excellence, choose to do the right thing, even when it's hard every day. Because I hear a lot of that in our communities. How do you feel about that sort of maybe a bit of a mindset change? And how effective has that been?
STEPHANIE: Whoa, yeah, that's a really great point because I think I also feel quite conflicted about it. Because sometimes I can find it in myself to be, like, you know, I have the energy today to want to uphold, like, a certain level of quality that would make me feel good about doing my best work. And then, there are other days where I am, you know, just tired, [laughs] or feeling a little bit lazy, feeling just not confident that it will be worth it. Because there's also, I think, some external forces, right?
I've certainly been in the position where we were only rewarded or celebrated for shipping fast. That was the praise we were getting at retros. But then that actually really disincentivized me from wanting to do the helpful thing when the time came, right? When I'm in my development process, and I'm at, like, that crossroads. Because I'm like, well, I've been trying to do the right thing, but, like, no one is celebrating me for it. What if it takes away time from doing the thing that is considered successful? And, like, does it have an impact on me and my job?
So, you can, you know, kind of go down that spiral pretty quickly. And, in that case, like, no amount of personal, like, individual [laughs] feelings of responsibility can really overcome those consequences if, like, you're working on a team where that is just simply not valued, and there are other people who have authority to impart consequences, I suppose.
JOËL: Yeah. It's, you know, you have that maybe some amount of personal motivation. You feel like you're swimming upstream. And so, maybe the question then is, how do we reorient maybe some of the incentive structures in a team to try to make it so that if you are, let's just say, chasing some of those extrinsic rewards and you're trying to get praise from your teammates, or move forward in your career, whatever it is that is rewarded and valued on your team, how can that be harnessed to push people in a direction to get some of these extra tasks done?
STEPHANIE: Yeah. I will say, though, it is a little bit of both. And I think that was maybe why we're both kind of conflicted about it. Because on the individual level and, in general, knowing what my values are and, like, wanting to do good work and uphold quality, there are times where I don't always behave according to those values. Like I said, I am feeling tired, or maybe it's, like, almost 5:00 o'clock, and I just want to, you know, push my thing [laughs] so I can go and go on with my day.
What has helped me is having an accountability buddy. Maybe it's in code review, or maybe it's when I'm pairing, or maybe just talking about a problem that we're facing, right? And I might get into that sort of lizard-brain mentality of just wanting to do the easy thing. But as soon as someone else points it out or is, like, you know, like, "That's not quite aligning with what I know your, like, hopes and goals are for this project or how you want to do your work," usually, that's enough to be like, "Yeah, you're right." [laughs] And I'll take another pass at it.
JOËL: And I think we've kind of come back full circle here in that you mentioned that sort of the lizard brain side of you wants to just do the easy thing with the implication that, in this case, the easy thing is the thing that's maybe not useful to the team long term. What if we could restructure things a little bit so that the easy thing was the most beneficial thing for the team? And now you're not having to use discipline to fight the lizard brain side of you, but you're actually working with it.
One thing I've seen teams do with flaky tests is to not necessarily fix them immediately. Like, maybe you do rerun them, but when they happen, create a ticket for them and put them into the planning board. And so, now, these are things that get prioritized. They're things that might be fairly quick to do. So, it might be a fun, like, fast ticket for someone to pick up at the end of the week. It now counts towards velocity if tracked, so people, hopefully, get rewarded for doing that work. Is that something that you've seen? And how effective do you think that is in maybe making fixing flaky tests easier thing for a team?
STEPHANIE: Yeah, that is actually something that we are doing on my team right now, and I think it's great. I like the idea of tracking it, right? Because then you could also see over time, like, whether we have been getting better at reducing flaky tests, you know, then it's also really clear when someone is preoccupied with that work, and they don't get assigned something else.
One way that I've seen it not work as well as we hoped is when other work consistently gets prioritized over those flaky test tickets, and, you know, sometimes it happens. I think that's also information, though, about the team and how the team is spending its time compared to how the team thinks it should be ideally spending its time where, you know, we can say all we want, like, yes, like, we really want to make sure our test suite is robust. But when other things are consistently getting prioritized over it, then you can point to it and say, do we actually believe this if we're consistently not behaving according to that belief?
I found that challenging to have that conversation. But I do think that the concreteness of adding it to our workload for a given period of time is at least providing information rather than it being things that, like, developers are doing one-off or kind of just on their own time.
JOËL: Another solution that I've seen people do, and this is a classic developer solution, is tooling. If you have better tooling around a particular problem, sometimes you can shift that cost, that time of work it takes so that it's easier to do the best thing rather than to not...or at least make it easy enough that it's less of a big decision, like, oh, do I really want to invest that much work into something?
And a classic example of this, I think, is when you're trying to get into more of a test-driven development workflow. Having a test suite that's fast and, really importantly, having a near-instant way to run a test from your code editor makes a big difference in terms of adopting that workflow.
Because if the cost of running a test is too high, then yeah, the easy path is to just say, you know what? I'm not going to run a test. I'm going to just run it once at the end after I've written 100 lines of code or an entire feature. And now I am not getting the benefits of TDD. And that might kind of get into a negative cycle where because I'm not seeing the benefits, I do it even less. And then eventually, I'm just, like, you know what? Forget this whole thing.
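One low-tech way to get that near-instant feedback (a sketch only; actual editor setup varies): a tiny wrapper script that an editor's run-test keybinding can call with the current file and, optionally, the line number.

    #!/usr/bin/env ruby
    # Hypothetical helper: run the spec file (or the single example at a line)
    # that the editor passes in, e.g. `run_spec spec/models/order_spec.rb 42`.
    file, line = ARGV
    target = line ? "#{file}:#{line}" : file
    exec("bundle", "exec", "rspec", target)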
STEPHANIE: I think that also applies to testing in general, where if it, you know, is feeling really challenging. I have definitely seen people start to get into that mindset of, is this worth my time to do at all? And it's a very slippery slope, I think.
That almost makes me think about, like, okay, like, what are some other ways to lift that task up and to elevate it into something that's worthy of saying, like, "Hey, like, that was really hard, and you did a great job. And that was really awesome that you persevered through that challenging thing." Sharing the pain points is really important, not only to, like I mentioned earlier, to, like, communicate that, oh, maybe, like, more people than you think are going through the same thing, but also to be able to identify when someone went out of their way to do the helpful thing and seeing that someone was willing to do it because, for them, being helpful is important.
When I see someone on my team take on kind of a difficult task to make things easier for others and then share about it, I feel really inspired because I think, wow, like, that could be me as well. You know, there's that saying that many hands make light work. And I also think that's true of tackling these kinds of barriers, where if we all feel this collective responsibility or, like, wanting to help out the group, it ends up literally being easier.
JOËL: I think something I'm hearing here as well is the value of giving praise. If somebody goes out of their way to make life easier for the rest of your team, give them a shout-out, whether that's in your team's Slack channel, maybe it's at an all-hands meeting, and you shout out some work that they've done or, you know, you put their name on a slide in your slide deck at some point, or whatever it is the mechanism within your team. Having a way to shout out people who've done some of this work that can be sometimes a little bit thankless is a great way to motivate seeing more of that.
STEPHANIE: Yeah. I was just thinking that it can be really powerful because, like you said, a lot of it is thankless, but also, we may not even realize the impact it's having on others until you give that shout-out and express, like, "Wow, like, that change, you know, I've been bothered by this issue for so long and, you know, that really made an impact on me," just keeping that cycle of gratitude going.
JOËL: So, I think we've kind of identified three maybe main areas of ways where you can help to incentivize these behaviors. You can do it through kind of process. We talked about the example of pulling the flaky tests as actual cards to be worked through on a board.
It can be technical by introducing some tooling that makes it much easier to do the work that you're trying to do.
And it can be personal by praising people, preferably in public, for taking that extra step. And I think all three of those can be part of a strategy to make it easier or more attractive for people to do work that benefits the team as a whole, even if they don't see an immediate return to that on a per-day level.
STEPHANIE: Yeah. I like that we kind of talked about these three different categories. And people in all different types of roles can, hopefully, take something from what we've shared, right? If you are a manager or leader of a team, maybe you can investigate your processes. If you're an individual contributor and you notice your colleague doing something that you, you know, kept meaning to but just didn't have time for, recognize that work. It really does take a holistic approach, but I think an impact can be made at every level.
JOËL: Agreed. I think for managers and more senior team members, that's really almost, like, part of their job description is to think about these kinds of things. How can we incentivize this work? How can we shift the team in a particular direction? There's a particular onus on them to do this right, to think about this, to model some of this behavior.
STEPHANIE: Yeah, absolutely.
JOËL: For someone who's really senior on a team also, they're often the ones who are tasked or who maybe take the initiative to build some of this more complex tooling so that these tasks are easier for more junior people. Maybe that's tinkering with some things and building an editor plugin that makes it easier to do some work. Maybe it's building a Rails generator so that the proper files get generated that maybe people wouldn't think to have when they're building certain work. Maybe it's building an RSpec matcher to make it easier for people to test some of the nuances of what we're hoping to do, catching some of these edge cases.
Whatever it is, sometimes there are things that the more junior members of our teams aren't aware of, and having a senior person take time out of their day to build these things so that now the entire team can be more productive can be a really helpful thing to do.
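As one concrete flavor of that kind of tooling, here's a sketch of a small custom RSpec matcher (the matcher name and behavior are invented for illustration): packaging up a fiddly assertion like this is the sort of thing a senior teammate can build once and hand to the whole team.

    # spec/support/matchers/be_sorted_by.rb
    RSpec::Matchers.define :be_sorted_by do |attribute|
      match do |collection|
        values = collection.map { |item| item.public_send(attribute) }
        values == values.sort
      end

      failure_message do |collection|
        actual = collection.map { |item| item.public_send(attribute) }
        "expected collection to be sorted by #{attribute}, but got #{actual.inspect}"
      end
    end

    # Usage in a spec:
    #   expect(search_results).to be_sorted_by(:created_at)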
STEPHANIE: Yeah, that's a great point. And I think that also comes from having a pulse on what people are struggling with, right? So, you know, oh, it would be good to invest my energy into building a script to make this manual process easier because I keep hearing about people having issues with it or it being a challenge.
So, I would even recommend posing the question of, like, how do people feel about being able to fix that flaky test, right? Like, is it intimidating? What are those barriers? Because your team knows best about what that experience is like. And if that is not something on your radar, maybe there are opportunities to incorporate it into where you're evaluating team morale and happiness.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël shares he has been getting more into long-form reading. Stephanie talks about the challenges she faced in a new project that required integrating with another company's system.
Together, they delve into the importance of search techniques for developers, covering various approaches to finding information online.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: Something I've been trying to do recently is get more into long-form reading. I read quite a bit of technical content, but most of it is short articles, blog posts, that kind of thing. And I've not read, like, an actual software-related book in a few years, or at least not completed a software-related book. I've started a few chapters in a few. So, something I've been trying to do recently is set aside some time. It's on my calendar. Every week, I've got an hour to sit down, read a long-form book, and take notes.
STEPHANIE: That's really cool. I actually really enjoy reading technical stuff in a long-form format. In fact, I was similarly kind of trying to do it, you know, once a week, spend a little bit of time in the mornings. And what was really nice about that is, especially if I had, like, a physical copy of the book, I could close my computer and just be completely focused on the content itself.
I also love blog posts and articles. We are always talking on the show about, you know, stuff we've read on the internet. But I think there's something very comprehensive, and you can dig really deep and get a very deeper understanding of a topic through a book that kind of has that continuity.
JOËL: Right. You can build up a larger idea and have more depth. A larger idea can also cover more breadth. A good blog post, typically, is very focused on a single thing, the kind of thing that would really probably only be a single chapter in a book.
STEPHANIE: Has your note-taking system differed when you're applying it to something longer than just an article?
JOËL: So, what I try to do when I'm reading is I have just one giant note for the whole book. And I'm not trying to capture elements or, like, summarize a chapter necessarily. Instead, I'm trying to capture connections that I make. So, if there's a concept or an argument that reminds me of something perhaps similar in a different domain or a similar argument that I saw made by someone else in a different place, I'll capture notes on that. Or maybe it reminds me of a diagram that I drew the other day or of some work I did on a client six months ago.
And so, it's capturing all those connections is what I'm trying to do in my notes. And then, later on, I can kind of go back and synthesize those and say, okay, is there anything interesting here that I might want to pull out as an actual kind of idea note in my larger note-taking system?
STEPHANIE: Cool, yeah. I also do a similar thing where I have one big note for the whole book. And when I was doing this, I was even trying to summarize each chapter if I could or at least like jot down some takeaways or some insights or lines that I like felt were really compelling to me. And, like, something I would want to, in some ways, like, have created some, like, marker for me to remember, oh, I really liked something in this chapter. And then, from there, if I didn't capture the whole idea in my note, I knew where I could go to revisit the content.
JOËL: And did you find that was helpful for you when you came back to the book?
STEPHANIE: Yeah, it did. I usually can recall how, like, I felt reading something. You know, if something was really inspiring to me or really relatable, I can recall that, like, I had that experience or emotion. And it's just, like, trying to find where that was, and this is a system that has worked well for me.
Though, I will say that summarizing each chapter did kind of remind me of, like, how we learned how to take notes in school. [laughs] And I think, you know, middle school, or whatever, I recall a particular note-taking format, where you, you know, split the page up into, like, an outline with all the chapters, and you tried to summarize it. And so, it did feel a little bit like homework [laughs]. But I can also see the value in why they taught me how to do that.
JOËL: I was recently having a conversation with someone else about the idea of almost, like, assigning yourself the college-style essay question after finishing a book to try to synthesize what you learned.
STEPHANIE: Whoa, that's really cool. I can see how that would really, like, push you to synthesize and process what you might have just consumed. And, also, I'm so glad I'm not in school anymore [laughs] so that I don't have to do that on a regular basis. [laughs] I'm curious, Joël, what book are you reading right now?
JOËL: I've been reading Domain Modeling Made Functional, which is a really interesting intersection between functional programming, Domain-Driven Design (DDD), and a lot of interesting kind of type theory. And so, that sort of intersection of those three Venn diagrams leads to this really fascinating book that I've been going through. And I think it connects with a lot of other things that I've been thinking about.
So, I'll be reading and be like, oh, this reminds me of this concept that we have in test-driven development. Or this reminds me of this idea that we do when we do a product design sprint. And this reminds me of this principle from object-oriented design. And now I'm starting to make all these really interesting connections.
STEPHANIE: Awesome. Well, I hope to hear more about what you've learned or kind of what you're thinking about going through this book in future episodes.
JOËL: This is not the last time we hear about this book, I'm pretty sure. So, Stephanie, what's new in your world?
STEPHANIE: So, I have a little bit of a work update to share. So, lately, I've been brought in to work on a feature that is integrating with another company's system. And the way that I was brought into this work was honestly just being assigned a task. And I was picking up this work, and I was kind of going through the requirements that had been specked out for me, and I was trying to get started. And then, I realized that I actually had a lot of questions. It just wasn't quite fully fleshed out for the level of detail that I needed for implementing.
And for the past couple of weeks, we've been chatting in Slack back and forth as I tried to get some of my questions answered. They are trying to help me, but also the things that I'm saying end up confusing them as well. And then, I end up having to try and figure out what they're looking for in order to properly respond to them.
And I had not met these people before. These are folks from that other company. And, you know, I'd only just seen their little Slack profile pictures. So, I didn't know who they were. I didn't know what role they had and kind of, like, what perspective they were coming to these conversations from. And after a while, I was feeling a little stressed out because we just kept having this back and forth, and not a lot of answers were coming to fruition.
And I really ended up needing the nudge of the manager on my client team to set up a meeting for us to all just talk synchronously. And I think I had...not that I had been avoiding it necessarily, but I guess I was under the impression that we were at the point where we could just, you know, shoot off a question in Slack and that there would be a clear path forward. But the more we kept pulling on that thread, the more I realized that, oh, like, we have a lot of ambiguity here.
And it really helped to meet them finally, not in person but, like, over a video call. [laughs] So, this happened yesterday. And, you know, even just, like, going around doing introductions, like, sharing what their role was at the company helped me just understand, like, who I was talking to. You know, I realized, oh, like, the level of technical details that I had been providing was maybe too much for this group.
And I was able to have a better understanding of what their needs were, like hearing kind of the problem that they had on their end. And I realized that, oh, like, they actually aren't going to provide me the details for implementation that I was looking for. That's up to me. But at least now I know what their higher-level needs are so that I can make the most informed decisions that I can.
JOËL: Fascinating. So, you thought that this was going to be, like, the technical team you're going to work with. And it turns out that this was not who they were.
STEPHANIE: In some ways. I think I thought by providing more technical details that would be helpful, but it ended up being more confusing for them. And I think I was similarly kind of frustrated because the ways that I was asking questions or communicating also wasn't getting me the answers that I needed as well.
But I felt really great after the meeting because I'm like, wow, you know, it doesn't have to be as stressful. You know, when you start getting into that back and forth on Slack, at least I find it a bit stressful. And it turns out that the antidote to that was just getting together and getting to know each other and hashing out the ambiguity, which does seem to work better in a more synchronous format.
JOËL: Do you have kind of a preference for synchronous versus asynchronous when it comes to communication?
STEPHANIE: That's a good question. I think it's kind of a pendulum for me. I'm in my asynchronous communication is a bit better for me right now phase, but only because I am just so burnt out on meetings a lot of the time that I'm like, oh, like, I really don't want to add another meeting to my calendar, especially because...I amend my statement; I'm burned out on meetings that don't go well. [laughs]
And this meeting, in particular, was different because, you know, I realized, like, oh, like, we are not on the same page, and so how can we get there? And kind of making sure that we were focused on that as an agenda. And I found that ultimately worked out better than the async situation that I was describing, which, I'm thinking now, you know, when things aren't clear, text-based communication certainly does not help with that.
JOËL: So, meetings, sometimes they're actually good.
STEPHANIE: Yeah, that's my enlightened discovery this week.
JOËL: So, this episode is kind of a special one. We've just hit 400 episodes of The Bike Shed. So, this is episode number 400. It's also my 50th Episode as a co-host.
STEPHANIE: Right. That's a huge deal. 400 is a really big number. I don't know if I've ever done 400 of anything before [laughs].
JOËL: The Bike Shed has been going on for almost ten years now. The first episode up on the website is from October 31st, 2014, so just about nine years from that first episode.
STEPHANIE: Wow. And it's still going strong. That's really awesome. I think it's really special to be a part of something that has been going on for this long. And, I don't know, maybe there are still listeners today from back in 2014. I would be really excited to hear if anyone out there has been listening to The Bike Shed throughout its whole lifespan. That's really cool.
JOËL: Looking back over the last 50-ish episodes you and I have done, do you have a favorite episode that we've recorded?
STEPHANIE: This may be a bit of recency bias. But the episode that we did about Software Heuristics I really enjoyed. Because I think we got to bring to the table some of the things we believe and the way we like to do things and kind of compare and contrast that with each other. And I always find people's processes very fascinating. Like, I want to know how you think and where your brain is at when you approach a problem. So, I really enjoyed that topic. What about you? Do you have any highlight episodes?
JOËL: I think there's probably two for me. One is the episode that you and I did on Specialized Vocabulary. I think this really touched on a lot of really interesting aspects of writing software that's going to scale, software that works for a team, and also kind of personal growth and exploration.
The second one that I think was really fun was the episode I did with Sara Jackson as a guest talking about Discrete Math because that's an episode that I got really excited about the topic. And right after recording the episode, it was the last day of the call for proposals for RailsConf. And I just took that raw excitement, put together a proposal, hit submit before the deadline. And it got accepted and got turned into a talk that I got to give on stage. So, that was, like, just a really fun journey from exciting episode with Sara and then, like, randomly turned into a conference talk.
STEPHANIE: That's awesome. That makes me feel so happy. Because it just reminds me about how the stuff we talk about on the show can really resonate with people, you know, enough to become a conference talk that people want to attend.
And I also really like that a lot of the topics we've gotten into in the past 50 episodes when we've taken over the show have been a bit more evergreen and just about, you know, the software development experience and a little bit less tied to specific news within the community.
Speaking of evergreen topics, today, I wanted to discuss with you an evergreen software skill, and that is searching or Search-Driven Development, even if you will.
JOËL: Gotta always get that three-letter acronym, something DD.
STEPHANIE: Yeah. I am really curious about how we're going to approach this topic because a lot of folks might joke that a big part of writing software is knowing what to Google. Do you agree with that statement or not?
JOËL: Yes and no. There's definitely value in knowing what to Google. It really depends on the kind of work that you're doing. I find that I don't Google that much these days. There are other tools that I use when I'm particularly, like, searching through documentation, but they tend to be less sort of open-ended questions and more where it's like, oh, let's get the actual documentation for this particular class or this particular method from the standard library.
STEPHANIE: Oh, interesting. I like that you pointed out that there are different scopes of things you might want to search for. So, am I hearing correctly that when you have something specific in mind that you are just trying to recall or wanting to look up, you know, you're still using search that way, but less so if you are trying to figure out how to approach solving a problem?
JOËL: So, oftentimes, if I'm working with a language that I already have familiarity with or a framework that I have familiarity with, I'm going to lean on something more specific. So, I'm going to say, okay, well, I don't exactly remember, like, the argument order for Enumerable's inject method. Is it memo then item, or item then memo? So, I'll just look it up. But I know that the inject method exists. I know what it does. I just don't remember the exact specifics of how to do that.
Or maybe I want to write a file to disk, and I don't remember the exact method or syntax to do that. There are some ways that you can do it using a bunch of instance methods. But I think there's also a class method that allows you to kind of do it all at once. So, maybe I just want to look up the documentation for the file class in Ruby and read through that a little bit. That's the kind of thing where I suppose I could also Google, you know, how to save file Ruby, something like that. But for those sorts of things where I already roughly know what I want to do, I find it's often easier just to go directly to the docs.
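For reference, a minimal Ruby sketch of the two standard-library details Joël mentions, the inject block order and the one-shot file write, as the Ruby docs describe them:

    # Enumerable#inject: the block receives the accumulator (memo) first, then the item.
    total = [1, 2, 3].inject(0) { |memo, item| memo + item }
    # => 6

    # Writing a file with instance methods: open a handle, write, and let the block close it...
    File.open("notes.txt", "w") { |f| f.write("hello") }

    # ...or with the class method that opens, writes, and closes in one call.
    File.write("notes.txt", "hello")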
STEPHANIE: Yeah, yeah, that's a great tip. And I actually have a little shortcut to share. I started using DuckDuckGo as my search engine in the past year or so. And there's this really cool feature called Bangs for directly searching on specific sites. From my search bar, I can do, let's say, bang Rails and then my query. And it will search directly the Rails Guides website for me instead of, you know, just showing the normal other results that might come up in my regular search engine. And the same goes for bang Ruby doc. That one shows ruby-doc.org, which is my preferred [laughs] Ruby documentation website.
I've really been enjoying it because, you know, it just takes that extra step out of having to either navigate to the site itself first or starting more broadly with my search engine and then just scrolling to find the site that I'm looking for.
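A rough illustration of the bang syntax Stephanie describes; the exact bang names are worth double-checking in DuckDuckGo's bang directory before relying on them:

    !rails has_many :through      searches the Rails Guides directly
    !rubydoc File.write           searches ruby-doc.org directly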
JOËL: Yeah. I think having some kind of dedicated flow helps a lot. I have a system that I use on my machine. It is Mac-specific. But I use a combination of the application Dash and the application Alfred. It allows me, with just a few keyboard shortcuts, to type out language names. So, I might say, you know, Ruby inject, and then it'll show me all the classes that have that method defined on it, hit Enter, and it pops up the documentation. It's downloaded on my machine, so it works offline. And it's just, you know, a few key presses. And that works really nicely for me.
STEPHANIE: Oh, offline search. That's really nice. Because then if you're coding on a plane or something, then [laughs] you don't have to be blocked because you can't look up that little, small piece of information you need to move forward. That's very cool.
JOËL: That is really cool. I don't know how often I've really leaned into the offline part of it. I don't know about you; I feel like I don't code on airplanes as much as I thought I would.
STEPHANIE: That's fair. I also don't code on airplanes, but the idea that I could is very compelling to me. [laughs]
JOËL: Absolutely. So, that's the kind of searches that I tend to do when I'm working in a language that I already know, kind of a day-to-day language that I'm using, or a framework that I'm already pretty familiar with. And this is just looking at all the things I haven't gotten to the point where I've fully memorized, but I have a good understanding of.
What about situations where maybe you're a little bit less familiar with? So maybe it's a new framework, or even, like, a situation where you're not really sure how to proceed. How do you search when there's more uncertainty?
STEPHANIE: Yeah, that's a good question. I do think I start a bit naively. The reason that we're able to be more specific and know exactly where to go is because we've built up this experience over time of scrolling through search results and clicking, you know, maybe all of them on the first page, even, and looking at them and being like, oh, like, this is not what I want. And then, seeing something else, it's like, oh, this is more helpful and kind of arrived at sources that we trust.
And so, if it's something new, I don't really mind just going for a basic search, right? And starting more broadly might even be helpful in that process of building up the experience to figure out which places are reputable for the thing that I'm trying to figure out.
JOËL: Yeah, especially when there's a whole new landscape, right? You don't really know what are the places that have good information and the ones that don't. For some things, there might be, like, an obvious first place to start. So, recently, I was on a project where I was trying to do an integration between a Rails app and a Snowflake data warehouse. And so, the first thing I did—I'm not randomly Googling—I went to the Snowflake website, their developer portal, and started reading through documentation for things.
Unfortunately, a lot of the documentation is a bit more corporatey and not really helpful for Ruby-specific implementation. So, there's a few pieces that were useful. There were some links that they had that sent me to some good places. But beyond that, I did have to drop to Google search and try to find out what kinds of other things the community had done that could be helpful.
Now, that first pass, though, did teach me some interesting things. It gave me some good keywords to search for. So, more than just Ruby plus Snowflake or something like that, I knew that I likely was going to want to do some kind of connection via ODBC. So, now I could say, okay, Ruby plus ODBC integration, or Ruby plus ODBC driver, and see what's happening there. And it turns out that one of the really common use cases for ODBC and Ruby is specifically to talk to Snowflake. And one of the top results was an article saying, "Hey, here's how you can use ODBC to get your Rails app to talk to Snowflake." And then I knew I struck gold.
STEPHANIE: That's really cool. The thing that I was picking up on in what you were saying is the idea of finding what is most relevant to you. And maybe that is something that the algorithm serves you because, like, it's, like, what a lot of people are searching for, you know, a lot of people are engaging with, or matching with all these keywords that you're using.
My little hack that I've been [chuckles] using is to use Slack and lean on other people who have maybe a little more, even just, like, a little more experience than me on the subject, and seeing, like, what things they're linking to, and what resources they're sharing. And I've found that to be really helpful as a place to start. Because, at that point like, my co-workers are narrowing down the really broad landscape for me.
JOËL: I really like how you're sort of redefining the question a little bit here, in that, I think, when we talk about search, there's almost this implicit assumption that search means searching the public internet through Google or some other alternative search engine. But you're talking about actually searching my private corpus of data, in this case, either thoughtbot's or maybe the client's Slack conversations, and pulling up information there that might be much more relevant or much more specific to the work that you're trying to do.
STEPHANIE: Yeah. In some ways, I like to think of it as crowd-sourced but, like, a crowd that I trust and, you know, know is relevant to me and what I'm working on. I actually have a fun fact for you. Did you know that Slack is actually an acronym?
JOËL: No, I did not know that. What does it stand for?
STEPHANIE: It stands for Searchable Log of All Communication and Knowledge.
JOËL: That is incredibly clever. I wonder, is this the thing where they came up with that when they made the original name? Or did someone go back later on, you know, a few years into Slack's life and was like, you know what? Our name could be a cool acronym; here's an idea.
STEPHANIE: I'm pretty sure it was created in Slack's early days. And I think it might have even helped decide that Slack was going to be called Slack as opposed to some of the other contenders for the name of the software. But I think it's very accurate. And that could just be how I use Slack. I'm a very heavy search power user in Slack. [laughs]. So, I find it very apt.
You know, obviously, I use it a lot for finding conversations that happened. But I really do enjoy it as a source of discovery for a specific topic, or, you know, technical question or idea that I'm wanting to just, like, filter down a little bit beyond, like you said, the public internet. In fact, I have found it really useful for when you encounter errors that actually are specific to your domain or your app.
Obviously like, you will probably be less successful searching in your search engine for that because it includes, you know, context from your app that other people in the world don't have. But once you are narrowing it down to people at your company, I've been able to get over a lot of troubleshooting humps that way by searching in Slack because likely someone within my team has encountered it before.
JOËL: So, you mentioned searching for error messages in particular. And I feel like that is, like, its own, like, very specific searching skill separate from more general, like, how do I X-style questions. Does that distinction kind of line up with your mental map of the searching landscape?
STEPHANIE: Yeah. I guess the way that I just talked about it now was potentially a bit confusing because I was saying instead of how you might search for errors normally, but I did not talk about how you might search for errors normally. [laughs] But specifically, you know, if I'm popping error messages into my search engine, I am removing the parts of the stack trace that are specific to my app, right? Because I know that that will only kind of, like, clutter up my query and not be getting me towards a more helpful answer as to the source of my issue, especially if the issue is not my application code.
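As a hypothetical example of that trimming, with made-up class and file names, the app-specific frames get dropped and only the generic part of the error goes into the search engine:

    Raw output:
      NoMethodError: undefined method `settings' for nil:NilClass
        app/services/billing/invoice_builder.rb:42:in `build'
        app/controllers/invoices_controller.rb:17:in `create'

    Search query:
      NoMethodError undefined method for nil:NilClass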
JOËL: Right. I want to give a shout-out to an article on the thoughtbot Blog with a wonderful name: Indiana Jones and the Crypt of Cryptic Error Messages by Louis Antonopoulos. All about how to take an error message that you get from some process in your console and how to make that give you results when you paste it into a search engine.
STEPHANIE: I love that name. Very cool.
JOËL: So, you've talked a little bit about the idea of searching some things that are not on the public internet. How do you feel about kind of internet knowledge bases, private wikis, that kind of thing? Have you had good success searching through those kinds of things?
STEPHANIE: Hmm, I would say mixed success, to be honest. But that's because of maybe more so the way that a team or a company documents information. The reason I say mixed results is because, a lot of the time, the results are outdated, and they're no longer relevant to me. And it doesn't take that much time to pass for something to become outdated, right? Because, like, the code is always changing. And if, you know, someone didn't go and update the documentation about the way that a system has changed, then I usually have to take the stuff that I'm kind of seeing in private wikis with a bit more skepticism, I would say.
JOËL: Yeah, I think my experience mirrors yours as well. Also, some private wikis have just become absolutely huge. And so, searches just return a lot of results that are not really relevant to what I'm searching for. The searching algorithms that these systems use are often much less powerful than something like Google. So, they often don't sort results in a way that brings relevant things up to the top. So, it's more work to kind of sift through all of the things I don't care about.
STEPHANIE: Yeah, bringing up the size of a wiki and, like, all of the pages, that is a good point because I see a lot of duplicate stuff, but that's just, like, slightly different. So, I'm not sure which one I'm supposed to believe. One really funny encounter that I had with a private wiki, or actually it was, like, a knowledge base article that was for the internal team...it was documenting actually a code process. So, it was documenting in more human-readable terms, like the steps an algorithm took to determine some result. But the whole document was prefaced by, "This information came from an email that was sent way long ago." [laughs]
JOËL: That's an epic start to a Wiki article.
STEPHANIE: Yeah. And there was another really funny line that said, "The reason for this logic is because of a decision made by (this person's name)," like, a business decision made by some random person's name. No last name either, so I have no idea [laughs] who they could be referring to or any of the, like, historical context of why that happened. But I thought it was really funny just as, like, an artifact of, at the time this was written, something that meant something to someone, and that knowledge kind of has been diluted [laughs] over the years.
JOËL: Yeah, internal wikis, I feel like, are full of that, especially if they've had a few years to grow and the company has changed and evolved.
So, now it's time for hot takes.
STEPHANIE: Yeah, I'm ready for them.
JOËL: We are now in the fancy, new age of AI. Is ChatGPT going to make all of this episode obsolete?
STEPHANIE: I'm going to say no, but I'm also biased, and I'm not a ChatGPT enthusiast. I've said it on air. [laughs] I can't even say that I've used it. So, that's kind of where I'm coming from with all this. But I have heard from folks that, convenient as it may be, it is not always 100% accurate or successful.
And I think that one of the things I really like about kind of having agency over my search is that I can verify, as a human, the information that I'm seeing. So, you know, when you're, like, browsing a bunch of Stack Overflow questions and you see, you know, all these answers, at least you can, like, do a little bit of, like, investigation using context clues about who is answering the question, you know, like, what experience might they have?
If you encounter something on a blog post, for example, you can go to the about page on this person's blog and be like, who are you? [chuckles] And, like, what qualifies you to give this information? And I think that is really valuable for me in terms of evaluating whether I want to go down a path based on what I'm seeing.
JOËL: So, I've played with it a tiny little bit, so not enough to have a good sample size. And I think it can be interesting for some of those less constrained kind of how-do-I-style questions. I'm not necessarily looking for, like, an exact code sample. But even if it just points me towards, oh, I need to be looking at this particular class in this standard library and read through that documentation to build the thing that I want. Or maybe it links me to kind of the classic blog posts that people refer to when talking about this thing.
It's a good way sometimes to just narrow down when you're kind of faced with, you know, the infinity of the internet, and you're kind of like, oh, I don't even know where to start. It gives you some keywords or some threads to follow up on that I think can be really interesting.
STEPHANIE: The infinity of the internet. I love that phrase. I don't think I've heard it before, but it's very evocative for me [laughs]. And I like what you said about it helping you give a direction and to kind of surface those keywords. In fact, it almost kind of sounds like what I was mentioning earlier about using Slack for, right? And, in that case, the hive mind that I'm pulling from is my co-workers. But also, I can see how powerful it would be to leverage a tool that is guiding you based on the software community at large.
JOËL: Something I'd be curious to maybe lean into a little bit more are some of those slightly more specified questions where it does give you a code snippet, so something like writing a file to disk where, right now, it's, you know, five characters. I just pop up Alfred and type up Ruby F, and it gives you the file docs, and it's, you know, right there. There's usually an example at the top of the file. I copy-paste that and get working.
But maybe this would be a situation where some AI-assisted tools would be better. It could be searching through something like ChatGPT. It could be maybe even something like Copilot, where, you know, you just start typing a little bit, and it just fills out that skeleton of, like, oh, you want to write a file to disk in Ruby. Here's how it's typically done.
STEPHANIE: Yeah, you bring up a good point that, in some ways, even the approaches to searching we were talking about originally is still just building off of algorithms helping us to find what we're looking for, right?
Though, I did really want to recommend an awesome talk from Kevin Murphy, from a RailsConf a couple of years ago, that's called Browser History Confessional: Searching My Recent Searches. The main message that I really enjoyed from this talk was the idea of thinking about what you're searching for and why because that will, I think, help add a bit of, like, intentionality into that process. You know, it can be very overwhelming, but let that guide you a little bit.
One of the things that he mentions is the idea of revisiting your own assumptions with search. So, even if you think you know how to do something, or you might even know, like, how you might want to do it, just going to search to see if there's any other implementations that you haven't thought of that other people are doing that might inform how you approach a problem, or at least, like, make you feel even more confident about your original approach in the first place.
I thought that was really cool. That's not something that I do now, but definitely, something that I want to try is to be, like, I think I know how to do this, but let me see what other people are doing because that might spark something new.
JOËL: We'll put a link in the show notes to this talk. But I was lucky enough to see it in person. And also would like to second that recommendation. It is worth watching.
From this conversation that you and I have had, I'm having, like, two main takeaways. One is kind of what you just said, the idea of being a little bit more cognizant of, what kind of search am I doing? Is this a sort of broad how do I X, where I don't even really know where to start? Is this, like, something really specific where you just don't know what kind of syntax you want to use? Is it an error message where you just want to see what other people have done when they've encountered this? Or any other, like, more specific subcategories. And how being aware of that can help you search more effectively.
And secondly, don't limit yourself to the public internet. There's a lot of great information in your company's Slack or other instant messaging service, maybe some kind of documentation system internal, some kind of wiki. And those can be a great place to search as well.
STEPHANIE: If we missed any other cool searching tips or tricks or ways that we might be able to improve our processes for searching as developers, I would really love to hear about them. So, if any listeners out there want to write in with their thoughts, that would be super awesome.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie experienced bike camping. Joël describes his experience during a week when he's in between projects.
Stephanie and Joël discuss the concept of code ownership, the mechanisms to enforce it, and the balance between bureaucracy and collaboration. They highlight the challenges and benefits of these systems in large codebases and emphasize that scaling a team is as much a social challenge as it is a technical one.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: This weekend, I went bike camping for the first time. So, it was my turn to try out the padded bike shorts and go out on a long ride, combining two things that I really enjoy: biking and camping. It was so awesome. We did about 30 miles, from the city of Chicago to close to the Indiana border. And we were at a campground that's owned by the forest preserves where I'm at. It was so much fun.
I packed all my stuff, including my tent and sleeping bags. And it was something that I never really imagined myself doing, but I'm really glad I did because I think it'll be something that I want to kind of do more of in the future, maybe even do multi-day bike camping trips.
JOËL: So, what's your verdict on the bike shorts?
STEPHANIE: Definitely a big help. Instead of feeling a little bit sore an hour or so along the bike ride, it kind of helped me stay comfortable quite a bit longer, which was really nice.
JOËL: Would you do this kind of trip again?
STEPHANIE: I think I would do it again. I think the next step for me is maybe to go even farther, maybe do multiple stops. Yeah, I was talking to my partner about it who came along with me, and he was saying, like, "Yeah, now that you've done that many miles in one day and, you know, camped overnight, you can really go anywhere. [laughs] You can go as far as you want."
And I thought that was pretty cool because, yeah, he's kind of right, where I can just pack up and go and, you know, who knows where I'll end up? Not that I would actually do that because of my need to plan. [laughs] I'm not that go-with-the-flow. But there was definitely something really special about being able to get from A to B with just, like, my physical body and not relying on any other kind of transportation.
JOËL: Yeah, there's a certain freedom to that spontaneity that's really nice.
STEPHANIE: Yeah. And I actually went with a group called Out Our Front Door. And if anyone is in Chicago and is interested in doing this kind of thing, they do group bike camping adventures, and they make it really accessible. So, it's a very easy pace. You are with a group, so it's just really fun. They make it really safe. And I had a really great time. There were about 60 of us actually at camp, and they had rented out the entire campground, so it was just our group.
And they even had a live reggae band come out and play music for us while we had dinner. And that was a really nice way for me to do it as a first-timer because there was stuff already planned for me, like meals. And I didn't have to worry about that because I was already, you know, just worrying about making sure I got there with all of my stuff. So, if that sounds interesting to you and you're in Chicagoland, definitely check them out.
JOËL: That's a great way to bring in newer people to say, let's have a semi-organized thing, where all you have to focus on is the skill itself, you know, can I bike the 30 miles? Rather than planning all the logistics around it.
STEPHANIE: Yeah, exactly. Joël, what's new in your world?
JOËL: So, speaking of planning and logistics, this week, I'm in between projects at thoughtbot. And on the Boost team that I'm on, we've introduced a kind of special rotation for people who are in between projects, where we have an internal project for internal strategic initiatives that we want to push forward. Whoever is unbooked gets to work on that. So, it's a small team with very high churn.
And one of the things that we do is, every week, we have somebody act as the project manager for that team. And I was in between projects this week. I was assigned that project manager job. And so, I've been doing that in addition to some of the tickets myself, and that's been really interesting.
STEPHANIE: Cool. I really like how that role is rotated among team members.
JOËL: Yes. And the whole team itself is very high churn. So, somebody might be on that for just one week and then rotate off, and a new person comes in; maybe someone's on there a couple of weeks while we're waiting to find them a project.
But we're always looking to prioritize booking people onto new client projects. And so, whoever is on that team, typically, is there for a short period of time. And so that means that the project manager role has to rotate a lot. But also, just in general, as you're managing tickets, you have to deal with the fact that people are not going to be on this project long-term. This is just, they're here for a few days, and they get some things done, and then they're moving off. And I think that presents some unique challenges in terms of the project management side of things.
STEPHANIE: Yeah. What kind of challenges did you find interesting in this role for the week that you were on it?
JOËL: So, in particular, I think making sure that outstanding work from the previous week gets done, especially when the people who were working on that ticket are no longer working on it. So, they may have done some partial work and then moved on to something else. And then, you have to ascertain the state of the ticket. Has it been completed? If it's only partial, what parts have and haven't? Can this be passed on to somebody else? Is there some unique knowledge that the previous person had? Has the code been pushed up? That kind of thing.
STEPHANIE: So, that reminds me of something I heard about the idea of being expendable. You know, there are certain industries where anyone else with that skill set can kind of step in and take over for another worker without a lot of issues, and they can continue on doing that work. So, I'm thinking about, you know, maybe doctors or pharmacists where they have that, like, shared skill set, and everything is documented enough so that they can just take whatever their case is. And if someone is out, it's not a big deal because people can just step in. And I'm curious about if this is something that could work for software development.
JOËL: I think it is important to have a team where nobody is irreplaceable. When it comes down to individual tickets, one of the things that I've been pushing for is that, at the end of the week, I would like to not see any tickets remain in the in-progress column. We're using a Kanban-style board. So, ideally, all work either moves to the done column or moves to the to-be-done column for next week. And it's no longer owned by anyone, so people have removed their faces from it. Ideally, though, if you pick up a ticket during the week, you get it to completion.
So, one thing that I've been really pushing with our team this week is splitting tickets up. If this feels like it's bigger than a few days, then it needs to be split up, and the part of it that gets done moves to the done column. Part of it might be some work that somebody else is going to pick up next week; move that to the to-do column. And so, that way, at the end of the week, we have, ideally, a column full of things that were pushed over the finish line, that are done.
And then, we have a column of things to be done next week that nobody has kind of called dibs on yet. So that then next week, when we have a new group of people coming in, you don't just look at this column of to-do things. It's like, well, all of these have someone's face on them. I'm not able to pick up anything, so now what do I do? By having those all kind of fresh and available to be picked up, you make it easy for the next batch of people to hit the ground running on Monday morning.
STEPHANIE: That's really interesting. You said you were doing this Kanban style, but it almost kind of sounds like one-week sprints in a way.
JOËL: Kind of, because the way we book people onto clients is typically on a per-week basis. And so, if there's going to be a gap between clients, typically, it's in increments of a week. But just because they're on the project for a week doesn't necessarily mean that we're tracking the tickets on a per-week basis. So, it's not like, oh, we're committing to doing all of these tickets by the end of a particular time or anything like that.
We are working in a more Kanban style where there's a backlog, and you pull tickets, and whatever gets done gets done. What we do try to do, though, is not have individual tickets hang in the in-progress column over a week boundary. So, there's a nuance there. I guess there's some ways in which maybe it feels a little bit sprint-like. But I think we are running in much more of a Kanban-style workflow.
STEPHANIE: Yeah, that makes sense.
JOËL: It's really to deal with that churn and the idea that even though the ticket might stick around for a while or maybe it gets split up into multiple small tickets, the people are switching constantly. And so, making the workflow play nicely with the fact that the team is churning on a weekly basis kind of adds an extra, you know, a little bit of spice to the project management side of things.
STEPHANIE: Did you find yourself being the one to break down tickets to make sure that they weren't larger than a week's worth of work? Or did you work with the developer themselves to find opportunities to break out what they were working on if we got to the mid-week and progress wasn't looking like it would be completed by the end?
JOËL: I've left this up to individual developers. This is more of a broad conversation I had with our team, kind of saying, "Hey, here's our goal. We want to get some things done by the end of the week. If we don't think we can get them done, here are some strategies I recommend. I'm available to pair if people want it." But I didn't go through and estimate all the tickets and split them up.
I did a little bit of, like, grooming ahead of time. So, I had a sense of when we started the week if tickets felt roughly sized correctly. But oftentimes, you know, that kind of thing, you start working on it, and then you realize, wait a minute, this is a bigger ticket than I thought.
STEPHANIE: Yeah. I think even just having someone check in and be like, "Hey, how is progress? Can I support you in making sure that you're able to get to somewhere that feels completed by the end of the week so that the rest of the work is set up for someone new to take on?" That seems really valuable to me.
Because as an individual, I'm like, yeah, I don't know, I'm maybe heads down just deep and trying to get my thing done, but maybe not so aware of progress and relative to how much time I've spent on it. And having just someone prompt me on that could help kind of pull myself out a little bit, you know, come out for some air and be like, oh, actually, you know, this is a good spot for me to break this down.
Do you have any insights into this week that you might be bringing with you into client work or anything like that?
JOËL: I think this has just given me an even deeper appreciation for breaking tickets down. That arbitrary end-of-the-week deadline forces tickets to be broken down more, whereas otherwise I might say, oh, well, I picked up a ticket on Thursday for a client; it can totally bleed into next week, that's fine. It's still a fairly short ticket; I just started the work later. And so, trying to make sure that tickets get scoped down really tightly, I think, is an area where I could probably benefit from that discipline on client projects as well, you know, even if I'm not doing it to the extreme I'm doing it this week.
STEPHANIE: Yeah. I would be really curious to find out if next week the folks who are on this project feel like they're in a good spot to, you know, keep on making forward momentum because they can just pull from the backlog and not have to go and do that knowledge transfer.
JOËL: Right. We'll see with all of this, right? Even with all the conversations and things, maybe we'll end the week, and I'll have 10 cards in the in-progress column. And it'll be like, okay, we tried a thing this week, mixed success. How do we want to iterate on that idea next week? Potentially with a different team.
STEPHANIE: Right, exactly.
JOËL: I feel like one way to summarize the type of work that I was doing this week is that it's a kind of scaling challenge, but scaling over time: the team itself is small, but it's constantly churning. And I think you've been working on a team that's had a similar problem but in a different dimension. You're scaling over team size, actually a massively large team, and seeing some of the challenges there. What are some of the things that you've been facing?
STEPHANIE: Yeah. So, my current client project, I'm working on a codebase where there are hundreds of developers also working and committing to this codebase daily. And this codebase is really massive. There is so much stuff going on. And I've really only explored the world of the particular team that I'm on.
But I recently had to do a little bit of work in some code that is owned by a different team. And I actually really appreciated the way that we were able to collaborate across, I guess, ownership boundaries. And I was really interested in talking about some of the different ways that we've seen the idea of, like, who is owning, and, like, who is accountable for areas of the codebase once you reach a certain size.
So, what was really convenient about the way that I was working was that in my pull request, there was an automated step that told me I needed specific owner approval on the code that I was writing because I was touching some files that were owned by a different team. And it gave me all of the handles for the people on that team. So, I knew who to go talk to.
And it ended up being that that team had a public Slack channel specifically for people outside to ask them questions about their domain. And they had a rotating ambassador system. And so, in the Slack channel, in the channel topic, it said who was the ambassador for that week. So, you know, I saw who it was. I got to @ them and say, like, "Hey, like, I'm working on some of these files for this feature for my team. And, like, here's my pull request. Could you give me a review?"
JOËL: The more you're describing this, the more this is feeling very large team, almost bureaucratic systems. I'm hearing public Slack channels, which implies that teams have private Slack channels. I'm hearing, like, a rotating ambassador.
One word that you mentioned that I'd like to dig into a little bit is the idea of ownership because I think that the concept of ownership is present on probably most teams, but it probably means wildly varying things. And it sounds like on your team, it's a very kind of codified thing. So, what does ownership look like on your project?
STEPHANIE: Yeah, I love that you asked that question because you're right; it is codified, literally, in the codebase. There are ownership files that are in the repo itself where they've specified, like, all of the models that a team owns, you know, down to the names of the files themselves, or maybe a namespace. It has the team name and all of the team members' handles. So, that's how it was able to tell me in an automated way, like, hey, reach out to these people.
It was really interesting because it was pretty frictionless on my end, where all I had to do was see that, you know, I couldn't submit my pull request until I got that approval. But it was enough friction to be, like, well, you can't just, you know, change files in this domain without someone with extra context taking a look.
JOËL: This reminds me a little bit of a system that GitHub has where you have this CODEOWNERS file that you can add to a repo. Have you messed around with that at all, or kind of seen how that looks?
STEPHANIE: I have a little bit. I think I've only seen it in the context of being notified that someone is wanting to submit a pull request, but I'm not sure if it does gate merging based on ownership. Do you know if that's the case?
JOËL: I don't. I think you can set it up to automatically request reviews from owners. And on a large repo, the owner could be...I assume this is based maybe on directories, or it might be a regex pattern. I forget the exact details. But you can have owners for partial parts of the code instead of owners of the entire repository. So, then, if you make a change to a particular part of the code, it would ping the correct person automatically to review your code, which sounds like a really nice feature.
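For reference, GitHub's CODEOWNERS file lives at the root of the repo, in docs/, or in .github/, and uses gitignore-style path patterns rather than regexes, with the last matching pattern winning. A small hypothetical example follows (the team handles are made up); combined with a branch protection rule that requires review from code owners, it gates merging rather than only requesting reviews:

    # .github/CODEOWNERS
    *                  @acme/platform-team
    /app/billing/      @acme/payments-team
    /app/models/*.rb   @acme/backend-team
    /config/           @acme/platform-team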
STEPHANIE: Yeah, absolutely. I think for the project that I'm working on, this definitely seemed like a custom process that they, at one point, decided to enforce. I'm not really sure about the history of how this came to be. But I found it actually quite a good way to meet people who are working in other parts of the codebase.
The person who happened to be ambassador that I pinged was so helpful in just, you know, making sure that I kind of understood the parts of the code that they owned that were honestly, like, quite complex. Like, I would not have felt confident just going ahead and making those changes necessarily myself because this is a pretty legacy codebase. There are quite a few gotchas, and they were able to point some [laughs] of them out to me.
Yeah, having that extra confidence was helpful for the particular feature I was working on. But it did also kind of give me a little pause because I've not worked at such a scale where there was so much uncertainty about the domain and that being so diffused across like I mentioned, hundreds of people.
JOËL: Do you think that this ownership system that's in place helps manage the complexity of scaling up to a team of hundreds of developers? Or does it feel like it kind of just adds a lot of process that gets in your way?
STEPHANIE: Ooh, that's a good question. It seems like kind of a chicken-and-the-egg situation because I felt better with someone else's input, right? Like, with someone else with more domain knowledge than me about what I was touching to be, like, "Yeah, like, this looks good to me," giving the plus one.
Whereas if that didn't exist, maybe I would have tried to seek it out on my own. But I would not have known where to start, right? I would have to ask around and be like, "Hey, like, who has worked in this directory before?" or whatever. Or I could have just gone ahead and merged my code and hoped my lack of context didn't really cause any huge problems, like, outside of what was covered by the tests. But this is helpful for, like, where the codebase is at, you know, and the size it has grown to.
JOËL: Do you think requiring an owner to review the code puts maybe an undue burden on the person who's the owner and that they might end up spending a lot of time reviewing code because they now kind of manage that part of the app?
STEPHANIE: Yeah, that's a really good question. I think it can. But I also feel a little better that that role is rotated, that everyone on the team gets the opportunity to really, like, focus on that. And I'm pretty sure the way it works is that that is their main focus for the sprint, or the week, or whatever, and that they're not assigned any other feature work but to prioritize being that ambassador. So, in some ways, that is a lot of process, right?
And there is that trade-off of having to allocate someone specific to answer people's questions. But at least from what I've seen, it does seem like a necessity because people do have questions, right? And I think they have figured out a system where it's very clear who you're supposed to talk to, and that accountability aspect of it has been met. Because I've also, like, worked on teams where that role is not well defined. People don't want to do it. And it's almost kind of a bystander effect where someone asks a question, but no one is specifically responsible for answering it, and so no one answers it.
JOËL: Oh yeah. Yes. And then you get kind of the cost of the bureaucracy without the benefits of kind of diffusing that knowledge.
So, we've been talking a lot about how this kind of ownership system can be really beneficial despite the overhead for a team that's 200 or 300 developers and where nobody knows all of the code or all of the nuances. And kind of at the other extreme, it's absolutely not worth it for a team of two or three developers where everybody knows the code, and there's kind of shared ownership of the project.
Somewhere in between, there is where you start having maybe some of those conversations about scaling the team, and do we need to introduce more process? In your experience, where do you think introducing some sort of ownership system like this starts becoming valuable? Or maybe what are some of the questions that a team should ask themselves to gauge, like, at the size we're at right now, would we get value from an ownership system?
STEPHANIE: Yeah, that's a really good point because as you were saying that, I was just starting to think of, yeah, I've certainly worked on projects where I have reviewed every piece of code that is to be merged, right? And then, at some point, that starts to change where, like, I can't do that anymore. And that transition has always been really interesting to me.
And then, I think there is, like you mentioned, another one where it's like, okay, now we aren't able to review everything, but, like, how do we trust that the code that is being merged, even if we don't all share that same context, is up to the quality we want, or is bug-free? Because without that context, there's always the opportunity that something might be missed.
On teams, I think I've seen that really look like more bugs than usual, right? And maybe there is, like, actually, like, a big problem, and the site is down. And maybe there is, like, a post-mortem or something to discuss, like, why this happened. And, you know, it turns out that the siloing or, like, the lack of context sharing was partly involved.
And so, I do think there are definitely symptoms when we're starting to firefight a little more [chuckles] that might be kind of an indicator that the app has grown to a point where some context is being lost, and there are not guardrails in place to do our best to, like, share it, like, when we can and not when it's too late.
JOËL: Would it be fair to say that your recommendation is the team should not have an ownership system and kind of stick to everybody reviews all the code as they grow until they start hitting actual pain points, such as real bugs caused and, at that point, let that pain or maybe even the post-mortem be the thing that triggers the introduction of an ownership system?
STEPHANIE: I think so. I have not seen something like that proactively introduced. I would be curious if anyone has experienced something like that. But, you know, I think it's okay for change to be a little painful, right? And that's part of the growing pains of becoming a larger team, or organization, or codebase and continuing to reevaluate. Though, I guess I would be a little cautious about, you know, jumping straight to introducing processes or policies, right? Because those can be really hard to undo if they end up not being actually helpful for the root cause of the problem.
But, like, take how you experimented with making sure that, you know, we didn't have any in-progress tickets for that project. The idea of just trying something, seeing how it works, and kind of getting the team's feedback, that is really valuable to me, at least as an IC. And, yeah, just making sure, like, you know, hearing from all of your team members on how those processes are changing the way they work and whether it's feeling good or not.
Like I said, I enjoyed the process on my client project because it helped me feel more confident that the code that I was changing...because I can't possibly gain all of the knowledge that the owners of that area of the code have. It's just not going to happen. But also, I can imagine it being maybe not so good for someone else, right? It kind of being a barrier or being frustrating because, oh no, they really need to merge the code. And maybe they made the smallest change in a file owned by another team, right? And having to jump through that hoop.
JOËL: Yeah, that has absolutely never happened to me.
STEPHANIE: Really? Because it sounds like it has.
[laughter]
JOËL: Yes, it absolutely has. We've been kind of throwing around the idea of ownership and the idea of team almost interchangeably, as if they're one-to-one. And I want to lean a little bit into this idea of the team. Because I think there's an implicit assumption here that, within a team, there's enough knowledge sharing that happens, enough shared context, that everybody can kind of understand all of the code for their team, and that knowledge is just shared around, so you don't need these extra processes within teams.
But maybe once you start having multiple teams as part of your engineering department, then there starts to be some friction or some lost context that needs some mechanism to get around. Does that sound about right to you?
STEPHANIE: Ooh, I think that depends because even within my team, we are working on different projects, and I am definitely not on top of, you know, what some other folks are working on. And even within our team, there are silos. The difference there, though, is that I know who they are. Like, I still am in contact with them in our daily syncs, so the barrier to finding someone who has the right information is much lower.
So, yeah, I think that is definitely a part of it, too, if, like...I think just the social barrier, even of, like, reaching out to someone you don't know and being like, "Hey, like, can you review my code?" that is [laughs] kind of...can be a little scary. And the dynamics definitely feel different within a team and between teams.
JOËL: Yeah, and definitely just the idea of, like, someone you see every day for your daily sync, you're going to feel much more comfortable reaching out to them for help or for a quick review than to a total stranger.
So, it's interesting that you mentioned the social aspects of things. I don't know if you're familiar with Conway's Law, the idea that the technical structures of our code, over time, end up reflecting the social structures of our teams.
STEPHANIE: Yeah, that is something that makes a lot of sense on the project that I'm working on now, where the boundaries, like I mentioned, between teams and between different namespaces are semi-rigid, I suppose, right? Rigid enough that, you know, there is a process but not so high that it becomes a burden, at least in my opinion.
But for another feature that I worked on, I actually had to interact with an external system that's owned more by the parent company of my current client. And that process was definitely more rigid. And I had to figure out who to email and had to, you know, look up this person's profile in the company directory to make sure that, you know, I was talking to the right person who had information that was relevant to me.
And then, you know, even, like, the technical aspect of talking to this external service had a lot of various barriers and, you know, special authorization and configuration that I needed to set up. So, definitely felt that in terms of the different levels of ease and talking to systems owned by different parties.
JOËL: So, the fact that there's, like, an actual, like, departmental or even, like, corporate boundary, definitely showed up as, like, a very hard boundary in the code as well.
STEPHANIE: Yeah, absolutely.
JOËL: And I think taking this to an extreme, I've seen this happen when teams want to introduce microservices. And oftentimes, the boundaries of those microservices are not necessarily driven entirely by technical reasons, but they're often by social reasons. So, we can say, hey, this team is going to own this service, and everybody else only needs to interact with a public API. And we can make all sorts of changes internally, and you never need to know that. They will never break your code. And also, we don't need to bother each other or feel the need to fully deeply understand the internals of each system.
STEPHANIE: Yeah. Once you're introducing APIs that are accessible to a certain group of people and having to navigate, you know, making changes to the API or aligning in the expected structure of the, you know, communication that you're sending between them, that is definitely a pretty rigid boundary [laughs] and ends up being a lot of overhead to talk to those systems. And I certainly have been in the position of trying to communicate with the people who built and designed those systems and figuring out how to get on the same page.
And even just recently, I was accidentally sending something as an array, and they were expecting it as a string. And that caused all these problems of making the request happen, you know, successfully. And we didn't even realize it until someone pulled out the doc that had the API schema and pointed out that there was some miscommunication along the way.
JOËL: And that can be such a hard boundary around even, like, the idea of ownership. So, you were talking about how, earlier, when you were working in code that's maybe owned by another team, they might want to review it before it gets merged. So, there's a bit of a gatekeeping there.
When a team transitions fully to microservices, I've seen it go almost, like, more extreme where it's even, like, you don't even change the code. You submit a ticket into our system. We will prioritize it, and then eventually, we will build your feature. But you don't even get to make a change to the code and have us approve it. We're going to make all those changes because we own it. So, it kind of feels like taking that ownership idea and then just really running with it to a full extreme.
STEPHANIE: Yeah, right. That makes a lot of sense in the lens of Conway's Law if those are the processes they have in place for navigating cross-team collaboration or communication. Because, at some point, maybe they just reached a level where it had to be enforced that way because maybe things were getting dropped, or a more casual, lower-barrier connection was too overwhelming or just not working for the organization.
JOËL: I think what I've been hearing just now and then just more broadly throughout the episode is that while there's a lot of interesting technical solutions that can make things better, at its root, scaling a team is a social problem. And it's all about how your teams communicate with each other so that you can scale smoothly and that the system doesn't suffer from adding more people.
STEPHANIE: Yeah. I think this is an area where I would love to hear any thoughts from our listeners about how their organizations handle something similar because I find all of this really interesting. And, you know, it ends up impacting my day-to-day work in a very real way. And so, if other places have figured out how that scaling and, you know, social and technical boundaries work in a way that feels good, I would love to know.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up.
Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Want a cool cucumber salad? Joël's got you covered. Stephanie has evolved and found some pickles she enjoys.
Experienced programmers use a lot of heuristics or "rules of thumb" about what makes their code better. These aren't always true, but they work in most situations. Stephanie and Joël discuss a range of heuristics, how to use them, how to come up with them, how to know when to break them, and how to teach them to more junior devs.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, as of the recording of this, summer is in full swing, and it's the time of year where we have all these, you know, fresh vegetables out, so I've been really enjoying a lot of those. I think this week, in particular, I've been going into, like, all the variations on cucumber salads.
STEPHANIE: Ooh.
JOËL: Yeah. So, that's been kind of fun for me. A fun thing I've been doing to spice this up is pickling mustard seeds to add as a topping. That's actually really amazing. It adds just a little bit of acidity, a little bit of crunch, a little bit of texture. And it's pretty.
STEPHANIE: That sounds so delicious. And also, I was going to share something about pickles about what's new in my world. [laughs] But first, I am curious, what has been your go-to cucumber salad that you put this pickled mustard seed situation on top?
JOËL: So, cucumbers and tomatoes is just the base of everything. And then, it kind of goes with random things I have in my fridge. A little bit of goat cheese on top can be a great topping, big fan of balsamic glaze. You can just get, like, a bottle of that at the grocery store, the pickled mustard seeds. I've recently been trying topping with a fried egg.
STEPHANIE: Ooh, that sounds really fun. It kind of, like, adds a bit of savoriness and creaminess and maybe even, like, the crunchy fried edges. That sounds really yummy.
JOËL: Particularly if you do it over easy where the center is not fully cooked. When the egg breaks, you effectively get salad dressing for free.
STEPHANIE: That sounds so delicious.
JOËL: Summer vegetables, they're great.
STEPHANIE: They are great. Last year, I did have a cucumber garden, as in a garden, and a few cucumber plants that were too prolific for me, to be honest. I found myself overrun with cucumbers and having to give them away because we just didn't eat them enough. And this year, we scaled back a little bit [laughs] on the cucs. But I am so excited to bring up what's new in my world now because it's, like, so related, and we did not plan this at all. But I have a silly little thing to share about my own pickle journey. So, I used to be a pickle hater.
JOËL: You know what? Same.
STEPHANIE: Oh my gosh, incredible. Another new thing we've learned about each other. I really, like, wanted to like pickles because, you know, when you order a sandwich in a restaurant, it always comes with the pickle spear. And neither me nor my partner were into pickles, and we would always leave the spear uneaten on the plate, and we felt so bad about it. I felt really bad about it.
And so, every, like, three to six months or so, I'd be like, okay, I'm going to gather the courage to try the pickle again and see if maybe my taste buds have changed, and this time I'll like it. And, you know, I would try a bite and just be like, no, no, I don't think it's for me. [laughs] But I guess I was just so primed to do something about, like, wanting to eliminate this really inconsequential food waste. But every time it happened, I would just, you know, [laughs] be, like, oh, if only I loved pickles.
And I got my friend, who is a pickle connoisseur, to help me figure out, like, what pickles I might like. So, I asked her to come up with, like, a pickle sampler for me because I really hadn't tried all too many. And that actually really helped me find which ones were a little more palatable to me. So, I found out that I liked the sweeter ones. There's, like, a bread and butter pickle that can be quite sweet. Your diner pickle can be very different from a jar of, like, fancy pickles. [laughs]
JOËL: Definitely.
STEPHANIE: One day, she gifted me a jar of, like, Polish gherkins that were delicious.
JOËL: Hmmm.
STEPHANIE: I was like, wow, I can just snack on these. So, the thing that's new is that this time, I went to an Eastern European grocery store, and I bought my own jar of pickled gherkins. And that was something that Stephanie, like, two years ago, would never even do. [laughs]
JOËL: That's really cool that you got a chance to sort of explore a broader range of what was available in the pickle world and then were able to find kind of your niche there and discover something new that you actually like.
STEPHANIE: Yeah, it was very fun. And now I feel like my whole world has opened up to, you know, pickley and fermented things and just, like, get to enjoy even more snacks.
So, to move away from pickles, recently, on my client project, I've been pairing a lot more with other client developers. And one thing that has come up is, you know, talking about our reasoning or our thought process for when we're pairing on some code. And I realized that I have built up a lot of either intuition or maybe some rules that I like to follow when I'm writing code, writing a test, or even doing a code review. And I've realized that you know, as developers, we often use these kinds of shortcuts or heuristics to help orient us as we're doing our work.
JOËL: Yeah. I think that's definitely something that either comes yourself from experience or sometimes is passed along, and you get to benefit from somebody else's experience. They learned the hard way a lot of these tips and tricks, and now they kind of pass on some of these guidelines to us. Do you have any favorites that you reach for frequently?
STEPHANIE: So, one way I like to approach a problem is to start messy [laughs] and to kind of see what that gets me and then where to go from there. I find that it's a little bit easier for me to draw on things that I've, you know, learned or picked up and tips once I have something in front of me to react to. So, maybe I will just go with the naive implementation and just write all of the code in one method, you know, in a class. And from there, now that it's out of my system, can I kind of come back in with a finer tooth comb and then apply more of a sustained effort to clean things up, right?
And, to me, the question I find myself asking is, like, can this be extracted further? And so, you know, if I have everything in one giant method, then yes, [laughs] there is likely, you know, many opportunities to extract that, and maybe I will see something like, oh, the way that I spaced out this code that might be a signal to me that, like, these are some ideas that are grouped together, and I can pull something out there.
JOËL: Do you have a heuristic around when to stop extracting?
STEPHANIE: That's a good point. I think I tend to stop when I have kind of pulled out the classes that make sense to me. And, at that point, you know, like, maybe there is more extraction that can be done. But at a certain point, you know, you then get these really tiny classes that maybe don't hold their weight. And I think that's also true of methods that then call other methods, and that's the only thing that they do.
Then it's like, well, is this too extracted that it's not really giving a future reader helpful information, right? I want the extraction to improve readability. And that tends to be another lens through which I am applying to this idea of, like, can I extract further? Is this extraction helpful for understanding this code?
JOËL: I like the idea of looking at the code through multiple lenses. And so, sometimes you look at it through the lens of, yeah, are there enough moving parts here? Or does it feel kind of brittle and all in one place? And then sometimes completely shifting your lens and saying, you know what? Let's put myself in the seat of someone who's looking at this code for the first time. Can I understand it?
So, structuring and extracting code is a big part of the work that we do. And I also happen to have a couple of heuristics that I like to use. One is separate branching code from doing code. So, if I have an if...else condition, I try not to put ten lines of logic inside each branch; instead, I have just a call out to a method so that the only thing the conditional does is to choose which path you go, and then each individual path is its own method.
Similarly, if I'm writing a method, I'm not going to have a bunch of logic then a conditional mixed in together. So, my heuristic is a method gets to do one of two things. It either gets to choose a path to take or it gets to do a thing, but you can't mix and match both.
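A minimal Ruby sketch of that heuristic, with made-up method names purely for illustration: the conditional method only chooses a path, and each path gets its own "doing" method.

```ruby
# A "branching" method: its only job is to choose a path.
def process_payment(order)
  if order.paid_in_full?
    acknowledge_payment(order)
  else
    request_remaining_balance(order)
  end
end

# "Doing" methods: each one performs work, with no branching mixed in.
def acknowledge_payment(order)
  order.mark_paid
  send_receipt(order)
end

def request_remaining_balance(order)
  send_balance_reminder(order)
end
```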
STEPHANIE: Yeah, that makes a lot of sense. I really appreciate a well-named method that is, you know, determining, like, what condition needs to happen because then that helps me, yeah, like, avoid having to hold all of this information about this condition or this other condition, and this other condition in order to figure out what path I'm trying to take.
JOËL: And the naming and the readability, I think, is a big part of this. Another heuristic that I like to use that kind of converges on the same result is trying to write each method at a single level of abstraction. So, if I am writing a method that has some kind of high-level terms it's using, I'm not going to also mix in a lot of low-level implementation. And then, similarly, if it's a method that's doing a lot of, like, low-level nuts and bolts things, I'm going to try not to pull in some of these higher-level domain name methods in there.
And so, by separating things out so that every method reads one level of abstraction, you make it much easier for the reader to go through and figure out what's happening. Are we kind of getting that more 10,000-foot view, getting a sense of what's happening, and saying, okay, we want to process the user form, and then we want to send off an email, and then we want to, you know, write to a file? Or are we going through, okay, we're going to increment a counter so that we get exponential back off on our [inaudible 10:28] request? Those two things do not belong together in the same method.
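And a rough sketch of the single-level-of-abstraction idea, again with hypothetical names: the high-level method reads like an outline, and the nuts-and-bolts details live one layer down.

```ruby
# High level: reads like an outline of what happens.
def complete_signup(form)
  user = process_user_form(form)
  send_welcome_email(user)
  record_signup_metrics(user)
end

# Low level: implementation details live one layer down,
# instead of being mixed into complete_signup.
def record_signup_metrics(user)
  File.open("signups.log", "a") do |log|
    log.puts("#{Time.now} signup user_id=#{user.id}")
  end
end
```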
STEPHANIE: Yeah, absolutely. I really like this heuristic. And I have been applying it more and more and found it really useful for making sure that you're handling your errors correctly, especially because, at different levels of abstraction, you want to do different things with your errors, right?
An implementation error that's raised because, you know, you're calling something accidentally on nil, or maybe a third-party service is down, and you get a custom error, whatever that is, those concerns are different from how you want to handle things at the controller level. And oftentimes, I see those things really mixed together, and honestly, I think it leads to a lot of buggy code when you're trying to handle things that can go wrong at the wrong level of abstraction.
JOËL: Yeah. Is there a good heuristic around what level you think is best to trigger an exception? Or maybe, more generally, just being aware of different levels of abstraction and knowing that catching or triggering errors at each level will have different impacts.
STEPHANIE: I think more of the latter, the having an awareness of what kinds of errors might be possible and what impact that has on the user, right? The user being either an actual customer or, you know, another developer who has to read a notification from an error monitoring service. [laughs]
JOËL: This is really interesting to me because I think we've now bridged the concept of heuristics into the idea of mental models. So, the heuristic is write your methods at a single level of abstraction, but that then leads into a mental model where maybe code is structured in three or four different layers. You've got a low level, a mid-level, a high level, something like that, of abstraction. And now, you can use that mental model to start thinking about what are the impacts of exceptions at each layer?
And then, maybe you complete the circle by creating a heuristic that relies on that mental model, maybe, I don't know, raise in the low level, rescue at the top level, or something. I'm making something absolutely arbitrary up right now. But somehow, we've gone from heuristic, which creates a mental model, which then allows us to build new heuristics on top of that, and that seems like a virtuous cycle to me.
STEPHANIE: Yeah, absolutely. I think what I'm also picking up is the idea that you do need a mental model, or you do need to draw on your own ideas about something in order to apply the heuristic, right? You know, someone could tell you to separate branching code from doing code. But maybe you don't know what that means or, like, maybe you don't see why that's important. And sure, you can still apply it and try your best to follow it. But, in some ways, I think that the best heuristics are ones that you've kind of developed for yourself based on your own experience.
JOËL: That's really interesting. I think once you've built them from your own experience, I definitely feel like they're really impactful because you've kind of synthesized 2, 5, 10, 20 years of experience doing some of this work into, oftentimes, like, you know, a pithy one-line sentence, 5, 6 words that convey an approach that you've found works best, you know, maybe 80% or 90% of the time. The power of synthesis for your own self-learning, I think, is really hard to overstate.
So, I'm curious if there's any other heuristics that you commonly use that you kind of created yourself based off your own experience rather than just having it be more of a broadly received idea from the community.
STEPHANIE: I think, for me, it's more so that the experience has helped affirm certain heuristics and also made me feel more comfortable with letting others go. And one that I heard a lot but, like, didn't quite understand until really working through it deeper is the idea of feeling pain when you write a test, and that being a signal of opportunities to try different design with your code. And I just didn't know what that pain was at the beginning. Like, what does that even mean? [laughs] Like, how can a test cause me pain?
But on my own, I realized, oh, like, actually, I get really frustrated when I need to stub out a whole method chain, right? And I find myself having to go look up how to do that or just spending a lot of time having to do something that I haven't done before. Maybe the pain comes from having to change a lot of files because, oh no, like, I also broke 20 other tests in the process.
But when you're first starting out, oftentimes, you, like, don't know that that is not normal [laughs]; at least, that was true for me. And so, that was something that I had heard about, like, if you are feeling pain when writing a test, then, like, maybe reconsider your code design. But when you don't know how to identify what that pain is, and you also, like, don't know where to go from there, I find that, you know, the heuristic can only help you so much.
JOËL: Yeah. Maybe that's something that's challenging with a heuristic in that they're often expressed as these pithy sentences. But if you're not familiar with some of the underlying concepts, that might make them harder to apply, which is unfortunate because, oftentimes, these heuristics that we've developed as a community are targeted to newcomers to help them kind of avoid the mistakes that we've made along the way.
STEPHANIE: I think what really helped me the most in connecting a heuristic that's commonly expressed and my own experience is when I've had someone ask me about how I'm feeling when I'm, you know, making some kind of decision or when I'm reading some code. Like, what do I think of this, or what has been my experience with this? And giving me the opportunity on the spot to synthesize that information. Because otherwise, it's hard to figure out, you know, like, what is just normal? This is just life as a developer [laughs]. And what are opportunities to maybe gain some more insight about the work itself?
JOËL: One thing that I've learned over time as a developer, and I'm not sure if this quite rises to the level of a heuristic, but a lot of, like, pain and frustration in development doesn't necessarily have to be that way. And it's not necessarily because I'm bad at the job or I'm too new to the technology or whatever. It can often be a sign of underlying design issues or the fact that the system was modeled with certain assumptions that are no longer true. These can often be signals that you can make things better.
So, I think if I had to reduce this idea down to a clever one-liner, it'd be something along the lines of, it can be better, or it doesn't have to be this bad. You're writing a test, and it's really annoying. There might be a better way to structure the underlying code that would make the test better. You're having to do some, like, really clunky code to deal with something. Is there maybe a better object design that would make a lot of that pain go away, or at least kind of quarantine it in a certain part of the codebase?
STEPHANIE: I actually think you're really onto something because what I was just hearing, I love that, like, it can be better. It's less prescribed, I guess, than some other heuristics, like, you know, do not repeat yourself, or whatever.
JOËL: Classic.
STEPHANIE: [laughs] It really encourages, like, the individual to think a little deeper. And it actually reminded me of another...this is actually a bit of a pithy saying, but I find it to be really useful. And I'm curious if you've heard it before. It's a systems thinking heuristic, and the phrase is, the purpose of a system is what it does.
JOËL: Ooh, I have heard that, and I'm trying to remember what context.
STEPHANIE: So, it was coined by a systems thinking expert. Stafford Beer, I think, is his name. And I recently learned about it from a friend. But I think the cool thing is that it can be applied to literally anything [laughs] because everything is a system, you know, or not just software. But I have found a lot of value in applying it to just, like, is this function doing what it says it does, right? Or is it actually also doing, like, a side effect? And turns out, maybe we want to bring that into alignment with what the name of the function is, or try pulling that out, or whatever. I think it can also be true of test suites.
I don't know if this is a heuristic or not. But the idea that we should always be testing or all tests are good, yeah, I guess that could pass as a heuristic. By bringing in this perspective that the purpose of a system is what it does, it's like, well, is the test suite so bloated, so slow, and so flaky that it is actually hindering development? And if that is the case, then maybe there is some reevaluation necessary, right? Rather than just claiming that it's helping us have more confidence in our code when that may or may not be true.
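As a tiny, made-up illustration of that earlier function example: the name claims one purpose, but what the method actually does includes a hidden write (the `update!` call here is Rails-flavored and entirely hypothetical).

```ruby
# The name says "calculate," but the method also persists data --
# "the purpose of a system is what it does," and this one does two things.
def calculate_total(order)
  total = order.line_items.sum(&:price)
  order.update!(total: total) # surprise side effect hiding behind "calculate"
  total
end

# One way to bring name and behavior back into alignment:
# keep the calculation pure and make any write explicit elsewhere.
def total_for(order)
  order.line_items.sum(&:price)
end
```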
JOËL: You brought up an interesting idea here, which is that heuristics aren't always right. So, you're talking about the idea that a heuristic like good code is tested code might not be correct in 100% of the cases. Like, how accurate does a heuristic need to be in order for it to be really valuable? You know, you're hoping for something that's, like, 90% correct that you can follow most of the time, except in some edge cases, or something maybe as low as, like, 50% where it's a coin toss whether the heuristic applies in the situation or not. Are those still useful? Or are they maybe more confusing than otherwise?
STEPHANIE: Oh wow. That's a really interesting way to frame it because I don't know if I've ever stored information about how well my heuristics are serving me. [laughs] But I do really like the idea that you can use a heuristic as a guiding principle just to try and that you can always back out of it, right?
So, if you're wanting to take DRY to the extremist of extremes, just for fun or just to see how that might go, you can go down that path and, at any point, decide, okay, like, I like this, or I don't like this, and choose a different path. But the idea of kind of tracking, like, how well they're working for you that is really interesting to me, and not something I've tried before.
JOËL: I love the idea of taking a heuristic and, like, doing a side project whose whole goal is just to kind of push that heuristic to the extreme, to the breaking point so that, that way, you get an intuition of, like, when does it work for you? When does it not? That sounds like a really fun exercise for someone to do. Is that something that you've done yourself?
STEPHANIE: Not to the point of a whole side project, but just like I like to try pickles randomly every now and then to see if I like them, [laughs] I'll just try a new technique and see how it goes. In an episode a while back, we talked about whether we TDD or not, and, to be honest, I don't do it, you know, 100% of the time or all the time. But one day, I did decide to TDD a full-stack feature from start to finish just for fun [laughs], and I enjoyed it. I learned some things about it.
And I think now I've kind of integrated the parts that I liked about it into my development flow. Like, I'm not always going to do it. But I think it also just helped me figure out, like, okay, like, what is this thing about that people claim that is the pinnacle of how we should be writing our code? And how can I decide for myself, like, whether it works for me or just pick and choose the parts of it that work for me?
JOËL: Yeah. That just seems like a really valuable exercise. There are definitely too many heuristics out there to do that for everything. But I guess I've never thought of it quite so concretely. But I almost wonder if I should, like, add this to my kind of personal growth plan to say, like, once a year, I'm going to take a heuristic and kind of push it to an extreme and see what I can learn about it.
STEPHANIE: I actually think what's really cool is the process of, like, any individual developer figuring out what kinds of heuristics they want to follow, as opposed to, you know, like, a mass proclamation that, like, this is the way, right? Are there any heuristics that you have maybe picked up and then let go of because you realized that, you know, they weren't working enough or frequently enough for you or that you just didn't like?
JOËL: I don't know about, like, fully letting go, but definitely kind of recontextualize and sometimes even sort of rewrote them a little bit to work for me. So, a classic one would be the idea that shorter code is more readable. So, it's common to see comments on a pull request sort of like, "Hey, you could make this shorter by doing this." And that can be true to a certain extent. When you get to the point where you're playing code golf, it becomes absolutely unreadable.
But also, there's a point where sometimes using some other heuristics will result in longer code but actually make it more readable on the whole. And so, packing everything into one method might be overall shorter, so it's fewer lines to read going through a class. But maybe extracting some methods or doing that separating of branching code from doing code might lead to an overall longer class but also an overall more readable one. So, I think there's probably a lot of caveats that go with that idea. Oftentimes, shorter can be more readable with, you know, two or three asterisks that maybe go a little bit more into the why that is the case.
STEPHANIE: Yeah. I like the contextualizing. That actually reminded me of a talk that I watched recently by Hillel Wayne. It's called Intro to Empirical Software Engineering. And he basically, like, does a deep dive into all these studies about software practices that we think are, quote, unquote, "good," like, as a community or as an industry. And it's like, well, like, how do we actually know? Like, show me the research, right?
And one of the studies that he included was trying to determine if using abbreviations for variable names or using the full words made the code easier to debug or not. And so, the main example that he was using was employee number as a variable, and the abbreviation was EMP num. And it turns out that there was no difference in how easy it was to debug. But the approach that each group took differed.
So, the folks who had the full names, the full words for the variable names, were kind of using an approach of just scanning the code and being able to understand at a higher level the domain, right? Whereas the folks who were debugging with just abbreviations had to work at a bit of a lower level, you know, maybe using breakpoints and debugging the code that way.
And I thought that was really cool because, first of all, I think it kind of was trying to prove that, like, we don't actually know if one is better or not. But what is important and interesting to me is the idea that, like, you can choose the method that you like better or that works for you and the human side of it, right? The impact it has on our process.
JOËL: That's really cool. I'll have to go and watch that talk. Building this kind of context and nuance around a heuristic, though, takes a lot of time, takes experience. And part of the value of a heuristic is that we're collapsing down maybe our own experience or somebody else's experience into something that doesn't require you to necessarily do all that work upfront.
How do you feel about sharing and kind of targeting a lot of these heuristics to newer coders who are kind of trying to get better at their craft and looking for ways to improve without necessarily having to do, you know, five years of experience digging into a particular topic? Do you think heuristics are helpful, or do they maybe mislead?
STEPHANIE: I really value when they're presented as an opinion, as opposed to a true fact about code. [laughs] Because I really appreciate when someone is able to explain to me why they chose readability in this particular scenario or why they chose speed and performance. Or maybe they were making a trade-off between accessibility and, you know, something else. To just, like, tell someone, "Oh yeah, like, DRY code is better code," without the explanation or without, like, offering them the opportunity to reflect themselves on, like, oh, like, where have I seen DRY code that was easier for me to read? That seems a little less helpful in terms of investing in their growth.
JOËL: Yeah. Definitely, I think sharing some of the purpose behind it can often be really useful because most of these heuristics are never an end unto themselves. They're a means to some other end. So, you're not writing code that's DRY just because you want to be cool. You're writing code to be DRY because you're trying to improve readability, make it easier to change so you don't have to change it in multiple places. You want to maybe reduce the chance of certain types of bugs.
These are all actual purposes of what you want to do in your code. DRY is just one way of getting there. But oftentimes, we might skip that part and just be like, hey, you should make your code DRY because DRY is the best. And it can be, but it's in service to these other goals.
STEPHANIE: I think when I am sharing those types of heuristics that are more commonly held, I also do like to preface, like, some people think this, or some people like to do things this way, just to be clear that they don't have to like it or do it. In general, I always prefer injecting more nuance [laughs] into the conversation. But yeah, like, it is a really personal process, I think, and figuring out, like, how any individual makes decisions about, like, all the code they're writing. You have to make a million [laughs] decisions every time you do it.
So, yes, like, those heuristics do provide a shortcut. And also, I think it's worth taking the time to think about if it's working, especially for the specific context that you're applying it, right? Because that also can change. And, I don't know, maybe I'm just skeptical of any one size fits all solution.
JOËL: I think for myself, with many heuristics, as a beginner coder, I had a bit of, like, a spiral journey, or maybe kind of going up a set of stairs. So, as a brand-new developer, I would make a lot of duplication bugs in my code, where, you know, I would have the same value in multiple places, and then I'd change it in one place, and I don't remember to change it in other places, and the code breaks.
And so, being introduced to the idea of DRY actually helped my code get quite a bit better. It was, like, a net positive on my experience because I was not getting burned by all these bugs quite so frequently. And so, for a while, just throwing more DRY into my code just made my life better. And then, eventually, you kind of hit that plateau where I don't run into the pain of these bugs anymore. But now I keep doing more DRY somewhat mindlessly. And I end up with this pile of abstractions that are actually really brittle or frustrating to work with. And now, I have to rethink some of the assumptions behind the heuristic.
And then, at that point, yep, maybe recontextualize a little bit, learn about when it's good, when are the trade-offs not worth it. Now I have a better understanding, and I kind of go on another growth bit where it makes a lot of my code better until maybe I hit another plateau. I've kind of maxed out the benefits. I start seeing some of the pain, and then, again, I have to go through this cycle again. And maybe the approach you were talking about earlier, where you do a side project and kind of push a heuristic to its breaking point, is a way to kind of speed run that process.
STEPHANIE: Yeah, that's really interesting because you're just committing to it and trying to learn everything you can from it in a very concentrated setting. I also wonder, and it's totally fine if you don't know, but if someone had told you kind of all of those reasons you listed about why DRY code, like, what that achieves, if that may have reframed how you were thinking about applying it. Or was that also something that had to come from doing it enough?
JOËL: I think as a brand-new developer, a lot of that would have gone over my head. I was still really shaky on the concept of abstraction. When is it useful? When is it not? So, a lot of those more subtle pitfalls, I think, would not have been relevant to me at that point in my career, even the concept of readability, right? When I'm a brand-new programmer, I'm still getting used to reading a lot of code.
And so, the idea that code might be written in a way that's unreadable or more challenging to read, it might just feel like, oh, I just need to get better, improve myself. It's not that the code is written in a hard-to-read way. It's just I don't have enough experience at reading code. And I think that's a common thing that we do as beginners at everything, right? We start by blaming ourselves when things get hard.
STEPHANIE: Yeah. I was just thinking that, you know, if you are sharing heuristics with a newer developer or an early-career developer, at the end of the day, like, really, I'm not sure about the value of just dropping it on them and letting them run [laughs] with it. But I think what could be really, really effective is just having a sustained relationship with them and, like, continuing that conversation. It's, like, maybe in a code review or in a pairing session being like, "Oh yeah, like, I see you're practicing DRY. Like, what do you think about how this made this piece of code different?" And kind of baking in that process of self-discovery along the way and speeding it up in that way as well.
JOËL: So, what you're really saying is the one heuristic to rule them all is code in community.
STEPHANIE: I love that. I'm totally with you.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up.
Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie is consciously trying to make meetings better for herself by limiting distractions. A few episodes ago, Joël talked about a frustrating bug he was chasing down and couldn't get closure on, so he had to move on. This week, that bug popped up again and he chased it down! AND he got to use binary search to find its source, which was pretty cool!
Together, Stephanie and Joël discuss dependency graphs as a mental model, and while they apply to code, they also help when it comes to planning tasks and systems. They talk about coupling, cycles, re-structuring, and visualizations.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I'm always trying to make meetings better for me [chuckles], more tolerable or more enjoyable. And in meetings a lot, I find myself getting distracted when I don't necessarily want to be. You know, oftentimes, I really do want to try to pay attention to just what I'm doing in that meeting in the moment. In fact, just now, I was thinking about the little tidbit I had shared on a previous episode about priorities, where really, you know, you can only have one priority [laughs] at a time. And so, in that moment, hopefully, my priority is the meeting that I'm in.
But, you know, I find myself, like, accidentally opening Slack or, like, oh, was I running the test suite just a few minutes before the meeting started? Let me just go check on that really quick. And, oh no, there's a failure, oh God, that red is really, you know, drawing my eye. And, like, could I just debug it really quick and get that satisfying green so then I can pay attention to the meeting? And so on and so forth. I'm sure I'm not alone in this [laughs]. And I end up not giving the meeting my full attention, even though I want to be, even though I should be.
So, one thing that I started doing about a year ago is origami. [laughs] And that ended up being a thing that I would do with my hands during meetings so that I wasn't using my mouse, using my keyboard, and just, like, looking at other stuff in the remote meeting world that I live in. So, I started with paper stars, made many, many paper stars, [laughs] and then, I graduated to paper cranes. [laughs] And so, that's been my origami craft of choice lately.
Then now, I have little cranes everywhere around the house. I've kind of created a little paper crane army. [laughs] And my partner has enjoyed putting them in random places around the house for me [laughs] to find. So, maybe I'll open a cabinet, and suddenly, [laughs] a paper crane is just there. And I think I realized that I've actually gotten quite good at doing these crafts.
And it's been interesting to kind of be putting in the hours of doing this craft but also not be investing time, like, outside of meetings. And I'm finding that I'm getting better at this thing, so that seemed pretty cool. And it is mindless enough that I'm mentally just paying attention but, yeah, like, building that muscle memory to perfecting the craft of origami.
JOËL: I'm curious, for your army of paper cranes, is there a standard size that you make, or do you have, like, a variety of sizes?
STEPHANIE: I have this huge stack of, like, 500 sheets of origami paper that are all the same size. So, they're all about, let's say, two or three inches large. But I think the tiny ones I've seen, really small paper cranes, maybe that would be, like, the next level to tackle because working with smaller paper seems, you know, even more challenging.
JOËL: I'd imagine the ratio of, like, paper thickness to the size of the thing that you're making is different.
STEPHANIE: At this point, they say that if you make 1,000, then you bring good luck. I think I'm well on my way [laughs] to hopefully being blessed with good luck in this household of my little paper crane army.
JOËL: It's interesting that you mentioned the power of having something tactile to do with your hands during a meeting, and I definitely relate to that. I feel like it's so easy, even, like, mindlessly, to just hit Command-Tab when I'm doing things on a screen. Like, my hands are on the keyboard. If I'm not doing something, I'm just going to mindlessly hit Command-Tab. It's kind of like on your phone sometimes. I don't know if you do this, like, just scrolling side to side. You're not actually doing anything. You just want motion with your fingers.
STEPHANIE: Yes. I know exactly what you're talking about. And it's funny because it's a bit of a duality where, you know, when you are in your development workflow, you want things to be as quick and convenient as possible, so that Command-Tab, you know, is very easy. It's just built in, and that helps speed up your, you know, day-to-day work. But then it's also that little bit of mindlessness, I think, that can get you down the distraction path.
When I was first looking for something to do with my hands, to have, like, a little tactile thing to keep me focused in meetings, I did explore getting one of those fidget cubes; I have to say. [laughs] It's just a little toy, you know, that comes with a bunch of different settings for you to fidget with.
There's, like, a ball you can roll, you know, with your thumb, or maybe some buttons to click, and it gives you that really satisfying tactile experience. And I know they work really well for a lot of people, but I've really enjoyed the, I guess, the unexpected benefits [chuckles] of getting better at a hobby [laughs] while spending my time at my work.
Joël, what is new with you?
JOËL: So, a few episodes ago, I talked about a really kind of frustrating bug that I was chasing down that was due to some, like, non-determinism in the environment. And it kind of came, and then it went away. And I wasn't able to get sort of closure on that and had to move on. Well, this week, that bug popped up again, and this time, I was actually able to chase it down. So, that felt really exciting. And I got to use binary search to try to find the source of it, which made me feel really cool.
STEPHANIE: Oooh, do tell. What ended up being the issue?
JOËL: I'm connecting to an external Snowflake data warehouse, and ActiveRecord tries to fetch the schema and crashes as part of that with some cryptic error that originates from the C extension ODBC Ruby driver package. I figured out that it's probably something to do with, like, a particular table name or something in the table metadata when we're pulling this schema that we're not happy about. But I don't know which table is the one that it's not happy with.
Well, this time, I was able to figure out, by reading through some of the documentation, that I can pull subsets of the schema. So, I can pull the first n values of that schema, and it won't crash. It only crashes if I try to fetch the entire set, which is what is happening under the hood. At that point, you know, I could fetch each row individually, but there's hundreds of these. So, you know, I try, okay, what happens if I try to fetch 1,000 of these? Is it going to crash? Because it's a massive system. So, yes, I get a crash.
So, I know that a table less than a thousandth in the list of tables is what's causing the problems. So, okay, fetch 500 halfway in between there. It's still going to crash. Okay, 250, 125. I then kind of keep halving all the time until I find one that doesn't crash. And now I know that it is somewhere between the last crash and this one. So, I think it was between 125 and 250. And now I can say, okay, well, let's fetch the first, you know, maybe 200 tables, okay, that crashes. And I keep halving that space until you finally find it. And then, like, okay, so it's this one right here.
Now, the problem is that fetching the bad table is what crashes. So, I think it ended up being, like, number 175 or something like that. So, I never get to see the actual table itself. But because the list of tables is in alphabetical order, and because I can fetch the first 174 and it succeeds, I can tell what the previous, you know, 174 are.
I can pretty easily go and look at the actual database and the list of tables and say, okay, well, it's in the same order. And the next one is this one, and hey, look, there is some metadata there that has some very long fields that are longer than one might expect, specifically going over a potentially implied 256-character limit. That seems somewhat suspicious. And, oh, if we remove this table, all of a sudden, everything works.
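A rough sketch of that halving process in Ruby. The `fetch_first_tables` helper is a hypothetical stand-in for however the driver exposes "fetch the first n tables," and a real version would rescue the specific driver error rather than StandardError.

```ruby
# Find the smallest prefix of the table list that crashes the schema fetch.
# fetch_first_tables(n) is a stand-in for "ask the driver for the first n tables".
def first_crashing_index(total_tables)
  low = 1               # fetching a single table is assumed to succeed
  high = total_tables   # fetching everything is known to crash

  while low < high
    mid = (low + high) / 2
    begin
      fetch_first_tables(mid)
      low = mid + 1     # no crash yet: the bad table is further down the list
    rescue StandardError
      high = mid        # crash: the bad table is at or before position mid
    end
  end

  low # position of the first table whose metadata triggers the crash
end
```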
STEPHANIE: Wow, binary search, an excellent debugging tool [laughs] when you have no idea, you know, what could possibly be causing your issue.
JOËL: It's such a cool tool. Like, I'm always so happy when I get a chance to use it. The problem is, you need a way to be able to answer the question, like, have I found it? Yes or no? Or, generally, is it greater or less than this current position?
STEPHANIE: Well, that's really exciting that you ended up figuring out how to solve the bug. I know last time we talked about it, you kind of had left off in a space of, hopefully, we won't run into this issue again because it's no longer happening. But it seems like you were also set up this time around to be able to debug once it cropped up again.
JOËL: Yes. So, binary search is really cool. It's got this, like, very, like, fancy computer science name. But in reality, it's a fairly simple, straightforward technique that I use fairly frequently in my development. And there's another kind of computer sciency fancy-sounding concept that I use all the time. You've all heard me reference this multiple times on the show. You're right; we're finally doing it. This is the dependency graph episode.
STEPHANIE: Woo. [laughter] It's time. I'm excited to really dig into it because, you know, as someone who has heard you talk about it a lot, you know, and is maybe a little less familiar with graph theory and how, you know, it can be applied to my day to day work, I'm really excited to dig into a little bit about, you know, what a regular developer needs to know about dependency graphs to add to their toolbox of skills.
JOËL: So, I think at its core, the idea of a dependency graph is that you have a group of entities, some of which depend on each other. They can't do a task, or they can't be created unless some other subtasks or dependent actions take place. And so, we have a sort of formal structural way of describing these things. Visually, we often draw these things out where each of the pieces is like a little bubble or a circle, and then we draw arrows towards the things that it depends on.
So, if A cannot be done without B being done first, we draw an arrow from A to B. That's kind of how it is in the abstract. More concretely, this kind of thing shows up constantly throughout the work that we do because a lot of what we do as developers is managing things that are connected to each other or that depend on each other. We build complex systems out of smaller components that all rely on each other.
STEPHANIE: Yeah, I think it's interesting because I use the word dependency, you know, very frequently when talking about normal work that I'm doing, you know, dependencies as in libraries, right? That we've pulled into our application, or dependencies, like, talking about other classes that are referenced in this class that I'm working in. And I never really thought about what could be explored further or, like, what could be learned from really digging into those connections.
JOËL: It's a really powerful mental model. And, like you said, dependencies exist all over our work, and we often use that word. So, you mentioned something like packages, where your application depends on Rails, which in turn depends on ActiveRecord, which in turn depends on a bunch of other things. And so, you've got this whole chain of maybe immediate dependencies, and then those dependencies have dependencies, and those dependencies have dependencies, and it kind of, like, grows outward from there.
And in a very kind of simplistic model, you might think, oh, well, it's more, like, a kind of a tree structure. But oftentimes, you'll have things like branches on one side that connect back to branches on the other. And now you've got something that's no longer really tree-like. It's more of a sort of interconnected web, and that is a graph.
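A minimal sketch of that idea in Ruby, using a plain hash as an adjacency list; the package names are only illustrative. Note how two different branches both point back at the same node, which is what makes this a graph rather than a tree.

```ruby
# Each key depends on the entities in its array (an arrow from key to value).
dependencies = {
  "app"           => ["rails"],
  "rails"         => ["activerecord", "actionpack"],
  "activerecord"  => ["activesupport"],
  "actionpack"    => ["activesupport"],
  "activesupport" => []
}

# Walk the graph to list everything a node transitively depends on.
def all_dependencies(graph, node, seen = [])
  graph.fetch(node, []).each do |dep|
    next if seen.include?(dep)
    seen << dep
    all_dependencies(graph, dep, seen)
  end
  seen
end

all_dependencies(dependencies, "app")
# => ["rails", "activerecord", "activesupport", "actionpack"]
```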
STEPHANIE: I think understanding the dependencies of your system has also become more important to me as I learn about things that can go wrong when I don't know enough about what my system is, you know, relying on that I had kind of taken for granted previously. I'm especially thinking about packages like we were mentioning, and, you know, not realizing that your application is dependent on this other library, right? That's brought in by a gem that you're using. And there's maybe, like, a security issue, right? With that.
And suddenly, you have this problem on your hands that you didn't realize before. And I know that that has been more of a common discussion now in terms of security practices, just being more aware of all the things that you are depending on as really our work becomes more and more interconnected with the things available to us with open source.
JOËL: I think where understanding the graph-like nature of this becomes really important is when you're doing something like an upgrade. So, let's say you do have a gem that has a security problem, and you want to upgrade it to fix that security issue. But the upgrade that includes the security patch is also a breaking upgrade. And so, now everything else in your system that depends on that gem or on that package is going to break unless you have them in a version that is compatible with the new version of that gem.
And so, you might have to then go downstream and upgrade those packages in a way that's compatible with your app before you can bring in the security patch. And a lot of that can be done automatically by Bundler. Bundler is software that is built around navigating dependency graphs like that and finding versions that are compatible with each other.
But sometimes, your code will need to change in order to upgrade one of these downstream gems so that you can then pull in the upgrade from the gem that needs a security patch. And so, understanding a little bit of that graph is going to be important to safely upgrading that gem.
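As a small, hypothetical illustration of that kind of conflict (the gem names and version constraints are invented):

```ruby
# Gemfile -- gem names and versions are made up to illustrate the conflict
gem "vulnerable_gem", ">= 2.0" # 2.0 is the release with the security patch
gem "other_tool", "~> 1.4"     # 1.4 only allows vulnerable_gem < 2.0

# `bundle update vulnerable_gem` can't resolve this as-is: other_tool has to
# move to a release that allows vulnerable_gem 2.x first, and that upgrade may
# in turn require changes to the application code that calls other_tool.
```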
STEPHANIE: So, I know another application of dependency graphs that you have thought about and written a blog post for is RSpec let declarations and how a lot of the time when we are using let, you know, we are likely calling other variables defined by let. And so, when you are encountering a test file, it can be really hard to grok what data is being set up in your test.
JOËL: Yeah, so that is really interesting because you can define something that will get executed in a lazy fashion if it gets referenced. But then not only is the let lazy and will not trigger unless it's referenced, but a let can reference other lets, which are also lazy, and only get triggered if they get referenced.
So, you might have a bunch of lets defined in any order you want throughout a file, and they're all kind of interconnected with these references to each other. But they only get triggered if something calls it directly or it's in this, like, chain of dependencies. And getting a grasp on what actually gets created, which lets will actually execute, which ones don't in a file can quickly get out of hand. And so, thinking of this in terms of a dependency graph has been a really helpful mental model for me to understand what's going on in a complex test file.
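A small RSpec example of that laziness; the FactoryBot-style `create` calls are assumed purely for illustration. Only the lets that get referenced, directly or through another let, actually run.

```ruby
RSpec.describe "lazy lets" do
  let(:organization) { create(:organization) }               # runs only if referenced
  let(:user)         { create(:user, organization: organization) }
  let(:post)         { create(:post, author: user) }         # pulls in user, then organization
  let(:unrelated)    { create(:audit_log) }                  # never referenced, so never created

  it "creates the whole chain behind post" do
    # Referencing post triggers the user let, which triggers organization.
    expect(post.author).to eq(user)
  end
end
```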
STEPHANIE: Yeah, absolutely. Especially when sometimes the lets are coming from all over the place, you know, maybe a describe block hundreds of lines away, or even a completely different file if you are using a shared context that's being pulled in. So, I can see why this was a complex problem that could be made a little simpler with plotting out a dependency graph.
And in preparation for this episode, I was doing a little bit of my own exploration on this because I certainly know, you know, the pain of trying to figure out what is being executed in my tests when there are a lot of lets that reference each other. And in the blog post, you kind of gave a little step-by-step of how you could start with creating a dependency graph for the test that you're working with.
And I was really curious if this process could be automated because, you know, I do enjoy, you know, pulling out the pen and paper [chuckles] every now and then. But I'm not, like, a particularly visual person. God forbid I, like, draw a circle, but then, like, don't have enough space for the rest of the circles. [laughs] So, I was really hoping for a tool that could do this for me, especially if, you know, you do, you have a lot of tests that you have to try to understand in a relatively short amount of time. And so, I ended up doing something kind of hacky with RSpec and overriding let definitions to automate this process.
JOËL: That's really cool. So, is the tool that you're trying to build something where you feed it in a spec file, and it gives you some kind of graphical representation like an SVG or something as output?
STEPHANIE: Yeah. I did consider that approach first, where you feed in the file, but then I ended up going with something more dynamic where you are running the test, and then as it gets executed, tracing the let definitions and then registering them to build your dependency graph.
JOËL: So, you've got some sort of internal modeling that describes a dependency graph. And then, somehow, you're going to turn that, you know, a series of Ruby objects into some kind of visual.
STEPHANIE: Yeah, exactly. And the bulk of that work was actually done with a library called RGL, which stands for just Ruby Graph Library. [laughs] And what's nice is that it has a really easy interface for plugging in the vertices and edges of the dependency graph that you want to build. And then, it is already hooked up with Graphviz to, you know, write the SVG to a file. And so, I ended up really just having to build up an array of my dependencies and the connections to each other and then feed it into the constructor of the graph.
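A hedged sketch of what that RGL usage can look like; the let names are illustrative, and the exact API is worth double-checking against the version of the gem you have installed.

```ruby
require "rgl/adjacency"
require "rgl/dot"

# Build a directed graph of let dependencies:
# an edge from :post to :user means "the :post let references :user".
graph = RGL::DirectedAdjacencyGraph.new
graph.add_edge(:post, :user)
graph.add_edge(:user, :organization)

# RGL hands the graph off to Graphviz (installed separately)
# and writes out an image of the dependency graph.
graph.write_to_graphic_file("svg", "let_dependencies")
```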
JOËL: And for all of our listeners, you mentioned Graphviz. That is a third-party tool that can be installed on your machine that can generate these SVG diagrams from...I believe it has its own sort of syntax. So, you create, I believe it's dot, D-O-T, so dot dot file. And based off of that, it generates all sorts of things, but SVG being potentially one of them.
STEPHANIE: Yeah. The nice thing was that I actually didn't end up having to use the DSL of Graphviz because the RGL gem was handling that for me.
JOËL: Nice. So, it plugs in directly.
STEPHANIE: Yeah, exactly. And I was really curious about using this gem because I, you know, just wanted to write Ruby, especially to plug into other things that are already in Ruby. And I found that surprisingly easy, thanks to all of the RSpec config options that they make available to you, including an option to extend the example group class, which is actually where let and let bang are defined.
And so, I ended up overriding those methods and using, you know, the name of the let that you're defining and then the block to basically register the dependencies. And I also ended up exploring a little bit with using Ruby's built-in parser to figure out in the block that's being passed to the let, what parts of that block could potentially be a reference to another let.
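[Editor's note: a rough sketch of the kind of override being described, not Stephanie's actual tool. It assumes RSpec's `config.extend` hook and CRuby's built-in `RubyVM::AbstractSyntaxTree`, and the reference detection is deliberately naive.]

```ruby
# spec/support/let_graph.rb
module LetGraph
  def self.edges
    @edges ||= []
  end

  # Naively walk the block's AST and collect bare, receiver-less calls
  # (VCALL nodes), which may or may not be references to other lets.
  def self.candidate_references(block)
    names = []
    walk = lambda do |node|
      return unless node.is_a?(RubyVM::AbstractSyntaxTree::Node)
      names << node.children.first.to_s if node.type == :VCALL
      node.children.each { |child| walk.call(child) }
    end
    walk.call(RubyVM::AbstractSyntaxTree.of(block))
    names
  end

  # `let` and `let!` are class-level methods on example groups, so extending
  # groups with this module lets us record dependencies before handing off to
  # RSpec's own definitions via super.
  def let(name, &block)
    LetGraph.candidate_references(block).each do |ref|
      LetGraph.edges << [name.to_s, ref]
    end
    super
  end

  def let!(name, &block)
    LetGraph.candidate_references(block).each do |ref|
      LetGraph.edges << [name.to_s, ref]
    end
    super
  end
end

RSpec.configure do |config|
  config.extend LetGraph
end
```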
JOËL: That's really cool. Did you get any fun results from that?
STEPHANIE: I did. It worked pretty well in being able to capture all of the let declarations and the other lets they reference. And so, I was able to successfully, you know, like, generate a visual dependency graph of all of the lets, so that was really neat.
The part that I was really kind of excited about trying next, though I didn't end up having time to yet, was figuring out which of those let values are executed by way of the let bang, right? Which is eager or what is referenced in the test that then gets executed as well. And so, the RGL library is pretty neat and has some formatting options, too, with the Graphviz output. So, you can change the font color or styling options for different, you know, nodes and edges. And so, I was really curious to pursue this further, maybe, and use it to show exactly what gets evaluated now that I have successfully mapped my let graph.
JOËL: Right. Because the whole point of this exercise is that not the entire graph is going to get evaluated. The underlying question is, what data actually gets created when my test runs? And so, you build out this whole dependency graph, and then you can follow a few simple rules to say, okay, this branch gets called, this branch gets called, this series of things gets called. And okay, this subset of let blocks trigger, and therefore this data has been created for my given test.
STEPHANIE: Yeah. Though I will say that even where I got so far to, just seeing all of the let definitions in a spec file was really helpful to have a better understanding, you know, if I do have to add a test in here, and I'm thinking about reaching for a pre-existing let declaration, to be like, oh, like, it actually, you know, goes on to reference all of these other things that may be factories [chuckles] that are created might make me, you know, think twice, or just have a little better understanding of what I'm really dealing with.
JOËL: Right. The idea that when you're calling out to a let, or a factory, or something else that's just a node in a large graph, you're not necessarily referencing just one thing. You might actually be referencing the head of a very long chain of things that maybe you don't intend to trigger the whole thing.
STEPHANIE: Yeah, exactly.
JOËL: So, in that sense, having a sort of visual or at least an idea of the graph can give you a much better sense of the cost of certain operations that you might have to do.
STEPHANIE: The cost of the operations certainly, especially when, you know, you are working in a legacy codebase, and you, you know, like, maybe don't know how everything plays together or is connected. And it's very tempting to just reach for [chuckles] the things that have been, you know, created or built for you. And I'm certainly guilty of that sometimes on this client project, where the domain is so complex, and there are so many associated models.
And I'm like, well, like, let me just, you know, use this let that already, you know, has a factory set up for what I think I need for this test. But then realizing, oh, actually, like, it is creating all these things, and do I really need them? I think it can be really challenging to unravel all of that in your head. And so, with this very scrappy tool that I [chuckles] built for my own purposes, you know, maybe it makes it, like, one step easier to try to fully understand what I'm working with and maybe do something different.
JOËL: One aspect that I think is really powerful about dependency graphs is that it takes this kind of, like, abstract concept that we oftentimes have an intuitive sense around, the idea that we have different components that depend on each other, and it shows it to us visually on, like, a 2D plane. And that can be really helpful to get an understanding or an overview of a system.
You mentioned that RGL uses Graphviz to generate some SVGs. A visual tool that I've been using to draw some of my dependency graphs has been mermaid.js. It has a syntax that's, like, a text-based syntax, but it's almost visual in that you have a piece of text and name of a node. And then, you'll draw a little ASCII arrow, you know, two dashes and a greater than sign to say this thing depends on, and then write another name, and just have a row, like, a bunch of entries to say; A depends on B. A also depends on C. C depends on D, and so on, and, like, build up that list. And then Mermaid will just generate that diagram for you.
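[Editor's note: in Mermaid's flowchart syntax that looks roughly like a first line of `graph TD`, followed by one dependency per line, such as `A --> B`, `A --> C`, and `C --> D`; the node names here are just placeholders.]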
STEPHANIE: Yeah. I've used Mermaid a few times. One really helpful use that I had for it was diagramming out a bunch of React components that I had and wanting to understand the connections between them. And I think you can even paste the Mermaid syntax into your GitHub pull request description, and it'll render as the graph image.
JOËL: Yeah, that's what's really cool is that Mermaid syntax has become embedded in a lot of other places in the past few years. So, it's really easy to embed graphs now into all sorts of things. You mentioned GitHub. It works in pull requests descriptions, comments, I think pretty much anywhere that Markdown is accepted. So, you could put one in your README if you wanted.
Another place that I use a lot, Obsidian, my note-taking tool, allows me to embed graphs directly in there, which is really much nicer than previously; sometimes, when I wanted to express something as a visual, I would use some sort of drawing tool to do something and export an image, and then embed that in my note. But now I can just put in this text, and it will automatically render that as a diagram.
And part of what's really nice about that is that then it's really easy for me to go and change that if I'm like, oh, but actually, I want to add one more connection in here. I don't have to re go back to, hopefully, a file that I've saved somewhere and, like, change an image file and re-export it. I just, you know, I add one line of text to my note, and it just works.
STEPHANIE: That's awesome. Yeah, the ability to change it seems really useful.
So, we've talked a little bit about tools for creating a visual aid for understanding our dependencies. And now that we have our graph, maybe we might have some concerning observations about what we see, especially when perhaps some of our dependencies are pointing back to each other.
JOËL: Yes. So, I think you're referencing cycles, in particular. That would be the formal term for it. And those are really interesting. They happen in dependency graphs. And I would say, in many cases, they can be a bit of a smell. There's definitely situations where they're fine. But there are things that you look at, and you're like, okay, this is going to be a more complex kind of tricky bit of the graph to work with.
Some cases, you just straight up can't have them. So, I want to say that the way RSpec lets are set up, you cannot write code that produces cycles. But you might have...I think Ruby allows classes to reference each other in such a way that it creates a cycle, and not all languages do that. So, Elm and F#, I believe, require that modules cannot reference each other. The fancy term for this is a directed acyclic graph, or DAG, which basically just means that there are no cycles in that graph.
STEPHANIE: Yeah. What you said about classes referencing each other is very interesting because I've definitely seen that. And then, if I have to go about changing something, maybe even it's just the class name, right? Now there's no way in which I can really make just one change. I have to kind of do it all in one go.
JOËL: I think that's a common property of a cycle, and a graph is that changes that happen somewhere in that cycle often need to be all shipped together as one piece. You can't break it up into smaller chunks because everything depends on everything else. So, it has to be kind of boxed together and shipped as one thing.
STEPHANIE: And you'd mentioned that cycles, you know, can be a bit of a code smell. And if the goal is to be able to break it up so that it is a little bit more manageable to work with, how would you go about breaking a cycle?
JOËL: So, I think breaking a cycle is going to vary a little bit based on your problem domain. So, are you modeling a series of classes that are referencing each other? Is this a function call graph? Is this even, like, a series of tasks that you're trying to do? But typically, what you want to do is make sure that eventually, at some point, like, something doesn't loop back to referencing something higher up in your hierarchy. And so, oftentimes, it ends up being about what is allowed to know about what? Do you have higher-level concepts that can know and depend on lower-level concepts but not vice versa?
And again, we are talking about this a little bit at the abstract level. But in terms of, let's say, different code modules, or classes, or something like that, commonly, you might say, well, we want some sort of layering where we have almost, like, more primitive types of classes at the bottom. And they don't get to know about anything above them. But the ones above that might be more complex that are composed of smaller pieces know about the ones below them. And you might have multiple layers kind of like that that all kind of point down, but nothing points up.
STEPHANIE: That is a very common heuristic. [chuckles] I think you were basically just describing how I also understand creating React components, where you want to separate your presentational ones from your functional ones. And, yeah, it makes a lot of sense that as soon as you start adding that complexity of, you know, those primitive classes at the bottom, starting to, you know, point to things higher up or to know about things higher up, that is where a cycle may be accidentally introduced.
JOËL: It's interesting just how many design principles that we have in software. If you dig into them a little bit, you find out that they're about decoupling things, and oftentimes, it's specifically breaking up cycles. So, one way that you might have something like this that actually has dependency in the name, the dependency inversion principle, where what you're effectively doing is you're taking one of those dependency arrows, and you're flipping it the other way. So, instead of A depending on B, you're flipping it. Now B depends on A, and that can be enough to break a cycle.
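[Editor's note: a minimal sketch of the arrow-flipping Joël describes, with invented class names; it shows just one way dependency inversion can break a cycle.]

```ruby
# Before (a cycle): the low-level Report reaches back up to the high-level
# Dashboard, so neither can change or ship without the other.
#
#   class Report
#     def publish
#       Dashboard.refresh(self)
#     end
#   end
#
# After (arrow flipped): Report only knows about a listener handed to it from
# above, so Dashboard depends on Report but not the other way around.
class Report
  def initialize(listener)
    @listener = listener
  end

  def publish
    @listener.report_published(self)
  end
end

class Dashboard
  def report_published(report)
    puts "refreshing dashboard for #{report.inspect}"
  end
end

Report.new(Dashboard.new).publish
```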
STEPHANIE: So, one thing I've picked up from our conversations about dependency graphs is that oftentimes, you know, when you're trying to figure out where to start, you want to look for those areas or those nodes where there's nothing else that depends on it.
JOËL: Yeah. I think you have those nodes that, if this were a tree, you would call them the leaf nodes. In the case of a graph, I'm not sure if that's technically correct, but they don't depend on anything. They're kind of your base case. And so, you can, you know, if it's a function, you can run it. If it's a file, you can load it; if it's a class, also you can load it up and not have to do anything else because it has no dependencies. And knowing that those are there, I think, can be really useful in terms of knowing an order you might want to execute something in. And this is really interesting for one of my favorite uses of a graph, which is breaking down a series of tasks that you need to do.
So, commonly, you might say, okay, I have a large task I need to do. I break it down into a series of subtasks. And, you know, maybe I draw out, like, a bulleted list and, you know, task 1, 2, 3, 4, 5. The problem is that they're not necessarily just a flat list. They all have, like, orders, like dependencies between each other. So, maybe one has to happen before two, but it also has to happen before three, which needs to happen before two, and, like, there's all these interconnections.
And then, you find out that you can't ship them independently the way you thought initially. So, by building up a graph, you end up with something that shows you exactly what depends on what. And then, like you said, the parts that are really interesting where you can start doing work are the ones that have no dependencies themselves. Other things might depend on them, but they have no dependencies. Therefore, they can be safely built, shipped, deployed to production, and they can be done independently of the other subtasks.
STEPHANIE: Yeah. I was also thinking about things that could be done in parallel as well. So, if you do have multiple of those items with no dependencies, like, that is a really good way to be able to break up that work and, yeah, identify things that are not blocked.
JOËL: For a complex set of tasks, it's great to see, okay, these two pieces have no dependencies. We can have them be done in parallel, shipped independently. And then you can just kind of keep repeating that process. Because once all of the tasks that have no dependencies have been done, well, you can almost, like, remove them from the graph and see, okay, what's the new set of things that have no dependencies? And then, keep doing that until you've eventually done the whole graph.
And that may sound like, oh okay, we're just kind of using a little bit of intuition and working through the graph. It turns out that this is a, like, actual, like, formal thing. When it comes to graphs, there's a traversal algorithm called a topological sort, that's the fancy name for it, and it basically, yeah, it goes through that. It gives you a list of nodes in order where each node that you're given has no dependencies that have not been evaluated yet.
So, it works, effectively, to use our tree terminology, from the leaf nodes to the root, potentially roots plural, of the graph, and each step is independent. So that's a lot of, like, fancy terminology, and getting a little bit of, like, computer science graph theory into here.
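[Editor's note: a hand-rolled sketch of the idea, not a library call; the task names are invented. It repeatedly peels off whatever currently has no unfinished dependencies, which is the essence of a topological sort.]

```ruby
# deps maps each task to the tasks it depends on.
def topological_order(deps)
  remaining = deps.transform_values(&:dup)
  order = []

  until remaining.empty?
    ready = remaining.select { |_task, ds| ds.empty? }.keys
    raise "cycle detected" if ready.empty?

    order.concat(ready)
    ready.each { |task| remaining.delete(task) }
    remaining.each_value { |ds| ds.reject! { |d| ready.include?(d) } }
  end

  order
end

p topological_order(
  "upgrade Rails"    => ["upgrade RSpec", "fix deprecations"],
  "upgrade RSpec"    => [],
  "fix deprecations" => ["upgrade RSpec"]
)
# => ["upgrade RSpec", "fix deprecations", "upgrade Rails"]
```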
So, my, like, general heuristic is that graphs should be evaluated from the bottom up when you're trying to evaluate each piece independently. So, when you do that, you get to do each piece independently, as opposed to if you're evaluating from the top down. So, starting from the one thing that depends on everything else, well, it can't be shipped until all of its dependencies have been shipped.
And all the transitive dependencies can't be shipped until their dependencies have been shipped. And so, you end up being not able to ship anything until you've built the entire graph. And that's when you end up with, you know, a 2,000-line PR that took you multiple weeks and might be buggy. And it's going to take a long time to review. And it's just not what anybody wants.
STEPHANIE: I'm glad you brought this up because I think this is where I am really curious to get better at because oftentimes, when I am breaking down a complex task, it's quite hard for me to see all of the steps that need to happen. And so, you know, you maybe start out with that, like, top-level node, like, the task that needs to be done as you understand it immediately. And it's really hard to actually identify the dependencies and, like, the smaller pieces along the way. And because you're not able to identify that, you think that you do have to just do it all in one go.
JOËL: Yeah, that sort of root node is typically the overarching task, the goal of what you want to do. And a common, I think, scenario for something like this would be, let's say, you're doing a Rails upgrade. And so, that root node is upgrade Rails. And a common thing that you might want to do is say, okay, let's go to the Gemfile, upgrade Rails, see what breaks, and then just keep fixing those things. That's working from the top down.
And you're going to be in a long-running branch, and you're going to keep fixing things, fixing things, fixing things until you have found all the things and done all the things. And then you do a big bang upgrade that may have taken you weeks. As opposed to if you're working from the bottom up, you try to figure out, okay, what are all the subtasks? And that might take some exploration. You might not know upfront.
But then you might say, okay, here, I can upgrade RSpec, which is a dependency, or I need to change the interface of this class and ship all these pieces one at a time. And then, the final step is flipping that upgrade in the Gemfile, saying, okay, now I've upgraded Rails from 4 to 5, or whatever the version is that you're trying to do.
STEPHANIE: I think you've really hit the nail on the head when it comes to trying to do something but not knowing what subtasks it may be composed of and getting into that problem of, you know, having not broken it down, like, enough to really see all the dependencies.
And, you know, maybe this is a conversation [chuckles] for another episode, but the skill of breaking up those tasks and exploring what those dependencies are, and being able to figure them out upfront before you start to just do that upgrade and then see what happens, that's definitely an area that I want to keep investing in. And I'm sure other people would be really curious about, too, to help them make their jobs easier.
JOËL: I think one tip that I've learned that's really fun and that connects into all of this is sometimes you do end up with a cycle in your dependencies of tasks. A technique for breaking that up is a pattern that I have pitched multiple times on the show: the strangler fig pattern. And part of why it's so powerful is that it allows you to work incrementally by breaking up some of these cycles in your dependency graph.
And one of the lessons that I've learned from that is that just because you have sort of an initial set of subtasks and you have a graph of them doesn't mean that you can't change them. If you're following strangler fig, what you're actually doing is introducing one or more new subtasks to that graph. But the way you introduce them breaks up that cycle. So, you can always add new tasks or split up existing ones as you get a better understanding of the work you need to do. It's not something that is fixed or set in stone upfront.
STEPHANIE: Yeah, that's a really great tip. I think next time, what I really want to explore, you know, your heuristic of going from bottom up, yeah, sure, it sounds all fine and dandy. But how to get to a point where you're able to see everything at the bottom, right? And, like, when you are tasked, or you do start with the thing at the top, like, the end goal. Yeah, I'm sure that's something we'll explore [chuckles] another day.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up.
Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been fighting a frustrating bug where he's integrating with a third-party database, and some queries just crash. Stephanie shares her own debugging story about a leaky stub that caused flaky tests.
Additionally, they discuss the build vs. buy decision when integrating with third-party systems. They consider the time and cost implications of building their own integration versus using off-the-shelf components and conclude that the decision often depends on the specific needs and priorities of the project, including how quickly a solution is needed and whether the integration is core to the business's value proposition.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: My world has been kind of frustrating recently. I've been fighting a really frustrating bug where I'm integrating with a third-party database. And there are queries that just straight-up crash. Any query that instantiates an instance of an ActiveRecord object will just straight-up fail. And that's because, before we make the actual query, there's almost like a preflight query that fetches the schema of the database, particularly the list of tables that the database has, and there's something in this schema that the code doesn't like, and everything just crashes.
Specifically, I'm using an ODBC connection. I forget exactly what the acronym stands for, Open Database connection, maybe? Which is a standard put up by Microsoft. The way I'm integrating it via Ruby is there's a gem that's a C extension. And somewhere deep in the C extension, this whole thing is crashing. So, I've had to sort of dust off some C a little bit to look through. And it's not super clear exactly why things are crashing. So, I've spent several days trying to figure out what's going on there. And it's been really cryptic.
STEPHANIE: Yeah, that does sound frustrating. And it seems like maybe you are a little bit out of your depth in terms of your usual tools for figuring out a bug are not so helpful here.
JOËL: Yeah, yeah. It's a lot harder to just go through and put in a print or a debug statement because now I have to recompile some C. And, you know, you can mess around with some things by passing different flags. But it is a lot more difficult than just doing, like, a bundle open and a binding.irb in the code.
My ultimate solution was asking for help. So, I got another thoughtboter to help me, and we paired on it. We got to a solution that worked. And then, right before I went to deploy this change, because this was breaking on the staging website, I refreshed the website just to make sure that everything was breaking before I pushed the fix to see that everything is working. This is a habit I've picked up from test-driven development. You always want to see your test break before you see it succeed.
And this is a situation where this habit paid off because the website was just working. My changes were not deployed. It just started working again. Now it's gotten me just completely questioning whether my solution fixes anything. The difficulty is because I am integrating [inaudible 03:20] third-party database; it's non-deterministic. The schema on there is changing rather frequently.
I think the reason things are crashing is because there's some kind of bad data or data that the ODBC adapter doesn't like in this third-party system. But it just got introduced one day; everything started breaking, and then somehow it got removed, and everything is working again without any input or code changes on my end. So, now I don't trust my fix.
STEPHANIE: Oh no. Yeah, I would struggle with that because your reality has come crashing down, [laughs] or how you understood reality. That's tough. Where do you think you'll go from here? If it's no longer really an issue in this current state of the schema, is it worth pursuing further at this time?
JOËL: So, that's interesting because it turns into a prioritization problem. And for this particular project, with the deadlines that we have, we've decided it's not worth it. I've opened up a PR with my fix, with some pretty in-depth documentation for why I thought that was the fix and what I think the underlying problem is. If this shows up again in the next few days, I'll have that PR that I can pull in and see if it fixes things, and if it doesn't, I'll probably just close that PR, but it'll be available for us if we ever run into this again.
I've also looked at a few potential mitigations. Part of the problem is that this is a, like, massive system. The Rails app that I'm using really doesn't need to deal with this massive database. I think there's, you know, almost 1,000 tables, and I really only care about a subset of tables in, like, one underlying schema. And so, I think by reducing the permissions of my database user to only those tables that I care about, there's a lower chance of me triggering something like this.
STEPHANIE: Interesting. What you mentioned about, you know, having that PR continue to exist will be really helpful for future folks who might come across the same problem, right? Because then they can see, like, all of the research and investigation you've already done. And you may have already done this, but if you do think it's a schema issue, I'm curious about whether the snapshot of the schema could be captured from when it was failing to when it has magically gotten fixed. And I wonder if there may be some clues there for some future investigator.
JOËL: Yeah. I'm not sure what our backup situation is because this is a third-party system, so I'd have to figure out what things are like in the admin interface there. But yeah, if there is some kind of auditing, or snapshots, or backups, or something there, and I have rough, you know, if I know it's within a 24-hour period, maybe there's something there that would tell me what's happening.
My best guess is that there's some string that is longer than expected or maybe being marked as a CHAR when it should be a VARCHAR, or maybe something that's not UTF-8 encoded, or something weird like that. So, I never know exactly what was wrong in the schema. There's some weird string thing happening that's causing the Ruby adapter to blow up.
STEPHANIE: That also feels so unsatisfying [laughs] for you. I could imagine.
JOËL: Yeah, there's no, like, clean resolution, right? It's a, well, the bug is gone for now. We're trying to make it less likely for it to pop up again in the future. I'm trying to leave some documentation for the next person who's going to come along, and I'm moving forward, fingers crossed. Is that something you've ever had to do on one of your projects?
STEPHANIE: Given up? Yes. [laughter] I think I have definitely had to learn how to timebox debugging and have some action items for when I just can't figure it out. And, you know, like we mentioned, leaving some documentation for the next person to pick up, adding some additional logging so that maybe we can get more clues next time. But, you know, realizing that I do have to move on and that's the best that I can do is really challenging.
JOËL: So, you used two words here to describe the situation: one was giving up, and the other one was timebox. I think I really like the idea of describing this as timeboxing. Giving up feels kind of like, defeatist. You know, there's so many things that we can do with our time, and we really have to be strategic with how we prioritize. So, I like the idea of describing this as a timeboxing situation.
STEPHANIE: Yeah, I agree. Maybe I should celebrate every time that I successfully timebox something [laughs] according to how I planned to. [laughs]
JOËL: There's always room to extend the timebox, right?
STEPHANIE: [laughs] It's funny you bring up a debugging mystery because I have one of my own to share today. And I do have to say that it ended up being resolved, [chuckles] so it was a win in my book. But I will call this the case of the leaky stub.
JOËL: That sounds slightly scary.
STEPHANIE: It really was. The premise of what we were trying to figure out here was that we were having some flaky tests that were failing with a runtime error, so that was already kind of interesting. But it was quickly determined it was flaky because of the tests running in a certain order, so--
JOËL: Classic.
STEPHANIE: Right. So, I knew something was happening, and any tests that came after it were running into this error. And I was taking a look, and I figured out how to recreate it. And we even isolated to the test itself that was running before everything else, that would then cause some problems. And so, looking into this test, I saw that it was stubbing the find method on an ActiveRecord model.
JOËL: Interesting.
STEPHANIE: Yeah. And the stubbed value that we were choosing to return ended up being referenced in the tests that followed. So, that was really strange to me because it went against everything I understood about how RSpec cleans up stubs between tests, right?
JOËL: Yeah, that is really strange.
STEPHANIE: Yeah, and I knew that it was referencing the stub value because we had set a really custom, like, ID value to it. So, when I was seeing this exact ID value showing up in a test that seemed totally unrelated, that was kind of a clue that there was some leakage happening.
JOËL: So, what did you do next?
STEPHANIE: The next discovery was that the error was actually raised in the factory setup for the failing tests and not even getting to running the examples at all. So, that was really strange. And digging into the factories was also its own adventure because there was a lot of complexity in the factories. A lot of them used hooks as well that then called some application code. And it was a wild goose chase.
But ultimately, I realized that in the factory setup, we were calling some application code for that model where we had stubbed the find, and it had used the find method to memoize a class instance variable.
JOËL: Oh no. I can see where this is going.
STEPHANIE: Yeah. So, at some point, our model.find() returned our, you know, stub value that we had wanted in the previous test. And it got cached and just continued to leak into everything else that eventually would try to call that memoized method when it really should have tried to do that look-up for a separate record.
JOËL: And class instance variables will persist between tests as long as they're on the same thread, right?
STEPHANIE: Yeah, as far as I understand it.
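[Editor's note: a sketch of the shape of the bug, with invented names; Plan stands in for whichever ActiveRecord model was involved, and `instance_double` assumes RSpec's verifying doubles.]

```ruby
# Application code memoizes a lookup in a class instance variable:
class PlanCatalog
  def self.default_plan
    @default_plan ||= Plan.find(1)
  end
end

RSpec.describe "an earlier spec" do
  it "stubs the lookup" do
    stubbed_plan = instance_double(Plan, id: 999_999)
    allow(Plan).to receive(:find).and_return(stubbed_plan)

    expect(PlanCatalog.default_plan).to eq(stubbed_plan)
    # RSpec removes the stub after this example, but @default_plan, a class
    # instance variable, lives for the whole test process, so later examples
    # (and their factory hooks) keep getting stubbed_plan back.
  end
end
```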
JOËL: That sounds like a really frustrating journey. And then that moment when you see the class instance variable, and you're like, oh no, I can't believe this is happening.
STEPHANIE: Right? It was a real recipe for disaster, I think, where we had some, you know, really complicated factories. We had some sneaky caching issues, and this, you know, totally seemingly random runtime error that was being raised. And it was a real wild goose chase because there was not a lot of directness in going down the debugging path. I feel like I went around all over the codebase to get to the root of it. And, in the end, you know, we were trying to come up with some takeaways.
And what was unfortunate was that you know, like, normally, stubbing find can be okay if you are, you know, really wanting to make sure that you are returning your mocked value that you may have, like, stubbed some other stuff on in your test. But because of all this, we were like, well, should we just not stub find on this really particular model? And that didn't seem particularly sustainable to make as a takeaway for other developers who want to avoid this problem.
So, in the end, I think we scoped the stub to be a little more specific with the arguments that we wanted to target. And that was the way that we went forward with the particular flaky test at hand.
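[Editor's note: a sketch of the narrower stub Stephanie describes; the model and record names are invented, and these lines would sit inside an example.]

```ruby
# Unscoped: every Plan.find call in this example, including ones buried in
# factory hooks, returns the fake record.
allow(Plan).to receive(:find).and_return(fake_plan)

# Scoped: only the call we care about is intercepted; anything else falls
# through to the real implementation.
allow(Plan).to receive(:find).and_call_original
allow(Plan).to receive(:find).with(fake_plan.id).and_return(fake_plan)
```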
JOËL: It sounds like the root cause of the problem was not so much the stub as it was the fact that this value is getting cached at the class level. Is that right?
STEPHANIE: Yes and no. It seems like a real pain for running the tests. But I'm assuming that it was done for a good reason in production, maybe, maybe not. To be fair, I think we didn't need to cache it at all because it's calling a find, which, you know, should be pretty quick and doesn't need to be cached. But who knows? It's hard to tell. It was really old code. And I think we were feeling also a little nervous to adjust something when we weren't sure what the impact would be.
JOËL: I'm always really skeptical of caching. Caching has its place. But I think a lot of developers are a little too happy to introduce one, especially doing it preemptively that, oh well, we might need a cache here, so why not? Let's add that. Or even sometimes, just as a blind solution to any kind of slowness, oh, the site is slow; let's throw a cache here and hope for the best.
And the, like, bedrock, like, rule zero of any kind of performance tuning is you've got to measure before and after and make sure that the change that you introduce actually makes things better. And then, also, is it better enough speed-wise that you're willing to pay any kind of costs associated to maintaining the code now that it's more complex? And a lot of caches can have some higher carrying costs.
STEPHANIE: Yeah, that's a great point. This debugging mystery is an example of one of them.
JOËL: How long did it take you to figure out the solution here?
STEPHANIE: So, like you, I actually was on a bit of the incorrect path for a little while. And it was only because this issue affected a different flaky test that someone else was investigating that they were able to connect the dots and be like, I think these, you know, two issues are related. And they were the ones who ultimately were able to point us out to the offending test if you will. So, you know, it took me a few days. And I imagine it took the other developer a few days. So, our combined effort was, like, over a week.
JOËL: Yep. So, for all our listeners out there, you just heard that Stephanie and I [laughs] both went on multi-day debugging journeys. That happens to everyone. Just because we've been doing this job for years doesn't mean that every bug is, like, a thing that we figure out immediately.
So, separately from this bug that I've been working on, a big issue that's been front of mind for me on this project has been the classic build versus buy decision. Because we're integrating with a third-party system, we have to look at either building our own integration or trying to use some off-the-shelf components. And there's a few different levels of this.
There are some parts where you can actually, like, literally buy an integration and think through some of the decisions there. And then there's some situations where maybe there's an open-source component that we can use. And there's always trade-offs with both the commercial and the open-source situation. And we have to decide, are we willing to use this, or do we want to build our own? And those have been some really interesting discussions to have.
STEPHANIE: Yeah. I think you actually expanded this decision-making problem into a build versus buy versus open source because they are kind of, you know, really different solutions with different outcomes in terms of, you know, maintenance and dependencies, right? And that all have, like, a little bit of a different way to engage with them.
JOËL: Interesting. I think I tend to think of the buy category, including both like commercial off-the-shelf software and also open-source off-the-shelf software, things that we wouldn't build custom for ourselves but that are third-party components that we can pull in.
STEPHANIE: Yeah, that's interesting because I had a bit of a different mental model because, in my head, when you're buying a commercial solution, you, you know, are maybe losing out on some opportunities for customization or even, like, forking it on your own. So, with an open-source solution, there could be an aspect of making it work for you. Whereas for a commercial solution, you really become dependent on that other company and whether they are willing to cater [laughs] to your needs or not.
JOËL: That's fair. For something that's closed-source where you don't actually have access to the code, say it's more of a software as a service situation, then, yeah, you're kind of locked in and hoping that they can provide the needs that you have. On the flip side, you are generally paying for some level of support. The quality of that varies sometimes from one vendor to another. But if something goes wrong, usually, there's someone you can email, someone you can call, and they will tell you how to fix the problem, or they will fix it on their end.
STEPHANIE: For the purposes of this conversation, should we talk about the differences, you know, building yourself or leaning on an existing built-out solution for you?
JOËL: The project I'm working on is integrating with a Snowflake data warehouse, which is an external place that stores data accessible through something SQL-like. And one of the things that's attractive about this is that you can pull in data from a variety of different sources, transform it, and have it all stored in a kind of standardized structure that you can then integrate with. So, for pulling data in, you can build your own sort of ingestion pipeline, if you want, with code, and their APIs, and things.
But there are also third-party vendors that will give you kind of off-the-shelf components that you can use for a lot of popular other data sources that you might want to pull. So, you're saying; I want to pull from this external service. They've probably got a pre-built connector for it. They can also do things like pull from an arbitrary Postgres database on some other server if that's something you have access to.
It becomes really attractive because all you need to do is create an account on this website, plug in a few, like, API keys and URLs. And, all of a sudden, data is just flowing from one third-party system into your Snowflake data Warehouse, and it all just kind of works. And you don't have to bother with APIs, or ODBC, or any of that kind of stuff.
STEPHANIE: Got it. Yeah, that does sound convenient. As you were talking about this, I was thinking about how if I were in the position of trying to decide how to make that integration happen, the idea of building it would seem kind of scary, especially if it's something that I don't have a lot of expertise in.
JOËL: Yeah, so this was really interesting. In the beginning of the project, I looked into a little bit of what goes into building these, and it's fairly simple in terms of the architecture. You just need something that writes data files to typically something like an S3 bucket. And then you can point Snowflake to periodically pull from that bucket, and you write an import script to, you know, parse the columns and write them to the right tables in the structure that you want inside Snowflake.
Where things get tricky is the actual integration on the other end. So, you have some sort of third-party service. And now, how do you sort of, on a timer maybe, pull data from that? And if there are data changes that you're synchronizing, is it just all append-only data? Or are you allowing the third-party service to say, "Hey, I deleted this record, and you should reflect that in Snowflake?" Or maybe dealing with an update. So, all of these things you have to think about, as well as synchronization.
What you end up having to do is you probably boot up some kind of small service and, you know, maybe this is a small Ruby app that you have on Heroku, maybe this is, like, an AWS Lambda kind of thing. And you probably end up running this every so many seconds or so many minutes, do some work, potentially write some files to S3. And there's a lot of edge cases you have to think about to do it properly. And so, not having to think about all of those edge cases becomes really enticing when you're looking to potentially pay a third party to do this for you.
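[Editor's note: a sketch of the "write files to S3 on a schedule" half of a hand-rolled connector, using the aws-sdk-s3 gem; the bucket, model, and columns are invented, and none of the edge cases Joël mentions (deletes, updates, retries) are handled.]

```ruby
require "aws-sdk-s3"
require "csv"

class OrdersExport
  BUCKET = "example-snowflake-landing-zone" # invented name

  # One tick of the connector: grab whatever changed since the last run and
  # drop it as a CSV into the bucket Snowflake is configured to pull from.
  def self.run(since:)
    csv = CSV.generate do |out|
      out << %w[id status total_cents updated_at]
      Order.where("updated_at > ?", since).find_each do |order|
        out << [order.id, order.status, order.total_cents, order.updated_at.iso8601]
      end
    end

    Aws::S3::Client.new.put_object(
      bucket: BUCKET,
      key: "orders/#{Time.now.utc.strftime('%Y%m%dT%H%M%SZ')}.csv",
      body: csv
    )
  end
end
```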
STEPHANIE: Yeah, when you used the words new service, I bristled a little bit [laughs] because I've definitely seen this happen maybe on a bit of a bigger scale for a tool or solution for some need, right? Where some team is formed, or maybe we kind of add some more responsibilities to an existing team to spin up a new service with a new repo with its own pipeline, and it becomes yet another thing to maintain. And I have definitely seen issues with the longevity of that kind of approach.
JOËL: The idea of maintaining a fleet of little services for each of our integrations seemed very unappealing to me, especially given that setting something like this up using the commercial approach probably takes 30 minutes per third-party service. There's no way I'm standing up an app and doing this whole querying every so many minutes, and getting data, and transforming it, and writing it to S3, and addressing all the edge cases in 30 minutes, much less building something that's robust.
And, you know, maybe if I want to go, like, really low tech, there's something fun I could do with, like, a Zapier hook and just, like, duct tape a few services together and make this, like, a no-code solution. I still don't know that it would have the robustness of the vendor. And I don't think that I could do it in the same amount of time.
STEPHANIE: Yeah. I like the keyword robustness here because, at first, you were saying, like, you know, this looked relatively small in scope, right? The code that you had to write. But introducing all of the variables of things that could go wrong [laughs] beyond the custom part that you actually care about seemed quite cumbersome.
JOËL: I think there's also, at this point, a lot of really interesting prioritization questions. There are money questions, but there are also time questions you have to think about. So, how much dev time do we want to devote upfront to building out these integrations? And if you're trying to move fast and get a proof of concept out, or even get, like, an MVP out in front of customers, it might be worth paying more money upfront to a third-party vendor because it allows you to ship something this week rather than next month.
STEPHANIE: Yeah. The "How soon do you need it?" is a very good question to ask. Another one that I have learned to include in my arsenal of, you know, evaluating this kind of stuff comes from a thoughtbot blog by Josh Clayton, where he, you know, talks about the build versus buy problem. And his takeaway is that you should buy when your business is not dependent on it.
JOËL: When it's not part of, like, the core, like, value-add that your business is doing. Why spend developer time on something that's not, like, the core thing that your product is when you can pay someone else to do it for you? And like we said earlier, a lot of that time ends up being sunk into edge cases and robustness and things like that to the point where now you have to build an expertise in a, like, secondary thing that your business doesn't really care about.
STEPHANIE: Yeah, absolutely. I think this is also perhaps where very clear business goals or a vision would come in handy as well. Because if you're considering building something that doesn't quite support that vision, then it will likely end up continuing to be deprioritized over the long term until it becomes this thing that no one is accountable for maintaining and caring for.
And just causes a lot of, honestly, morale issues is what I've seen when some service that was spun up to try to solve a particular problem is kind of on its last legs and has been really neglected, and no one wants to work on it. But it ends up causing issues for the rest of the development team. But then they're also really focused on initiatives that actually do provide the business value. That is a really hard balancing act that I've seen teams struggle with.
JOËL: Earlier this year, we were talking about the book Sustainable Rails. And it really hammers home the idea of a carrying cost for the code, and I think that's exactly what we're talking about here. And that carrying cost can be time and money. But I like that you also mentioned the morale effects. You know, that's a carrying cost that just sort of depresses the productivity of your team when morale is low.
STEPHANIE: Yeah, absolutely. I'm curious if we could discuss some of the carrying costs of buying a solution and where you've seen that become tricky.
JOËL: The first thing to look at is the literal cost, the money aspect of things. And I think it's a really interesting situation for the business models for these types of Snowflake connectors because they typically charge by the amount of data that you're transmitting, so per row of data that you're transmitting. And so, that cost will fluctuate depending on whether the third-party service you're integrating with is, like, really chatty or not.
When you contrast that to building, building typically has a relatively fixed cost. It's a big upfront cost, and then there's some maintenance cost to go with it. So, if I'm building some kind of integration for, let's say, Shopify, then there's the cost I need to build up front to integrate that. And if that takes me, I don't know, a week or two weeks, or however long it is, you know, that's a pretty big chunk of time. And my time is money.
And so, you can actually do the math and say, "Well, if we know that we're getting so many rows per day at this rate from the commercial vendor, how many weeks do we have to pay for the commercial one before we break even and it becomes more expensive than building it upfront, just in terms of my time?" And sometimes you do that math, and you're like, wow, you know, we could be going on this commercial thing for, like, two years before we break even. In that case, from a purely financial point of view, it's probably worth paying for that connector.
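[Editor's note: made-up numbers, just to show the shape of the break-even math Joël describes.]

```ruby
build_cost      = 2 * 40 * 150.0  # two weeks of dev time at $150/hour
vendor_per_row  = 0.0005          # what the connector vendor charges per row
rows_per_week   = 250_000         # how chatty this particular integration is

vendor_per_week = vendor_per_row * rows_per_week  # => $125.00 per week
breakeven_weeks = build_cost / vendor_per_week    # => 96.0

puts "Buying breaks even after roughly #{breakeven_weeks.round} weeks"
# At this volume it takes almost two years to pay back the build; a connector
# moving fifty times the rows breaks even in about two weeks.
```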
And so, now it becomes really interesting. You say, okay, well, which are the connectors that we have that are low volume, and which are the ones that are high volume? Because each of them is going to have a different break-even point. The ones that you break even after, you know, three or four weeks might be the ones that become more interesting to have a conversation about building. Whereas some of the others, it's clearly not worth our time to build it ourselves.
STEPHANIE: The way you described this problem was really interesting to me because it almost sounds like you found the solution somewhere in the middle, potentially, where, you know, you may try building the ones that are highest priority, and you end up learning a lot from that experience, right? That could make it easier or at least, like, set you up to consider doing that moving forward in the future if you find, like, that is what is valuable.
But it's interesting to me that you kind of have the best of both worlds of, like, getting the commercial solutions now for the things that are lower value and then doing what you can to get the most out of building a solution.
JOËL: Yeah. So, my final recommendation ended up being, let's go all commercial for now. And then, once we've built out something, and because speed is also an issue here, once we've built out something and it's out with customers, and we're starting to see value from this, then we can start looking at how much are we paying per week for each of these connectors? And is it worth maybe going back and building our own for some of these higher-volume connectors? But starting with the commercial one for everything.
STEPHANIE: Yeah, I actually think that's generally a pretty good path forward because then you are also learning about how you use the commercial solution and, you know, which features of it are critical so that if you do eventually find yourselves, like, maybe considering a shift to building in-house, like, you could start with a more clear MVP, right? Because you know how your team is using an existing product and can focus on the parts that your business are dependent on.
JOËL: Yeah, it's that classic iterative development style. I think here it's also kind of inspired by a strategy I typically use for performance, which is make it work before you try to make it fast. And, actually, make it work, then profile, then measure, find the hotspots, and then focus on making those things fast. So, in this case, instead of speed, we're talking about money. So, it's make it work, then profile, find the parts that are expensive, and make the trade-off of, like, okay, is it worth investing into making that part less expensive in terms of resources?
STEPHANIE: I like that as a framework a lot.
JOËL: A lot of what we do as programmers is optimization, right? And sometimes, we're optimizing for execution time. Sometimes we're optimizing for memory cost, and sometimes we're optimizing for dollars.
STEPHANIE: Yeah, that's really interesting because, with the buy solution, you know very clearly, like, how much the thing will cost. Whereas I've definitely seen teams go down the building route, and it always takes longer than expected [laughs], and that is money, right? In terms of the developer's time, for sure.
JOËL: Yeah, definitely, like, add some kind of multiplier when you're budgeting out that build alternative because, quite likely, there are some edge cases that you haven't thought about that the commercial partner has, and you will have to spend more time on that than you expected.
STEPHANIE: Yeah, in addition to whatever opportunity cost of not working on something that is driving revenue for the business right now.
JOËL: Exactly.
STEPHANIE: So, the direction of this conversation ended up going kind of towards, like, what is best for the team at, like, a product and company level. But I think that we make these decisions a lot more frequently, even when it comes to whether we pull in a gem or, you know, use an open-source tool or not. And I would be really interested in discussing more of that in another episode.
JOËL: Yeah. That gets into some controversial takes, right? It's the evergreen topic of: do we build it ourselves, or do we pull in some kind of third-party package?
STEPHANIE: Something for the future to look forward to. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie had a small consulting win: saying no to a client. GeoGuessr is all the rage for thoughtbot's remote working culture, which leads to today's topic of forming human connections in a virtual (work) environment.
Transcript:
JOËL: And this is just where it ends.
[laughter]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, I have a small consulting win, or even just a small, like, win as a human being [laughs] that I want to share, which is that I feel good about a way that I handled saying no to a stakeholder recently. And, you know, I really got to take them where I can get it because that is so challenging for me. But I feel really glad because we ended up kind of coming out the other side of it having a better understanding of each other's goals and needs.
And so, basically, what happened was I was working on a task, and our product owner on our team asked me if it could be done by next week. And immediately, I wanted to say, "Absolutely not." [laughs] But, you know, I took a second and, you know, I had the wherewithal to ask why. You know, I was kind of curious, like, where was this deadline coming from? Like, what was on her radar that, like, wasn't on mine?
And she had shared that, oh, you know, if we were able to get it out before this big launch, she was thinking that it actually might make our customer support team's lives easier because we were kind of taking away access to something before some new features rolled out. And, you know, there might be some customers who would complain. And with that information, you know, that was really helpful in helping me understand. And I'm like, yeah, like, that seems like a helpful thing to know, so I could try to strive for it. Because I also, like, want to make that process go easier as well.
But I told her that I'd let her know because I honestly wasn't sure if it was possible to do by next week. And after a little bit of, you know, more digging, kind of seeing how my progress was going, in the end, I had to say that I didn't feel confident that we could finish it in time for that deadline because of the other risks, right? Like, I didn't want to just release this thing without feeling good about the plan that we had. And so, that was my small, little win in saying no, and I feel very proud of myself for it.
JOËL: I'm proud of you too. That's not easy to just do in the first place, and then to do it well is a whole other level. It sounds, though, that you came out of the other side with the client with almost, like, a better relationship.
STEPHANIE: Yeah, I think so. In general, you know, I really struggle when people do end up getting into that debate of, like, "Well, I need this." And someone else says, "Well, I need this other thing." And, you know, at some point, it kind of gets a bit unproductive, right? But I think this was a very helpful way for me to see a path forward when maybe we, like, have different priorities. But, like, can we better understand each other and the impact of them to ultimately, like, make the best decision?
The other thing that I wanted to share that I learned recently was there was a recent RailsConf talk by Elle Meredith, and it was about strategies to say no, and I watched it. And one really cool thing that I learned was that the word priority, you know, when it was first created, it actually didn't really have, like, a plural form. There was really only ever, like, a singular priority. And it wasn't until, I think, you know, the last century or something like that, that people started to use it in a plural form. And that was really enlightening to me.
I think it made me rethink the word and how I use it, and it made a lot of sense, too. Because at any given moment, you know, really, you can't be doing more than one thing; I mean, you can try. I know that I have been guilty of multitasking. But that, you know, doesn't always serve me. I never end up doing all of the things that I'm trying to do well. And I would be really curious to kind of, you know, when I do feel that urge, like, think a little bit about, like, what is the one thing that I should be doing right now that is the highest priority?
JOËL: I would definitely second that recommendation for this talk. I actually got to see it live at RailsConf, and it was excellent.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I got to participate in a really fun event at thoughtbot today. We got together with some other people on the Boost Team and played a few rounds of GeoGuessr. And for those who are not familiar with this game, it drops you randomly somewhere in the world in Google Street View. You can move around. And there's a timer, and you have to drop a pin on a map where you think you are.
So, you're walking through the streets, and you're like, okay, well, I don't know this language. I'm not sure where we're going. You know, with the vibes going here, I'll bet, you know, this looks like maybe southern China, and then you drop a pin. And oh no, turns out it was actually Singapore. And there's all these little hints and things. People who are really into it have learned all these tricks, and they can be really good. Sara Jackson, who is our resident GeoGuessr expert, is excellent at this. But it was a good time.
STEPHANIE: Yeah, it was really fun. I liked that we played a cooperative mode where we were all kind of helping each other out. And so, maybe someone is, like, exploring on the map and sees a street sign and is like, "Oh, like, that looks like this language." And someone else is like, "Oh yeah, like, that is that." Or like, "No, I think it's actually this other language," and sharing all of the different, like, pieces of information that we're finding to get closer and closer to what it might be.
And then we celebrate whoever ends up getting the closest because, at some point, it's kind of just, like, the luck of the pin, right? Where maybe you happen to click on, like, the right place. But it's always really exciting when we're like, wow, like, Sara was only 500 kilometers away from the exact place that we were served. So, I had a good time as well.
JOËL: So, speaking of cooperative events, this was a work event that we did. We just got together and played a game. And, for me, that was a really fun way to connect with some of my colleagues. I'm curious, what are your thoughts on things that you've seen done well in companies that are remote-first that really foster a sense of connection and community among a team?
STEPHANIE: I think this worked especially well today because it was kind of scheduled in regular time that we have as a team to meet. And sometimes, you know, the meeting topics are a bit more work-focused. But what I really like is that anyone on the team can host one of these meetings. We have them biweekly, and we just call them Boost biweeklies. Boost is the team that Joël and I are on.
JOËL: Naming is the hardest problem in computer science.
STEPHANIE: It really is. But I really like that people can bring, you know, a little bit of their own flavor to this meeting. So, whoever is host just kind of comes up with something to do. And sometimes it's like show and tell. You know, other times it is more of like, you know, what's the update on some of the projects that we're doing? Other times, it's the Spicy Takes Lightning Talks that we've kind of mentioned on the podcast before. And yeah, it is just a really nice, like, time for us to get together.
And I also feel like I learn something about my co-workers every time that we meet, whether it's the person who is hosting the meeting and kind of where their interests are. I think someone even did, like, chair yoga once and guided the team in doing that. Or because they are more casual, right? Sometimes we just play a game, and I really enjoy that nice break in my day.
JOËL: Do you find that the particular style of these meetings makes you feel more connected to your colleagues? Would you prefer just kind of a game day, like we had today, versus maybe, like, lightning talks or a presentation on security or something like that?
STEPHANIE: I actually think the diversity is what makes it special. I get to see, you know, a bunch of different sides of my co-workers and, you know, some days, the topic is a little more serious, and that can be really connecting. Another Boost Team member had hosted a biweekly where we kind of shared the challenges of, like, consulting work and, like, onboarding onto a new project and sharing what might be difficult and, like, how we might be feeling when we do join a new project.
And I think that was really helpful because it was very validating for something that I thought, like, maybe I felt a little bit more alone in. And the tone was a little bit more, like, earnest and serious. But I came away with it feeling very supported by my team, right? And other times, it is just silliness and fun [laughs], which, you know, is also important. Like, we need to have fun every once in a while.
JOËL: That's awesome. Do you feel like when you go to these meetings, you're looking more for knowledge or looking more for connection?
STEPHANIE: I think both, because knowledge sharing also, you know, can be really helpful. Like, I have enjoyed learning that, you know, so and so is, like, a GeoGuessr expert, Sara, right? And so, if I ever, like, find myself needing [chuckles] someone to go to about my Google Street View or world geography questions, I know that I can go to her. And, like, knowing that about her, like, makes me feel more connected to her. So, I think both are true.
So, we have been talking about a meeting style form of connecting in a remote workplace, but I'm really curious about your thoughts on asynchronous versus synchronous communication and how you find connection with a format that is more asynchronous, not just, you know, being in a meeting together.
JOËL: That's really challenging. I think I personally find that something that's mostly synchronous with maybe a little bit of a lag works pretty well for me, so something like Slack, where it's not exactly real-time because someone could take some time to come back to me. But when working hours overlap, there's likely some close-to-synchronous conversation happening.
But, you know, I can still get up and, you know, refill my cup of coffee, or it's not quite like I'm sitting in front of a camera. So, I think that, for many things, hits the sweet spot for myself. But there's definitely some things where I think you want a higher, like, information density. And that's, I think, where the synchronous face-to-face meeting really shines.
STEPHANIE: Information density. I haven't heard that phrase before, but I like it.
JOËL: The idea being, you know, how much information or how many words are you sharing back and forth, you know, per minute or something like that. And when you're talking on a call, you can do a lot more of that than you can going back and forth over Slack or writing an email.
STEPHANIE: So, I would say that at thoughtbot, we have a pretty asynchronous Slack culture, which I think can be quite different from other, you know, places I've worked at before or other Slack spaces that I've seen. And I actually find it a little bit harder to engage in that way. We have a dev channel where, you know, people chat about different technical topics. And sometimes, you know, those threads go, like, 40 replies long. And I think you tend to engage a lot more in those.
And I'm curious, like, does that scratch the itch for you in terms of that perfect, like, async, kind of some amount of lag for you to be doing other things, kind of doing your work, but then being able to come back and pick up the conversation where I left off?
JOËL: Yes, that is really nice because, you know, maybe I have a meeting or something, and I'm not there when the conversation starts, but I don't miss out. And I get to join in, you know, maybe 30 minutes after everyone else. You know, sometimes you don't want to just, like, restart a conversation that's happened and is done. But some of these things will kind of be going on and off all day. And those can be really fun, especially sometimes, like, a new person joins the thread and brings in a totally new perspective or a new angle that kind of, like, breathes new life into it and kind of gives everyone a new perspective.
STEPHANIE: Nice. I also think there's something to the idea of seeing more people engage with something that then invites other people to engage with it.
JOËL: I would agree with that. It's definitely exciting to see a thread, and it's not like, oh, it's empty, and I'm the only one who's put a response in here. When there is a lot of back and forth, you can almost feel the excitement. And that gets me hyped to, like, keep it going.
STEPHANIE: At a previous workplace in our Slack, we had a, like, virtual Jeopardy channel.
JOËL: Ooh.
STEPHANIE: And so, there was a little Jeopardy bot. And I guess whenever someone, you know, had a lull in what they were doing, they would just start, you know, tagging the bot to pose a question. And anyone could answer, right? But once you kind of got the ball rolling, you would see other people start playing as well. And it would get really active for segments of 30 minutes or so.
And I always really enjoyed that because, yeah, it was a way for me to remember like, oh yeah, there's, like, other people also, like, typing away on their little keyboards, and we're all here together. But it was really interesting to see, like, when someone got it rolling, like, oh, other people, like, joined in.
JOËL: Yeah, being able to see small things like that can really build a sense of connection, even if you're not yourself directly participating.
STEPHANIE: Yeah. I think another thing I've been trying out lately is letting people know that I'm in a meeting space and offering to virtually co-work. So, you know, during the early days of when thoughtbot went remote, we had a lounge virtual meeting space for people to hang out in and, you know, get that face time that they weren't getting anymore since we weren't in the office. And, you know, I think that has kind of decreased in terms of engagement over, you know, several years now. And obviously, people have a lot of meeting fatigue and stuff like that.
But I was kind of in a mood to revive it a little bit because, yeah, I kind of got over the meeting fatigue and was wanting more face time with people. And the unfortunate thing, though, is that, like, no one was showing up to this room anymore. So, you know, even if someone wanted to hang out in it, you know, they go in. They see no one's there, you know, maybe they stay for a few minutes, but then they're like, okay, well, I'm just going to leave now.
And a couple of thoughtboters and I have been trying to revive it where we'll post in our general channel, like, "Hey, like, I'm in this meeting room. Like, come hang out for the next hour if you would like." And that's been working well for me. I have had a few, like, really nice lounge, virtual co-working hangout sessions. Even if one person shows up, honestly, like, that fulfills my want to just, like, speak to another human. [laughs]
JOËL: What does virtual co-working look like? Are you just kind of each doing work, but you've got a video camera on, and you're just aware of the presence of someone else? Do you kind of have, like, random breaks where you talk? What is that experience like?
STEPHANIE: Oh yeah, that's a good question. I have to say, for me, I'm just talking to the other [laughs] person at that point. I'm not really doing a whole lot of work. And, you know, in some ways, I almost think that, like, in those moments, I am really wanting to chat with someone and, like, that's okay, right?
JOËL: It's like a virtual water cooler for you.
STEPHANIE: Yeah, exactly. Like, that would be the moment if I were working in office that I would wander into the kitchen looking for a snack but also an unsuspecting victim to start [laughs] a conversation with.
JOËL: I feel you. I feel you. I have absolutely done that.
STEPHANIE: Yeah. And that's actually what makes me feel a little less guilty about it. Because, you know, when I was working in the office, like, that was such a big part of my day, and it's kind of what kept me motivated. And at home, I do find myself, like, a lot more productive. In fact, like, I think I am because I'm, you know, not spending that time wandering into the kitchen. But at what cost? [laughs] At the cost of, like, me feeling very, like, lonely and, like, kind of burnt out at the end of the day.
So, injecting my day with some of these moments, I think, is important to me. And also, again, like, I know that I'm being really productive in my, like, heads-down-time that I want to, you know, allow myself to just like, get that dose of connection.
JOËL: I know, for me, when we were doing things like this in person as well, those conversations that happen, yes, there's some random, frivolous stuff, but sometimes, it is a conversation related to work that I'm doing. Because, you know, someone who's not on my project is like, "Hey, how's your project going?" Or whatever. I'm like, "Oh, well, I'm, you know, doing this ODBC connection, and I'm kind of stuck." And, you know, we kind of talk about a few things. It's like, "Oh, did you know about this gem?" And it's like, "Wait, why didn't I talk to you earlier? Because this totally solves my problem."
STEPHANIE: Yeah, I think that being a sounding board is so valuable as well. So, I guess I enjoy virtual co-working, not necessarily, you know, us, like, sitting together and doing our work separately. Though I know that there's value in that, especially in real life. Like, I remember reading an article. I'll try to find it and link it. But the idea of just, like, sharing space with someone can be, like, a form of bonding.
But I do really enjoy just hearing about what other people are working on and just kind of, like, asking questions about it, right? And maybe we do take away, like, a new perspective or, like, have some insights about, like, the work itself. And, yeah, we don't really get that when we're working remotely by ourselves because there's no one to turn to and be like, "Hey, what do you think about this problem?"
JOËL: I love how no matter what the topic is that we're discussing on this show, you always have a book or an article or something that you've read that you can reference. And I think that's amazing.
STEPHANIE: Thank you.
JOËL: So, you're talking about things that have really helped you feel a deeper sense of connection. I had a realization recently about the power of physical items. In particular, as consultants, sometimes we work with clients who, for security reasons, want us to work on a dedicated laptop for this particular client. And so, we'll have clients maybe—well, now that we're remote—ship us a laptop, and we work on that laptop when we're doing client stuff, and then on our thoughtbot laptops when we're doing thoughtbot things.
And when I've been on clients like that, I have felt much more isolated from the thoughtbot team. And just, like, physically switching over to the thoughtbot laptop, all of a sudden, gives me that feeling of connection. And there's something I can't quite explain about the power of the physical item. And, say, I'm working on the thoughtbot laptop today with, you know, thoughtbot Slack in the background or whatever, and I feel more connected to my colleagues.
STEPHANIE: Yeah, that is really curious. Did you also have thoughtbot communication channels open in your client laptop during that time?
JOËL: I did, and yet still felt more separation.
STEPHANIE: Yeah, that's really interesting. The way you're describing it, it was almost like, you know, the main laptop that you work with, with your, like, all of the settings that you like, all of your little shortcuts, you know, the autocomplete to the whatever, like, channels of communication that you are used to seeing. In some ways, that almost feels like home a little bit. And I wonder if working on a client laptop almost kind of feels like, you know, being in a stranger's house, right?
JOËL: There's definitely an element of that. Yeah, all the little things I've fine-tuned, some of the productivity software I have on there that are just, you know, I can one by one set them up on the client laptop, depending on permissions. But yeah, it's never quite the same.
STEPHANIE: So, when you are in a situation where you're mostly working from a client laptop and maybe embedded in their Slack workspace, embedded in their team, how do you go about investing in connection with your client team?
JOËL: So, you know what's kind of weird? It's that when I'm on a client laptop, I feel less connected to my colleagues at thoughtbot, but the reverse is not necessarily true. I don't feel more connected to colleagues on a client team on a client laptop than I would on my thoughtbot laptop. So, I'm not exactly sure what the psychology is going on there. But I feel kind of most connected to both when I'm working on my thoughtbot laptop, which is perhaps a bit strange.
STEPHANIE: Oh, yeah, that is interesting. I think, in general, there's an aspect of joining a new client team and trying to figure out the culture there and how you might engage with it, right? And how what you bring to the table kind of fits in with how they do things, and how they talk about things, and how they behave. In some ways, it's kind of, like, you know, an outsider joining this, like, in-group, right? So, I've definitely realized that the ways that I engage and feel connected at thoughtbot, like, may or may not work for the client team that I'm joining.
JOËL: Yeah. And onboarding onto a client team is not just a technical exercise, right? It's also a social process where you want to get to know the other people on your team, get to sort of integrate into the way they work, their processes, hopefully, build a little bit of, like, personal connection with individuals because all of those are going to help me do my job better tomorrow, and the day after, and the week after that.
STEPHANIE: Yeah. I had mentioned previously that one thing that I've been enjoying on my client team is our daily sync question. So, a random question will be generated, you know, like, "What are you eating for dinner today?" Or, like, "What are you looking forward to this weekend?" And folks are able to share. And the fun thing is that sometimes the answer to the question is longer than their work update itself.
JOËL: Nice.
STEPHANIE: But that is actually the, you know, the beauty of it because we all just, like, get to laugh and get to, you know, chime in. And I'm like, "Oh yeah, like, that sounds delicious, like, what you're eating for dinner tonight." But, like, that would not work for our Boost Team's sync because, you know, it's a much bigger meeting with sometimes up to, you know, 20 to almost 30 people and, like, we can't quite have as much time spent talking about the fun question of the day. So, I definitely think that, you know, it depends your team size, and makeup, and whatnot.
JOËL: Are those questions kind of preset, or do you all get to contribute questions to the list?
STEPHANIE: We brainstormed the questions one retro when we were realizing that we were kind of getting a little bored of the existing question that we had. And we came up with a handful that is plugged into, like, a website, or, like, an app that randomly, you know, picks the question of the day. And so, I think, again, when we get a little bored of the ones that we have in rotation, we'll throw in some curveballs in there.
JOËL: Have you ever considered adding "What's new in your world?" to this rotation?
STEPHANIE: It's funny you mentioned that because it's actually the question that we got a little bit stale on. [laughs]
JOËL: Really?
STEPHANIE: And we needed to inject some new life into it, yeah. It's a classic, you know. But I think the variety is nice, especially since we're meeting almost every day. And before we started recording, you and I were just talking about how even sometimes it's tough to think of something that's new in our world [laughs] because we don't always live the most interesting and, you know, new lives. And sometimes, we kind of just have to dig deep to come up with something, and we only meet weekly. [laughs]
JOËL: I can definitely see how doing this daily might be more challenging. I think there's also value in questions that are a little bit more focused. Part of what's fun for this podcast is that "What's new in your world?" is so kind of broad. But maybe for something daily, having something really specific, like, what did you eat for dinner tonight? Means that you aren't just kind of drawing blanks in your mind, like, uh, uh, what is new in my world? What have I done? I don't know; I have a boring life. I don't do anything. Kind of panic mode that you can sometimes get when you hit a meeting.
And so, I do know that when I've been sometimes in situations with people where you have questions like that, I've tended to really appreciate the more targeted ones.
STEPHANIE: Yeah, that's so interesting you mentioned that because I think in social situations, there's usually maybe, like, someone who is really good at asking those, like, specific questions to get the group talking and, like, you know, engaged in a fun conversation, and that specificity helps.
One thing that I was just wondering about is the value of meeting every day in a sync kind of format, and I'm curious if you think that is important to you. If you have been on other teams that don't meet every day, maybe they have, like, a virtual check-in, right? Like, a virtual reminder to share what they're working on as opposed to meeting synchronously.
JOËL: I think I've seen sort of different purposes for sync meetings. Sometimes it's very kind of project-heavy, right? You're talking about the tickets you're working on for today. The reason you're having that is specifically for status updates or because you are blocked, and you want somebody else to help unblock you. So, it's very process-focused. I think that varies team to team, but it can be really helpful.
I've even been on projects where it's maybe me and one other person, and we'll have kind of an informal thing where we just call each other up every morning and say, "Hey, here's what I'm working on today. Here's kind of roughly the strategy I plan to take on it," and we'll go back and forth. And for something like that, it inevitably also somewhat turns into a bit of a social call, so that's planning and social. And I think that can be really strong.
STEPHANIE: Yeah, I like that a lot.
JOËL: That's not necessarily going to be the case for every team, every project, especially with larger teams. And I feel like for something like the Boost Team at thoughtbot, we have a daily sync. We're not all working on the same project. So, I don't want to know about the specific details of the ticket you're working on. I'm more interested in getting just a little bit of face time with the whole of our team to feel a connection.
And, you know, maybe if you've got something cool that you want to share, and that can be a win. It can even be a struggle. And we can all kind of empathize, right? That, like, "Oh, I dropped the production database this morning, and I'm kind of freaking out," is a totally fine thing to share. But "I am working on ticket 1, 2, 3, 4 to add some text to a part of the page," that's not particularly useful to me in the kind of sync that we have for the thoughtbot Boost Team.
STEPHANIE: Yeah, absolutely. I think knowing, like, who the audience is of the meeting and, like, how they might be able to support you or be there for you is helpful in making them feel a little more relevant and personal. And I had mentioned that our Boost daily meetings or daily syncs, you know, are a little too big for people to really get into, you know, sharing a fun, personal anecdote, or whatever. But, like, that would not work for our Boost Team's sync because, you know, it's a much bigger meeting with sometimes up to, you know, 20 to almost 30 people and, like, we can't quite have as much time spent talking about the fun question of the day. So, I definitely think that, you know, it depends on your team size, and makeup, and whatnot.
But one thing that I really enjoy is that whoever goes last in giving their update gets to choose the sign-off for everyone. So maybe that's like, okay, we'll just go out on a wave, and we all wave. Or maybe it's, you know, like, making a little heart with your hands. And then there's some folks on the team who go really wild and, you know, come up with something totally unexpected. And I think, you know, that spontaneity is so fun. And we all share it in this collective act of...I'm trying to think of a funny one lately, maybe, like, sinking down into your chair until you disappear from the view [laughs]. That's a good one.
JOËL: Sometimes it's those, like, small social rituals that can be really meaningful.
STEPHANIE: Absolutely. Do you have a favorite sign-off that you have either requested or have done?
JOËL: So, I typically just go for the wave if I'm last because I've not thought about it. But I generally think it's fun to have everybody try to mimic an emoji. So, it might be like, oh, everybody do the, you know, See-No-Evil emoji, or everybody do the party parrot. Those are pretty fun to sign off on.
STEPHANIE: Oh yeah, [inaudible 29:15] pausing is good. I think another one I like is, "Everyone do your best impression of a tree." [laughs]
JOËL: Sometimes, too, it's fun to do something that's relevant to the particular day. If there's something special happening that day, you get something relevant. I've done before, if it's on a Friday, say, "Everybody do your best Rebecca Black impression."
STEPHANIE: Yeah, also excellent.
JOËL: Because, you know, it's Friday.
STEPHANIE: Yeah, like, a little moment of collective celebration for the weekend. On that note, it's a Friday we're recording this episode. Shall we wrap up and look forward to the weekend?
JOËL: [laughter] Fun, fun, fun, fun.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël recently had a fascinating conversation with some friends about the power of celebrating and highlighting small wins in their lives. He talks about bringing this into his work life. May Stephanie interest you in a secret she learned regarding homemade pizza?
RubyConf is coming! Who's submitting talks?! It's hekkin scary. Don't fret! Joël and Stephanie are here to help. Today, they discuss submitting a conference talk proposal from start to finish.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we've come here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been having a really interesting conversation with some of my friends recently about the power of celebrating and highlighting small wins in our lives, both in, like, kind of sharing it with each other, like, you know, if something small happens, it's good for me to share it with my friends. But also, where it becomes really cool is where the friend group kind of gets together and celebrates that small win for one person, and how that can be, like, a small step to take, but it's just really powerful and encouraging for a friend group. And I think that applies not just among friends but in a team or other grouping in the workplace.
STEPHANIE: That's so fun. How are you celebrating these small wins, like, over text? Is that the main way you're communicating something good that happened?
JOËL: It depends on the friend group. I think, like, different friend groups will have, like, a different kind of cadence for the kind of things they do. And do they all hang out together? Do they have a group text, things like that?
One of the friend groups I'm a part of, we meet weekly to go climbing at a rock-climbing gym, so that's kind of our hang-out. And [inaudible 01:34], we're there to do stuff at the gym, but it's also a social thing. And it's an opportunity to be like, "Oh, you know, did that thing workout, you know, at work?" "You know, good for you," Or "Did you get this project accepted?" And yeah, when small wins come up, it's a great time to celebrate.
STEPHANIE: That's awesome. I think having regular time that you see people and being able to ask them about something that they had mentioned previously is so special and really important to me, like, in bonding and building the relationship.
I also love the idea of celebrating milestones. So, this is, I guess, more of a bigger win, but milestones that aren't traditionally celebrated. You know, so, yeah, we'll have, like, a party when someone graduates or someone gets married. But I also have really enjoyed celebrating when someone gets a promotion at work. And, you know, maybe that's not, like, a once-in-a-lifetime thing, but it's still so worthy of going out for dinner or buying them a drink.
I also will maybe, like, send my friends a little treat if I know that they did something small but hard for them, right? And sometimes that's even, like, responding to a scary email that they had sitting in their inbox for a while. Yeah, I really love that idea of supporting people, even in the small things in life that they do.
JOËL: Yeah, and that's really validating, I think when you've done something hard and then a friend or a colleague reaches out to you. And it's kind of like, hey, I saw that. Good for you.
STEPHANIE: How have you been thinking about bringing this into your work life?
JOËL: I think it's about being on the lookout for things that other people do. And I think one thing I like to do is to kind of publicly call that out. It sounds like a negative thing, right? But just giving people kind of a public shout-out when they've succeeded at something. I think we're all kind of socialized not to maybe talk too much about accomplishments, especially if they feel kind of small and mundane.
Being somebody else, I think, gives you a lot more leeway to say, "Hey, no, Stephanie, I see that you did that thing. And maybe it feels kind of like, oh no, you're just doing your job, but I think that's cool. And I want to, you know, just give you a shout-out in the company Slack channel or something." It doesn't have to be something big. You know, I'm not sending champagne to your home. But having that opportunity to just kind of celebrate something small and say, "Wait a minute, let's pause and acknowledge that you just did something cool."
STEPHANIE: Yeah, I was thinking about how that's kind of, like, amplifying the win a little bit. I've definitely done this before, too, when I see someone share a win of theirs, maybe in a smaller Slack channel or kind of a personal level, or even just to me individually. And I really want other people to know that that happened to you and that they, you know, did an awesome job. And so, I have enjoyed, you know, sharing them more publicly on their behalf if they are comfortable with it.
JOËL: And I'll say on the other end of that, I think it feels really good to be acknowledged by someone else that you've done something that they recognize. It's fun to share a win with other people because you're excited, but it's doubly fun when somebody else shares it for you.
STEPHANIE: I agree. I think one thing that you, Joël, do really well, actually, is sharing your own personal wins in a very casual way. That's something I've always admired about you is how you recognize the small wins for yourself.
JOËL: It's taken me, I think, a long time to get to that and find a way where, you know, you are sharing things that are fun for other people to see, things that might be inspiring, things that are kind of cool, and that are not just kind of, like, self-aggrandizing, like, bragging about stuff. It can be a fine line to walk. And, to a certain extent, you're a little bit marketing yourself. But yeah, I think I've kind of hit that right balance.
STEPHANIE: Yeah, I think the thing that makes it work is that there's usually, like, a challenge or something that maybe you, like, went through a journey or overcame a little bit. And I think that's what is the inspiring part that makes me feel like, oh, okay, so, like, this is a realistic thing that, you know, Joël went through and, you know, he struggled with it maybe. But then, like, ultimately, you know, had some insights or came out the other side with some learnings. And I like that it's real, right? It's not just, "Hey, like, I did this, like, cool thing." It's like, "I went on this journey." And I find that really motivating when I am in that kind of situation next time.
JOËL: There's a power to stories, right? And I think especially when you can make something relatable to other people. So, it's not just like, "Hey, I did a cool thing," which, you know, is also fun. But being able to say, "Hey, I messed up," or "I, you know, had this challenging problem dropped in my lap, and here's the journey I went on to resolve it. Hopefully, it acts a little bit as like a here's a template you could follow if you're ever in that situation." But maybe also a little bit of, like, inspiration for others as well, just being like, hey, Joël, messes up sometimes.
So, Stephanie, what is new in your world?
STEPHANIE: Speaking of small wins, I have finally perfected our at-home pizza situation, which I have been struggling with for so long. Because I always was excited by the idea of making pizza and, you know, sometimes we would make our own dough. And sometimes, we would buy store-bought dough, but it never ended up being as crispy and well-done as I wanted it to be.
It was always, like, a little bit mushy on the inside. The dough wasn't totally baked. And I would inevitably be disappointed when I had been, you know, building that excitement for pizza. And the other week, I found a new recipe to try, and I think it will be my new go-to recipe for making pizza at home.
JOËL: I don't know if I'm allowed to ask this on air, but what's the secret?
STEPHANIE: The secret? Well, okay, the first secret and/or learning that I've gathered is to not put as much sauce, cheese, and toppings as you think you want to because that's definitely what contributes to the under-doneness of the dough. But I pivoted to trying a more grandma-style crust that is kind of more like focaccia; really, you know, it involves a lot of olive oil. And you're cooking it for a while on pretty high heat to ensure the crispness and, you know, that it's cooked through.
And, I mean, I love focaccia bread, so I don't mind it as, you know, the base of my pizza. It is a bit different from, you know, other kinds of pizzas. And if we had, like, a really, you know, fancy pizza oven to do the, like, super high heat, like, Neapolitan-style deal, I would also really enjoy that. But you know what? That's just not the reality of my home kitchen.
So, I have really been enjoying this pizza recipe by Alison Roman that I will link in the show notes. But yeah, it has really changed my at-home pizza game. And I hopefully won't have any of my, you know, soggy dough bottom problems anymore.
JOËL: So, you mentioned just kind of offhand, like, oh yeah, you know, the crust is just kind of, like, how you make focaccia. It sounds like you've made focaccia yourself before.
STEPHANIE: I have made focaccia at home, and so I think applying it to pizza was a real, like, light bulb moment for me. But, you know, it's not, like, totally effortless. But I think it's a lot more forgiving than other types of bread and, therefore, other types of pizza crust.
And the one really enjoyable thing about making focaccia is there's a step where you use your fingers, and you're kind of holding your hands like you're playing a piano. And you, like, press into the dough after it has risen a little bit to create dimples and, you know, let the oil kind of seep into the little holes. And it's very satisfying. It's a very good feeling.
JOËL: The kind of the tactile aspect of it?
STEPHANIE: Yeah, exactly. It's very fun. [chuckles] So, yeah, it's just an added bonus to my pizza adventures.
JOËL: A win on top of a win. We'll take it.
So, there's some news in the Ruby community this week because RubyConf has just opened their CFP, their call for proposals. And so, they're asking for people to submit their ideas for conference talks, and if you're lucky, you get picked to speak at the conference.
And, Stephanie, I know that over the course of a year, you have a document where you collect conference talk ideas so that you have ideas to work on when the CFP comes around. Are you looking at any of them to potentially submit to RubyConf this year?
STEPHANIE: Joël, I have to be honest with you; so far, I only have one idea on that list. [laughs] But that is one that I suppose could eventually become a conference talk proposal.
So, when I heard the news, I definitely went down the rabbit hole of revisiting that idea and kind of starting to think about if it's something I wanted to pursue. I think the answer is yes. I definitely got a big push of motivation when I was like, oh, it's open. Like, now I can just get started if I want to. And then I was like, well, it's open for a month, so I could also just sit on it a little longer, you know, put it aside and revisit it when I have a little more time.
But yeah, I was pretty excited because I think it gave me the motivation I needed to really think a little more deeply about this idea that I have. Otherwise, I think it would have continued to sit half-baked in my document for a long time.
JOËL: And just for all of our listeners, the CFP just opened on July 12th, and it closes on August 20th. So, if you are listening and it's before August 20th, you still have a shot to submit your idea to be a speaker.
STEPHANIE: Something that I've talked about with my other friends who enjoy speaking at conferences is how they come up with proposals, and I found that we all have different approaches. And I am really interested in digging into this further with you.
But I realized that, for me, I really struggle with just, like, throwing out ideas and submitting them before I feel really confident that it's something that I have interesting things to say about, or, like, that kind of adds a new perspective, or maybe approaches a topic that hasn't been approached before. I feel sometimes a bit hindered by my process, where I need to feel really confident before submitting something.
Because a friend of mine was telling me that her approach is to submit CFPs for topic ideas that she wants to explore further. So, maybe it is something that she doesn't know a lot about yet, and she's using this process to learn more and dive deeper, and that, you know, gives her a reason to do that, whereas that seems really scary to me.
JOËL: That's really interesting because it sounds like kind of an underlying motivation for your friend for submitting these talks is curiosity, exploration. And thinking back to myself, I think I usually submit ideas that have me excited or passionate, so that's kind of my underlying motivation for a talk. What would you say is maybe your underlying motivation when you're pitching an idea?
STEPHANIE: Yeah, I think, for me, it is impact, like, having an impact, especially for something that I've struggled with, and wanting to share my experience and, hopefully, share something that other people can relate to.
It's funny you mentioned that your motivators are, you know, excitement and passion. Because another person that I kind of had this conversation with mentioned that she writes talks based on experiences that have been very aggravating [chuckles] and painful for her. So, that ends up being, you know, a big motivator because she's so frustrated. [laughs] And, you know, wants to share this journey that she went on from a point of, I guess, maybe similar to me, like, making it easier for someone else who might find themselves struggling with the same problem.
JOËL: I kind of like the idea of taking that to an extreme, and you're, like, rage submitting.
STEPHANIE: Yeah, I feel like there would just be an infinite number [laughs] of topics that you could come up with in that case.
JOËL: Like, I'm so angry at this bug. It cost me a week of my life. And now, it is going to get the spotlight on it at RubyConf. And I get to share that moment with everyone, express a lot of emotions, and, hopefully, save everyone else from having to do the same thing I did.
STEPHANIE: Yeah. Or this terrible bug cost me a week of my life, and now you all get to hear about it. [laughter] Let me tell you --
JOËL: Yes.
STEPHANIE: Exactly all the problems that I had to deal with.
JOËL: And, honestly, as a narrative, it kind of works, right? There are different types of talks. Sometimes you go to a talk because you really want to learn a deep topic. Sometimes I just want to go and listen to, like, a good horror story. If someone's a good storyteller, like, yes, there are lessons I can take away from it, and I can be like, okay, this is what I can do. And I heard Stephanie talk about this bug, and so I'm going to use inspiration from that the next time I hit a bug.
But sometimes it's also just good to, like, go there and sit and be, like, yes, I've been there. Yeah, kind of following along with the story and, you know, kind of the ups and downs because it is so relatable.
STEPHANIE: Yeah. And I like that you mentioned that there are different types of talks that leave the audience, you know, with different things. Because I know some people who have been interested in speaking in the past maybe feel a bit hesitant to because they don't think they have something to say, or, like, they don't have something to share that other people might find interesting.
And to that, I really believe that everyone has something that they are knowledgeable about and something that they can bring to others that is valuable. Even if it's not for every single person at the conference, if you give a talk that is meaningful to a handful of people, that still matters, right? Especially because, you know, there are people of all different levels at these conferences. Those talks are really important too. In fact, I think they can be even more powerful because they are targeting a specific audience.
JOËL: And I think you've hit on a key point, that is, it's great when you're building the talk, but even when you're pitching the idea is, who is this talk for? Who is the audience for this talk? And if the audience is whoever shows up at the conference center, maybe you need to workshop a little bit more.
STEPHANIE: Yeah, because one thing can't really be for everyone.
JOËL: Right. You're kind of diffusing its impact at that point. You were talking about how sometimes it's difficult to take an idea, flesh it out, and submit it until you're feeling, like, 100% confident about it. I'm curious how the transition goes from kind of the earlier phase of, like, you have a document, and I assume these are, like, bullet points with, like, one sentence, or maybe even just title idea. How does it go from bullet point to multiple paragraphs that might be submittable?
STEPHANIE: Yeah, that's a good question. I think it starts as a bullet point because maybe I notice something that caused me pain or caused a teammate pain, and maybe we had, like, kind of an interesting discussion about it. And, yeah, I write it down as something to explore further as, like, is this an idea that can be a little broader in scope, can have a few more applications beyond this particular instance that sparked it?
And so, maybe from there, I will think about, like, okay, like, the pain point that I jotted down was coupling and tests, right? And let me go, you know, jog through my memory of other times where I kind of felt a similar thing or was doing some code review and also noted a similar problem.
And I think if I am able to find enough, like, supporting examples that might go along with this, for me, it's really a feeling. [laughs] Then I'll try to extract that a little further and come up with a theme, right? A theme that's a little more encompassing because what I hope to do is to be able to come up with some kind of takeaway that can be a strong thesis for a conference proposal.
JOËL: And that's kind of how conference proposals work, right? There's a few different sections you have to fill out. But the really important one is the abstract, which is usually just a few sentences. It's character-limited. And that's what's got to sell your talk both to the committee, but then also, that's what's going to be publicly viewable. And so, that's what's going to get people excited to show up at your conference room.
So, my kind of secret trick for writing a proposal is to do the abstract last. Even though it's that first section on the form, I struggle to write a compelling abstract. And so, I'll go through and fill out some of the other fields that are only for the committee, and there'll be, you know, a lot of detail in there. And then, sometimes, I find that I put, like, really good compelling sentences in there, and I'll pull them out and put them in the abstract and kind of use that to start.
But those other sections, like pitch and all that, I think they're a great place to start because you get to go a little bit more into detail. And you can talk about: here are the themes I want to address. Here are maybe the examples I'm going to be building around. Here's the audience that I want to speak to.
STEPHANIE: Audience is interesting for me because I tend to write the kind of talks that I wish I had watched earlier or, like, what really speaks to me. In fact, one of my first conference talks was literally called The Intro to Abstraction I Wish I'd Received. [laughs] So, a good place for me to start is thinking about, like, well, who was I at the time? Like, what kind of developer was I at the time that I, like, really needed this information or really wished for this information?
And similarly, I had mentioned, you know, like, maybe my ideas are coming from conversations I've had with other people. So, I'm imagining those other people, and I'm asking myself, like, who are they? Like, where are they in their development careers? And is there a specific demographic or audience persona that kind of fits them, and, you know, usually there is, right?
And what is nice is I can kind of go to them as well and be like, "Hey, like, I have this idea. Do you think this would be helpful for you? Or is this something you would be interested in watching?" And that at least helps me ground it in an audience that is real to me as opposed to kind of trying to imagine who might show up without a clear idea, like, of what they might get a takeaway or, like, be wanting in a conference talk.
JOËL: Would it be fair to say that when you're coming up with an idea for a presentation, the audience you have in mind is you or maybe a particular version of you, so you two years ago or you five years ago?
STEPHANIE: Yeah, I think that's a really compelling way for me to write these because, you know, I almost think it kind of goes back to the idea that everyone has something to say, right? It's like I have something to say to me, my past self. And I believe that other people, you know, are in that position as well. And so, that's been my approach.
But I'm curious about yours because I think the types of talks that you write are maybe less about, like, what you wished you had learned earlier and more for a different kind of audience.
JOËL: Yeah, I think they are...I start with a topic that I'm excited about. And then, sometimes, I have to find what element of it that I want to pull out because it can be kind of a whole kind of cloud of themes, and I have to pick one to commit to. Depending on the one I commit to and the approach I want to take, it will define the audience that...or vice versa. I can say, okay, this is specifically for this audience, and that will show how I want to approach it.
So, for example, I gave a talk at RailsConf this past spring on the math every programmer needs, talking a little bit about discrete math and how it's applicable in day-to-day programming. And I think I very quickly came to the realization that I wanted this talk to be for people who had never done a formal, like, discrete math class, likely people who don't have a traditional, like, CS background.
And so, once I knew this is the audience I'm speaking to, that really shaped how I pitched the talk, what elements I want to bring in, what examples I'm using, what do I want to emphasize during this talk. Finding that audience really helped that proposal come together. Even though I knew...before I found the audience, I knew I wanted to talk about discrete math and how cool and relevant it was to day-to-day programming. But that's not enough. I needed to really fit it to an audience.
STEPHANIE: Yeah, I have two thoughts about this. One was that when you were writing the proposal for this talk, I remember you had shared a bunch of your different ideas about the topic to your co-workers. And it was almost kind of, like, a buffet of topics. And you were asking for feedback about, like, hey, like, what is interesting to you? Like, what would be, like, helpful for you to know? And I think that ended up really helping you focus on, like, what your audience would want.
But I'm curious, do you recall, like, how you decided that you wanted to target people who didn't have that traditional CS background? Like, why was that important to you?
JOËL: I think I'm generally most excited about taking some, like, larger technical insights and bringing them to people who maybe have some of the intuition but don't always know why the things they do work the way they do and kind of bridging a little bit of that, like, practical, theoretical gap. That's the space that I'm probably most excited about when it comes to sharing and teaching, helping people go from things that are really practical and then just throwing just enough theory at them. But keeping it really grounded so that they can kind of hit the next level of where they want to be. Because that's an area that I think I thrive in, an area that gets me most excited to share about.
And so, I think, naturally, I'm kind of moving in that direction. But also, like you said, it's talking to other people and seeing, like, what are the elements that are interesting to you? And then, like, once you start seeing some of these, it's like, okay, well, what is exciting in talking about Boolean algebra? Do I want to go really deep on some of the theory? Do I want to say, you know, if someone has a vague notion of this because they've been writing code for several years but don't know the theoreticals behind it? That interaction, I think, was more compelling to me.
STEPHANIE: Got it. It's almost like knowledge sharing at just this really high level, or, like, at a really large scale. I like that a lot.
JOËL: So, you highlighted something interesting, and that is that writing a proposal doesn't have to be a solo activity, and getting feedback on ideas can totally transform your proposal. Do you find that you reach out to a lot of people to get feedback on your proposals? And what does that look like in practice?
STEPHANIE: Oh yeah, I definitely need someone to rubber-duck an idea for me. [laughs]
JOËL: So, even at the idea stage. So, you've got that topic sentence or whatever, and then you say, "Someone, can you sit down with me, and we'll just talk through places this might go?"
STEPHANIE: Yeah. I have found that really helpful for me. Otherwise, I think I get a little too precious about it, right? If I've just been working on it by myself. And then it feels really scary to submit it and be like, okay, I don't know if this is any good. It might get rejected.
But the first time that I did a conference talk, WNB.rb, the women and non-binary Ruby group I'm in, they had organized a CFP working group channel. And so, there were, you know, a handful of people, some of them writing conference talks for the first time, some of them having done it before, just getting together and holding each other accountable, and checking in and asking for feedback.
And, yeah, I think finding other people who either have done it before. I've also, you know, reached out to people whose conference talks I loved and felt really inspired by. And if they were available, like, kind of asking them how to get started.
But also, like, peer support as well, other people doing it for the first time can be really important in just making it feel a little more manageable, a little less lonely. I think there are, like, more people out there who are interested in dipping their toe in conference speaking than one might think because it can definitely feel very overwhelming. But with a support group, I think it makes it a lot easier.
JOËL: So, you've gotten feedback. You've gotten support. You've put this idea together. You're feeling pretty confident. You hit that submit button. And now you can't take it back. [laughs] How does that feel at that point?
STEPHANIE: Terrifying. [laughter] Like, for me, I have to exorcise it from my mind and not think about it, not dwell on it at all. And like, ideally, you know, when I hear back, I will have forgotten all about it so that, you know, I didn't spend the whole month or however many weeks, like, ruminating about whether or not it was accepted.
Yeah, I really struggle with that part, I think, because I, yeah, have a hard time with rejection, you know, I'm just going to say it. [laughs] And, you know, it's hard for me not to take it personally. But I think that's actually one area that I want to get better at is to feel a little less, like, personally attached. And I think working with others helps me with that because it's not just something I've, you know, like, squirreled away and feel very attached to.
Working with others and then, like, hopefully, coming up with other ideas along the way, right? Within conversations that we have that might spark ideas for the future. So, knowing that if this one doesn't end up being submitted, there's always next time. There's always another conference season. And also, you know, celebrating others when their conference talks do get accepted that is also really buoying because it helps me direct that energy into wanting to celebrate my friends and inspiring me for next time.
Joël, I know you oftentimes submit more than one proposal, and I'm wondering if that helps with those feelings of being too attached to a topic idea or, you know, worrying about whether they will be accepted.
JOËL: I think it definitely helps with the attachment thing that I've not kind of put all of my work and all of my...like, pinned all of my hopes on one topic idea. Sometimes it can hurt, you know, if you've got, like, you know, two or three and, like, you just get multiple rejection notices in a day. That kind of sucks sometimes. But I think, in some ways, yes, it does help with that feeling of rejection because you've not tied yourself emotionally so much to a single idea that has to, like, succeed or fail.
STEPHANIE: Do you then submit those ideas to other conferences?
JOËL: The ones that get rejected? Yes. I've definitely resubmitted ideas. In fact, I plan to resubmit a rejection to RubyConf this year, so we'll see how that goes. Actually, now that I think of it, that could be a really fun opening line for a talk. Like, let's say it gets accepted. And, like, you know, you're on the stage, and you open it, and you're just like, "This talk got rejected." That'd be a fun intro.
STEPHANIE: Yeah, it would be. I think, oftentimes, you know, it's not always even about the idea itself, right? It's just about maybe the theme of the conference that year, and what they were looking for, and the direction they wanted to go. And there are other conferences or other another year, right? Where maybe there isn't another talk that touches on the same, like, area. And that will be the opportunity that it is a fit for the conference.
JOËL: Yeah, definitely. It is a little bit haphazard to get in. And just because your talk gets rejected does not mean it's a bad idea. It just means that it wasn't the best fit for that conference at that time.
STEPHANIE: I actually want to plug a website, speakerline.io, where people can post all of their, you know, proposals that they've submitted, whether they were accepted or rejected. And I found that resource really helpful in, you know, just knowing that, like, very good ideas get rejected sometimes, and that's okay. As well as, you know, kind of trying to get a sense of, you know, for the ones that were accepted, okay, like, what about these proposals really stood out or, like, really shine? And how might I get some inspiration from that to incorporate next time around?
JOËL: So, you've submitted a proposal. Terrifying. You're trying to not think about it for a couple of weeks, assuming you're submitting a couple of weeks ahead, I don't know. Are you a last-minute kind of submitter?
STEPHANIE: I'm a probably two or three days before the deadline kind of submitter.
JOËL: So, you've submitted the talk two or three days to the deadline. I guess, like, a couple of weeks after that to get review. And then, you get that notification that says, you know, you've got a response on your talk from the committee. Are you the kind of person that, like, drops everything and immediately looks at it? Do you kind of, like, wait for, like, maybe a moment where you're, like, more in a good zone emotionally before you open that email to find out if you're accepted or rejected? What's your strategy?
STEPHANIE: Oh God, I don't think I have the willpower to wait until I'm, you know, in an emotionally good state. I will just click on that thing. And yeah, I think, I mean, having been on the receiving end of acceptances, rejections, and once being waitlisted, [laughs] which was a real doozy because it's like, great, like, now I have to write a talk. But, like, I don't know if it will actually be given or not.
I think this is also where the support group really shines as well because maybe some of my other friends are also sharing the results and making it okay, like, sharing a rejection, right? And I think it's nice to just have, like, an outlet for that, whatever the outcome is, and not having to just, like, sit alone in either the sadness or the happiness, right? Like, we're talking about celebrating small wins. Like, it really is even more special when someone else can validate your success.
JOËL: Have you ever had to navigate kind of, like, slight feelings of jealousy where it's, like, another friend gets in? Or maybe somebody else gets in with, like, your topic, and their talk got picked instead of yours?
STEPHANIE: Yeah, for sure. I think it's totally natural and human. I think one nice thing, though, is that there are so many conferences all of the time. You know, this is not a once-in-a-lifetime situation, right? And maybe the next conference, you know, the people who submit will be different, the people who review will be different. And you've kind of already done the hard part of writing the thing.
I actually was just thinking about a few of my friends who are writers, and the submission process for them, you know, of submitting a book proposal or short stories for, like, a magazine or something like that, it's, like, fraught with rejections. And they've really built that muscle of acceptance and, like, knowing that it's not a reflection of their value, and building the resilience to keep trying.
And so, yeah, I think definitely going through that process has helped me feel a little bit more comfortable with that, not completely, but I guess it's like exposure therapy, [laughs], isn't it?
JOËL: I think that the not helpful answer here is that it gets better when you've given more talks. When you're trying to break in and give your first talk, right? It is such a big deal. And, you know, the high of getting accepted is just, you know, mountain top. But the feelings of rejection are also similarly intense. As opposed to when you've done a few, then it's like, you know what? Win some, lose some. And it's much easier to move on.
STEPHANIE: I think another suggestion that I might have would be to maybe start smaller, right? Even giving a talk at work for your co-workers, or even the next step is giving a talk at your local meetup or then a small regional conference. There are so many in-between steps, I think, that exist that bestow the benefits of giving a conference talk, and meeting new people, and feeling good about the impact you're having beyond some of the bigger, more traditional conferences.
So, if that does seem really scary or, you know, maybe you've given it a shot and feel a little bit demoralized from trying again, there is a group out there who will benefit and be interested in hearing what you have to say.
JOËL: That's a really important reminder because just because a conference rejected your talk doesn't mean that it's a bad idea. And yes, you can shop it around and bring it to other conferences, but maybe think about other venues for the idea. You've already done the hard work of crafting a pitch, so maybe turn it into a blog post and share it that way.
Maybe turn it into a pitch to be a guest on a podcast that you enjoy. Podcasts that do weekly guests are constantly looking for interesting people to talk to. And you've kind of, like, done all the work for them, where you can say, "Hey, here's the thing I'm an expert on. Ask me questions about this." And most places will gladly bring you on.
STEPHANIE: Yeah, I like to think of conference talks as really, like, a supplement of what you're learning and investing in in your career, right? You know, it is nice to be able to share all of those things in a perfectly wrapped package. But also, there are so many different ways for that to manifest. And there are people who know that speaking is not for them and really focus on writing, and that's, like, their avenue. But yeah, it's not...I don't think it's, like, a pinnacle of, like, something you have to do in your career at all. It's just something that can be fun.
JOËL: Yeah, and sharing takes many different forms. It can be a talk in a conference room, but it can just as easily turn into maybe some kind of video, some kind of written work. Like I said, it could be an interview on a podcast. There are so many different ways that you can share your ideas. And just because it didn't fit in one place, now that you've done the work to kind of polish that gem a little bit, oftentimes, it's very little additional work to just convert it to a different form.
STEPHANIE: Yeah, I like what you just said about polishing a gem. Really, I think the value for me is having a channel to funnel and reflect on my experiences, and, you know, conference talks happen to be, like, one form of that for me. But I hate to say it's about the journey, not the destination, but sometimes it is. And, yeah, I think everyone kind of has to, like, figure that out for themselves.
JOËL: That being said, sometimes the destination is pretty exciting. And when you open that email that says, "Congratulations, your talk has been accepted," wow, what a rush.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
It's updates on the work front today! Stephanie was tasked with removing a six-year-old feature flag from a codebase. Joël's been doing a lot of small database migrations.
A listener question sparked today's main discussion on gerunds' interesting relationship to data modeling.
Tally
Edition
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, this week, I've been tasked with something that I've been finding very fun, which is removing a six-year-old feature flag from the codebase that is still very much in use in the sense that it is actually a mechanism for providing customers access to a feature that had been originally launched as a beta. And that was why the feature flag was introduced.
But in the years since, you know, the business has shifted to a model where you have to pay for those features. And some customers are still hanging on to this beta feature flag that lets them get the features for free. So one of the ways that we're trying to convert those people to be paying for the feature is to, you know, gradually remove the feature flag and maybe, you know, give them a heads up that this is happening.
I'm also getting to improve the codebase with this change as well because it has really been propagating [laughs] in there. There wasn't necessarily a single, I guess, entry point for determining whether customers should get access to this feature through the flag or not. So it ended up being repeated in a bunch of different places because the feature set has grown. And so, now we have to do this check for the flag in several places, like, different pages of the application. And it's been really interesting to see just how this kind of stuff can grow and mutate over several years.
JOËL: So, if I understand correctly, there's kind of two overlapping conditions now around this feature. So you have access to it if you've either paid for the feature or if you were a beta tester.
STEPHANIE: Yeah, exactly. And the interesting thought that I had about this was it actually sounds a lot like the strangler fig pattern, which we've talked about before, where we've now introduced the new source of data that we want to be using moving forward. But we still have this, you know, old limb or branch hanging on that hasn't quite been removed or pruned off [chuckles] yet. So that's what I'm doing now.
And it's nice in the sense that I can trust that we are already sending the correct data that we want to be consuming, and it's just the cleanup part. So, in some ways, we had been in that half-step for several years, and we're now getting to the point where we can finally remove it.
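A minimal sketch of the kind of single entry point mentioned a moment ago for that paid-or-beta check, with hypothetical model and method names, might look like this:

    class Account < ApplicationRecord
      # One place that answers "can this account see the feature?",
      # combining the paid plan with the legacy beta flag.
      def premium_feature_access?
        paid_plan? || beta_feature_flag?
      end
    end

With the check consolidated like that, pruning the beta branch later becomes a one-line change instead of a hunt through every page that repeats the condition.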
JOËL: I think in kind of true strangler fig pattern, you would probably move all of your users off of that feature flag so that the people that have it active are zero, at which point it is effectively dead code, and then you can remove it.
STEPHANIE: Yeah, that's a great point. And we had considered doing that first, but the thing that we had kind of come away with was that removing all of those customers from that feature flag would probably require a script or, you know, updating the production data. And that seemed a bit riskier actually to us because it wasn't as reversible as a code change.
JOËL: I think you bring up a really interesting point, which is that production data changes, in general, are just scarier than code changes. At least for me, it feels like it's fairly easy generally to revert a code change. Whereas if I've messed up the production database, [laughs] that's going to be an unpleasant few days.
STEPHANIE: What's interesting is that this feature flag is not really supported by a nice user interface for managing it. And so, we inevitably had to do a more developer-focused solution to remove these customers from being able to access this feature. And so, the two options, you know, that we had available were to do it through data, like I mentioned, or do it through that code change. And again, I think we evaluated both options. But what's kind of nice about doing it with the code change is that when we eventually get to delete those feature flag records, it will be really nice and easy.
JOËL: That's really exciting. One thing that's different about kind of more mature projects is that we often get to do some kind of change management, unlike a greenfield app where you just get to, oh, let's introduce this new thing, cool. Oftentimes, on a more mature project, before you introduce the new thing, you have to figure out, like, what is the migration path towards that? Is that a kind of work that you enjoy?
STEPHANIE: I think this was definitely an exercise in thinking about how to break this down into steps. So, yeah, that change management process you mentioned, I, like, did find a lot of satisfaction in trying to break it up, you know, especially because I was also thinking that you know, maybe I am not able to see the complete, like, cleanup and removal, and, like, where can someone pick up after me? In some ways, I feel like I was kind of stepping into that migration, you know, six years [laughs] in the making from beta to the paid product.
But I think I will feel really satisfied if I'm able to see this thing through and get to celebrate the success of saying, hey, like, I removed...at this point, it's a few hundred lines of code. [laughs] And also, you know, with the added business value of encouraging more customers to pay for the product. But I think I'm also maybe figuring out how to accept, like, okay, how could I step away from this in the middle and be able to feel good that I've left it in a place where someone else could see it through?
JOËL: So you mentioned you're taking this over from somebody else, and this has been kind of six years in the making. I'm curious, is the person who introduced this feature flag six years ago are they even still at the company?
STEPHANIE: No, they are not, which I think is pretty typical, you know, it's, like, really common for someone who had all that context about how it came to be. In fact, I actually didn't even realize that the feature flag was the original beta version of the product because that's not what it's called. [laughs] And it was when I was first onboarding onto this project, and I was like, "Hey, like, what is this? Like, why is this still here?" Knowing that the canonical, you know, version that customers were using was the paid version.
And the team was like, "Oh, yeah, like, that's this whole thing that we've been meaning to remove for a long time." So it's really interesting to see the lifecycle, like, as to some of this code a little bit. And sometimes, it can be really frustrating, but this has felt a little more like an archaeology dig a little bit.
JOËL: That sounds like a really interesting project to be on.
STEPHANIE: Yeah. What about you, Joël, what's new in your world?
JOËL: So, on my project, I've been having to do a lot of small database migrations. So I've got a bunch of these little features to do that all involve doing database migrations. They're not building on each other. So I'm just doing them all, like, in different feature branches, and pushing them all up to GitHub to get reviewed, kind of working on them in parallel.
And the problem that happens is that when you switch from one branch where you've run a migration to another and then run migrations again, some local database state persists across the branch switch. This app uses a structure.sql, which means that when you run the migrations, the structure.sql picks up a bunch of extra junk from other branches you've been on that you don't want as part of your diff. And beyond, like, two or three branches, this becomes an absolute mess.
STEPHANIE: Oh, I have been there. [laughs] It's always really frustrating when I switch branches and then try to do my development and then realize that I still have leftover database changes. And then having to go back, always forgetting what order of operations to do to reverse the migration, and then having to re-migrate. I know that pain very well.
JOËL: Something I've been doing for this project is when I switch branches, making sure that my structure SQL is checked out to the latest version from the main branch. So I have a clean structure SQL then I drop my local database, recreate an empty one, and run a rake db:schema:load. And that will load that structure file as it is on the main branch into the database schema.
That does not have any of the migrations on this branch run, so, at that point, I can run a rake db:migrate. And I will get exactly what's on main plus what gets generated on this branch and nothing else. And so, that's been a way that I've been able to kind of switch between branches and run database operations without getting any cross-contamination.
STEPHANIE: Cross-contamination. I like that term. Have you automated this at all, or are you doing this manually?
JOËL: Entirely manually. I could probably script some of this. Right now...so it's three steps, right? Drop, create, schema load. I just have them in one command because you can chain Unix commands with a double ampersand. So that's what I'm doing right now. I want to say there's a db:reset task, but I think that it uses migrate rather than schema load. And I don't want to actually run migrations.
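For reference, the chained command being described might look something like this in a Rails app that uses structure.sql (exact task names vary a little by Rails version, so treat this as a sketch):

    git checkout main -- db/structure.sql
    rake db:drop && rake db:create && rake db:schema:load

followed by a rake db:migrate on the branch, so the diff contains only that branch's changes.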
STEPHANIE: Yeah, that would take longer. That's funny. I do love the up arrow key [laughs] in your terminal for, you know, going back to the thing you're running over and over again.
I also appreciate the couple extra seconds that you're spending in waiting for your database to recreate. Like, you're paying that cost upfront rather than down the line when you are in the middle of doing [laughs] what you're trying to do and realize, oh no, my database is not in the state that I want it to be for this branch.
JOËL: Or I'm dealing with some awful git conflict when trying to merge some of these branches. Or, you know, somebody comments on my PR and says, "Why are you touching the orders table? This change has nothing to do with orders." I'm like, "Oh, sorry, that actually came out of a different thing that I did." So, yep, keeping those diffs small.
STEPHANIE: Nice. Well, I'm glad that you found a way to manage it.
JOËL: So you mentioned the up arrow key and how that's really nice in the terminal. Something that I've been relying on a lot recently is reverse history search, CTRL+R in the terminal. That allows me to, instead of, like, going one by one in order of the history, filter for something that matches the thing that I've written. So, in this case, I'll hit CTRL+R, type, you know, Rails DB or whatever, then immediately it shows me, oh, did you want this long command? Hit enter, and I'm done. Even if I've done, you know, 20 git commands between then and the last time I ran it.
STEPHANIE: Yeah, that's a great tip.
So, a few weeks ago, we received a listener question from John, and he was responding to an episode where I'd asked about what the grammatical term is for verbs that are also nouns. He told us about the phrase, a verbal noun, for which there's a specific term called gerund, which is basically, in English, the words ending in ING. So, the gerund version of bike would be biking.
And he pointed out a really interesting relationship that gerunds have to data modeling, where you can use a gerund to model something that you might describe as a verb, especially as a user interaction, but can be turned into a noun to form a resource that you might want to introduce CRUD operations for in your application.
So one example that he was telling us about is the idea of maybe confirming a reservation. And, you know, we think of that as an action, but there is also a noun form of that, which is a confirmation. And so, confirmation could be a new resource, right? It could even be backed at the database level. And now you have a simpler way of representing the idea of confirming a reservation that is more about the confirmation as the resource itself rather than some kind of appendage to the reservation itself.
JOËL: That's really cool. We get to have a crossover between grammar terms and programming, and being able to connect those two is always a fun day for me.
STEPHANIE: Yeah, I actually find it quite difficult, I think, to come up with noun forms of verbs on my own. Like, I just don't really think about resources that way. I'm so used to thinking about them in a more tangible way, I suppose. And it's really kind of cool that, you know, in the English language, we have turned these abstract ideas, these actions into, like, an object form.
JOËL: And this is particularly useful when we're trying to design RESTful either APIs or even just resources for a Rails app that's server-rendered so that instead of trying to create all these, like, extra actions on our controller that are verbs, we might decide to instead create new resources in the system, new nouns that people can do the standard 7 to.
STEPHANIE: Yes. I like that better than introducing custom controller actions or routes that deviate from RESTful conventions because, you know, I probably have seen a slash confirm reservation [laughs] URL. And, you know, this is, I think, an interesting way of avoiding having too many of those deviating endpoints.
JOËL: Yeah, I found that while Rails does have support for those, just all the built-in things play much more nicely if you're restricting yourself to the classic seven. And I think, in general, it's easier to model and think about things in a Rails app when you have a lot of noun resources rather than one giant controller with a bunch of kind of verb actions that you can do to it. In the more formal jargon, I think we might refer to that as RESTful style versus RPC style, a Remote Procedure Call.
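To make the contrast concrete, here is a rough sketch of the two styles in Rails routing terms, using the reservation/confirmation example (the names are illustrative, not from any real codebase):

    # RPC style: a verb bolted onto the reservations resource
    resources :reservations do
      member do
        post :confirm
      end
    end

    # RESTful style: the gerund-derived noun becomes its own resource
    resources :reservations do
      resource :confirmation, only: [:create, :destroy]
    end

    # app/controllers/confirmations_controller.rb
    class ConfirmationsController < ApplicationController
      def create
        reservation = Reservation.find(params[:reservation_id])
        reservation.update!(confirmed: true)
        redirect_to reservation
      end
    end

The second version stays within the classic seven controller actions, which is what tends to play nicely with Rails' built-in conventions.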
STEPHANIE: Could you tell me more about Remote Procedure Calls and what that means?
JOËL: The general idea is that it's almost like doing a method call on an object somewhere. And so, you would say, hey, I've got an account, and I want to call the confirm method on it because I know that maybe underlying this is an ActiveRecord account model. And the API or the web UI is just a really thin layer over those objects. And so, more or less, whatever your methods on your object are, can be accessed through the API. So the two kind of mirror each other.
STEPHANIE: Got it. That's interesting because I can see how someone might want to do that, especially if, you know, the account is the domain object they're using at the, you know, persistence layer, and maybe they're not quite able to see an abstraction for something else. And so, they kind of want to try to fit that into their API design.
JOËL: So I have a perhaps controversial opinion, which is that the resources in your Rails application, so your controllers, shouldn't map one-to-one with your database tables, your models.
STEPHANIE: So, are you saying that you are more likely to have more abstractions or various resources than what you might have at the database level?
JOËL: Well, you know what? Maybe more, but I would say, in general, different. And I think because both layers, the controller layer, and the model layer, are playing with very different sets of constraints. So when I'm designing database tables, I'm thinking in terms of normalization. And so, maybe I would take one big concept and split it up into smaller concepts, smaller tables because I need this data to be normalized so that there's no ambiguity when I'm making queries. So maybe something that's one resource at the controller layer might actually be multiple tables at the database layer.
But the inverse could also be true, right? You might have, in the example that John gave, you know, an account that has a single table in the database with just a Boolean field confirmed yes or no. And maybe there's just a generic account resource. But then, separately, there's also a confirmation resource. And so, now we've got more resources at the controller layer than at the database layer. So I think it can go either way, but they're just not tightly coupled to each other.
STEPHANIE: Yeah, that makes sense. I think another way that I've seen this manifest is when, like you said, maybe multiple database tables need to be updated by, you know, a request to this endpoint. And now we get into [chuckles] what some people may call services, or that territory. And what's interesting is that a lot of these service classes are named as verbs, right? So, an OrderCreator: whatever order of operations needs to happen on multiple database objects as a result of a user placing an order. But the idea that those are frequently named as verbs was kind of interesting to me and a bit of a connection to our new gerund tip.
JOËL: That's really interesting. I had not made that connection before. Because I think my first instinct would be to avoid a service object there and instead use something closer to a form object that takes the same idea and represents it as a noun, potentially with the same name as the resource. So maybe leaning really heavily into that idea of the verbal noun, not just in describing the controller or the route but then also maybe the object backing it, even if it's not connecting directly to a database table.
STEPHANIE: Interesting. So, in this case, would the form object be mapped closer to your controller resource?
JOËL: Potentially, yes. So maybe I do have some kind of, like, object that represents a confirmation and makes it nicer to render the confirmation form on the edit page or the new page. In this case, you know, it's probably just one checkbox, so maybe it's not worth creating an object. But if there were multiple fields, then yes, maybe it's nice to create an in-memory object that has the same name as the resource. Similar maybe for a resource that represents multiple underlying database tables. It can be nice to have kind of one object that represents all of them, almost like a facade, I guess.
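A minimal sketch of that kind of noun-named, in-memory object, using ActiveModel so it still plays nicely with Rails form helpers (the attribute names here are invented):

    # app/models/confirmation.rb -- not backed by a database table
    class Confirmation
      include ActiveModel::Model

      attr_accessor :reservation, :notes

      validates :reservation, presence: true

      # Acts as a facade: one object the controller and view can treat
      # as "the confirmation," whatever tables sit underneath.
      def save
        return false unless valid?

        reservation.update!(confirmed: true)
        true
      end
    end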
STEPHANIE: Yeah, that's really interesting. I like that idea of a facade, or it's, like, something at a higher level representing hopefully, like, some kind of meaning of all of these database objects together.
JOËL: I want to give a shout-out to a talk from a former thoughtboter, Derek Prior—actually, former Bike Shed host—from RailsConf 2017 called In Relentless Pursuit of REST, where he digs into a lot of these concepts, particularly how to model resources in your Rails app that don't necessarily map one to one with a database table, and why that can be a good thing. Have you seen that talk?
STEPHANIE: I haven't, but I love the title of it. It's a great pun. It's very evocative, I think because I'm really curious about this idea of a relentless pursuit. Because I think another way to react to that could be to be done with REST entirely and maybe go with something like GraphQL.
JOËL: So instead of a relentless pursuit, it's a relentless...what's the opposite of pursuing? Fleeing?
STEPHANIE: Fleeing? [laughs] I like how we arrived there at the same time. Yes. So now I'm thinking of I had mentioned a little bit ago on the show we had our spicy takes Lightning Talks on our Boost Team. And a fellow thoughtboter, Chris White, he had given a talk about Why REST Is Not the Best and for --
JOËL: Also, a great title.
STEPHANIE: Yes, also, a great title.
JOËL: I love the rhyming there.
STEPHANIE: Yeah. And his reaction to the idea of trying to conform user interactions that don't quite map to a noun or an obvious resource was to potentially introduce GraphQL, where you have one endpoint that can service really anything that you can think of, I suppose. But, in his example, he was making the argument that human interactions are not database resources, right?
And maybe if you're not able to find that abstraction as a noun or object, with GraphQL, you can encapsulate those ideas as something closer to actions, which, in the GraphQL world, I think are called mutations. But it is, I think, a whole world of, like, deciding what you want to be changed on the server side that is a little less constrained by having to come up with the right abstraction.
JOËL: I feel like GraphQL kind of takes that, like, complete opposite philosophy in that instead of saying, hey, let's have, like, this decoupling between the API layer and the database, GraphQL almost says, "No, let's lean into that." And yeah, you want to traverse the graph of, like, tables under the hood? Absolutely. You get to know the tables. You get to know how they're related to each other.
I guess, in theory, you could build a middle layer, and that's the graph that gets traversed rather than the graph of the tables. In practice, I think most people build it so that the API layer more or less has access directly to tables. Has that been your experience?
STEPHANIE: That's really interesting that you brought that up. I haven't worked with GraphQL in a while, but I was reading up on it before we started recording because I was kind of curious about how it might play with what we're talking about now. But the idea that it's graphed based, to me, was like, oh, like, that naturally, it could look very much like, you know, an entity graph of your relational database.
But the more I was reading about the GraphQL schema and different types, I realized that it could actually look quite different. And because it is a little bit closer to your UI layer, like, maybe you are building an abstraction that is more for serving as that middle layer between your front end and your back end.
JOËL: That's really interesting that you mentioned that because I feel like the sort of traditional way that APIs are built is that they are built by the back-end team. And oftentimes, they will reflect the database schema. But you kind of mentioned with GraphQL here, sometimes it's the opposite that happens. Instead of being driven kind of from the back towards the front, it might be driven from the front towards the back where the UI team is building something that says, hey, we need these objects. We need these connections. Can you expose them to us? And then they get access to them.
What has been your experience when you've been working with front ends that are backed by a GraphQL API?
STEPHANIE: I think I've tended to see a GraphQL API when you do have a pretty rich client-side application with a lot of user interactions that then need to, you know, go and fetch some data. And you, like, really, you know, obviously don't want a page reload, right?
So it's really interesting, actually, that you pointed out that it's, like, perhaps the front end or the UI driving the API. Because, on one hand, the flexibility is really nice. And there's a lot more freedom even in maybe, like, what the product can do or how it would look. On the other hand, what I've kind of also seen is that eventually, maybe we do just want an API that we can talk to separate from, you know, any kind of UI. And, at that point, we have to go and build a separate thing [laughs] for the same data.
JOËL: So we've been talking about structuring APIs and, like, boundaries and things like that. I think my personal favorite feature of GraphQL is not the graph part but the fact that it comes with a built-in schema. And that plays really nicely with some typed technologies. Particularly, I've used Elm with some of the GraphQL libraries there, and that experience is just really nice. Where it will tell you if your front-end code is not compatible with the current API schema, and it will generate some things based off the schema.
So you have this really nice feedback cycle where somebody makes a change to the API, or you want to make a change to the code, and it will tell you immediately is your front end compatible with the current state of the back end? Which is a classic problem with developing front-end code.
STEPHANIE: First of all, I think it's very funny that you admitted to not preferring the graph part of GraphQL as a graph enthusiast yourself. [laughs] But I think I'm in agreement with you because, like, normally, I'm looking at it in its schema format. And that makes a lot of sense to me.
But what you said was really interesting because, in some ways, we're now kind of going back to the idea of maybe boundaries blurring because the types that you are creating for GraphQL are kind of then servicing both your front end and your back end. Do you think that's accurate?
JOËL: Ooh. That is an important distinction. I think you can. And I want to say that in some TypeScript implementations, you do use the types on both sides. In Elm, typically, you would not unless there's something really primitive, like a string or something like that.
STEPHANIE: Okay, how does that work?
JOËL: So you have some conversion layer that happens.
STEPHANIE: Got it.
JOËL: Honestly, I think that's my preference, and not just at the front end versus API layer but kind of all throughout. So the shape of an object in the database should not be the same shape as the object in the business logic that runs on the back end, which should not be the same shape as the object in transport, so JSON or whatever, which is also not the same shape as the object in your front-end code. Those might be similar, but each of these layers has different responsibilities, different things it's trying to optimize for.
Your code should be built, in my opinion, in a way that allows all four of those layers to diverge in their interpretation of not only what maybe common entities are, so maybe a user looks slightly different at each of these layers, but maybe even what the entities are to start with. And that maybe, in the database, we don't have a full user; we've got a profile and an account, and those get merged somehow. And eventually, when it gets to the front end, all we care about is the concept of a user because that's what we need in that context.
STEPHANIE: Yeah, that's really interesting because now it almost sounds like separate systems, which they kind of are, and then finding a way to make them work also as one bigger [laughs] system. I would love to ask, though, what that conversion looks like to you. Or, like, how have you implemented that? Or, like, what kind of pattern would you use for that?
JOËL: So I'm going to give a shout-out to the article that I always give a shout-out to: Parse, Don't Validate. In general, yeah, you do a transformation, and potentially it can fail. Let's say I'm pulling data from a GraphQL API into an Elm app. Elm has some built-in libraries for doing those transformations and will tell you at compile time if you're incorrectly transforming the data that comes from the shape that we expect from the schema.
But just because the schema comes in as, like, a flat object with certain fields or maybe it's a deeply nested chain of objects in GraphQL, it doesn't mean that it has to be that way in your Elm app. So that transformation step, you get to sort of make it whatever you want.
So my general approach is, at each layer, forget what other people are sending you and just design the entities that you would like to. I've heard the term wish-driven development, which I really like. So just, you know, if you could have, like, to make your life easy, what would the entities look like? And then kind of work backwards from there to make that sort of perfect world a reality for you and make it play nicely with other systems. And, to me, that's true at every layer of the application.
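Translated into Ruby terms, that wish-driven, parse-don't-validate idea might look something like this sketch, where a profile-plus-account payload gets merged into the User the rest of the code wishes it had (the shapes are made up for illustration):

    # The entity we wish we had, independent of how the API sends it.
    User = Struct.new(:id, :full_name, keyword_init: true)

    def parse_user(payload)
      # Merge the separate profile and account pieces into one User,
      # failing loudly at the boundary if the payload is malformed.
      User.new(
        id: payload.fetch("account").fetch("id"),
        full_name: payload.fetch("profile").fetch("name")
      )
    rescue KeyError => e
      raise ArgumentError, "malformed user payload: #{e.message}"
    end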
STEPHANIE: Interesting. So I'm also imagining that the transformation kind of has to happen both ways, right? Like, the server needs a way to transform data from the front end or some, you know, whatever, third party. But that's also true of the front end because what you're kind of saying is that these will be different. [laughs]
JOËL: Right. And, in many ways, it has to be because JSON is a very limited format. But some of the fancier things that you might have access to either on the back end or on the front end might be challenging to represent natively in JSON. And a classic one would be what Elm calls a custom type. You know, they're also called tagged unions, discriminated unions, algebraic data types. These things go by a bajillion names, and it's confusing.
But they're really kind of awkward and hard, almost impossible to represent in straight-up JSON because JSON is a very limited kind of transportation format. So you have to almost, like, have a rehydration step on one side and a kind of packing down step on the other when you're reading or writing from a JSON API.
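In Ruby terms, that packing-down and rehydration step might look roughly like the sketch below, where a value that is either a card or a bank account travels through JSON as a tagged hash (the payment example is invented):

    require "json"

    CardPayment = Struct.new(:last_four, keyword_init: true)
    BankPayment = Struct.new(:account_number, keyword_init: true)

    # Packing down: flatten the in-memory value into JSON-friendly data.
    def pack(payment)
      case payment
      when CardPayment then { "type" => "card", "last_four" => payment.last_four }
      when BankPayment then { "type" => "bank", "account_number" => payment.account_number }
      end
    end

    # Rehydrating: rebuild the real value from the tagged hash, or fail.
    def unpack(hash)
      case hash.fetch("type")
      when "card" then CardPayment.new(last_four: hash.fetch("last_four"))
      when "bank" then BankPayment.new(account_number: hash.fetch("account_number"))
      else raise ArgumentError, "unknown payment type: #{hash["type"]}"
      end
    end

    payment = unpack(JSON.parse(JSON.generate(pack(CardPayment.new(last_four: "4242")))))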
STEPHANIE: Have you ever heard of or played that Wikipedia game Getting to Philosophy?
JOËL: I've done, I think, variations on it, the idea that you have a start and an end article, and then you have to either get through in the fewest amount of clicks, or it might be a timed thing, whoever can get to the target article first. Is that what you're referring to?
STEPHANIE: Yeah. So, in this case, I'm thinking, how many clicks through Wikipedia to get to the Wiki article about philosophy? And that's how I'm thinking about how we end up getting to [laughs] talking about types and parsing, and graphs even [laughs] on the show.
JOËL: It's all connected, almost as if it forms a graph of knowledge.
STEPHANIE: Learning that's another common topic on the show. [laughs] I think it's great. It's a lot of interesting lenses to view, like, the same things and just digging further and further deeper into them to always, like, come away with a little more perspective.
JOËL: So, in the vein of wish-driven development, if you're starting a brand-new front-end UI, what is your sort of dream approach for working with an API?
STEPHANIE: Wish-driven development is very visceral to me because I often think about when I'm working with legacy code and what my wishes and dreams were for the, you know, the stack or the technology or whatever. But, at that point, I don't really have the power to change it. You know, it's like I have what I have. And that's different from being in the driver's seat of a greenfield application where you're not just wishing. You're just deciding for yourself. You get to choose.
At the end of the day, though, I think, you know, you're likely starting from a simple application. And you haven't gotten to the point where you have, like, a lot of features that you have to figure out how to support and, like, complexity to manage. And, you know, you don't even know if you're going to get there. So I would probably start with REST.
JOËL: So we started this episode from a very back-end perspective where we're talking about Rails, and routes, and controllers. And we kind of ended it talking from a very front-end perspective. We also contrasted kind of a more RESTful approach, versus GraphQL, versus more kind of old-school RPC-style routing.
And now, I'm almost starting to wonder if there's some kind of correlation between whether someone primarily works from the back end and maybe likes, let's say, REST versus maybe somebody on the front end maybe preferring GraphQL. So I'd be happy for any of our listeners who have strong opinions preferring GraphQL, or REST, or something else; message us at [email protected] and let us know. And, if you do, please let us know if you're primarily a front-end or a back-end developer because I think it would be really fun to see any connections there.
STEPHANIE: Absolutely. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has a fascinating discovery! He learned a new nuance around working with dependency graphs. Stephanie just finished playing a 100-hour video game on Nintendo Switch: a Japanese role-playing game called Octopath Traveler II. On the work front, she is struggling with a lot of churn in acceptance criteria and ideas about how features should work.
Transcript:
JOËL: You're the one who controls the pacing here.
STEPHANIE: Oh, I am. Okay, great.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So long-time Bike Shed listeners will know that I'm a huge fan of dependency graphs for modeling all sorts of problems and particularly when trying to figure out how to work in an iterative fashion where you can do a bunch of small chunks of work that are independent, that can be shipped one at a time without having your software be in a breaking state in all of these intermediate steps. And I recently made a really exciting discovery, or I learned a new nuance around working with dependency graphs.
So the idea is that if you have a series of entities that have dependencies on each other, so maybe you're trying to build, let's say, some kind of object model or maybe a series of database tables that will reference each other, that kind of thing, if you draw a dependency graph where each bubble on your graph points to other bubbles that it depends on, that means that it can't be created without those other things already existing. Then, in order to create all of those entities for the first time, let's say they're database tables, you need to work your way from kind of the outside in.
You start with any bubbles on your graph that have no arrows going out from them. That means they have no dependencies. They can be safely built on their own, and then you kind of work your way backwards up the arrows. And that's how I've sort of thought about working with dependency graphs for a long time.
Recently, I've been doing some work that involves deleting entities in such a graph. So, again, let's say we're talking about database tables. What I came to realize is that deleting works in the opposite order. So, if you have a table that has other tables that depend on it, but it doesn't depend on anything, that's the first one you want to create. But it's also the last one you want to delete. So, when you're deleting, you want to start with the table that maybe has dependencies on other tables, but no other tables depend on it. It is going to be kind of like the root node of your dependency graph.
So I guess the short guideline here is when you're creating, work from the bottom up or work from the leaves inward, and when you're deleting, work from the top-down or work from the root outward or roots because a graph can have multiple roots; it's not a tree.
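One way to picture that guideline in code is a toy topological sort (this sketch isn't from the episode; it just illustrates the ordering): create in dependency order, delete in the reverse order. Ruby's standard library ships TSort for exactly this, and it raises TSort::Cyclic when the graph contains a cycle.

    require "tsort"

    # Each table points at the tables it depends on.
    class TableGraph
      include TSort

      def initialize(deps)
        @deps = deps
      end

      def tsort_each_node(&block)
        @deps.each_key(&block)
      end

      def tsort_each_child(node, &block)
        @deps.fetch(node, []).each(&block)
      end
    end

    graph = TableGraph.new({
      "line_items" => ["orders", "products"],
      "orders"     => ["users"],
      "products"   => [],
      "users"      => []
    })

    creation_order = graph.tsort            # leaves first: dependencies before dependents
    deletion_order = creation_order.reverse # roots first: dependents before dependencies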
STEPHANIE: That is interesting. I'm wondering, did you have a mental model for managing deleting of dependencies prior?
JOËL: No. I've always worked with creating new things. And I went into this task thinking that deleting would be just like creating and then was like, wait a minute, that doesn't work. And then, you know, a few cycles later, realized, oh, wait, deleting is the opposite of creating when you're navigating the graph. And, all of a sudden, I feel like I've got a much clearer mental model or just another way of thinking about how to work with something like this.
STEPHANIE: Cool. That actually got me thinking about a case where you might have a circular dependency. Is that something you've considered yet?
JOËL: Yes. So, when you have a dependency graph, and you've got a circular dependency, that's a big problem because...so, in the creating model, there is no leaf node, if you will, because they both reference each other. So that means that each of these entities cannot be created on its own, the entire cycle. And maybe you've got only two, but maybe your cycle is, you know, ten entities big. The entire cycle is going to be shipped as one massive change.
So something that I often try to do is if I draw a dependency graph out and notice, wait a minute, I do have cyclical dependencies, the question then becomes, can I break that cycle to allow myself to work iteratively? Because otherwise, I know that there's a big chunk that can't be done iteratively. It just has to be done all at once.
STEPHANIE: Yeah, that's really interesting because I've certainly been in that situation where I don't realize until it's too late, where I've started going down the path thinking that, you know, I could just remove this one thing, or make this one change, and then find myself suddenly, you know, coming to the realization, oh, this other thing is now going to have to change.
And then, at that point, there's almost kind of like the sunken cost fallacy [laughs] a little bit where you're like, well, I'm already in it. So, why don't I keep going? But your strategy of trying to find a way to break that cyclica...that is two words combined. [laughs] I meant to say circular dependency [laughs] is the right way to avoid just having to do it all in one go. Have you had to break up a cycle like that before?
JOËL: Yes. I do it on a semi-frequent basis. The fancy term here for what I'm looking for when I'm building out a dependency graph is a directed acyclic graph. That's a graph theory or a computer science term that you'll hear thrown around a lot, DAG. I often like to...when building out a series of tasks that might also form a graph because you don't just model entities in your system; you might model a series of tasks as a graph.
If there's a cycle in the graph, typically, I can break that using something like the strangler fig pattern, which is a way to kind of have some intermediate steps that are non-breaking that then lead you to the refactor that you want. And I've used the strangler fig pattern for a long time, never realizing until later that, oh, what I'm actually doing is breaking cycles in my task dependency graph.
STEPHANIE: Hmm. I'm curious if you have noticed how these cycles come to be because I almost imagine that they get introduced over time, where you maybe did start with a parent and then you, you know, had dependencies. But then, over time, somehow, that circular dependency gets introduced. And I'm wondering if part of figuring out how to break that cycle is determining how things were introduced, like, over time.
JOËL: In my experience, this happens in a lot of different ways because I'm using dependency graphs like this to give myself a mental model for a lot of different kinds of things. So maybe I'm thinking in terms of database tables. And so those might get a circular dependency that gets added over time as the system grows.
But I'm also using it sometimes to model maybe a series of tasks. So I take a large task, and I break it down into subtasks that are all connected to each other. And that doesn't tend to sort of evolve over time in the same way that a series of database tables do. So I think it's very context-dependent. But there are definitely situations where it will be like you said, something that kind of evolves over time.
STEPHANIE: That makes sense. Well, I'm excited for you to get to deleting some potential code or database tables that are no longer in use. That sounds like a developer's dream [laughs] to clean up all that stuff.
JOËL: It's interesting because it's...a move operation is effectively what's happening. So I'm recreating tables in another system, pointing the ActiveRecord to this new system, and then deleting the existing ones in the local database. So, in a sense, I'm kind of traveling up this dependency graph from the leaf nodes into the root and then back down from the root to the leaves as I'm creating and then deleting everything or creating in one system, and then going back and deleting in the other system.
STEPHANIE: Got it. Okay, so not necessarily a net negative but, like you said, a move or just having to gradually replace to use a new system.
JOËL: That's right. And we're trying not to do this as, you know, okay, we're going to take the system down and move 50 tables from one system to another. But instead, saying, like, you know, one at a time, we're going to move these things over. And it's going to be small, incremental change over the course of a couple of weeks. And they're all pretty safe to deploy, and we feel good about them.
STEPHANIE: That's good. I'm glad you feel good. [laughs] We should all be able to feel good when we make changes like that.
JOËL: It's going to make my Fridays just so much more low-key just, like, yeah, hit that deploy button. It's okay.
So, Stephanie, what is new in your world?
STEPHANIE: So this is not work-related at all. But I just finished playing a 100-hour video game on my Nintendo Switch. [laughs] I finished a Japanese role-playing game called Octopath Traveler II. And I have never really played a game like this before. I've not, you know, put in many, many hours into something that then had an end, like, a completion.
So, at the end of this very long game that had a very, you know, compelling and engaging story and I was invested in all of these characters, and by the time the credits were rolling, I felt a little sad to be leaving this world that I have been in many evenings over the last couple of months. Yeah, I don't know, I'm feeling both a little sad because, you know like I said, I got really invested in this game, but now I'm also kind of glad to have some free time back in [laughs] my life because that has definitely been the primary, like, evening activity that I've been doing to relax.
JOËL: It sounds like this game had a very, like, a particularly immersive world that really pulled you in.
STEPHANIE: It did. It did. It has these eight, like, different characters that you follow, like, different chapters and all of their stories, and then they all kind of come together as well. And the world was huge in this game. There were so many little towns to explore. And I didn't realize I was a completionist type. But I found myself running around opening every chest, talking to every NPC, and making sure that I, you know, collected all of my items [chuckles] before moving on.
I also finished all of the side quests, which is, I think, you know, how I managed to put in over 100 hours into it. But yeah, it was very immersive, and I really enjoyed it. I don't know if this will become a norm for me. I know there are some people who are, you know, JRPG diehards and play a lot of these kinds of games, but they're a real, like, time investment for sure.
JOËL: Are there achievements for completing everything?
STEPHANIE: Not that I can tell on the Switch. I do know that, like, on other systems, you can see your progress on having done all of the things there are to do. But I think it's actually kind of better for me to just play [laughs] to just, like, think that I've done it all but not really, like, have something that tells me whether or not I've done it because then I would feel a lot more neurotic, I think, about being able to let it go where I am now. [laughs]
JOËL: Right. If we've got, like, an explicit checklist of things or a progress bar, then it feels like you got to get to all the things.
STEPHANIE: Yeah, exactly. I think there are still, you know, a couple more things that I wrote down on my little checklist of tasks that I would want to do once I feel like I want to come back to the game. But for now, like I said, I watched the credits roll. I teared up a little bit, you know, thinking about and reminiscing on my adventure with these characters, and I'm ready to put it down for a bit.
JOËL: Did I hear correctly that you made a checklist for this game of things you wanted to do?
STEPHANIE: Yes, [laughs] I did.
JOËL: That's amazing. I love that.
STEPHANIE: Yeah, you know, there are just so many things almost kind of like work where I had to, like, break down some of my goals. I wanted to, like, hit a certain level. I wanted to, you know, make sure I defeat these bosses that would help me get to those levels. And yeah, I got very into it. It was definitely a big part of my life for a couple of months.
I got it originally because I needed a game to play on my flight to Asia back when I went to Japan. And I'm like, oh, like, this looks, you know, fun and engaging, and it will distract me for my, you know, over 10-hour flight. Turns out it distracted me for many, many more hours over several months [laughs] since then. But I had a great time. So yeah, that's what's new for me. Again, it's something I'd never really done before. I will say though I am very behind on my reading goal as a result. [chuckles]
JOËL: I feel like this is a classic developer thing to do is, like, use the tools that we're used to in our job and then apply them to other parts of our life. And now it's just like, okay, well, I made a Kanban board to track my progress in this video game. You know, or, in my case, I'm definitely guilty of having drawn a dependency graph for the crafting tree for some video game. So I feel you really strongly there.
STEPHANIE: Yes, I'm nodding heavily in agreement. I think it just scratches the same kind of itch of, you know, achieving, like, little things and then achieving one big thing.
JOËL: So, speaking of places that are nice to have checklists and, like, well-defined requirements, you and I were talking earlier, and you have recently found some frustration around having user stories be defined well on your current project.
STEPHANIE: Yes. So I've been reflecting a little bit about my current project and noticing what I think I might call product smells; I'm not quite sure, just some things I'm seeing in our day-to-day workflow that is getting me thinking. And I'm curious to hear if you've experienced something similar.
But I find myself being tasked with a ticket that is quite vague. And maybe this was written by a product owner, or maybe it was written by another developer. And it is not quite actionable yet, so I have to go through the process of figuring out what I'm really needing to do here.
I think another thing that has been quite frustrating is, you know, maybe we do find out what we want to do. And, like, I'll go back into the ticket, write down the requirements that I gathered, and do the ticket. I'll ship whatever change was required, and then I'll hear back from someone in a meeting or either as a one-off request in Slack. And it'll be like, "Hey, like, actually, you know, we want this to be different." And maybe you previously said that "Oh, the value for something would be 30. But now we found out more information; it should be 20. And so could you, like, make that change?"
And then I'm not really sure what the best way to document a change like that is because it, you know, maybe existed in the previous ticket, but now it has changed. And do I create a new ticket for this, or do I just go ahead and make the code change? Like, who would know this information that we're now carrying, about 20 being the value for, let's say, days or whatnot, meaning something in the code that we're writing?
And I guess I've just been really curious about how to make sure that this doesn't become the norm where a lot of these conversations are just happening, and, you know, the people who happen to be in them know that this change happened. But then later on, someone is asking questions about, like, hey, like, when did this change? Or I expected this to be 30. But is this, you know, behaving as expected?
So that was [laughs] a bit of a nebulous way of describing just, like, this churn that I feel with being the executor of work. But then, like, a lot of these things changing above me or separate from me and figuring out how to manage that.
JOËL: When you were describing this scenario where you've done the work, and then someone's like, "Oh, could we change this value from, like, 30 to 20?" I'm thinking in my mind of the sort of bane that a lot of our designers face where it's like, you know, they have a design. They work on it; they do it. And then show it to a client, and the client is like, "I love this design. But could we just shift this box over, like, one pixel?"
Like, they're, like, tiny, tiny little changes that kind of get requested after you've done, like, this big thing. And, oftentimes, those pile up. It's like, you shift it one pixel. It's like, oh, actually, you know what? Why don't we do it two pixels? And then it's, like, never-ending cycles, sometimes, of, like, minute little changes.
STEPHANIE: Yeah. But the minute changes really add up into, I think, really different behavior than what you maybe had decided as a team originally. And in the process of changing and evolving, I don't really know where documentation fits in.
I've been working on this project that had a pretty comprehensive product design doc, where they had decided upfront on, you know, how the application is going to behave in many different scenarios. But again, like, that has changed over time. And when I recently had to onboard someone new to this project, you know, we sent over this document, and we're like, yeah, you can, you know, feel free to peruse it. But it's actually quite outdated.
And then, similarly, right now, since the features that I'm working on are going through QA, there's been a lot of back and forth about, I'm seeing this, but the doc said that Y is supposed to happen, and I'm not sure if that's a bug or not. And I or someone else has to respond with that context that we were holding in our head about when that change happened.
JOËL: That's really interesting. And I think it varies a lot based off the size of the organization. In a smaller organization, you're probably doing a lot of the requirements gathering yourself. You're talking to all the stakeholders. You're probably doing the QA yourself, or you're walking somebody else through QA. Versus a large organization, there might be an entirely separate product team, and a separate QA team, and a separate dev team.
And a danger that I've often seen is where all of these teams are just kind of tossing work over the fence. And all you're given is a, you know, a ticket of, like, execute on this. Basically, turn these specs into code. And then you do that, and then you toss it over the fence to the QA team. And they check does the code do these things? And there's so much context that can easily get lost from one step to another. That being said, I think a lot of devs find it frustrating to do some of the requirements gathering work.
How do you feel in general about scoping out a ticket or doing follow-up conversations with the product team about, like, "Hey, your idea for the ticket is this. How do you feel about doing these things? Or what if we cut these things?" Are those conversations that you enjoy having? Is that a fun part of the developer role for you? Or do you kind of wish that, like, somebody else did all of that so that you could, like, go heads down just writing code?
STEPHANIE: I think it depends. That's a great question. Actually, I have so many thoughts in response. So let me try to figure out where I want to go from here.
But I think I used to not like it. I used to be stressed out by it, and sometimes I still am. But when I thought my role was purely executing, to receive a ticket that is a bit vague, you know, I might have been left feeling, like, stuck, like, not knowing where to go from there.
But now that has changed a bit because I received some really helpful feedback from an old manager of mine who was kind of invested in my growth. And she really suggested learning to become more comfortable with ambiguity because that just becomes more and more your job, I think, as you progress in your career. And so now I at least know what information I need to go get and have, you know, strategies for doing so.
And also knowing that it's my job, like, knowing that no one else might be doing it, and it might just be me so that I can therefore get this ticket done. Because, like you said, that problem of throwing the work over the fence to someone else, at some point, that doesn't work because everyone has too much on their plates. And you have to just decide to be the one to seek the information that you need.
JOËL: I think one way that, as developers, we bring a lot of value is that we help to cut through a lot of that ambiguity. I think if we see our role as merely translating a requirements document into code, that's a very simplistic point of view of what a talented developer does. So, like you said, as we grow in our careers, we start dealing with less and less defined things. We often have to start defining the problems that we're given.
And we have to have these conversations with other teams to figure out what exactly we want to do. And maybe better understand why is it that we want to do this thing. What is the purpose of it? How are we going to get there? And my favorite: Do we have to do all of these things to hit the minimum value of this goal? Can I split this into multiple tickets? I love breaking down work. If I can make the ticket smaller, I'm all about that.
STEPHANIE: Yes. I'm well aware. It's interesting what you said, though, because, like, yes, that becomes, in some ways, our superpower. But, for me, where the pain comes in is when that's not part of the expectations, where I am maybe tasked with something that is not clear enough, and yet the time that I need to find that clarity is not given the respect that I think it deserves to build a good product, because the expectation is that I should already be making progress on this ticket and that it will be delivered soon.
You know, in that situation, I wish I had been in the room earlier. I wish I had been part of the process for developing the product strategy, or even just, like, have come in earlier to be able to ask, you know, why are we building this? And, like, what are some of the limitations on the technical side that we have? Because often, I find that it is a little too...not necessarily too late, but it is quite down the road that we then have to have these conversations, and it doesn't feel good.
JOËL: I think that's one of the powerful things that came out of the agile movement was the idea that you have these cross-functional teams, that you don't have a separate product team, a separate dev team, a separate QA team, a separate design team that are all these isolated islands. But instead, you say, okay, we have a cross-functional team that is working on this aspect of the product. And it will be some product people, some dev people, some designers kind of all working together and communicating with each other. I know, shocking concept.
And even depending on the context, a big idea is that the client or the customer is a part of that team. So, when we at thoughtbot work with a client, especially when they are maybe a smaller client like a startup founder, we make sure that they feel like they are a part of the team. They are involved in various meetings where we decide things. They have input. You know, they're part of that feedback cycle that we build. But that can also be the case for a larger company where your internal stakeholders are kind of built-in to be sort of part of your team.
STEPHANIE: I've seen so many different flavors of trying to do Agile [laughs] that it has lost a little bit of meaning for me these days. And maybe we've incorporated some aspects of it. But then that idea of the tight feedback loops and then a cross-functional team where everyone is communicating that part has gotten a little bit lost, at least on my project. And I imagine that this is common, and our listeners might be finding themselves in a similar situation where things are starting to feel a little more like handing off and a little more like waterfall. [laughs]
I'm curious, though, if you found yourself being requested to make a change from what the original decision was, how would you go about documenting that or not documenting it? Where do you think the best place is for that information about how this feature is now supposed to work? Where should that live?
JOËL: Are you talking about where do we document that a decision was made to change the original requirements of a task?
STEPHANIE: Yes.
JOËL: In general, I think that should live on the ticket just because as long as the ticket is live, I think it's good to have all the context on that ticket for whoever's working on it to have access at a glance.
Sometimes it's worth it to say, you know what? We don't want to just keep this ticket live for weeks or maybe months on end. Let's ship this ticket and create a follow-up to make a change later, especially if it's a change that's less important, where it's like, you know what? That would be nice to have...but, again, like, scope creep is a real danger. And so, again, me with the aggressive breaking up of tickets, I love to say, "That's a great idea. It would make a great change, not part of this ticket." So oftentimes, those changes, I will push them into another ticket.
STEPHANIE: That's interesting. What about documentation beyond the current work? So I'm thinking about once, you know, a feature is delivered, how do people in the organization then know how this feature is supposed to work? Like moving forward as something that is customer-facing.
JOËL: That can vary a lot by organization, I think because there's a couple of different aspects to this. You have maybe some internal-facing documentation; maybe some customer support people need to know about the way the interface has changed. And then you also have customer-facing documentation where maybe you want some sort of, you know, you want a blog post talking about the new feature or some kind of release notes or something like that to be shared with your customers. And compiling that might look very different than what you do for your internal service reps.
STEPHANIE: Yeah, I like that. It's true that the customer documentation is really helpful. At least, the product that I'm working on has very comprehensive documentation for its customers about how to use it. And that has been really helpful because, hopefully, that should be the truest [laughs] information out there.
But sometimes, you know, I find myself in meetings where none of us really know what happens. For example, a question that was asked recently is: our product has a free trial capability, but it was unclear what happens to all of the data that the customer gets access to as part of that feature. Like, what happens to that data after the free trial ends? Like, if they then have purchased a license, do they still have access to their free trial data? If, you know, there's a lapse in between, does it just get deleted, or will it show up again? And no one really knew the answer to that.
And I think that was another area that got my spidey senses tingling a little bit; I think because it reminded me of...there was a definition I read somewhere of legacy code, that it's basically when the person who has the most context about how a piece of code works leaves the company and that institutional knowledge no longer exists, like, that is legacy code. And I almost think that that also applies to product a little bit, where a legacy product is something where no one quite knows what is supposed to happen, but it's still being used by users.
JOËL: That's a really fun definition there. I think there's sort of two related questions that are slightly different here, which is, one, how does the code behave? So, what happens when someone's trial period expires? And it's quite possible that no one on the team knows what actually happens when that time expires.
And then the second question is, what should happen when a trial expires? And it's possible, again, that the product team didn't think through any of the edge cases. They only went for the happy path. And so it's possible if that is also fully undefined and no one knows.
STEPHANIE: Yeah, I like that distinction you made a lot because they definitely go hand in hand, where someone realizes that some weird edge case happened, and then suddenly, they're asking those questions. And, you know, we realized, like, oh, like, that just didn't have enough, like, intention or thought behind how it was coded. So, like, it really is a case of who knows, right? Just whatever seems to happen.
And I think that this actually kind of reminds me of a previous episode we did about empowering other departments in the company because, ultimately, a lot of those questions about, like, how does this work? What happens? They end up going to a developer who has to go and read the code and report back. And while, you know, we do have that power, it can also be a bit of a curse, I think. [laughs]
JOËL: I think this is an area where, as developers, we're maybe particularly skilled. Because of the work that we do, our brains are kind of wired to think about all of the edge cases, and sometimes they can be really annoying.
But I think there's a lot of value sometimes when maybe the product team comes to us with a maybe somewhat nebulously scoped ticket or a series of tickets for, let's say, a free trial period feature that only goes through the happy path. And then sometimes it's up to us to push back or to follow up and say, "Okay, great. We've got a bunch of tickets for a free trial period. Have you thought about what happens after a trial expires but the person hasn't converted to a paying customer?" And then, oftentimes, the answer is like, "Oh, no, we didn't think about that."
And I think oftentimes, as developers, our job is to kind of, like, seek out a lot of those edge cases. And we have a lot of techniques and methodologies that we use to try to find edge cases, things like test-driven development, various modeling tools that we'll try to use to make sure that we don't just crash or do something bad in our code.
But what should the actual behavior be? That's a conversation that we need to have. And hopefully, that's one that maybe the product team has already had on their own. But oftentimes, the benefit of having that cross-functional team is the ability to kind of have that back and forth and say, "Hey, what about this edge case? Have we thought about that? How do we want that to behave?"
STEPHANIE: Yeah, that actually made me think about the idea of tech debt but almost at a product level, where, hey, it turns out that we have all of these things that we didn't quite think through, and it's now causing problems. But how much do we invest in revisiting it? Because, you know, maybe this feature is several years old, and it was working just okay enough for it to, you know, be valuable. But we're now discovering these things and, you know, like, do we invest in them? Or are we more focused on, you know, coming up with new things and new features for our customers?
JOËL: That's a classic prioritization problem. It also kind of reminds me of the idea of an MVP. What are the actual, like, minimum set of features that you need in order to try out something or to ship something to customers? And, you know, maybe we don't need some special behavior if your trial account doesn't convert. Maybe we're okay [laughs] that you log in, and the app just crashes. Probably not, because we would probably want you to convert to a paying customer at some point. But maybe we're okay if you just get a screen that says, "You have no projects," when, in fact, you did have projects. It's just that you're no longer on the free trial.
Again, for business reasons, probably we want a call to action there that says, "You have five projects. They are not available to you. Please pay to unlock your projects again." That probably converts better. But, again, now that is a business decision. And that becomes a prioritization question that the team as a whole gets to address.
Sometimes it can also be some really fun prioritization things where if you're on a really tight schedule, you might ship some features live knowing that you have a time limit, but you don't have to necessarily ship other things. So let's say you've got a 30-day trial, and maybe you ship that before you've even implemented what the dashboard will look like after your free trial has expired, and that's fine because no one's going to hit that condition for 30 days. So now you've got 30 days to go out and handle that condition.
And maybe that's okay because it allowed you to get to market a little bit faster, allowed you to cut scope, break those tickets, yes, and just move that much faster. But it does require discipline because now you're on the clock. You've got 30 days to fix that edge case or potentially face some unhappy customers.
STEPHANIE: Yeah, I think that's quite a funny way to handle it. It's really ruthless prioritization [laughs] there.
But what you said was very interesting to me because I was thinking about how there is such a focus on new feature development and that being the thing that will attract customers or generate more money. But there is something to be said about investigating some of the old features of our existing system and finding opportunities there. And oftentimes, revisiting them will reduce the amount of pain [chuckles] that, you know, developers feel having to kind of keep track or have an eye on, like, where things are erroring out, but then not have the time to really invest in making it better or making that part of the product better.
JOËL: I think that's a great opportunity then to have a conversation with other parts of the team. Typically, I think you have to convert some of those into more of a business case. So the business people in the company or the product people might not care about the sort of raw metrics that you see as a developer. Oh, we got an exception with a stack trace in this part of our app. What does that even mean?
But if you say, hey, people who signed up for a free trial and then didn't immediately convert within 30 days who want to come back a month later and convert are unable to do so. And we've seen that that's about 10% of the people who signed up for a free trial. Well, now that's an interesting business question.
Are we losing out on potentially 10% of customer acquisition? I'll bet the sales and marketing people care a lot about that. I'll bet the business people care a lot about that. The product people probably care a lot about that. And now we can have a conversation about should we prioritize this thing? Are these metrics that we should improve? Is this a part of our code that's worth investing in?
STEPHANIE: Yeah, I like that because, in some ways, asking those questions about how does it work? Like, that is really an opportunity because then you can find out, and then you can make decisions about whether it's currently providing enough value as is or if there is something hiding under there to leverage.
JOËL: And I think that's one of the other places where, as developers, we provide value to clients is that we can sort of talk both languages. We can talk product language. We can talk business language. But we can also talk code. And so when we see things like that in code, sort of translate that into, like, what are the business impacts of this code change? Which then allows everyone to make the best possible decisions for the mission of the organization that you're a part of.
So we've talked about a variety of sort of patterns and anti-patterns that surround working through some of this churn on a product. I'm curious, Stephanie, for you, what's maybe one concrete thing that you've done recently that you've found has really helped you navigate this and maybe help reduce some of the stress that you feel as you navigate through this?
STEPHANIE: Yeah, I think, for me, one of the worst things is when that discussion is had in a meeting or a [inaudible 35:45] and then is not put anywhere. And so, one thing I've been making sure to do is either asking the person who made the request to write it down, either on the ticket or in Slack, or I will write it down, you know, I will document the outcomes of what we talked about and put it in a public space so that people are aware.
I think that small action has been helpful because we hold so much of this in our heads. And I've been finding that it ends up being hard for people to rotate onto different projects and, you know, get onboarded and up to speed effectively because there's so much knowledge and context transfer happening. But even just putting it in a place where maybe it's not relevant to everyone, but at least they see it. And then the next time that they're asked or maybe, like, do come around to working on this, they, like, have some fragment of a memory that they saw something about this. So that has been really helpful.
It actually dovetails really nicely into what we were talking about with opportunities, too, because once it's out there, like, maybe someone else will see it and have an idea about how it could be better or that change not being what they expected, and they can weigh in a little more. So that's what I'm trying to do.
And I think it's also nice to see how often that happens, right? If we're constantly seeing things changing because we have a written record of it, that could be helpful in bringing up and investigating further as to, like, why is this happening? Like, why do we experience this churn? And is that something we want to address?
JOËL: Yeah, because an element that we haven't talked at all about is any sort of feedback cycle or retrospective, where we can talk about these things and having that written trail and saying, "Oh, we changed this decision five times in the past week, like, really churned there." Now maybe that prioritizes it to be an important thing to talk about and to improve for the next cycle.
STEPHANIE: What I feel really strongly about is when, you know, each individual on a team is feeling this pain, but it's not known that it's actually a collective issue. Because maybe these things are happening in one-on-one conversations, and we don't realize that, like, oh, maybe there is something bigger here that we could improve on. And so the more eyes on it there are, the more visible it is, and I think the easier it is to address.
JOËL: I love that, the power of writing things down. On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie went to her first WNBA game. Also: Bingo. Joël's new project has him trying to bring in multiple databases to back their ActiveRecord models. He's never done multi-database setups in Rails before, and he doesn't hate it.
Stephanie shares bits from a discussion with former Bike Shed host Steph Viccari about learning goals. Four elements stood out:
Adventure (try something new)
Passion (topic)
Profit (from recent learnings)
Low-risk (applicable today)
= APPL
Stephanie and Joël discuss what motivates them, what they find interesting vs. what has immediate business value, and how they advocate for themselves in these situations. They ponder if these topics can bring long-term value and discuss the impact that learning Elm had on Joël's client work.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: All right, I have a new-new thing and an old-new thing to share with you today. So the new-new thing is that I went to my first WNBA game [laughs] last week, which is also my third professional sports game ever, probably. I am not a sports person. But a rather new friend of mine invited me to go with her because they are fans, and so I was like, yeah, sure. I'll try anything once. And I went, and I had a great time. It was very exciting.
I mean, I know the basic rules of basketball, right? Get the ball in the hoop. But I was very surprised to see how fast-paced it was. And, you know, I was like, wow, like, this is so much fun. There's so much going on, like, the music, you know, the crowd. It was very energizing. And then my friend actually told me that that was a pretty slow game, [chuckles] relative to how they normally go. And I was like, oh, wow, like, if that was slow, then I can't wait for a real competitive [laughs] game next time.
So that's my new-new thing. I had a good time. Will do it again. I'm just, like, a 15-minute bike ride from the stadium for our team in Chicago. It's called The Sky. That's our WNBA team. So yeah, I'm looking forward to being basketball Stephanie, I guess. [chuckles]
JOËL: That's really cool. How does the speed compare to other sports you've gone to see?
STEPHANIE: I think this is why I was interested because I've really only seen baseball, which I know very little about. And that, I think, is, like, a much slower-paced kind of sport. Yeah, I have some memories of going to, like, college football games, which were also, like, quite slow. I just remember standing around for a while. [laughs] So I think basketball might be the thing for me, at least in terms of engaging my interest.
JOËL: You want something that actually engages you with the sport the whole time. It's not just a social event themed around occasionally watching someone do something.
STEPHANIE: Yes, exactly. I also enjoyed the half-time performances, you know, there was just, like, a local dance team. And I thought that was all just very fun. And, yes, I had a lot to, you know, just, like, point to and ask questions about because there was just so much going on, as opposed to sitting and waiting, at least that was my experience [laughs] at other kinds of sports games.
As for the old-new thing, now that it's summer, there is a local bar near me that does bingo every week. But it's not just normal bingo. It's called veggie bingo, which I realize is kind of confusing [chuckles] if you just, like, call it veggie bingo, but it's bingo where you win vegetables or, like, produce from local community gardens and other, you know, small batch food items. And I had a great time doing it last year. I met some new friends. It just became our weekly hangout. And so I'm looking forward to doing that again.
And, I don't know, I'm just glad I have fun things to share about what's new in my world now that the weather is warm and I'm doing stuff again. I feel like there was one point in the winter where I was coming [chuckles] onto the show and sharing how I had just gotten a heated blanket in the middle of winter, and that was the most exciting thing going on for me. So it feels good to be able to bring up some new stuff.
JOËL: Seasonality is a thing, right? And, you know, there are rhythms in life. And sometimes things are more fast-paced, sometimes they're a bit slower. That's really exciting. Did you take any produce home, or did you win anything when you went to play?
STEPHANIE: I did. I won a big bag of produce the last time that I went. At this point, it was last season. But it was right before I was about to go on vacation. So I ended up --
JOËL: Oh no.
STEPHANIE: [chuckles] Right. I ended up not being able to, you know, keep it in the fridge and just giving it away to my friends who did not win. So I think it was a good situation overall. That's my tip, is go to bingo or any kind of prize-winning hang out as a group, and then you can share the rewards. It's very exciting. Even if you don't win, you know, like, probably someone else at your table will win, and that is equally fun.
JOËL: I think the closest I've been to that experience is going to play, like, bar trivia with some friends and then winning a gift card that covers our dinner and drinks for the evening.
STEPHANIE: Yeah, yeah, that's great. I used to go to a local trivia around me too. The best part about bingo, though, is that it requires no skill at all. [laughs] I, yeah, didn't realize, again, how into these kinds of things I would be until I just tried it out. Like, that was...bingo is another thing I don't think I would have internally decided to go do. But yeah, my friends just have all these great ideas about fun things to do, and I will happily join them.
So, Joël, what's new in your world?
JOËL: So I've recently started a new client project. And one of the really interesting things that I've been doing on this project is trying to bring in multiple databases to back our ActiveRecord models. This is a Rails app. I've never done multi-database setups in Rails before. It's been a feature since Rails 6, but this is my first time interacting with that system. And, you know, it's actually pretty nice.
STEPHANIE: Really? It ended up being pretty straightforward or pretty easy to set up?
JOËL: Yeah. There's a little bit of futzing around you have to do with the database YAML configuration file. And then what you end up doing is setting up another base class for your ActiveRecord models to inherit from. So, typically, you have that application record that you would inherit from for your primary database. But for other databases, if you want a model to be backed by a table from that system, then you would have a separate base class that all of those models inherit from, and that's pretty much it. Everything else just works.
A bunch of your Rake tasks work a little bit differently. So you've got to, like, configure your setup scripts and your test scripts and all that kind of thing a little bit differently. But yeah, you can just query, do all the normal things you do with an ActiveRecord model, but it's reading from a different database.
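As a rough sketch of the kind of setup Joël describes here, assuming a hypothetical secondary database named "analytics" (the database name, class names, and YAML layout are made up for illustration and will vary by app), a Rails 6+ multi-database configuration might look something like this:

# config/database.yml (abridged; the "analytics" entry is a hypothetical example)
# production:
#   primary:
#     database: my_app_production
#   analytics:
#     database: my_app_analytics
#     migrations_paths: db/analytics_migrate

# app/models/analytics_record.rb
# A second abstract base class, alongside ApplicationRecord, for models
# backed by tables in the analytics database.
class AnalyticsRecord < ApplicationRecord
  self.abstract_class = true
  connects_to database: { writing: :analytics, reading: :analytics }
end

# app/models/event.rb
# Inherits from AnalyticsRecord, so its table lives in the analytics database.
class Event < AnalyticsRecord
end

# Models that inherit from ApplicationRecord keep using the primary database,
# and querying looks the same either way: Event.where(name: "signup") simply
# reads from the analytics database.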
STEPHANIE: That's really cool that it ended up being pretty painless. And I'm thinking, for the most part, as a developer, you know, working in that kind of codebase; maybe they don't really need to know too much about the details of the other databases. And they can just rely on the typical Rails conventions and things they know how to do via Rails.
Do you suspect that there might be some future where that might become a gotcha or something that someone has to debug a little further because of the multi-database setup?
JOËL: There are some infrastructure things, but I think I'm handling all of them upfront. So like I said, configuring various setup scripts, or test scripts, or CI, that kind of thing to make sure that they all work. Once that's all done, I think it should pretty much just work. And people can use them like they would normal ActiveRecord models.
The one gotcha is that you can't join models across two different databases. You can't use ActiveRecord to write a query that would try to join two tables that are in different databases because the SQL won't allow for that. So, if you're ever trying to do something like that or you have some kind of association where you're trying to do some special join, that would not work. So, if somebody attempts that, they might get an unexpected error. Other than that, I think it just keeps working as normal, and people can treat it more or less as if it's one database.
STEPHANIE: That's interesting. How do you model relationships between tables on the two different databases, then? Like, how would that work?
JOËL: I've not gotten that far yet. For some things, I imagine just it's two queries. I'm not sure if the ActiveRecord associations handle that automatically for you. I think they probably will. So you probably can get away with an association where one model lives in one database. Let's say your article lives in one database, and it has many comments that live in a different database.
Because then you would make one query to load the article, get the article ID, and then you would do another query to the second database and say, hey, find all the comments with this article ID, which is already, I think, what ActiveRecord does in one single database. It is making two queries. It's just that now those two queries are going to be two different databases rather than to a single one.
STEPHANIE: Interesting. Okay. I did think that maybe ActiveRecord did some fancy join thing under the hood. And when you mentioned that that wouldn't be possible when the two tables are on different databases, I was kind of curious about how that would work. But that makes sense. That would be really cool if it is, you know, that straightforward. And, hopefully, it just doesn't become too big of an issue that comes back to haunt someone later.
JOËL: Right. So pretty much, if there is a situation where you were relying on a JOIN, you will now have to make two separate queries and then combine the results yourself.
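To illustrate what that might look like in practice, here is a minimal sketch, assuming a hypothetical Article model on the primary database and a Comment model on a second database (the model names and the query itself are made up for illustration):

# Before, with both tables in the same database, a single JOIN would work:
#   Article.joins(:comments).where(comments: { created_at: 1.week.ago..Time.current })

# With comments living in a separate database, that JOIN is no longer possible,
# so it becomes two queries whose results are combined in Ruby:
article_ids = Comment
  .where(created_at: 1.week.ago..Time.current)
  .distinct
  .pluck(:article_id)

articles_with_recent_comments = Article.where(id: article_ids)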
STEPHANIE: Right. I also want to give you kudos for doing all the good work of setting it up so that, hopefully, future developers don't have to think about it.
JOËL: Kudos to the Rails team as well. It's nice to have that just kind of built into the framework. Again, it's not something I've needed in, you know, a decade of doing Rails, but then, you know, now that I have run into a situation where I need that, it just works out of the box. So that's been really nice.
So, a couple of weeks ago, we talked about the fact that we were going through review season and that we had to fill out reviews for ourselves then also fill out peer reviews for each other. You had brought up a really interesting conversation you had about reaching out to other people and trying to get feedback on what kind of review or feedback would be helpful for them.
STEPHANIE: I did, yeah. Though, I think in this case, the person writing that feedback actually reached out to me, but certainly, it goes both ways. Spoiler alert - that person was Steph Viccari, former [laughs] host of The Bike Shed.
JOËL: So Steph also reached out to me with similar questions. And that spawned a really interesting conversation around personal goals and what it looks like, particularly when it comes to what to learn next in technology. We started discussing things, and I listed out some different things that I was interested in. And then just kind of out of nowhere, Steph just pulls out this, like, oh, I noticed these four elements. And I'm going to list them out here because it's really cool.
So these four elements were adventure, so trying something new. Passion, so something that's really exciting to you. Profit, something where you can leverage some recent things that you've done to get more value out of some work you've already done. And then finally, low risk, something that would be applicable today. And it just kind of turns out that this makes a funny little acronym: APPL. And apples are often a symbol of learning. So that was kind of a fun coincidence.
STEPHANIE: I love when someone is able to just pull apart or to tease out pieces of, you know, something that you might have just, like, kind of dumped all of into a message or something, and then to get, like, a second eye to really pick out the themes is so valuable, I think. And I'm obsessed with this framework. I think we might have come across something new that could really be helpful for a lot of other people.
JOËL: It's definitely...I think it shows capacity for a higher level of thinking when someone's able to just look at a bunch of concrete things and say, wait a minute; I'm seeing some larger themes emerge from what you're talking about. And I always really appreciate it when I'm having a conversation with someone, and they're like, "Hey, I think what I'm hearing is this." And you're like, "Whoa, you're totally right. And I did not even know that that's where I was going."
STEPHANIE: Absolutely. I'd love to go through this acronym and talk about a few different things that we've learned in our careers that kind of correspond with each of these elements.
JOËL: Yeah, that sounds great. So I think, you know, the first one here is adventure, trying something new. So, what's something where you tried something new or adventurous that you think was worthwhile?
STEPHANIE: Hosting this podcast. [laughs] It was a huge adventure for me and a really big stretch, I think. And that's what the idea of adventure evokes for me is, like, maybe it's uncharted territory for you, and you might have some reservations about it. But, you know, obviously, the flip side of an adventure is how fun and exciting and just new and stimulating it can be.
And so I think, yeah, like, when I first started doing this with you, and even when you first asked me, I was pretty nervous. I was really hesitant. It took me a long time to, you know, think it over. I was like, do I want to commit to something that I have never done before, and it's, like, a pretty longer-term commitment? And I'm really glad I did it.
It's certainly been an adventure. It's, you know, got its ups and downs. You know, not every week do I feel like that went really well, like, that was a great episode. Sometimes I'm like, that was just an okay episode, [laughs] and, you know, that's fine too.
But I feel like this was really important in helping me feel more confident in sharing my technical opinions, helping me feel more comfortable just kind of, like, sharing where I am and not feeling like I should be somewhere else, like, some other level or have already known something. Like, the point is for us to share the journey week by week, and that was something that was really hard for me. So being on this Bike Shed adventure with you has been very valuable for me.
JOËL: Yeah, it's sharing these new things we've learned along the way.
STEPHANIE: Literally. Yes. What about you? Do you have something adventurous that you learned?
JOËL: I think an important inflection point where I tried something new was when I learned the Elm programming language. So I had mostly done procedural languages back in the day. And then I got into Ruby, did a lot of OO. And then I got into Elm, which is statically-typed, purely functional, all these things that are kind of opposite of Ruby in some ways. But I think it shares with Ruby that same focus on developer happiness and developer productivity. So, in some ways, I felt really at home.
But I had to learn just a whole new way of programming. And, one, it's cool. I have a new tool in my belt. And I think it's been a couple of years just learning how to use this language and how to be effective with it. But then afterwards, I spent a couple of years just kind of synthesizing the lessons learned there and trying to see, are there broader principles at play here? Are there ideas here that I can bring back to my work in Ruby?
And then maybe even are there some ideas here that intersect with some theories and things that I know from Ruby? So maybe some ways of structuring data or structuring code from functional programming where some best practices there kind of converge on similar ideas as maybe some object-oriented best practices, or maybe some ideas from test-driven development converge on similar ideas from functional programming. And I think that's where, all of a sudden, I was unlocking all these new insights that made me a better Ruby developer because I'd gone on an adventure and done something completely out of left field.
STEPHANIE: Yeah, absolutely. Do you remember what was hard about that when you first embarked on learning Elm?
JOËL: All the things you're used to doing, you just can't do. So you don't have looping constructs in Elm. The only thing you can do is recursion, which, you know, it's been a long time since CS classes. And you don't typically write recursion in Ruby. So I had to learn a whole new thing. And then it turns out that most people don't write recursion. There's all these other ways of doing things that you have to learn. You have to learn to do folds or to use maps and things like that.
Yeah, you're just like, oh, how do I do X in Elm? And you have to figure it out. And then maybe sometimes it turns out you're asking the wrong question. So it's like, oh, how do I do the equivalent of a for loop with array indexes in Elm to, like, iterate through a data structure? And it's like, well, kind of here's technically the way you could do that, but you would never solve a problem in that way.
You've got to learn a new way of thinking, a new way of approaching problems. And I think it was that underlying new paradigm that was really difficult to get. But once I did get it, now that I have two paradigms, I think it made me a much more solid developer.
STEPHANIE: Right. That sounds very humbling, too, to kind of have to invert what you thought you knew and just be in that, you know, beginner's mindset, which we've talked about a little bit before.
JOËL: I think in some ways now being on the other side of it, it's similar to the insights you get from speaking multiple human languages, so being bilingual or trilingual or something like that where instead of just having assumptions about, oh, this is just how language works, because that's how your personal language works, now that you have more than one example to draw on, you can be like, oh, well, here's how languages tend to do things differently. Here's how languages are similar.
And I think it gives you a much better and richer feeling for how languages work and how communication works. And similar to having multiple paradigms in programming, I think this has given me a much richer foundation now for exploring and building programs.
STEPHANIE: That's really cool. I guess that actually leads quite well into the next element, which is passion. Because once you've tried some new things, you get the information of do I like this thing, or do I not like this thing? And then from there, you know, you gravitate towards the things you are passionate about to get a deeper understanding. And it becomes less about like, oh, just testing out the waters and like, knowing, hey, like, I constantly find myself thinking about this, like, let me keep going.
JOËL: Yeah. Or sometimes, it's deciding what do I want to learn next? And you just pick something that's interesting to you without necessarily being like, oh, strategically, I think this is another paradigm that's going to expand my mind. Or this is going to make me, you know, help me get that promotion next quarter, purely based off of interest. Like, this sounds fun.
STEPHANIE: That's really interesting because I think I actually came to it from a different angle, where one thing that I think was very helpful in my learning that came just, like, completely internally, like, no one told me to do this was reading books about design patterns. And that was something that I did a couple of years into my career because I was quite puzzled, I suppose, by my day-to-day experience in terms of wanting to solve a problem or develop a feature but not having a very good framework for steps to go about it, or not feeling very confident that I had a good strategy for doing it.
It was very, for me, it felt very much just kind of, like, throwing pasta at the wall and seeing what would stick. And I was really interested in reducing that pain, basically. And so that led me to read books. And, again, that was not something, like, someone was like, hey, I really think that you could benefit from this. It was just like, well, I want to improve my own experience.
And, you know, some of the ones that I remember reading (and this was based off of recommendations from others kind of when I floated the idea) were, you know, Sandi Metz's Practical Object-Oriented Design in Ruby, and Design Patterns in Ruby by Russ Olsen. Those were just, like, purely out of interest. Yeah, I guess I'm curious, for you, what fun and passion look like.
JOËL: Yeah, I think one thing that's a really fun side effect of passion learning is that I find that I tend to learn a lot faster and go a lot deeper, or I get more for every individual hour I put into learning just because passion or interest is such a multiplier.
Similar to you, I think I went through a time where I was just gobbling up everything I could see on design patterns, and code structure, things like that. Yeah, I've always been really excited about data modeling in general and how to structure programs to make them easy to change while also not putting a high maintenance burden on it, learning those trade-offs, learning those principles, learning a lot of those ideas.
I think that desire came out of some pain I felt pretty early on in my programming career, where I was just writing code, purely self-taught at this point from a few tutorials online, and code beyond a few hundred lines would just kind of implode under the weight of its own complexity. And so, like, I know that professional programmers are writing massively larger programs that are totally fine. So what am I missing? And so I think that sort of spurred an interest. And I've kind of been chasing that ever since. Even though I'm at the point where that is no longer a problem in my daily life, it is still an interest that I keep.
STEPHANIE: Yeah. If I were to pull out another interest of yours that I've noticed that kind of seems in the same realm of, you know, you can just chase this forever, is working incrementally, right? And just all the ways that you can incorporate that into your day-to-day. And I know that's something we've talked about a lot. But I think that's really cool because, yeah, it just comes from just a pure desire on your own front to, like, see how far you can take it.
JOËL: I think you pulled out something interesting there. Because sometimes, you have an interest in a whole new topic, and sometimes the interest is more about taking something I already know and just seeing can I take it to an extreme? What happens when I really go to the boundaries of this idea? And maybe I don't need to go there ever for a client project. But let me put up a proof of concept somewhere and try it out just for the fun of it to see can I take this idea, then push it to an extreme and see does it break at an extreme? Does it behave weirdly? And that is just an enriching journey in and of itself.
Have you ever done, like, a...maybe not a whole learning journey but, you know, taken a few hours, or maybe even, like, some time on one of our investment Fridays to just explore some random idea and try it out? And it's like, huh, that was cool; that was a journey. And then maybe you move on to something next week because it's not like a big planned thing. But you're taking a few hours to dig into something totally random.
STEPHANIE: I actually think I'm less inclined to do that than maybe you or other folks are. I find the things I choose to spend my time on do have to feel more relevant to me in the moment or at least in my day-to-day work.
And I think that actually is another excellent transition into the last couple of elements in the APPL framework that we've now coined. The next being profit or, I guess, the idea of being valuable to you in your job in that moment, I suppose. Or I guess not even in that moment, but kind of connecting what you're learning to something that would provide you value.
So I know you were talking about learning Elm, and now you're able to see all of the value that it has provided, but maybe at the time, that was a little bit less of your focus. But for me, I find that, like, a driver for how I choose to spend my time. Often it's because, yeah, for the goal of reducing pain.
Being consultants, we work on a lot of different projects, sometimes in different frameworks, or languages, or new technologies. Like, you've mentioned having to, just now, on your new client project learning how to interact with different databases, and it sounds like older software that you might not have encountered before.
And I think that ends up falling higher on my priority list depending on the timing of what I'm currently working on. It's, oh, like, you know, TypeScript is something that has, like, kind of come and gone as my projects have shifted. And so when it comes back to working on something using it, I'm like, oh, like, I really want to focus on this right now because it has very clear value to me in the next three to six months, or however long. But I have also noticed that once I'm off of that project, that priority definitely recedes.
JOËL: Yeah, I think that plays into that final element as well of the APPL, the low risk things that are applicable today that have value right now. Those tend to be things like, oh, I see that I'm going to be scheduled on a client that needs this technology next month. Maybe I should learn that, or maybe I should refresh this idea or go a little bit deeper because this is something new that I'm going to need. So, at some point, I knew that there was a Python project coming down the line. I was like, okay, well, maybe I'm going to spend a couple of Fridays digging into some Django tutorials and compare and contrast with Rails.
STEPHANIE: The low-risk element is interesting to me because I found it to be a challenging balance to figure out how much time to invest in becoming really comfortable in a new technology. I find myself sometimes learning just enough to get what I need to get done. And then other times really feeling like, wow, like, I wish I knew this better because that would make my life easier, or I would just feel a lot better about what I'm doing. And kind of struggling with when to spend that time, especially when there's, you know, other expectations of me in terms of my delivery.
JOËL: Yeah, that almost sounds like a contrast between technologies that fall in that low-risk bucket, like, immediately useful, versus ones that fall in the passion bucket that you're interested in taking deeply and maybe even to an extreme.
STEPHANIE: That's really interesting. What I like about this list of themes that we've pulled out is that, like, one thing can fall into a number of different categories. And so it's really quite flexible.
It actually reminds me of a book that I just finished reading. The book is called Quarterlife. And the thing that stuck out to me the most is the author, who is a psychotherapist; she has basically come up with two types of people, or at least two things, that end up being really big drivers of, like, human motivation and behavior. And that's stability types and meaning types, and the goal is to have a little bit of both.
So you may be more inclined towards stability and wanting to learn the things that you need to know for your job, to do well in your role, kind of like we were talking about to reduce that pain, to feel a little more in control, or have a little more autonomy over your day to day and how you work. And then there's the seeking meaning, and when we talked about adventure and passion, it kind of reminded me of that. Like, those are things that we do because we want to feel something or understand something or because it's fun.
And ironically, this list of four things has two that kind of fall into each category. And ultimately, the author, she, you know, was very upfront about needing both in our lives. And I thought that was a really cool distinction. And it was helpful for me to understand, like, oh yeah, like, in the early years of my career, I did really focus on learning things that would be profitable, or valuable, or low risk because those were the things that I needed in my job, like, right now.
And I am now feeling stable enough to explore the meaningful aspects and feel excited by trying out things that I think I just wasn't ready for back in the day. But it actually sounds like you may kind of have a different leaning than I do.
JOËL: That is really interesting. I think what was really fascinating as you mentioned those two sort of types of people. And, in my mind, now I'm immediately seeing some kind of two-dimensional graph, and now we've got four quadrants. And so are we leaning towards stability versus...was it adventure was the other one? Or meaning.
STEPHANIE: Meaning, yes.
JOËL: So now you've got, like, your quadrant that is high stability, high meaning, low stability, high meaning, like, all those four quadrants. And maybe these four things happen to fall into that, or maybe there's some other slightly different set of qualities that you could build a quadrant for here.
One that is interesting, and I don't know how closely it intersects with this idea of stability versus meaning, is how quickly the things you learn become useful. So that low risk, like that L from APPL, those are things that are immediately useful. So you put a little bit of work learning this, and you can immediately use it on the job. In fact, that's probably why you're learning it.
Whereas me going off and learning Elm is not because we've got any clients in the pipeline using Elm. It's purely for interest. Is it going to pay off? I think most learning pays off long-term, especially if it helps you build a richer understanding of the different ways software works or helps you have new mental models, new tools for doing things. And so I think, you know, 5, 6, 7 years later, learning Elm has been one of the highest payoff things that I've done to kind of take my coding career to the next level. That being said, I would not have seen that at the time. So the payoff is much more long-term.
How do you kind of navigate when you're trying to learn something, whether you want something with a short-term payoff or a longer-term payoff?
STEPHANIE: Yeah, that's so interesting. I wonder if there was maybe someone who could have helped you identify the ways that Elm could have possibly paid off. And I know, you know, you're looking back on it in retrospect, and it's easy to see, especially after many years and a lot of deep thinking about it. But kind of referring back to this idea of seeking meaning and that just being important to feeling happy at your job, like, maybe it was just valuable because you needed to scratch that itch and to experience something that would be interesting or stimulating in that way to prevent burning out or something like that.
JOËL: Oh, I like that. So the idea that you're learning a thing, not specifically because you're expecting some payoff in the long term but because of the joy of learning, is reward in and of itself, and how that actually keeps you fresh in the moment to keep going on a career that might, you know, last 5, 10, 20, 30 years, and how that keeps you refreshed rather than like, oh, but, like, I'm going to see a payoff in five years where now, all of a sudden, I'm faced with a problem. And I can be like, ah, yes, of course, monads are what we need here. And that's a nice side effect, but maybe not the main thing you look for when you're going for something in that passion bucket.
STEPHANIE: Yeah, absolutely. To go back to your question a little bit, I had mentioned that I was wondering if there was someone who could help point out ways that your interests might be useful. And I think that would be a strategy that I would try if I find myself in that conundrum, I suppose, of, like, being like, hey, like, this is really interesting to me. I'm not able to see any super immediate benefits, but maybe I can go find an expert in this who can share with me, like, from their experience, what diving deep into that topic helped them.
And if that's something that I need to then kind of justify to a manager or just kind of explain, like, hey, the reason I'm spending my time on this is because of this insight that I got from someone else. That would be, I think, a really great strategy if you find yourself needing to kind of explain your reasoning. But yeah, I also think it's, like, incredibly important to just have passion and joy in your work. And that should be a priority, right? Even if the tangible, valuable-to-the-company benefits aren't immediately clear in the current moment.
JOËL: And I think what I'm hearing is that maybe it's a bit of a false premise to say there are some things that you follow for passion that only pay off in the long term. Because if you are in it for passion, then you're getting an immediate payoff regardless. You may also get an additional payoff in the long term. But you're absolutely getting some kind of payoff immediately as well.
STEPHANIE: Yeah, I think that's true for adventure because knowing what you don't like is also really valuable information. So, if you try something and it ends up not panning out for you, you know, I think some people might feel a little bit disappointed or discouraged. They think, oh, like, they kind of wasted time. But I don't know; I think that's all part of the discovery process. And that brings you closer and closer to, yeah, knowing what you want out of your learning and your career.
JOËL: So I'm really curious now. This whole, you know, APPL framework came out of a very random conversation. Is this something that maybe you're going to take into your own sort of goal-setting moving forward? Maybe try to identify, like, okay, what is something adventurous that I want to do, something I want to do for passion, something that I think for profit, and then something low risk? And then maybe have that inform where you put some energy in the next quarter, the next year, whatever timeline you're planning for.
STEPHANIE: Yeah, I thought about this a little bit before we started recording. But one very loose goal of mine...and this actually, I think, came up a little more tangibly after coming back from RubyKaigi and being so inspired by all of the cool open-source tooling and hearing how meaningful it was for people to be working on something that they knew would have an impact on a lot of people in their development experience.
Having an impact is something that I feel very passionate about and very interested in. And the adventure part for me might be, like, dabbling a little bit into open-source tooling and seeing if there might be a project that I would be interested or comfortable in dipping my feet into.
What about you? Do you have anything in the near or long-term future that might fall into one of these buckets?
JOËL: So I do have a list of things. I don't know that I will pursue all of them or maybe any of them. But here's my kind of rough APPL here. So something adventurous, something new would be digging into the language Rust. Again, the idea is to try a completely new paradigm, something low-level, something typed, something that deals with a lot of memory, something that does well with concurrency and parallelism. These are all things that I've not explored quite as much. So this would be covering new ground.
Something that is a passion, something that's interesting to me, would be formal methods, so I'm thinking maybe a language like TLA+ or Alloy. Data modeling, in general, is something that really excites me. These techniques that I think build on some of the ideas that I have from types but that go, like, to an extreme and also in a slightly different direction are really intriguing to me. So, if there's something that maybe I'm staying up in the evenings to do, I think that might be the most intriguing thing for me right now.
Something that might be more profitable, I think, would be digging into the world of data science, particularly looking at Notebooks as a technology. Right now, when I need to crunch data, I'm mostly just doing spreadsheets. But I think there are some really cool things that we could do with Notebooks that come up in client work. I manage to do them now with a random command-line script or sometimes with Excel. But I think having that tool would probably be something that allows me to do that job better.
And then, finally, something low-risk that I know we could use on a client project would be digging in more into TypeScript. I know just enough to be dangerous, but we do TypeScript all the time. And so, mastering TypeScript would definitely be something that would pay off on a client project.
STEPHANIE: I love that list. Thank you for sharing.
JOËL: Also, I just want to note that there are only four things here. It doesn't fully spell APPL because there's no E at the end. And so when I see the acronym now, I think it looks like a stock ticker.
STEPHANIE: It really does. But I think it's pretty trendy to have an acronym that's basically a word or a noun but then missing a vowel so...
JOËL: Oh, absolutely. Time to register that applframework.com domain.
STEPHANIE: Yeah, I agree. I also love what you said. You called it a rough APPL. And that was very [laughs] evocative for me as well. And just thinking about an apple that someone has, like, bitten into a little bit [laughs] and has some rough edges. But yeah, I hope that people, you know, maybe find some insight into the kinds of learnings and goals that they are interested in or are thinking about. And using these themes to communicate it to other people, I think, is really important, or even to yourself. Maybe yourself first and then to others because that's how your co-workers can know how to support you.
JOËL: That's really interesting that you are thinking of it in terms of a tool for communication to others. I think when I first encountered this idea, it was more as a tool of self-discovery, trying to better understand why I was interested in different things and maybe better understanding how I want to divide up the energy that I have or the time that I have into different topics. But I can definitely see how that would be useful for communicating with team members as well.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël's new work project involves tricky date formats. Stephanie has been working with former Bike Shed host Steph Viccari and loved her peer review feedback.
The concept of truthiness is tough to grasp sometimes, and JavaScript and Ruby differ in their implementation of truthiness.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So I'm on a new project at work. And I'm doing some really interesting work where I'm connecting to a remote database third-party system directly and pulling data from that database into our system, so not via some kind of API. And one thing that's been really kind of tricky to work with are the date formats on this third-party database.
STEPHANIE: Is the date being stored in an unexpected format or something like that?
JOËL: Yes. So there's a few things that are weird with it. So this is a value that represents a point in time, and it's not stored as a date-time value. Instead, it's stored separately as a date column and a time column. So a little bit of weirdness there. We can work with it, except that the time column isn't actually a time value. It is an integer.
STEPHANIE: Oh no.
JOËL: Yeah. And if you're thinking, oh, okay, an integer, it's going to be milliseconds since midnight or something like that, which is basically how Postgres' time of day works under the hood, nope, that's not how it works. It's a positional digit thing. So, if you've got the number, you know, 1040, that means 10:40 a.m.
STEPHANIE: Oh my gosh. Is this in military time or something like that, at least?
JOËL: Yes, it is military time. But it does allow for all these, like, weird invalid values to creep in. Because, in theory, you should never go beyond 2359. But even within the hours that are allowed, let's say, between 1000 and 1100, so between 10:00 and 11:00 a.m., a clock only goes up to 59 minutes. But our base 10 number system goes up to 99, so it's possible to have 1099, which is just an invalid time.
STEPHANIE: Right. And I imagine this isn't validated or anything like that. So it is possible to store some impossible time value in this database.
JOËL: I don't know for sure if the data is validated or not, but I'm not going to trust that it is. So I have to validate it on my end.
STEPHANIE: That's fair. One thing that is striking me is what time is zero?
JOËL: So zero in military time or just 24-hour clocks in general is midnight. So 0000, 4 zeros, is midnight. What gets interesting, though, is that because it's an integer, if you put the number, you know, 0001 into the database, it's just going to store it as 1. So I can't even say, oh, the first two digits are the hours, and the second two digits are the minutes. And I'm actually dealing with, I think, seconds and then some fractional part of seconds afterwards. But I can't say that because the number of digits I have is going to be inconsistent.
So, first, I need to zero pad. Well, I have to, like, turn it into a string, zero pad the numbers so it's eight characters long. And then, start slicing out pairs of numbers, converting them back into integers, validating them within a range of either 0 to 23 or 0 to 59, and then reconstructing a time object out of that.
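A minimal sketch of the parsing approach Joël describes, assuming the eight digits encode hours, minutes, seconds, and a fractional part; the method name and that layout are assumptions, not the client's actual code:

```ruby
require "date"

# Sketch: zero-pad the stored integer, slice out pairs of digits,
# validate each pair against a clock's real ranges, then build a Time.
def parse_positional_time(date, raw_integer)
  padded = raw_integer.to_s.rjust(8, "0") # e.g. 1 -> "00000001"
  hours, minutes, seconds = padded[0, 2].to_i, padded[2, 2].to_i, padded[4, 2].to_i

  # Reject values a clock can never show, like the "1099" example.
  raise ArgumentError, "invalid hour: #{hours}"     unless (0..23).cover?(hours)
  raise ArgumentError, "invalid minute: #{minutes}" unless (0..59).cover?(minutes)
  raise ArgumentError, "invalid second: #{seconds}" unless (0..59).cover?(seconds)

  Time.new(date.year, date.month, date.day, hours, minutes, seconds)
end

parse_positional_time(Date.new(2023, 6, 22), 10_400_000) # => 2023-06-22 10:40:00
```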
STEPHANIE: That sounds quite painful.
JOËL: It's a journey for sure.
STEPHANIE: Do you have any idea why this is the case or why it was created like this originally?
JOËL: I'm not sure. I have a couple of theories. I've seen this kind of thing happen before. And I think it's a common way for developers who maybe haven't put a lot of thought into how time works to just sort of think, oh, the human representation. I need something to go in the database. On my digital clock, I have four digits, so why not put four digits in the database? Simple enough. And then don't always realize that there's all these edge cases to think about and that human representations aren't always the best way to store data.
STEPHANIE: I like how you just said that, you know, we as humans have developed systems that are not quite, you know, the same as how a computer would. But what was interesting to me...something you said earlier about time being a fixed point. And that is different from time being a value, right? And so here in this situation, it sounds like we're storing time as a value, but really, it's more of the idea of, like, a point.
JOËL: Interesting. What is the difference for you between a point and a value?
STEPHANIE: I suppose a value to me...And I think we talked about this a little bit on a previous episode about value objects and also how we stored numbers, like phone numbers and credit card numbers and stuff like that. But a value, like, I might want to do math on. But I don't really want to do math on time. Or, specifically, if I have this idea of a specific point in time, like, that is fixed and not something that I could mutate and expect it to be the same thing that I was trying to express the first time around.
JOËL: Oh, that's interesting because I think when it comes to time and specifically points in time, I sometimes do want to do math on them. And so, specifically, I might want to say, what is the time that has elapsed between two points in time? Maybe I have a start time and an end time, and I want to say how much distance is there between the two? If you use this time system where you're storing it as an integer number where the digits have positional values, because there's all those gaps between, you know, 59 and 99 that are not valid, math breaks down. You've broken math by storing it that way.
So you can't get an accurate difference by doing math on that, as opposed to if you store it as a counter, which is what databases do under the hood, but you could do manually. If you just wanted to use an integer column, then you can do math because it's just a number of seconds since the beginning of the day. And you can subtract those from each other. And now you have the number of seconds between the two of them. And if you want them in minutes or hours, you divide by 60 or 3,600, and you get the correct response.
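To make the contrast concrete, here is a small sketch of the seconds-since-midnight arithmetic versus the positional encoding (the numbers are purely illustrative):

```ruby
# Seconds since midnight: subtraction gives a real duration.
start_seconds = (10 * 3600) + (40 * 60) # 10:40 a.m. => 38_400
end_seconds   = (11 * 3600) + (5 * 60)  # 11:05 a.m. => 39_900

end_seconds - start_seconds        # => 1_500 seconds
(end_seconds - start_seconds) / 60 # => 25 minutes

# The positional encoding breaks this: 1105 - 1040 => 65,
# which is neither 25 minutes nor any other meaningful unit.
```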
STEPHANIE: Yeah, that is really interesting because [chuckles] in this situation, you have the worst of both worlds, it seems like. [laughs]
JOËL: The one potential benefit is, I think, it's maybe more human-readable. Although, at that point, I would say if you're not doing math on it and you want something human-readable, you probably don't want an integer. You probably want a string. And maybe you even store it as, like, ISO 8601 time string in the database, or even just hour:minute:second split by a colon or whatever it is but just as a string. Now it's human-readable.
You can still sort by it if you go from largest to smallest increment in your format. You can't do math, but then you weren't doing math on it anyway. So that's probably a nice compromise solution. But, ideally, you'd use a native, you know, time of day column or a date-time or something like that.
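As a quick illustration of why a largest-to-smallest string format still sorts correctly with plain string comparison:

```ruby
times = ["09:15:00", "23:05:30", "00:45:10"]
times.sort # => ["00:45:10", "09:15:00", "23:05:30"]
```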
STEPHANIE: For sure. Well, it sounds like something fun to contend with. [laughs]
JOËL: One thing that was brought to my attention that I'd never heard about before is that potentially a reason it's stored that way is because of an old data format called EDI—I think it's Electronic Data Interchange—that dates from ages ago, you know, the '60s or '70s, something like that. Before we had a lot of standards for data, this was an emerging standard that came about for moving data between systems. And it has a lot of, like, weird things with the way it's set up.
But if you're dealing with any sort of older data warehouses or older business systems, they will often exchange data in, and sometimes store data in, something that approximates this older EDI format. And, apparently, it has some weirdness around dates where it kind of does something like this.
So someone was suggesting, oh, well, if you're interacting with maybe an older, you know, a lot of, like, e-commerce platforms or banking systems, probably airline systems, the kind of things you'd expect to be written in, let's say, COBOL...
STEPHANIE: [laughs]
JOËL: Have a system that's kind of like this. So maybe that wouldn't be quite as surprising.
STEPHANIE: Yeah, that is really interesting. It just sounds like sometimes you're limited by the technology that you're interacting with. And I guess the one plus side is that, in your system, you can make the EDI work for you, hopefully. [laughs] Whereas perhaps if you are talking to some of those older technologies that don't know how else to convert date types and things like that, like, you just kind of have to work with what's available to you.
JOËL: Yeah. And that's got me realizing that a lot of these older, archaic systems are still online and very much a part of our software ecosystem and that there's a lot of value in learning some software history so that I'm able to recognize them and sort of work constructively with them when I have to interact with that kind of system.
STEPHANIE: Yeah, I really like that mindset.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So, last week, we talked about writing reviews for ourselves and our peers. And one thing that happened in between the last episode and this one is Steph Viccari, former co-host of this podcast, who I've been working with really closely on this project of mine; she was writing a peer review for me. And one thing that she did that I really loved was she sent me a message and asked me a few questions about the direction of the review that I was wanting and what kind of feedback would be helpful for me.
And some of the things she asked were, you know, "Is there a skill that you're actively working on? Is there a skill you'd like to start working on?" And, like, what my goals are for the feedback. Like, how can she tailor this feedback to things that would help my progression and what I hope to achieve? And then my favorite question that she asked was, "What else should I know but didn't think to ask?" And I thought that was a really cool way of approaching.
You know, she's coming to this, like, wanting to be helpful, but then even still, like, there are things that she knows that I am kind of the expert on in my own career progression, and I really liked that. I think I'd mentioned last week that part of the feedback you want to be giving is, you know, something that will be helpful for that person, and centering them in it, instead of you is just a really awesome way to do that. So I was very appreciative that she asked me those questions.
JOËL: That's incredibly thoughtful. I really appreciate that she sent that out to you. What did you respond for the is there something else I should know but didn't know to ask?
STEPHANIE: Yeah. I mentioned that more and more, I'm realizing that I am not interested in management. And so what would be really helpful for me was to ground most of the feedback in terms of my, like, technical contributions. And also, that one thing that I'm thinking about a lot is how to be an individual contributor and still have an impact on team health and culture because that is something I care about. And so I wanted to share that with her because if there are things that she can identify in those aspects, that would be really awesome for me. And that can kind of help guide her away from a path that I'm not interested in.
JOËL: I think having that kind of self-awareness is really powerful for yourself. But then, when you can leverage that to get better reviews that will help you get even further down the path that you're hoping to go, and, wow, isn't that just, like, a virtuous cycle right there that's just building on itself?
STEPHANIE: Yeah, for sure. I think the other thing I wanted to share about what's new in my world that has been just a real boost to my mood is how long the days are right now because it's summer in North America. And yesterday was the summer solstice, and so we had the longest day of the year. The sun didn't set until 8:30 p.m. And I just took the opportunity to be outside. I took a swim in the lake, which was my first swim of the season, which was really special. And my friend had just a nice, little, like, backyard campfire hang out. And we got to roast some marshmallows and just be outside till the sunset. And that was really nice.
JOËL: When you say the lake, is that Lake Michigan?
STEPHANIE: Yes, I do mean Lake Michigan. [laughs] I forget that some people just don't have a giant lake next to them [laughs] that they refer to as the lake.
JOËL: It's practically an inland sea.
STEPHANIE: Yes, you can't see the other side of it. So, to me, it kind of feels like an ocean. And yesterday, when I was in the water, I also was thinking that I felt like I was just in a giant bathtub. [chuckles]
JOËL: So I'm in New England, and most of the bodies of water here are not called lakes. They're called ponds.
STEPHANIE: Really?
JOËL: No matter the size.
STEPHANIE: Oh.
JOËL: I guess lakes is reserved for things like what you have that are absolutely massive, and everything else is a pond.
STEPHANIE: That's so funny because I think of ponds as much smaller in scale, like a quaint, little pond. But that's a really fun piece of regional vocabulary.
So one interesting thing happened on my client project this week that I wanted to get your input on because I've definitely seen this problem before, and still, it continues to crop up. But I was working on a background job that we were passing a Boolean value into as one of the parameters that we would then, you know, use down the line in determining some logic.
And we, you know, made this change, and then we were surprised to find out that it continued to not work the way we expected. So we got some bug reports that we weren't getting into one of the branches of the conditional based on that Boolean value that we were passing in. And we learned, after a little bit of digging, that it turns out that those values are serialized because this job is actually saved in --
JOËL: Oh no.
STEPHANIE: [chuckles] Yeah. It inherits from the ActiveRecord, actually, and is saved in our database. And so, in that process, the Boolean value got serialized into a string and then did not get converted [chuckles] back into a Boolean. And so when we do that if variable check, it was always evaluating to true because strings are truthy in Ruby.
JOËL: Right. The string false is still truthy.
STEPHANIE: A string false is still truthy. And we ended up having to coerce it into a Boolean value to fix our little bug. But it was just one of those things that was really frustrating, you know when you feel really confident that you know what you're doing. You're just writing a conditional statement. And it turns out the language beguiled you. [laughs]
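A sketch of the failure mode and one way to coerce the serialized value back, using ActiveModel's Boolean type; this is an illustration, not the actual fix from the project:

```ruby
flag = "false"
puts "took the branch" if flag # prints: a non-empty string is truthy in Ruby

# Inside a Rails app, ActiveModel's type casting is already loaded:
require "active_model"

coerced = ActiveModel::Type::Boolean.new.cast(flag) # => false
puts "took the branch" if coerced                   # does not print
```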
JOËL: I've run into similar bugs when I'm reading from environment variables because environment variables are always strings. But it's common that you'll be setting some kind of flag. So when you're setting the environment variable, you're setting something to true or false. But then, when you're reading it, you have to explicitly check if this environment variable double equals the string true, then do the thing. Because if you just check for the value, it will never be false.
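In code, that environment-variable pattern looks something like this (the flag name is made up):

```ruby
ENV["FEATURE_FLAG"] = "false" # simulate a flag that was "turned off"

puts "enabled?" if ENV["FEATURE_FLAG"]           # prints: any string is truthy
puts "enabled?" if ENV["FEATURE_FLAG"] == "true" # does not print
```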
STEPHANIE: Right. And I kind of hate seeing code like that. I don't know; something about it just rubs me the wrong way because it just seems so strange, I suppose.
JOËL: Is it just, like, those edge cases where you specifically have to do some kind of, like, double equals check on a value that feels like it should be a Boolean? Or do you kind of feel a bit weird about the concept of truthiness in general?
STEPHANIE: I think the concept of truthiness is very hard to grasp sometimes. And, you know, when you're talking about that edge case where we are setting...we're checking if the string is the string true. That means that everything else is false, right? So, in some ways, I think it's just really confusing because we've expanded the definition of what true and false mean to be anything.
JOËL: That's really interesting because now you have to pick. Are you checking against the string true, or are you negatively checking against the string false? And those are not equivalent because, like you said, now you're excluding every other string. So, does the string "Hello, World" put you in the false branch or the true branch?
STEPHANIE: Who's to say? [laughs] I think a similar conundrum also occurs when we use predicate matchers in our tests. I think this is a gripe that I've talked about a little bit with others when we're writing tests and especially if we're writing a predicate method, and then that's what we're testing, right? We kind of are expecting a true or false value.
And when our test expects something to be truthy rather than explicitly saying that we expect the return value to be true, that is sometimes a bit confusing to me as well because someone could theoretically change this method and just have it return "Hello, World," like you said, as a string, like, anything else. And that would still pass the test.
JOËL: And it might even pass your code in most places.
STEPHANIE: Right. And I suppose that's okay. Is it okay? I don't know. I'm not sure where I land on this.
JOËL: I used to be a kind of hardcore Boolean person.
STEPHANIE: [laughs] That's a sentence no one has ever [laughs] said.
JOËL: I like my explicit trues and falses. I don't like the ambiguity of saying, like, oh, if person do a thing, it's, like, oh, what is person here? Is this a nil check? Is it explicitly false? Do you just want to know that this person is non-empty? Well, what exactly are you checking? So I like the explicitness of saying, oh, if person dot present, or if person dot empty, or if person dot nil.
And I think maybe spending some time in some more strongly typed languages has also kind of pushed me a little bit in that direction, where it's nice to have something that is explicitly either just true or just false. And then you completely eliminate that problem of, like, oh, but what if it's neither true nor false, then what do we do for that branch there? And the answer is your compiler will reject that program or say, "You've written a bad program." And you never reach that point where there's a bug.
I've slowly been softening my stance. A fellow thoughtbot colleague has written an article about why there is no such thing as a Boolean in Ruby. Everything is just shades of gray and truthiness and falsiness. But from the perspective of a program, there is no such thing as a Boolean. And that really opened my eyes to a different perspective. I don't know that I fully agree, but I'm kind of begrudgingly acknowledging that Mike makes a good point.
STEPHANIE: Yes, I read the blog post that he wrote about this exact problem. And I think it's called "Booleans Don't Exist in Ruby." And I think I similarly, like, came away with, like, yeah, I think I get it if I just suspend my disbelief, you know, hard enough. [laughs] But what you were saying about, like, liking the explicitness, right? And liking the lack of ambiguity, right? Because when you start to believe that Booleans don't exist, I think that really messes with your [laughs] head a little bit.
And one takeaway that I got from that blog post, kind of like we mentioned earlier, is that there is such thing as false, and then everything else is true. And I guess that's kind of how Ruby operates.
JOËL: Sort of, because then you have the problem of nil, which is also falsy.
STEPHANIE: That's true, but nil is nothing. [laughs]
JOËL: That's one of the classic problems as well when you're trying to do a nil check, or maybe some memoization, or maybe even, say, cache this value, or store this value, or initialize this value if it's not set. And assuming that nil is falsy, you'll do some kind of, like, or equals, or just some kind of expression with an or in it, thinking, oh, do this extra work if it's nil because then it will trigger the branch. But that all breaks down if there's potential for your value to be false because false and nil get treated the same in conditional code.
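A small sketch of how `||=` memoization silently breaks for a false value, and a `defined?`-based alternative; the class and method names here are hypothetical:

```ruby
class Settings
  def initialize(flag)
    @flag = flag
  end

  # Broken for false: ||= sees a falsy cached value and recomputes every time.
  def flag_with_or_equals
    @memo ||= expensive_lookup
  end

  # Checking whether the ivar was ever assigned avoids treating false as "not cached".
  def flag_with_defined_check
    return @cached if defined?(@cached)
    @cached = expensive_lookup
  end

  private

  def expensive_lookup
    puts "doing the expensive work"
    @flag
  end
end

settings = Settings.new(false)
2.times { settings.flag_with_or_equals }     # prints twice: the cache never sticks
2.times { settings.flag_with_defined_check } # prints once
```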
STEPHANIE: Right. I think this could be a whole separate conversation about nil and the idea of nothingness. But I do think that, as Ruby developers, at least in the Ruby world, based on what I've seen, we lean on nil in ways that we maybe shouldn't. And we end up having to be very defensive about this idea of nil being falsy. But that's because we aren't necessarily thinking as hard about our return values and what our arguments are, and that ends up causing problems in evaluating truthiness when we're having to check those objects that could be nil.
JOËL: In terms of the way we communicate with the readers of our code, and, as a reader, I generally assume that a Ruby method that ends with a question mark will return a true Boolean, either true or false. Is that generally your expectation as well?
STEPHANIE: I want to say yes, but I've clearly experienced enough times where that's not the case that, you know, it's like, my ideal world and then reality [laughs] and having to figure out how to hold both of those things.
JOËL: It's one of those things that's mostly true.
STEPHANIE: I want to believe it because predicate methods and, like, the Ruby Standard Library mostly return Boolean values, at least to my knowledge. And if we all kind of followed that [laughs] pattern, then it would be clear. But I think there's a part of me that these days mostly believes it to be true that I will be getting a Boolean value (And, wow, even as I say this, I realize how confusing [laughs] this is starting to sound.) and that until I'm not, right? Until I'm surprised at some point.
JOËL: I think there's two things I expect of predicate methods in Ruby. One is that they will return, like, a hard Boolean, either true or false. The second is that they are purely query methods; they don't do side effects. Neither of those are consistent across the ecosystem.
And a classic example of violating that second guideline I have in my mind is the valid question mark method from Rails. And this really surprised me the first time I tripped into this because when you call that on an object, it doesn't just tell you whether or not the object is valid. It actually mutates the underlying object by populating the error messages' hash. So if you have an invalid object and you examine its error messages' hash, it will be empty until you call the valid question mark method.
So sometimes, you don't even care about the return value. You're just calling valid to mutate the object so that you can access the underlying hash, which is that's weird code when you call a predicate method but then totally ignore the output.
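A minimal ActiveModel illustration of that side effect; the Order class is invented for the example:

```ruby
require "active_model"

class Order
  include ActiveModel::Model
  attr_accessor :quantity
  validates :quantity, presence: true
end

order = Order.new(quantity: nil)
order.errors.full_messages # => [] -- nothing populated yet

order.valid?               # => false, and, as a side effect...
order.errors.full_messages # => ["Quantity can't be blank"]
```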
STEPHANIE: Yeah, that is strange because I have definitely seen it where we are calling the valid method to validate, and then we end up using the error messages that are set on that object later. I think that's tough because, in some ways, you do care about whether the object is valid or not. But then also, the error messages are helpful usually and when you're trying to use that method. The point is to validate it so that you can hopefully, like, tell the user or, like, the consumer of your system, like, what's wrong in validation. But it is almost, like, two separate things.
JOËL: It is. And sometimes, it's really hard to split those two apart. So I'm not throwing shade at the Rails dev team here. Some of these design decisions are legitimately difficult to make. And what's most useful for the most people most of the time is often a compromise. I think you brought up the idea of separating those two things. And I think there's a general principle here called command-query separation. That's, like, the fancy way of talking about what you were saying.
STEPHANIE: One thing that I was just thinking about kind of when we initially picked off this conversation was the idea of how things outside the Ruby ecosystem or the Ruby world interact with what we're returning in terms of Boolean values. And so when I mentioned the object being serialized because of, you know, our database and, like, background job system, that's an entity that's figuring out what to do with the things that we are returning from Ruby.
And similarly, when you're talking about environment variables, it's like, our computer system talking to now our language and those things being a bit different. Because when we, like, suspend our disbelief about what is truthy or falsy in Ruby, at least we're doing it in, like, the world of Ruby. And as soon as we have to interact with something else, like, maybe that's when things can get a little hairy because there's different ideas about truthiness there. And so I'm kind of also thinking about what we return in APIs and maybe, like, that being an area where some explicitness is more required.
JOËL: Whenever I'm consuming third-party data, I'm a big fan of having some kind of transformation or parse step. This is inspired in part by the "Parse, Don't Validate" article, which I'll link in the show notes. So, if I'm reading data from a third-party API and I want it to be a Boolean, then maybe I should do the transformation myself. So maybe I check literally, is it the string true or the string false, and anything else gets rejected?
Maybe I have...and maybe I'm a little bit more permissive, where I also accept capital T or capital F, and I have, like, some rules for transforming that. But the important thing is I have an explicit conversion step and reject any bad output. And so for something like an environment variable, maybe that would look like looking for true or false and raising if anything else is there. So that we try to boot the app, and it immediately crashes because, hey, we've got some, like, undefined, like, bad configuration that we're trying to load the app with. Don't even try to keep running. Hard crash immediately. Fix it, and then come back.
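A sketch of that kind of explicit parse step for a configuration flag, crashing loudly on anything unexpected; the helper name and the accepted spellings are assumptions:

```ruby
def boolean_env!(name)
  case ENV.fetch(name, "").strip.downcase
  when "true", "t"  then true
  when "false", "f" then false
  else
    raise ArgumentError,
          "#{name} must be 'true' or 'false', got #{ENV[name].inspect}"
  end
end

# Raises at boot if the flag is missing or misspelled, instead of limping along.
FEATURE_ENABLED = boolean_env!("FEATURE_FLAG")
```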
STEPHANIE: Yeah, I like that a lot because the way we ended up fixing this issue with the background job that I mentioned was just coercing our string value into a Ruby Boolean in the job that we were then, like, running the conditional in. But really, what we should have done is fix that at a higher level, where we parse and deserialize, like, the values we're getting from the job, to prevent this kind of thing in the future because right now, someone can do this again, and that's a real bummer.
JOËL: I always love those deeper conversations that happen after you've had a bug that are like, how do we prevent this from happening again? Because sometimes that's where you have the deepest learnings or the most interesting insights or, you know, ideas for Bike Shed episodes.
I'm really curious to contrast JavaScript's approach to truthiness to Ruby's because even though they both use the same idea, they kind of go about it differently.
STEPHANIE: Tell me more.
JOËL: So, in Ruby, an empty array and an empty string are truthy. JavaScript decided that empty things are falsy. And I forget...there's a whole table that shows the things that are truthy and falsy in JavaScript. I want to say zero is falsy in JavaScript but don't quote me on that, which can also lead to some interesting edge cases you have to think about.
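For reference, Ruby's rule really is just "nil and false are falsy"; JavaScript's falsy set includes 0, NaN, the empty string, null, and undefined, though an empty array is actually truthy there. A quick Ruby check:

```ruby
# In Ruby, only nil and false are falsy; empty collections and zero are truthy.
[nil, false, 0, "", [], {}].each do |value|
  puts "#{value.inspect} is #{value ? 'truthy' : 'falsy'}"
end
# nil and false print "falsy"; 0, "", [], and {} all print "truthy"
```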
STEPHANIE: Okay, yes. This is coming back to me now. I think depending on what, you know, ecosystem or language or world I'm in, I have to just only be able to think about what is true in this world [laughs] and then do that context switching when I am working in something else. But yeah, that is a really interesting idea. Someone decided [laughs] that this was their idea of true or false.
JOËL: I'm curious if you have a preference for sort of JavaScript's approach to falsiness where a lot more types of values are falsy versus Ruby, which said pretty much only nil and false are falsy. Everything else is truthy.
STEPHANIE: Hmm, that is an interesting question.
JOËL: Because in Ruby then or, I guess, in Rails, we end up with the present predicate method that is specifically checking for not only nil and false but also for empty array, empty string, those kinds of things. So, if you find yourself writing a lot of present matchers in your code, you're kind of leaning on something that's closer to JavaScript's definition of falsiness than Ruby's. But maybe you're making it more explicit.
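A few examples of how Rails' `present?` folds emptiness into the check (this requires Active Support, which a Rails app already loads):

```ruby
require "active_support/core_ext/object/blank"

"".present?      # => false
"   ".present?   # => false (whitespace-only strings count as blank)
[].present?      # => false
nil.present?     # => false
false.present?   # => false
0.present?       # => true, unlike JavaScript's falsy zero
"hello".present? # => true
```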
STEPHANIE: Right. In JavaScript, I see a lot of double bangs in lieu of those predicate methods. But I suppose by nature of having to write those predicate methods in Ruby, we're, like, really wanting something else, I think. And maybe...I guess it is just a question of explicitness like you're saying, and which I prefer. Is it that I need to be explicit to convey the idea that I want, or is it nice that the language has just been encoded that way for me?
JOËL: Or maybe when you write conditionals, if you find yourself doing a lot of presence checks, do you find that you typically are trying to branch on if not null, not false, not empty more frequently than just if not null, not false? Because that's kind of the difference between Ruby's model and JavaScript's model.
STEPHANIE: Hmm, the way you posed that question is interesting because it makes me think that sometimes it's quite defensive because we have to check for all these possible return values. We are unsure of what we are getting back. And so this is kind of, like, a catch-all for things that we aren't really sure about.
JOËL: Yeah, I mean, that's the fun of dynamic programming languages. You never know exactly what you're going to get as long as things respond to certain methods. You really lean into the duck typing. And I think that's Mike's argument in his article that "Booleans Don't Exist" in that as long as something is responding to methods that you care about, it doesn't matter if you're dealing with a true Boolean or some kind of other value.
STEPHANIE: Right. So I suppose the ideas of truthiness then are a little bit more dependent on how people are using the language though it seems like a chicken-and-egg situation to me. [laughs]
JOËL: It is really interesting to me in terms of maybe thinking about use cases in my own code if I'm having to...if I'm writing code that leans on truthiness where I can just say, you know, if user. But then knowing that, oh, that doesn't account for, like, an empty value. Do I then also need to add an extra check for emptiness? And maybe if I'm in a Rails project, I would reach for that present matcher where I wouldn't have to do that in JavaScript because I can just say, if user, and that already automatically checks for presence.
So I'm kind of wondering now in my mind, like, which default would fit my use cases more? Or, if I go back to an older version of myself, I will say I don't want any of these defaults. They're all too ambiguous. I'm going to put explicitly if user dot nil question mark, if that's the thing that I'm checking for, or if user dot empty question mark because I want my reader to know what condition I'm checking.
STEPHANIE: Yeah, that is interesting, this idea of, like, which mode do you find yourself needing to use more and if that is accommodated for you because that's just the more common, like, use case or problem.
I think that's something that I will be thinking about the next time I write a conditional [laughs] because, like I was saying earlier, I think I end up just leaning on what someone else has decided for me in terms of truthiness and not so much how I would like it to work for me.
JOËL: And sometimes we don't want to fight the language too much, you know. If I'm writing Elm, everything is hard Booleans. And I know I'm never going to get a nil in a place where I'd expect true or false because the compiler would prevent that from happening. I know that I'm not going to get an empty value, potentially.
There's ways you can do things with a type system where you can explicitly say no empty values are even allowed at this point. And if you do allow them, then the type system will say, "Hey, you forgot to check for the empty case. Bad program. I'm rejecting that." And then you have to write that explicit branch for, oh, if empty versus if present. So I really appreciate that style of programming.
But then, when you're in a language like Ruby where you're not dealing with explicit types on purpose, how do you shift that mindset so that you don't need to know the type of the value that you're dealing with? You only want to say, hey, in this context, here's the minimal interface that I want it to conform to. And maybe it's just the truthy or falsiness interface, and everything beyond that is not relevant.
STEPHANIE: I think it's kind of wild to me that this idea of a binary that theoretically seems very clear turns out is actually quite confusing, ambiguous, philosophical, even. [chuckles]
JOËL: Yeah. It's definitely...you can get into some deep, philosophical questions there, language design as well.
One aspect, though, that I'm really curious to hear your thoughts on is bringing new people in who are learning a language. It's really common for people who are learning a language for the first time, learning to code for the first time, to write code that you and I would immediately know, like, that's not going to work. You can't add a Boolean and a number. You're just learning to code. You've never done that before. You don't know. And then how the language reacts to that kind of thing can help guide that experience.
So, do you think that truthiness maybe makes things more confusing for newcomers? Or, maybe on the other side, it helps to smooth that learning curve because you don't have to be like, oh, wait, I have a user here. I can't put that in a condition because that's not a strict true or false. I'm going to coerce it, or I've got to find a predicate method or something. You can just be like, no, put it in. The interpreter will figure it out for you.
STEPHANIE: Wow. That's a great question. I'm trying to put myself in the beginner's mindset a little bit and think about what it's like to just try something and the magic of it working. Because, like you said, the interpreter does it for you, or whatever, and something happens, and you're like, wow, like, that was really cool. And I didn't have to know all of the ins and outs of the types of things I was working with.
That can be really helpful in just getting them, like, started and getting them just, like, on the ground writing code. And having that feeling of satisfaction that, like, that they didn't have to, you know, have to learn all these things that can be really scary to make their program work.
But I do think it also kind of bites them later once they really realize [laughs] what is going on and the minute that they get that, like, unexpected behavior, right? Like, that becomes a time when you do have to figure out what might be going on under the hood. So two sides of the same coin.
JOËL: What you're saying there about, like, maybe smoothing that initial curve but then it biting them later got me thinking. You know how we have the concept of technical debt where we write code in a way that's maybe not quite as clean today so we can move faster but that then later on we have to pay it back? And I almost wonder if what we have here is almost like a pedagogical debt where it's going to cost us a month from now, but today it helps us move faster and actually kind of get that momentum going.
STEPHANIE: Pedagogical debt. I like that. I think you've coined a new term. Because I really relate to that where you learn just enough to do the thing now. But, you know, it's probably not, like, the right way or, like, the most informed—I think most informed is probably how I would best describe it—way of doing it. And later, you, yeah, just have to invest a little more into it. And I think that's okay.
I think sometimes I do tend to, like, beat myself up over something down the line when I have to deal with some piece of less-than-ideal code that I'd written earlier. Like, I think that, oh, I could have avoided this if only I knew. But the whole point is that I didn't know. [laughs] And, like, that's okay, like, maybe I didn't need to know at the time.
JOËL: Yeah, and code that's never shipped is of zero value. So having something that you could ship is better than having something perfect that you didn't ship.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie just got back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. Joël started a new project at work.
Review season is upon us. Stephanie and Joël think about growth and goals and talk about reviews: how to do them, how to write them for yourself, and how to write them for others.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And, together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I just came back from a smaller regional Ruby Conference, Blue Ridge Ruby, in Asheville, North Carolina. And I had a really great time.
JOËL: Oooh, I'll bet this is a great time of year to be in Asheville. It's The Blue Ridge Mountains, right?
STEPHANIE: Yeah, exactly. It was perfect weather. It was in the 70s. And yeah, it was just so beautiful there, being surrounded by mountains. And I got to meet a lot of new and old Ruby friends. That was really fun, seeing some just conference folks that I don't normally get to see otherwise. And, yeah, this was my second regional conference, and I think I am really enjoying them. I'm considering prioritizing going to more regional conferences over the ones in some of the bigger cities that Ruby Central puts on moving forward. Just because I really like visiting smaller cities in the U.S., places that I otherwise wouldn't have as strong of a reason to go to.
JOËL: And you weren't just attending this conference; you were speaking.
STEPHANIE: I was, yeah. I gave a talk that I had given before about pair programming and nonviolent communication. And this was my first time giving a talk a second time, which was interesting. Is that something that you've done before?
JOËL: I have not, no. I've created, like, a new bespoke talk for every conference that I've been at, and that's a lot of work. So I love the idea of giving a talk you've given before somewhere else. It seems like, you know, anybody can watch the first one on YouTube, generally. But it's not the same as being in the room and getting a chance for someone to see you live and to give a talk, especially at something like a regional conference. It sounds like a great opportunity. What was your experience giving a talk for the second time?
STEPHANIE: Well, I was very excited not to do any more work [chuckles] and thinking that I could just show up [chuckles] and be totally prepared because I'd already done this thing before. And that was not necessarily the case. I still kind of came back to my talk after a few months of not looking at it for a while and had some fresh eyes, rewrote some of the things. I was able to apply a few things that I had learned since giving it the first time around, which was good, just having more perspective and insight into the things that I was talking about. Otherwise, the content didn't really change, just polished it further.
I think in the editing process, you could edit forever, really. So I imagine if I revisit it again, I'll find other things that I want to change. But this time around, I also memorized my slides because, last time, I was a little more dependent on my speaker notes. And part of what I wanted to do this time around, because I had a little more time in preparing, was trying to go from memory. And that went pretty well, I think.
JOËL: How did you feel about the delivery of it? Because now you had a chance to have a practice run in front of a real audience. And, as much as you practice at home in front of the mirror, it's not the same as actually giving a talk in front of an audience.
STEPHANIE: Yeah. I was surprised by how the audience is also different, and the things that they'll react to are slightly different. There were some jokes that landed similarly and others that didn't land quite as well with this crowd, but maybe other parts got more of a reaction. So that was surprising. And I think I had to kind of adjust those expectations on the fly as I delivered whatever, you know, line I was kind of expecting some kind of reaction to.
And I also, other than memorizing my slides, you know, I think had the mental capacity to focus a little more on the delivery component that you're talking about because I wasn't, you know, up until the last minute still working on the content itself, and just being able to direct my mental energy to, I guess, the next level of performance when giving a presentation.
And, yeah, I would definitely give this talk again. I really liked that it was something that feels pretty evergreen, something I care a lot about. I don't think it will be a topic that I get kind of bored of anytime soon. So those were all some of the things I was thinking about in giving a talk a second time.
JOËL: When you write your speaker notes, do you give yourself directions for expected audience reactions, so something like a pause for laughter after a joke or something like that?
STEPHANIE: No. I think I am too nervous about presuming [laughs] how the audience will react to put something in and then have to be, like, super surprised and figure out what to do if they don't react the way that I think they will. So it ends up being that I just kind of go forth. And if I do get a reaction out of them, that's great. But not expecting it works for me because then, at least, I can control how I am presenting and how I'm showing [chuckles] up a little bit more.
JOËL: So you're really working with the energy in the room then.
STEPHANIE: Yeah, I think so.
JOËL: Was this talk recorded? So if people in the audience want to go and watch this talk.
STEPHANIE: Yeah. The first version that I gave of it is online if you search for the title "Empathetic Pair Programming with Nonviolent Communication." And this version was recorded as well. So, eventually, it'll also be up. And, I don't know, maybe I'll watch it back and [chuckles] see the difference in presentation. I would be very curious. I've never watched any one of my conference talks fully through the recording from start to end before. But I know that that's something that I could continue to improve on. So maybe one day I'll find the confidence.
My other highlight that I wanted to share about this regional conference is how well-organized it was. So it was mainly organized by Jeremy Smith, and I thought he did such an awesome job. He organized a bunch of activities in Asheville for the Saturday after the conference if folks wanted to stay a little longer and just check out the city. There was a group that went hiking, a group that did a brewery tour. And the activity I chose to do was to go tubing.
JOËL: Fun.
STEPHANIE: Yeah, it was my first time. So you're basically in an inner tube floating down a very calm river, just hanging out. We were in a group, and you could clip yourself to the rest of the group so you're all, you know, kind of floating down together. But some people would unclip themselves and just go free for a little while. And, yeah, when you get too hot, you can dip into the water to cool off. And I just had such a great time. [laughs] It was almost like being on a Disney ride but out in nature, which I just, like, is totally my jam.
JOËL: I tried tubing once in Texas. And the inner tubes are black, and in the Texas sun, they get really hot. So every, I don't know, 20 minutes or so, I had to get off the inner tube. It was too hot to sit on. And I had to flip it just because it absorbed so much heat.
STEPHANIE: Wow. Yeah, that does sound like it would get very hot. I think the funny thing that I wasn't expecting was how hard it would be to get back into the inner tube after you had gotten in the water, at least for me, because the inner tubes were quite large. And so I couldn't get enough leverage to pull myself [laughs] back up onto it, and ended up several times just, like, flopping belly first into the inner tube and then having to, like, flop over so that I could be on my back and be sitting in it again.
And other times that I had to wait a little while until the river got shallower so I could actually stand and just sit in it. So there were times that it was kind of a struggle, but 90% of it was very chill and fun.
So, Joël, what's new in your world?
JOËL: I started a new project at work. I'm working with a data warehouse, pulling data in from a variety of sources, getting it all into one kind of unified schema, doing some transformations on it. And then also setting up some sort of outgoing plugins to allow different sources to access that unified data. So this is not in a Rails app, but we do have a Rails app connecting to this data warehouse.
Data engineering is, at least in this style, is newer to me. So I think it's a really interesting world to get into. I don't know if, technically, this counts as big data. I don't think the term is cool anymore. But five or so years ago, everybody was all about the big data, and that was the hip term to toss around.
STEPHANIE: So, is this something pretty new to you? You haven't had too much experience doing this kind of data engineering work before?
JOËL: Yeah, at least not with, like, a data warehouse. I think a lot of the work around data transformations, or creating unified schemas, thinking in terms of data in different stages that are at different levels of correctness...I've done a fair amount of ETL, Extract, Transform, Load, or sometimes people shift it around and say, ELT, Extract, Load, Transform. I've done a fair amount of those because I've done a lot of integrations with third-party systems.
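For readers newer to the acronym, here is a toy sketch of the extract-transform-load shape; all names and data are invented, and a real pipeline would read from an API or database and write to a warehouse:

```ruby
require "json"
require "date"

def extract
  # Stand-in for pulling raw rows from a third-party source.
  JSON.parse('[{"email":"ADA@EXAMPLE.COM","signed_up":"2023-06-22"}]')
end

def transform(rows)
  # Normalize into the unified schema.
  rows.map do |row|
    { email: row["email"].downcase, signed_up_on: Date.parse(row["signed_up"]) }
  end
end

def load_into_warehouse(rows)
  rows.each { |row| puts "INSERT #{row.inspect}" } # stand-in for a real insert
end

load_into_warehouse(transform(extract))
```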
STEPHANIE: So I've always thought of data engineering as, in some ways, a separate role or a track. And I'm really curious about you having, you know, mostly been doing software development if that gives you an interesting lens to look at these problems.
JOËL: So, to get the full answer, you should probably ask me again in six months.
STEPHANIE: That's fair.
JOËL: My initial thoughts are that there's a shocking amount of overlap between some of these ideas, again, because I've done ETL-style projects a lot. You know, if you've got any kind of Rails app and you're integrating with a third-party API, you're often doing ETL at a very small level. To a certain extent, even if you're doing, let's say, some front-end code, and you're interacting with a back end, depending on how you want to deal with that transformation of getting data from your API, you might be doing something kind of like an ETL.
Designing types in something like a TypeScript or an Elm and thinking in terms of the data that you have, the transforms that you're doing has a lot of similarities to what you would do in a data warehouse. I think a lot of the general ideas apply.
I know I talked at the beginning of this year about articles that were impactful for me. And one of those articles that was really impactful was Hillel Wayne's "Constructive Versus Predicative Data," which is all about structuring data and when you can enforce constraints via the data structure versus when you need to enforce it via code.
Similarly, a lot of the ideas from the article "Parse, Don't Validate" by Alexis King. The articles focused on designing types. But it also, I think, applies to when you're thinking of schemas because schemas and types are, in a sense, isomorphic to each other.
STEPHANIE: I like what you said there about as a software developer; you've probably done this at a much smaller scale. And, yeah, like you were saying, things that you had already learned about before or thought about before you're able to apply to this different set of problems or, like, different approach to programming. Is there anything that has been challenging for you?
JOËL: Yes, and it's a weird one. Because we're working with enterprise systems, navigating the websites for these enterprise systems and the documentation for them is not a pleasant experience, trying to get a feel for how the system is made to work. It's just so different when you're used to tools and documentation written by the open-source community.
Even with third-party tutorials and things, it's never, like, oh, here's a great article where you can scan and find the thing that you want. It's, hey, I'm a consultant guru on this thing. Sign up for my webinar, and you can have a 15-hour course on how to use this tool. And that's not what I want to do. I just want the five-paragraph blog post on how to do data imports, or how to set up a staging area for data, or something like that.
STEPHANIE: Right. You're basically being asked to develop skills in using the enterprise software rather than more general skills for the problem or task; it sounds like. Because apparently, there are people making a business out of teaching other people how to use or navigate the software.
JOËL: And I think that's fine. I love that people are making businesses of teaching these. But just the way things are structured, information is not generally as available for this large enterprise software as it is in the open-source world, and even when it is, it's just different patterns of access. So even you go to a particular technology's website, and it's all marketing copy. It's all sales funnel and not a lot of actually telling you really what the technology does. It's all, like, really vague, you know, business speak on, you know, empowering your team, and gathering insights, and all this stuff.
So you really do a lot of drilling down. And what you need to find is the developer site. That's where you get the actual tech documentation. Depending on the tech, it's more or less good. But yeah, the official website of the technologies is just...it's not aimed at me as a developer. It's speaking to a different audience.
STEPHANIE: That is interesting. I didn't realize that once you are, you know, working on a data warehouse, it is because you are consuming so many different external sources of data, and having to figure out how to work with each one is part of the process to get what you need.
JOËL: So there's the external services but the data warehouse itself that we're using is an enterprise product.
STEPHANIE: Got it.
JOËL: So, just figuring out how this data warehouse works, it feels like it's a different culture, a different developer culture.
STEPHANIE: That's cool. I'll definitely ask you again in a few months, and I look forward to hearing what you report back.
So the other topic that I wanted to get into today is reviews, specifically self-reviews. To be honest, our review cycle is happening right now. And I have very much procrastinated [chuckles] on writing them until, you know, one or two days before. So I came into our conversation today, like, in that mind space of thinking about my growth, and my goals, and that kind of stuff.
And it got me thinking that I don't hear a lot of people talk about reviews, and how to do them, how to write them for yourself, how to write them for others, how people approach them. Though I would guess that the procrastination part is pretty common, [chuckles] just based on what I'm hearing from other folks on our team too, and what they're up to for the next couple of days before they're due. Joël, have you written your review yet?
JOËL: So it's interesting because this review cycle has a few different components. You write a self-review. You write a review of your manager, and then you write a review of several of your peers who have nominated you to write a review. So I've done my own review. I've done my manager's review. I've not completed all of my peer reviews yet.
STEPHANIE: That's pretty good. That's better than me. I've only done my own. [laughs] So, yeah, the deadline is coming up. And I'll probably get back to it right after this.
I'm curious about your process, though, for writing a self-review. Do you come into it having thought about how you've been doing so far in the last six months or so? Or, when you sit down to write it, are you thinking about these things for the first time in a while?
JOËL: Combination. So I think I do come in without necessarily having, like, planned for the review cycle. That being said, throughout the year, I try to build a fair amount of, like, personal self-reflection, professional self-reflection at various points throughout the year. So I'm not coming into the review cycle being like, oh, I have not thought about professional growth at all. What have I done this year?
I think one thing I haven't done quite as well is when I'm doing these moments of self-reflection on my own throughout the year, writing down notes that I could then use to apply when the review cycle comes up. So I am having to rely on memory on, like, oh yeah, last month, when I kind of sat down and thought about areas that I want to improve in or areas that, like, what are my goals that I want to have? And I just commit that to memory. So, yeah, I think live in the moment; now that you've asked me this question, you've made me think that maybe I should be taking more regular notes about this.
STEPHANIE: One thing I've been really liking about the software that we're using for reviews and other professional growth things is...it's called 15Five. And you can give your co-workers shout-outs using this tool. And as I was writing my review, I could actually open all of the kudos and shout-outs that I received from my peers and just remember some of the things that I worked on or a lot of the things that other people noticed.
I tend to sometimes have a hard time remembering some of the smaller things that I've done that made an impact, but other people are usually better about pointing that out than I am. [chuckles] And that has been really helpful because it's, yeah, nice to see like, oh, like, you know, so and so really appreciated when I paired with them on, you know, debugging this thing. And maybe I can pull that into something that I'm writing about the kind of mentorship I've been doing in the last few months.
JOËL: How do you feel about the aspect where you have to then give feedback on colleagues?
STEPHANIE: I really value and enjoy this aspect because most of the time, I am just gassing my colleagues up [chuckles] and writing, you know, really encouraging things about all of the awesome work that they're doing. So, for me, it actually feels really good.
And I was thinking a little bit about my approach to reviewing my peers and review culture in general. I have worked at companies where we have had a very, like, healthy and positive review culture. So it happens often enough that it's become normalized. It's not a really scary thing. And I also like to think about feedback in two types, where you have feedback that you want to give someone so that they can change behavior in a way that helps you work with them better, and then feedback you have for someone for their growth.
And once I separated those two things, I realized that really, the former, if you're, you know, giving someone constructive feedback because you maybe would like them to be doing something different. That's not necessarily what you want to be writing in their annual review. Those things are usually better communicated in a more timely manner, like, right when you are noticing what you might want to be changed.
And so then when you are doing reviews, like, you've hopefully already kind of gotten all of that stuff out of the way. And you can just focus on areas of growth for them, which is the fun part, I think, in reviewing peers because, yeah, you can give some suggestions to further support them in, like, where they want to go.
JOËL: I like that distinction between just general growth suggestions and then interaction suggestions. And just to give an example, it sounds like interaction suggestions would be like, "Oh, when we pair, I would like it if you used this style of communication from, let's say, nonviolent communication. Here's a talk; go watch it."
STEPHANIE: [laughs] Yeah, I did a talk on this; go watch it. There's a framework for reviews that I've used before that I actually don't quite like. It's the Stop, Start, Continue framework where you answer questions about, okay, what should this person stop doing? What should they continue doing? And what should they start doing? And the things that you would put in stop, I think, are probably what you would want to have communicated in a more timely manner, like, not have it happening, you know, really divorced from whatever behavior you might be asking them to change.
And, in general, I think focusing on what you would like others to be doing instead is usually a better approach to handling that kind of feedback just because it avoids making someone feel bad about having done something wrong and, instead, kind of redirecting them into what you would like them to be doing.
JOËL: So you're saying if you have something in the stop category, let's say stop interrupting me all the time when we're in meetings, you're saying this is something you prefer not to bring up at all or something that you prefer to bring up one on one and not in the context of review?
STEPHANIE: Something to bring up one on one. Ideally, pretty soon after that might have happened, when it's a little more top of mind. And then you don't end up in that position of maybe misremembering or having the other person misremember and having to figure out, like, who was in the right or in the wrong in understanding how that interaction went. Especially if you're able to do it a little sooner after it happened, you can point out, like, hey, this happened. And instead of framing it as please stop interrupting me, you could say, "Could you please make some space for some folks who've been a little more quiet in the meetings to make sure that they've been able to share?"
Still, I think once you've made more space to give that kind of constructive feedback when you are writing reviews, you can then, like, focus on the growth aspect and not the redirection of how others are doing their work.
JOËL: That makes sense. So, what would be an example of the kind of feedback that you like to give to other people in the context of a review?
STEPHANIE: Yeah, I think especially if I know what someone is wanting to focus on, right? If I'm working with someone, hopefully, we've kind of gotten to talk about what they like to work on, what they don't like to work on, what they are hoping to spend more time doing, or yeah, just their hopes and dreams for their professional [chuckles] development, being able to point out some things that they maybe haven't thought about trying is something I really like to do.
I was thinking about a time when I gave a co-worker some feedback as a mentee of theirs where they had been really awesome at providing information to me about things that I was unfamiliar with. But one thing that I was really hoping for was more tools to figure things out on my own. So instead of sending me a link to some documentation, maybe helping me figure out how to search for the documentation that I'm looking for. And that was something that I could share with them because I knew that they wanted to work on their mentorship skills and an opportunity, I think, for them to take it to a level where it's closer to coaching and not just providing information.
JOËL: That makes a lot of sense. Maybe flipping it around, is there a point in time where you've received a review feedback that has been really valuable to you or really helped you hit the next level in your career?
STEPHANIE: I really appreciate feedback that encourages me when I'm maybe a little bit too timid to go seek the things out myself. So there were times when I received some feedback about how great of a leader I could be before I thought I was ready to be a leader. And they pointed out the qualities of leadership that I had demonstrated that led them to believe that I would be ready for a role like that. And that was really helpful because I don't think that was even necessarily a short-term goal of mine. And it took someone else saying, "I think you're ready," that made me feel a lot more confident about opening that door.
I guess this is all to say that I really love review season because of, you know, all of the support I get from my co-workers. And, yeah, just remembering that it's not just a journey I have to take all by myself, that the point of working with other people is for all of us to help each other grow.
JOËL: I think something that you mentioned earlier really connected with me, the idea of trying to give feedback in the...even, like, feedback that's about changing or improving, phrasing it in a more positive way, or at least framing it in a more positive way. So here's an opportunity for growth rather than here's the thing you're doing wrong. Because that reminds me of two pieces of review that I got when I was a fairly junior developer that have stuck with me ever since. And one of them was really a catalyst for growth, and the other one kind of haunted me.
So this first one I got, someone in a review just mentioned that they thought that I was just generally a slow developer, just not fast at writing code. Not a whole lot of context; just that's who I was. And, in a sense, it was almost like I'd been given this identity, like, oh, I am now Joël, the slow developer. And I didn't want that identity. So I'm kind of like, I want to refuse to accept it. But at the same time, there's always that self-doubt in the back. And now, anytime I'm on a project with someone else, I'm comparing, oh, am I shipping stories quite as fast as someone else? And if not, why? Is it because I'm a slow developer?
Or if I'm having a rough day and I'm not getting the ticket done that I was hoping to get done by the end of the day, you know, you just get that voice in the back of your head that's like, oh, it's because you're a slow developer. Someone called that out last year, and they were right. So, in a sense, it kind of haunted me.
On the flip side, I once got some feedback talking about an opportunity for growth. If I focused on working in more iterative, incremental chunks, it would help me have a smoother workflow and probably help me work faster as well. And that was really kind of an exciting opportunity. It's also stuck with me for years, but not in that sort of haunting way or this, like, bringing in of self-doubt, but more in terms of opportunity.
Because now I'm always like, oh, can I break this down into even smaller chunks? Would that help me move faster? Would that help me be less blocked on other people? Would that be easier for our QA team? Would this be easier for review for my colleagues? Just a lot of different opportunities for benefits with working in smaller iterative chunks.
And, for years, I've just been kind of honing that skill. And now, looking back over, you know, a decade of doing this, I think it's one of the best skills that I have. And so, in a sense, I feel like both of these people that left me that review, in a sense, they're trying to get me to maybe have a slightly higher velocity. But they're different approaches, radically different in terms of how it impacted me as a person.
STEPHANIE: Yeah, I am really glad you brought that up. Because I definitely have also received, quote, unquote, "constructive feedback" that maybe wasn't phrased in the right way and that also haunted me. And it doesn't feel good. I think that that sucks. That person wasn't really able to frame it in a way that pushed you to progress in the positive way that you mentioned with learning to work incrementally.
And in fact, I almost think that the difference in those two phrasings is encapsulated by a framework for giving feedback that's actionable, specific, and kind. So suggesting that you work incrementally is all of those things, especially if they know that you do want to increase your velocity. But you're being supported in doing it in a way that is positive and growth-oriented as opposed to, like, out of fear that other people think that you are a slow developer. And, you know, that's certainly a way that people are motivated. But I would say that that's not the way that we want to be motivated. [laughs]
JOËL: I'm glad we're having this conversation because I think it just reinforces to me just the value of good communication skills for developers. And, you know, you can see that when developers have to write documentation, or even things like comments or commit messages. You see it when developers write blog posts. So it's really valuable to work on your communication skills in a lot of these technical areas.
But reviews are a very particular area where it's easy to maybe have not the impact that you wanted because you communicated a core idea that's probably right, but just the way it was communicated was not going to have the impact that you're hoping for. And so getting good at communicating specifically in the area of reviews, which I assume most of us in the software industry are doing on a semi-regular basis, is probably a good tool to have in your professional tool belt.
STEPHANIE: Absolutely.
JOËL: We recently hit a big milestone at thoughtbot, where thoughtbot turned 20 years old in early June. And so, throughout June, we've been doing a lot of fun internal things and some external things to celebrate turning 20. And one of those is we're hosting a live AMA with a variety of thoughtbot devs. That's going to be on Friday, June 23rd, so a couple of days after this podcast goes live.
So, to our listeners, if you're listening to this, in the first few days after it goes live, you get a chance to join in on the live AMA and ask your questions of our team as we celebrate 20 years. There's a blog post with all the details, and we'll link to that in the show notes.
STEPHANIE: One other thing that I think we're doing that's really cool for our 20th anniversary is we published a short ebook with a curated collection of 20 hits from our blog, the thoughtbot blog, over the course of its history, some of the more popular and impactful blog posts that we've ever published. So I highly recommend checking that out. You know, the thoughtbot blog is such an awesome resource. And I discovered a few things that I hadn't read before on the blog from this ebook. So that will also be linked in the show notes.
JOËL: I mentioned earlier how one of my opportunities for growth through review was getting better at working iteratively. And, a couple of years ago, I took a lot of the lessons that I'd learned over the years of getting better at working iteratively, and I put them in a blog post, and that blog post made it into that 20th Anniversary ebook. So we can probably link the blog post itself in the show notes. But also, if you're picking up that ebook, you'll get a chance to see that article on my lessons learned on how to work iteratively.
STEPHANIE: Awesome. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has a bike shorts update; Stephanie has a garden one.
Often, power is centralized within the dev team. This is usually because they are the only ones able to execute. Sometimes this ends up interfering with team processes and workload. Joël is a fan of empowering other teams to do things themselves.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, in a recent episode, I had mentioned that I was going to go on vacation on a bike trip and that I had purchased a pair of bike shorts to try out on that trip to see if that would help. And, wow, that was a great purchase. It literally saved my butt.
STEPHANIE: That's awesome. I'm really glad that they worked out for you.
JOËL: Still sore. This was a five-day biking trip. And I think day two was the worst, but after that, things got better. But the shorts definitely helped.
STEPHANIE: I think my favorite part about us talking about biking and bike shorts is that we're finally living up to the name of our podcast. [laughs] Turns out that bikeshedding is actually even more bikeshedding when it's about actual bike stuff. And a listener named James even wrote in with some pro tips about, you know, how to care for your bike shorts and, you know, have a comfortable biking experience. He also gave some good tips for me on some longer rides to check out near me in Chicago.
JOËL: So it sounds like there's some crossover between the software developer community and bike enthusiasts community who also tune into this podcast.
STEPHANIE: I do think that we have gotten tweets before from, I think, like, the motorcycle Twitter tagging us @_bikeshed, perhaps maybe trying to tag a different account but, yeah, ended up in our Twitter inbox instead.
JOËL: Now we just need some sweet, sweet bicycle sponsorships. So, Stephanie, what's been new in your world?
STEPHANIE: I have a garden update. Last year, we purchased a small fig tree from the internet. It turns out that you can get little fruit trees delivered to your door. And this was, I think, around the fall, so it was getting a little cooler. And here in Chicago, we have to bring some of our plants inside to overwinter. And so we brought the fig indoors, and it's maybe, like, two or three feet tall. And, you know, over the few months, we were just, like, caring for it. And I was really excited to see that it had started fruiting several months ago. And I got to show it to all of my co-workers in a call.
I, like, picked up this kind of large pot with our little fig tree, and I, like, held it really close to my camera and tried to point out the fruit to the other people on the call, which I realized was perhaps not a very effective way to show off my plants. Like, you could just take a picture and send it in Slack. And I was like, yes, I could have done that.
But yesterday (our fig tree has been outside for a little while now since it's warmer), I noticed that the figs were ripe, and I got to harvest them and eat them, and they were delicious. And I got to update the team on my little fig adventure. And this time I did take pictures of the fruits and sent them in Slack instead of trying to bring this tree in from the outdoors.
JOËL: That's exciting. Because I'm a fan of the design pattern, I have to ask, is this a strangler fig?
STEPHANIE: It's not a strangler fig, though I have seen one in the wild on a trip to Florida. I saw a really big strangler fig, you know, completely, like, enveloping another tree, and that was really cool. If you ever get to see one in person, I think it's just, I don't know, just really amazing how nature works.
JOËL: I did not realize they were wild in Florida, something to keep an eye out for next time I'm there.
STEPHANIE: Definitely. So, in a recent retro on my client team, we were discussing the one-off requests our team has been getting from the folks over on the sales and client support side. Oftentimes, this involved running a script in our production console to fix some issue that a customer was experiencing. And we were talking about what we could do to make this process a little more automated, make it a little less time-consuming on our end. Even though it would just take a few minutes to run this script, we were seeing that we were getting this request repeatedly.
I'm curious if you've kind of been in this situation before, where dev work is required for similar one-off requests or to support other folks at the company, and it kind of eats into time when we're trying to deliver on other feature work.
JOËL: Yeah. I think it's a pretty common pattern that I've seen. And I think sometimes it can actually start from a healthy place. If you're taking very much of an MVP philosophy and you're building a small version of your product to start with, you're not going to have a whole suite of admin tools available. You might not even have any admin people. It might just be a founder and a technical co-founder. And so, for the first few hundred customers you have, maybe the way you make changes is by loading up the Rails production console and making a change. And that's good enough, but that doesn't scale.
STEPHANIE: Yeah. You bring up a good point that I think one thing that we get to experience as consultants is seeing many different companies at different stages in their business. And I think I've seen this issue in many different iterations based on the size of the company, right? So you were saying for an MVP product, there's no admin support at all. Maybe you have a project that is now thinking of how to introduce a little bit of admin tooling and might reach for something lightweight like a gem. I've also seen custom admin dashboards, and that being its own namespace and having all of that feature set hand-rolled, and then maybe some other company might opt for a Software as a Service solution.
JOËL: Yeah, there's a lot of different implementations that happen at various stages of companies. I think one thing that does seem to stay pretty constant, though, is oftentimes; other teams don't have the tools they need to make the changes they need to. So, if you have a customer service person and they're receiving a complaint or they're having to make a change, they're not always empowered to make the changes they need to. They need to talk to the dev team, who then need to make changes.
And the dev team don't really want to spend their day doing admin work. They are incentivized to ship features. And so both sides are unhappy. And it kind of comes from a sort of fundamental, I think, over-empowering of the dev team and kind of a disempowering of some of the other departments within the company, if that makes sense.
STEPHANIE: Yeah, that's interesting because I don't think it necessarily is intentional, the way that that happens, right? It's not like you start building a product, and you are saying, "Okay, we only want to give devs the power to change all of this stuff at the production level." It's just something inherent, I suppose, to the work that we do. And there's a lot of active effort that needs to be taken to spread some of that empowerment around.
JOËL: Yeah, generally, it is not some sort of, like, nefarious corporate politics that's happening where the CTO is, like, hoarding all the power, or it's a turf war or anything like that. Like you said, it's kind of an emergent property. As developers, we're often used to being sort of ultra-empowered to do what needs to be done. In general, development teams are highly respected within companies, and so people listen to them. But also, in order to do their job, they need to have access to a lot of things.
So you often have production access to all the things and the admin credentials. And if there's something that doesn't work, you write code, and you can change the sort of fundamental underlying platform that you're working with. And so you're generally empowered to make the changes you need to make your life better or if you're blocked on something. And that's not necessarily true for other departments who are working in the system that we're building.
STEPHANIE: Yeah, it's kind of interesting the duality that you have identified where we do have all of this power or capability to change the system. But you had mentioned earlier that sometimes it actually gets in the way of our work, that it can be a drag to do if we have other competing priorities, and that those mundane tasks end up being something that we also don't enjoy doing. And so, like you're saying, like, no one is quite happy. I wonder at what point you, as a developer, having repeatedly been asked to do these kinds of tasks for other departments, would start advocating for building tooling.
JOËL: I don't think there's a kind of a clear dividing line, like, oh, after three requests, you must build a dashboard. It's probably more about just general communication with the other teams. I like to think of it from kind of two perspectives. From the perspective of the developers, how can we keep them efficiently working on what they need to prioritize, which is typically new feature development?
And then, from the perspective of other teams, how can they be empowered to do the work that they need to do without getting blocked? Because just like the dev team doesn't like to get blocked on all sorts of things, other teams don't like it either. And so, how can we make sure that other team members within the company are empowered to do their work as efficiently as possible?
STEPHANIE: Yeah, that's interesting. I think as an IC, I've been in different positions, depending on the context of my work. There have been times when I've been happy to help with that kind of request, right? Because I know that I'm unblocking someone else. I'm facilitating their work. And they usually appreciate it too. And so maybe if that's still the case and that there's not necessarily any pain that comes with that being just the process that it is from both sides, like, that's perfectly fine.
But then it's totally fair for, you know, either party, once they do feel like it's blocking other work, to start looking into maybe how much time you're spending on these one-off requests, especially if it's being spread around to other team members. You know how much effort you're making, but, like, a manager might actually be more aware of how it's affecting multiple folks on the team and wanting to figure out, like, how that sits in with the other priorities that the team is working on.
JOËL: Yeah, I'm glad you mentioned talking to other people because I might be quite happy to say, "Oh, I'm going to go and, you know, go into the database and make a small change." But just because it's easy for me to do and I can take, you know, 10 minutes out of my day to do it, doesn't mean that that experience is good for, let's say, a customer service person who had to get blocked or had to ask someone else to help to move this ticket forward. Whereas if it was something they could do themselves, that would have been a much better experience.
So, even though it's a very fairly, you know, cheap request and because I don't get them a lot, I'm happy to do them, it's maybe not a good experience for my customer service colleague. So, like you said, it is important to get people's experiences on all sides.
STEPHANIE: One thing that I have seen a lot is for these things to start as configuration in a YAML file that requires developers to change and then commit to the codebase whenever, you know, maybe it's, like, a list of products or a list of prices, something that is, you know, really the business domain. And yet we are hard coding it and, like, codifying it into our source control.
JOËL: Oof, yes. I have been in those projects, yeah. Now, every time you want to make a change, a business person has to reach out to the dev team, and then you have to make a code change, and then you have to deploy it. And that just becomes a whole thing. And then they come back to you the next day and say, "Oh, actually, we talked about it, and we want it a little bit differently." And you have to go through that process again.
STEPHANIE: I think we reach for that just because we think it's faster maybe to set up, you know, some kind of, like, lightweight configuration file, rather than if you're working in Rails, you know, setting up a whole MVC for whatever thing you're trying to configure. And I'm curious if you think that's true or not.
JOËL: I think it depends. Sometimes it can be because this data feels very static, kind of hard-coded. And so it's not a thing you would necessarily want to have in a database; it's more like a constant that you would have in your source control, except that then you find out that your constant is not quite as constant as you thought it was. And I think maybe that's okay.
Writing software is all about kind of discovering the problem in the domain as it evolves and trying to not over-engineer things ahead of time. So, if we have a small set of values, maybe they're U.S. states that you deliver to or a small list of products or something that you feel is relatively hard coded, maybe it starts as a constant array hard coded into Ruby, maybe it is a YAML file that you load. Then, over time, there comes a point where you decide this should be a database table, and if it needs to be sort of pre-seeded, then there's a mechanism for that with database seeds in Rails.
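As a rough sketch of that progression, using a hypothetical list of delivery states (the constant name, file path, and model are illustrative only, and the later steps are shown as comments because they are successive versions of the same list, not one file):

    # Step 1: a hard-coded Ruby constant, fine while the list really is static.
    DELIVERY_STATES = %w[IL WI MI].freeze

    # Step 2: the same data pulled out into config/delivery_states.yml and loaded once:
    #   DELIVERY_STATES = YAML.load_file(Rails.root.join("config/delivery_states.yml"))

    # Step 3: once non-developers need to edit it, promote it to a delivery_states
    # table and pre-populate it from db/seeds.rb:
    #   %w[IL WI MI].each { |code| DeliveryState.find_or_create_by!(code: code) }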
STEPHANIE: Yeah, that's fair. I find it so interesting because most of the time, I've not seen that transition happen, right? It almost feels like some form of the bystander effect where everyone is just, well, I'm adding just one more thing. So I don't want to make this really big change now.
JOËL: And that's true for everything in code, right? You say, "Oh, this deeply nested condition, yeah, it should probably be restructured. But I'm just going to add, you know, an eighth nested level in there. And, like, eight is probably the limit, but mine is going to be the eighth, so it's going to be good." And then somebody comes in and says, "Well, you know, nine is not that bad, but the next person probably should refactor it." And then it's a mess.
STEPHANIE: Yeah, it's kind of like the entropy of code, I suppose, [laughs] where, you know, we had said it requires a lot of active energy and effort to make those changes to support other folks in different departments of the company. And I think that's, like, one very common area that we see things starting as configuration but then end up being something that you are needing to support in changing.
And I wonder if maybe that is a signal in itself, right? If you are getting this information from another team, like, someone external to the development team, I wonder if that's kind of a clue that this is something that should be reconsidered about whether you start with it being hard-coded.
JOËL: That's an interesting thought. There's a sense in which I think these always come from places external to the development team because it's a form of kind of product research when you're trying to understand what the features need to be and what needs to happen. Unless this hard-coded data is purely structural or internal values, but it rarely is. There probably is a broader discussion to be had about the use of any sort of hard-coded data in a configuration file in a Rails app versus just always starting with a database table.
One thing that's nice about always having a database table is that if you ever need to connect it to other data in your system, now you can do things like table joins, where you can't join your users on some kind of YAML array, or you have to do some sort of Ruby Enumerable logic. You can't just do it in SQL.
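As a small illustration of that difference (assuming a hypothetical delivery_states table, a delivery_state association on User, and a STATE_NAMES hash backing the YAML version; none of these come from a real project):

    # With a real delivery_states table you can join and aggregate in SQL...
    User.joins(:delivery_state).group("delivery_states.name").count

    # ...whereas with a YAML-backed array the equivalent question has to be
    # answered by looping over records in Ruby.
    User.all.group_by { |user| STATE_NAMES[user.state_code] }.transform_values(&:count)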
STEPHANIE: Yeah. This is a bit of a tangent, I think. But that reminds me of when I worked at a product company where we had a very robust data warehouse, and all of that information was available to teams on the marketing side and on the data science side. And I actually really liked that because they were able to, you know, construct their own dashboards and queries to get the things that they need. And I've certainly seen what you're saying, this pretty important business information being hard-coded, and that ends up being less accessible, right? And less insightful, really.
One other area of this topic that I think I've also bumped into before is specifically a QA engineer or, like, a QA team and empowering them to be able to do their jobs. Oftentimes, I've noticed that QA environments are not as well-maintained as maybe they should be, where the data that's seeded or, you know, has kind of built up over time in this environment is a little wonky.
I've also experienced, while working on a feature, kind of having to go back and forth between whoever is helping QA my work telling them, like, "Oh, this isn't finished yet. So, like, don't worry about this that you're testing," or, you know, "Actually, that does look wrong. But let me look into it over on this end." And I found it sometimes difficult to navigate because I want them to be more empowered to test their feature without that uncertainty over whether something is intentional or actually broken.
JOËL: In this case, do you think it's more about communication between development and QA, clear acceptance criteria, or clear descriptions of what changes have been pushed up for review and what's not in scope? Is that where you're headed?
STEPHANIE: I think that's a part of it. But I actually think there are more technical considerations, especially in terms of whether our environments all align in terms of the data we're expecting, right? Does our dev environment kind of look like our QA environment, which looks like our production environment? Because I've certainly been in projects where that's, like, all over the place, and that really messes up the different expectations we have.
We all know the "Oh, it worked in my local" [laughs] response to when things come up in other environments that are unexpected. And I wonder if there is more attention that we should be having towards making sure that just because this environment is not the main one that we're working in as developers, that people who are having to use it have an equally good user experience.
JOËL: I like that you brought up the term user experience because oftentimes, as developers, and just, I think, product teams in general, we're trying to make something good for the customers of the application. But there are a lot of other people that have to interact with it in sort of more ancillary roles; like you mentioned, it might be QA. It might be customer support. It might be business development. And they're not the customer. And so because of that, they're often kind of a second thought or even sort of no thought at all.
And so they do their jobs as best they can with what they've got, and sometimes get really creative getting around some of the hurdles that are in place. But we can often, with very little effort because the bar is so low, make these people's lives a lot better by applying just a little bit of the same approach that we would use to make software great for a customer to use for teammates in these other roles.
STEPHANIE: Yeah, absolutely. Especially because we have that line of communication open with them, and, like you said, they are also our users of our applications. And especially for QA folks, too, in some ways, they're the first line of defense of our users, right? They are a resource for us to know if the customers will eventually have a poor experience or not.
And I was thinking about that back-and-forth communication I mentioned with QA, you know, trying to explain, like, oh, this isn't finished yet, so maybe, like, you should not expect this to happen. But, oftentimes, that perhaps is a signal that we haven't thought about the way that we're developing our feature to be able to be released to customers in a more incremental way. Or we might be hand-waving over something that ends up being a bug later on.
JOËL: Definitely. For myself, I see that as a... code smell is maybe not the right term here, but maybe acceptance criteria smell. If I'm trying to write out some acceptance criteria, and I'm having to say, like, "Oh, but, like, ignore this, and, like, pretend this doesn't exist." All of these, like, weird edge cases and exceptions we're trying to put in are oftentimes a sign that maybe the work was not scoped correctly.
STEPHANIE: I'm curious, in your workflow, will you just make those improvements if you have the opportunity to? Or do you end up bringing that to the team or creating a ticket for it? How does that fit in when you identify areas that could be improved?
JOËL: I think it depends on the team's current workload. Oftentimes, if it's just something small, it's something I can just slip into my day, and it makes somebody else's life easier, that's great. Otherwise, it can be a thing that needs to be brought up with the team in general. And then it's the thing that we prioritize, and we put it in the backlog because, like you said, the main users of our app are customers. But all of these other teammates are also users of our app in other ways, and so they need certain features to get their job done.
And so it's worthwhile to, I think, at a product planning level, take those into account and prioritize them with the customer-facing things. And sometimes, because those other teams don't have as much of a voice at the table, it's up to us as developers because we sometimes have that direct communication where we're talking to them and sort of going back and forth about, "Oh, can you change this in the database for me?" or "Can you do this?" And it can be up to us to champion these other teammates' needs with the team and with the product organization in general.
STEPHANIE: Yeah, I really like the way you put that. I think you used the word champion. And I've seen this also go the other direction, where we add more processes in place, and the direct communication needs to be gatekept a little bit through a manager because the manager is trying to protect the time of the team. And that is one way to handle the issue of these requests taking too much of the team's time.
But I think at some point, as an IC, you also have the agency to champion or advocate for how you use your own development time. And that reminds me of something I heard from Rose Wiegley over at Shopify about what it means to be, like, a staff or a senior developer, and that is sharing that I'm going to do this, and this is why. And that means that I won't have time to do this other thing that I may be committed to earlier. But you know, these are my reasons. And if anyone sees any problems with that, let me know.
And I've been thinking about that a lot in terms of figuring out how to do the kind of work that I value. And for you and me, that does sometimes mean building those tools to empower people who aren't developers. But that is, yeah, just a way that we can assert a little bit of that agency rather than having to get the buy-in to even consider setting time aside for that work.
JOËL: Yeah, I think some of the really fulfilling work that I've done has been just sometimes taking a morning and, quote, unquote, "pairing" with, like, a business development or a customer service person and just saying, "Hey, can I just sit with you while you process this kind of request or problem that you're trying to do?" And then just really seeing what they do and all the steps and being able to ask a lot of questions and kind of putting my product developer hat on. And because then I know, internally, all the things that are happening, I can quickly see, oh, okay, you're having to do these, like, five steps to get around this really annoying thing that's just, like, a rough corner that we have that I can, like, just easily smooth the way, you know, with a 10-minute one-line fix. I'm going to go and do that, and, you know, by the afternoon, that's already done, and that's saved them so much time or so much annoyance because it's not always time. Sometimes it's just annoyance. And their life is better. And I put very little effort into it. Most of it is just taking the time just to talk to each other and to try to understand each other.
So I think we brought up the idea in the beginning of trying to empower other teams to not sort of centralize all the ability to execute on change within the development team. And sometimes, you can go to fairly extreme lengths to that. One that I've seen is the idea of end-user programming or end-user development, where the people using the software, rather than the team developing it, have some sort of way where they can sort of customize, or build on, or sometimes even script or code their experience. Is that something you've ever had to deal with or interact with on a project?
STEPHANIE: Yeah, it's really interesting that you brought that up because I had mentioned going with a SaaS solution earlier as something that I've seen before. And that reminds me of when I worked on a client project where we were using Freshsales to integrate with the business domain of the client. And this would eventually be the main software that the sales team would use. And the reason that we went with Freshsales was because it allowed for a lot of that custom configuration and workflows that they could create for themselves for their needs.
Though ironically, as we were kind of butting up into the limitations of Freshsales and how it didn't necessarily work for the way we were representing our data, we kind of joked that we almost wished we had to build the tool from scratch ourselves. So I think there are trade-offs there, you know, folks had done a lot of research to figure out the right SaaS solution for this project that we were doing. And yet, you know, inevitably, like, there were some cons with the third-party and how we were able to integrate with it. And it was actually also replacing something that had been built in-house and had become difficult to maintain or something that the company decided that they didn't want to maintain anymore. So I hate to say it, but I really think it [laughs] depends.
JOËL: And now you're getting into the classic build versus buy dilemma for chunks of your software.
STEPHANIE: Yeah, absolutely.
JOËL: I think a way that I've seen, and this happens in a kind of a smaller sense, is providing escape hatches for data. And so maybe you've got a couple of small dashboards, or you've got just a lot of things that happen in your system. And you don't have the development time, or you don't want to prioritize that time right now to build something custom for maybe your business development team. But you provide certain reports that can be exported as CSV or Excel, which then the business development team will load into Excel and do the work that they need.
And now they're empowered to do what they want instead of having to ask for more information or just being limited to what was on that web UI. Similarly, sometimes, when you're able to import a CSV, I've seen this happen a lot, where in software that's not built just right for a customer service team, they'll often export a CSV of data, put it into Excel, manipulate it the way they really want it to be, and then re-import it into the system. And so that can be a great way to temporarily empower people. I think it's also a product smell. Oftentimes, there's a fundamental flaw in the way that your product works if your team is trying to go around it because it's so bad. But as a shorter-term solution, that can be great.
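A sketch of what that kind of CSV escape hatch might look like in a Rails controller; the Admin::OrdersController, the Order model, and the column names are made up for the example, not part of any system discussed here:

    require "csv"

    class Admin::OrdersController < ApplicationController
      # GET /admin/orders.csv -- a small escape hatch so other teams can pull
      # recent orders straight into Excel instead of asking a developer.
      def index
        orders = Order.where(created_at: 30.days.ago..Time.current)

        respond_to do |format|
          format.csv do
            csv = CSV.generate do |rows|
              rows << %w[id customer_email total_cents status]
              orders.each { |o| rows << [o.id, o.customer_email, o.total_cents, o.status] }
            end
            send_data csv, filename: "orders-#{Date.current}.csv"
          end
        end
      end
    end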
STEPHANIE: That makes me think that Excel is the real end-user programming software.
JOËL: It really is. It really is. I do really like the idea of end-user programming, though. And rather than developers or even product people having to decide how our software should work for our users, shifting that to the masses and letting them have all of that empowerment and agency that we're talking about.
STEPHANIE: There's a technology research lab called Ink & Switch, and they build a lot of really cool end-user programming tools. I think I've seen some, like, note-taking software, that they've done, and just other research into why it's important and how it can impact users. And I have read a little bit of their work, and I think it's really cool. So I'll be sure to add that to our show notes.
JOËL: I think even as developers, we like some of these ideas of end-user development. We have that a lot in our tooling. But then, even when we interact with other people's software that we don't own...because we're used to interacting with our own software, we own it. We can change anything we want. We've got complete freedom. But the moment we interact with somebody else's software and, of course, it doesn't work 100% the way we need it to, it is sometimes nice to have sort of ways to hook into it so that we can get the things we want and then maybe do some extra manipulation on our own. And APIs are often how we do that. And so the equivalent of providing an API for another developer, well, what is that for our other teams?
STEPHANIE: Yeah, great questions to consider.
JOËL: You know, it could be a CSV export. It could be maybe there's some easy way to connect to a Zapier plugin. And now, you know, they don't need to ask the dev team, "Oh, we want to receive this notification email internally when this event happens." They can just connect to a Zapier plugin and have it send an email or do something in Salesforce or all these other things that are helpful in their workflows but that we've not taken the time to build into the core software. And now they're empowered to do their work and not blocked on us.
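One lightweight way to offer that kind of hook is a plain HTTP callback to a URL the other team controls. The helper below is a generic sketch, assuming a hypothetical environment variable and event names; it is not a specific Zapier API, just an ordinary POST that a catch-hook style integration could receive:

    require "net/http"
    require "json"

    # After an internal event happens, POST a small JSON payload to whatever URL
    # the other team has configured, so they can route it to email, Salesforce,
    # etc. without needing a code change on the dev team's side.
    def notify_team_webhook(event_name, payload)
      url = URI(ENV.fetch("TEAM_WEBHOOK_URL"))
      Net::HTTP.post(url, { event: event_name, data: payload }.to_json,
                     "Content-Type" => "application/json")
    end

    notify_team_webhook("order.created", { id: 123, total_cents: 4_500 })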
STEPHANIE: That's interesting because as you were talking about that, it made me think of development tooling that we get to integrate with and how those APIs are usually very flexible. And let us decide what we need and ask the API for that as opposed to it dictating it for us. And, you know, if that's something that we get to enjoy, then, yeah, we should certainly think about how you can spread that to others.
JOËL: I love that. On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie is joined by very special guest, fellow thoughtboter, Senior Developer, and marathon trainer Mina Slater.
Mina and Stephanie had just been traveling together for two weeks, sponsored by WNB.rb for RubyKaigi in Matsumoto, Japan, and together, they recount their international adventure!
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn, and today I'm joined by a very special guest, fellow thoughtboter Mina Slater. Mina, would you like to introduce yourself to our audience?
MINA: Yeah. Hi, everyone. I am Mina. I am a Senior Developer on Mission Control, which is thoughtbot's DevOps and SRE team.
STEPHANIE: So, Mina, what's new in your world?
MINA: Well, I start marathon training this week. So I hope that this conversation goes well and lasts you for three months because you're probably not going to see or hear from me all summer.
STEPHANIE: Yes. That sounds...it sounds hard, to be honest, marathon training in the summer. When I was doing a bit more running, I always thought I would wake up earlier than I did and, you know, beat the heat, and then I never would, and that really, like, was kind of rough.
MINA: Yeah, actually, I was thinking about my plans for today. I didn't wake up early enough to run in the morning. And so I was calculating, like, okay, by midday, it's going to be too hot. So I'm going to have to wait until, like, 6:00 p.m. [laughs]
STEPHANIE: Yeah, yeah. Or, if you're like me, there's a very real chance that you just skip it altogether.
[laughter]
MINA: Well, I have a deadline, so... [laughs]
STEPHANIE: That's true. When is your marathon race?
MINA: This is actually the first year I'm doing two in a calendar year. So I'm doing Berlin in September. And then, three weeks after that, I'm going to run one in Detroit.
STEPHANIE: Nice. At least you'll be ready. You'll, like, have done it. I don't know; it kind of sounds maybe a bit more efficient that way. [laughs]
MINA: Theoretically. But, you know, ask me in October. I'll let you know how it goes.
STEPHANIE: That's true. You might have to come back on as a guest. [laughs]
MINA: Just to talk about how it went. [laughs]
STEPHANIE: Yeah, exactly.
MINA: So that's what's new with me. What's new in your world, Steph?
STEPHANIE: So, a while back on a previous Bike Shed episode, I talked about joining this client team and, in their daily team syncs, in addition to just sharing what we were up to and what we were working on, we would also answer the question what's something new to us. And that was a space for people to share things that they learned or even just, like, new things that they tried, like food, or activities, or whatnot. And I really enjoyed it as a way to get to know the team, especially when I was new to that client project.
And recently, someone on the team ended up creating a random question generator. So now the question for the daily sync rotates. And I've been having a lot of fun with that. Some of the ones that I like are, what made you laugh recently? What's currently playing on your Spotify or YouTube? No cheating.
MINA: [laughs]
STEPHANIE: And then, yesterday, we had what's for dinner? As the question. And I really liked that one because it actually prompted me to [chuckles] think about what I was going to do for dinner as opposed to waiting till 5:00 p.m. and then stressing because I'm already hungry but don't have a plan [chuckles] for how I'm going to feed myself yet. So it ended up being nice because I, you know, kind of was inspired by what other people mentioned about their dinner plans and got my stuff together.
MINA: That's shocking to me because we had just come off of two weeks of traveling together. And the one thing I learned about you is that you plan two meals ahead, but maybe that is travel stuff.
STEPHANIE: I think that is extremely correct. Because when you're traveling, you're really excited about all the different things that you want to eat wherever you are. And so, yeah, we were definitely...at least I was planning for us, like, two or three meals [laughs] in advance.
MINA: [laughs]
STEPHANIE: But, when I'm at home, it is much harder to, I don't know, like, be motivated. And it just becomes, like, a daily chore. [laughs] So it's not as exciting.
MINA: I think I'm the same way. I just had a whole bunch of family in town. And I was definitely planning dinner before we had breakfast because I'm like, oh, now I have to be responsible for all of these people.
STEPHANIE: Yeah. I just mentioned the questions because I've been really having fun with them, and I feel a lot more connected to the team. Like, I just get to know them more as people and the things they're interested in, and what they do in their free time. So, yeah, highly recommend adding a fun question to your daily syncs.
MINA: Yeah, we started doing that on Mission Control at our team sync meetings recently, too, where the first person...we actually have an order generator that somebody on the team wrote where it takes everyone's first and last name, scrambles them, and then randomizes the order. So you kind of have to figure out where in the queue you are and who's coming up next after you. But the first person that goes in the queue every day has to think of an icebreaker question.
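For fun, a tiny Ruby sketch of the kind of order generator Mina describes; the names and output format are purely illustrative:

    names = ["Stephanie Minn", "Mina Slater"]

    # Scramble the letters of each name, then shuffle the speaking order.
    queue = names.map { |name| name.delete(" ").chars.shuffle.join }.shuffle

    queue.each_with_index { |scrambled, index| puts "#{index + 1}. #{scrambled}" }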
STEPHANIE: That's kind of a lot of pressure [laughs] for a daily meeting, especially if you're having to unscramble names and then also come up with the icebreaker question. I personally would be very stressed [laughs] by that. But I also can see that it's...I also think it's very fun, especially for a small team like yours.
MINA: Yeah, yeah, just seven of us; we get to know really well what letters are in everyone's names. But I was first today, and I didn't have an icebreaker question ready. So I ended up just passing. So that's also an option.
STEPHANIE: That's fair. Maybe I'll link you to our random question generator, so you can find some inspiration. [laughs]
MINA: Yeah, it's a ChatGPT situation.
STEPHANIE: So you mentioned that you and I had just been traveling together for two weeks. And that's because Mina and I were at RubyKaigi in Matsumoto, Japan, earlier this May. And that's the topic of today's episode: Our Experience at RubyKaigi. And the really cool thing that I wanted to mention was that this was all possible because Mina and I were sponsored by WNB.rb, which is a global community of women and non-binary people working in Ruby. And I've mentioned this group on the show before, but I wanted to plug it again because I think that this was something really special that we got to do.
WNB runs a lot of initiatives, like meetups, panels, supporting people to speak at conferences, and book clubs. And, you know, just many different programming events for supporting women and non-binary Rubyists in their career growth. And they are recently beginning a new initiative to sponsor folks to attend conferences. And Mina, you and I were the first people to get to try this out and go to an international conference. So that was really awesome. It was something that I don't think I would have done without the support from WNB.
MINA: And you almost didn't do it. I think there was a lot of convincing [chuckles] that went on at the beginning to kind of get you to, like, actually consider coming with me.
STEPHANIE: It's true. It's true. I think you had DMed me, and you were, like, so, like, RubyKaigi, like, eyeball emoji. [laughs] I was, I think, hesitant because this was my first international conference. And so there was just a lot of, like, unknowns and uncertainty for me. And I think that's going to be part of what we talk about today. But is there anything that you want to say about WNB and how you felt about being offered this opportunity?
MINA: Yeah. When Emily and Jemma, the founders of WNB, approached us with this opportunity and this offer, I think I was...taken aback is not quite the right word, but, like, surprised and honored, really, I think, is a better word. Like, I was very honored that they thought of us and kind of took the initiative to come to us with this offer.
So I'm really grateful for this opportunity because going to RubyKaigi, I think it's always something that was on my radar. But I never thought that...well, not never. I thought that I had to go as a speaker, which would have been, like, a three to five-year goal. [laughs] But to be able to go as an attendee with the support of the group and also of thoughtbot was really nice.
STEPHANIE: Yeah, absolutely. That investment in our professional development was really meaningful to me. So, like you, I'm very grateful. And if any of our listeners are interested in donating to WNB.rb and contributing to the community's ability to send folks to conferences, you can do so at wnb-rb.dev/donate. Or, if you work for a company that might be interested in sponsoring, you can reach out to them at [email protected].
MINA: I highly recommend doing that.
STEPHANIE: So, one of the questions I wanted to ask you about in terms of your RubyKaigi experience was, like, how it lined up with your expectations and if it was different or similar to what you were expecting.
MINA: Yeah, I have always heard that when people talk about RubyKaigi as a conference and about its contents, the word that everyone uses to describe it is technical. I have already had sort of a little bit of that expectation going in. But I think my interpretation of the word technical didn't really line up with how actually technical it was. And so that was one thing that was different than what I had expected.
STEPHANIE: Could you elaborate on what was surprising about the way that it was technical?
MINA: Yeah. I think that when I hear technical talks, having been to some Ruby and Rails confs here in the States, it's a lot more content about people using the technology, how they use Ruby to do certain things, or how they use Rails to achieve certain goals in their day-to-day work or side projects. But it seems, at RubyKaigi, it is a lot more about the language itself, how Ruby does certain things, or how interpreters implement Ruby, the language itself. So I think it's much lower-level than what I was expecting.
STEPHANIE: Yeah, I agree. I think you and I have gone to many of Ruby Central conferences in the U.S., like RubyConf and RailsConf. So that was kind of my comparison as well is that was, you know, the experience that I was more familiar with. And then, going into this conference, I was very surprised that the themes of the talks were, like you said, very focused on the language itself, especially performance, tooling, the history and future of Ruby, which I thought was pretty neat.
Ruby turns 30, I think, this year. And one thing that I noticed a lot was folks talking about using Ruby to reflect on itself and the possibilities of utilizing those capabilities to improve our experience as developers using the language.
MINA: Yeah. I think one of the things I was really fascinated by is...you had mentioned the performance. There were several talks about collecting data on how Ruby performs at certain levels. And I thought that that was quite interesting and things I had never thought about before, and I'm hoping to think about in the future. [laughs]
STEPHANIE: Yeah. One talk that I went to was Understanding the Ruby Global VM Lock by Ivo Anjo. And that was something that, you know, I had an awareness of that Ruby has this GVL and certain...I had, like, a very hand-wavy understanding about how, like, concurrency worked with Ruby because it hasn't been something that I've really needed to know too deeply in my day-to-day work. Like, I feel a little bit grateful not to have run into an issue where I had to, you know, dive deep into it because it was causing problems. [laughs]
But attending that talk was really cool because I liked that the speaker did give, like, an overview for folks who might be less familiar but then was able to get really deep in terms of, like, what he was doing workwise with improving his performance by being able to observe how the lock was being used in different threads and, like, where it might be able to be improved. And he shared some of his open-source projects that I'll link in the show notes.
But, yeah, that was just something that I was vaguely aware of and haven't yet, like, needed to know a lot about, but, you know, got to understand more by going to this conference. And I don't think I would have gotten that content otherwise.
MINA: Yeah, I agree. The talk that you are referencing is one of my favorites as well. I think, like you, kind of this vague idea of there's things going on under the hood in Ruby is always there, but to get a peek behind the curtain a little bit was very enlightening. I wrote down one of the things that he said about how highly optimized Ruby code can still be impacted and be slow if you don't optimize for the GVL. And he also shared, I think, some strategies for profiling that layer in your product, if that is something you need, which I thought was really cool.
STEPHANIE: Yeah. I think I had mentioned performance was a really big theme. But I didn't realize how many levers there were to pull in terms of the way Ruby is implemented or the way that we are able to use Ruby that can improve performance. And it's really cool to see so many people being experts at all of those different components or aspects of making Ruby fast. [laughs]
MINA: Yeah. I think that part of the work that we do on Mission Control is monitoring performance and latency for our clients. And while I don't expect to have to utilize some of the tools that I learned about at RubyKaigi, I expect that being aware of these things will help, I think, in the long run.
STEPHANIE: Yeah, absolutely. Joël and I have talked on the show about this idea of, like, push versus pull learning. So push, being you consume content that may not be relevant to you right now but maybe will be in the future. And you can remember, like, oh, I watched a talk on this, or I read something about this, and then you can go refer back to it.
As opposed to pull being, like, I have this thing that I don't understand, but I need to know right now, so I'm going to seek out resources about it. And I think we kind of landed on that both are important. But at Kaigi, especially, this was very much more push for me where there's a lot of things that I now have an awareness of.
But it's a little different, I think, from my experience at Ruby Central conferences where I will look at the schedule, and I will see talks that I'm like, oh, like, that sounds like it will be really relevant to something I'm working through on my client project or, like, some kind of challenging consulting situation.
And so the other thing that I noticed that was different was that a lot of the U.S. conferences are more, I think like business and team challenges-focused. So the talks kind of incorporate both a technical and socio-cultural aspect of the problems that they were solving. And I usually really like that because I find them very relatable to my day-to-day work. And that was something that was less common at Kaigi.
MINA: Also, I've never been to a conference that is more on the academic side of things, so I don't know if maybe that is more aligned with what Kaigi feels like.
STEPHANIE: Yeah, that's true. I think there were a lot of talks from Ruby Committers who were just sharing, like, what they've been working on, like, what they've been thinking about in terms of future features for Ruby. And it was very much at the end of those talks, like, I'm open to feedback. Like, look out for this coming soon, or, like, help contribute to this effort.
And so it was interesting because it was less, like, here are some lessons learned or, like, here are some takeaways, or, like, here's how we did this. And more like, hey, I'm, you know, in the middle of figuring this out, and I'm sharing with you where I'm at right now. But I guess that's kind of the beauty of the open-source community is that you can put out a call for help and contributions.
MINA: Yeah, I think they call that peer review in the academic circles.
STEPHANIE: [laughs] That's fair.
MINA: [laughs]
STEPHANIE: Was there anything else that you really enjoyed about the conference?
MINA: I think that one of my favorite parts, and we've talked about this a little bit before, is after hours on the second day, we were able to connect with Emori House and have dinner with their members. Emori House is a group that supports female Kaigi attendees specifically. I think it's that they, as a group, rent out an establishment or a house or something, and they all stay together kind of to look out for each other as they attend this very, I think, male-dominated conference.
STEPHANIE: Yeah. I loved that dinner with folks from Emori House too. I think the really cool thing to me is that it's just community and action, you know, like, someone wanted to go to this conference and make it easier for other women to go to this conference and decided to get lodging together and do that work of community building. And that social aspect of conferences we hadn't really talked about yet, but it's something that I really enjoy. And it's, like, one of the main reasons that I go to conferences besides learning.
MINA: Yeah, I agree. At the Ruby Central conferences, one of my favorite parts is always the hallway track, where you randomly meet other attendees or connect with attendees that you already knew. And like I mentioned, this dinner with Emori House happened on the second night. And I think, by midday on the second day, I was missing that a little bit. The setup for RubyKaigi, I noticed, does not make meeting people and organizing social events as easy as I had been used to, and part of that, I'm sure, is the language barrier.
But some of the places where I had met a lot of the people that I call conference friends at Ruby Central conferences had been at the lunch table. And Kaigi is set up in a way where they send you out with food vouchers for local restaurants, which I thought was really cool. But it does make meeting people and organizing groups to go out together with people you don't already know a little more difficult. So meeting Emori House on the second night was kind of exactly what I had been missing at the moment.
STEPHANIE: Yeah, agreed. I also really thrive off of more smaller group interactions like organically, you know, bumping into people on the hallway track, ideally.
I also noticed that, at Kaigi, a lot of the sponsors end up hosting parties and meetups after the conference in the evenings. And so that was a very interesting social difference, I think, where the sponsors had a lot more engagement in that sense. You and I didn't end up going to any of those "drink-ups," as they're called.
But I think, similarly, if I were alone, I would be a little intimidated to go by myself. And it's kind of one of those things where it's like, oh, if I know someone, then we can go together. But, yeah, I certainly was also missing a bit of a more organic interaction with others. Though, I did meet a few Rubyists from just other places in East Asia, like Taiwan and China. And it was really cool to be in a place where people are thinking about Ruby differently than in the U.S.
I noticed, in Japan, there's a lot more energy and enthusiasm about it. And, yeah, just folks who are really passionate about making Ruby a long-lasting language, something that, you know, people will continue to want to work with. And I thought that was very uplifting because it's kind of different from what the current industry in the U.S. is looking like in terms of programming languages for the jobs available.
MINA: It's really energizing, I think, to hear people be so enthusiastic about Ruby, especially, like you said, when people ask me what I do here, I say, "Developer," and they say, "Oh, what language do you work in?" I always have to be kind of like, "Have you heard of Ruby?" [laughs] And I think it helps that Ruby originated in Japan. They probably feel a little bit, like, not necessarily protective of it, but, like, this is our own, and we have to embrace it and make sure that it is future-facing, and going places, and it doesn't get stale.
STEPHANIE: Right. And I think that's really cool, especially to, you know, be around and, like, have conversations about, like you said, it's very energizing.
MINA: Yeah, like you mentioned, we did meet several other Rubyists from, like, East Asian countries, which doesn't necessarily always happen when you attend U.S.-based or even European-based conferences. I think that it is just not as...they have to travel from way farther away. So I think it's really cool to hear about RubyConf Taiwan coming up from one of the Rubyists from Taiwan, which is awesome. And it makes me kind of want to go. [laughs]
STEPHANIE: Yeah, I didn't know that that existed either. And just realizing that there are Rubyists all over the world who want to share the love of the language is really cool. And I am definitely going to keep a lookout for other opportunities. Now that I've checked off my first international conference, you know, I have a lot more confidence about [laughs] doing it again in the future, which actually kind of leads me to my next question is, do you have any advice for someone who wants to go to Kaigi or wants to go to an international conference?
MINA: Yeah, I think I have both. For international conferences in general, I thought that getting a buddy to go with you is really nice. Steph and I were able to...like, you and I were able to kind of support each other in different ways because I think we were both stressed [laughs] about international travel in different ways. So where you are stressed, I'm able to support, and where I'm stressed, you're able to support. So it was a really nice and well-rounded experience because of that.
And for RubyKaigi specifically, I would recommend checking out some of the previous year's talks before you actually get there and take a look at the schedule when it comes out. Because, like we said, the idea of, I think, technical when people use that word to describe the content at RubyKaigi is different than what most people would expect. And kind of having an idea of what you're getting into by looking at previous videos, I think, will be really helpful and get you in the right mindset to absorb some of the information and knowledge.
STEPHANIE: Yeah, absolutely. I was just thinking about...I saw in Ruby Weekly this week that Justin Searls had posted a very thorough live blog of his experience at Kaigi that was much more in the weeds of, like, all of the content of the talks. And he also had tips for how to brew coffee at a convenience store in Japan. So I recommend checking that out if folks are curious, especially this year before the videos of the talks are out.
I think one thing that I would do differently next time if I were to attend Kaigi or attend a conference that supports multiple languages...so there were talks in Japanese and English, and the ones in Japanese were live interpreted. And you and I had attended, like, one or two, but it ended up being a little tough to follow because the slides were a little bit out of sync with the interpretation.
I definitely would want to try again and invest a little more into attending talks in Japanese because I do think the content is still even different from what we might be seeing in English. And now that I know that it takes a lot of mental energy, just kind of perhaps loading up on those talks in the morning while I'm still, you know --
MINA: [laughs]
STEPHANIE: Fresh-faced and coffee-driven. [laughs] Rather than saving it for the afternoon when it might be a little harder to really focus.
MINA: I think my mental energy has a very specific sweet spot because definitely, like, late in the afternoon would not be good for that. But also, like, very early in the morning would also not be very good for that because my coffee hasn't kicked in yet.
STEPHANIE: That's very real as well.
MINA: Do you think that there is anything that the conference could have done to have made your experience a little tiny bit better? Is there any support that you could have gotten from someone else, be it the conference, or WNB, or thoughtbot, or other people that you had gone with that could have enhanced this experience?
STEPHANIE: Hmm, that's an interesting question. I'm not really sure because I was experiencing so many new things --
MINA: [laughs]
STEPHANIE: That that was kind of, like, what was top of mind for me was just getting around, even just, like, looking at all the little sponsor booths, because that was, like, novel for me to see, like, different companies that I've never heard of before. And I think, when I asked you about expectations earlier, like, I actually came in with not a lot of expectations because I really was just open to whatever it was going to be.
And now that I've experienced it once, I think that I have a little more of an idea of what works for me, what I like, what I don't like. And so I think it really comes down to it being quite a personal experience and how you like to attend conferences and so --
MINA: For sure.
STEPHANIE: At the end of the day, yeah, like, definitely recommend just going if that opportunity is available to you and determining for yourself how you want that experience to be.
MINA: Certainly. I think just by being there you learn a lot about what you like in conferences and how we like to attend conferences. On a personal level, I'm also an organizer with Ruby Central on their scholarship committee. And that's somewhere where we take new Rubyists or first-time conference attendees and kind of lower the barrier for them to attend these conferences. And the important part I wanted to get to is setting them up with a mentor, somebody who has attended one of these conferences before that can kind of help them set goals and navigate. And I thought that someone like that might have been useful at RubyKaigi, it being both our first times.
STEPHANIE: Yeah, absolutely. I think that's totally fair. One thing I do really like about the Ruby Central conferences is the social support. And I think you had mentioned that maybe that was the piece that was a little bit missing for you at this conference.
MINA: Yeah. I know that someone had asked early on, I think, like, the night before the conference officially kicked off, whether there is a Slack or Discord space for all conference attendees so that people can organize outings or meals. And that is definitely something that at least the Ruby Central conferences have, and I imagine other conferences do too, that was missing at Kaigi as well.
STEPHANIE: I'm wondering if you would go to Kaigi again and maybe be that mentor for someone else.
MINA: I think so. I think I had different feelings about it when we were just leaving the conference, kind of feeling like some of these things that I'm learning here or that I'm being made aware of rather at RubyKaigi will come up important in the future, but maybe not right away. So then I was kind of walking away with a sense of, like, oh, maybe this is a conference that is important, but I might deprioritize if other opportunities come up.
But then I started to kind of, like, jot down some reflections and retroing with myself on this experience. And I thought what you mentioned about this being the sort of, like, the push learning opportunity is really nice because I went in there not knowing what I don't know. And I think I came out of it at least being a little bit aware of lots of things that I don't know.
STEPHANIE: Yeah, yeah. Maybe, like, what I've come away from this conversation with is that there is value in conferences being different from each other, like having more options. And, you know, one conference can't really be everything for everyone. And so, for you and I to have had such a very different experience at this particular conference than we normally do, that has value. It also can be something that you end up deciding, like, you're not into, and then you know. So, yeah, I guess that is kind of what I wanted to say about this very new experience.
MINA: Yeah, having new experiences, I think, is the important part. It's the same idea as you want to get a diverse group of people in the room together, and you come out with better ideas or better products or whatever because you have other points of view. And I think that attending conferences, even if not around the world, that are different from each other either in academia or just kind of, like, branching out of Ruby Central conferences, too, is a really valuable experience. Maybe conferences in other languages or language-agnostic conferences.
STEPHANIE: Yeah, well said. On that note, shall we wrap up?
MINA: Let's do it.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
If you're in the market for bicycle shorts, Joël's got you. Stephanie just returned from RubyKaigi in Japan and shares details of her trip.
Recently at thoughtbot, there have been conversations around an interesting data modeling exercise, which Joël and Stephanie discuss in this episode.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've made an unusual purchase this week. I went out and bought a pair of bicycle shorts. And, for those who are not aware, these are special shorts that have padding built into them. Typically, they're, like, skin-tight, but I got, I guess, what are called mountain biking shorts. So, they kind of look more like the cut of a normal short. But they've got this, like, built-in padding for biking.
STEPHANIE: So. Just to confirm, you did get these shorts for biking purposes, right?
JOËL: Yes. I purchased these shorts for biking purposes.
STEPHANIE: Okay. [laughs]
JOËL: And I got these because I was talking to a friend about this and mentioning that this was, like, probably the most ambitious cycling thing I've ever done in my life. And they recommended if you have not done bike shorts, you really should get them. They make a big difference.
STEPHANIE: Wow. Okay, I have two thoughts here. First of all, you prefaced this saying that this was an unusual purchase. So I thought maybe that you bought these bike shorts for some other purpose. [laughs] But I am excited to talk about this because I've also been curious about trying bike shorts.
I bike a lot in Chicago in the summer, and I've been doing, like, longer rides on the Lakefront trail. And one of my goals, actually, this summer is to do a bikepacking trip. But I have not been super comfortable on longer rides. And I was just thinking that this might be something really helpful to make them a little more enjoyable.
JOËL: So, is the kind of biking that you're doing closer to what might be considered commuting?
STEPHANIE: Yeah, mostly commuting. But also, just, like, going on long rides on the weekends, in addition to this, hopefully, forthcoming bikepacking trip up to a state park. So not too long, maybe, like, 60 miles, but definitely long enough to start getting a little uncomfy on your seat.
JOËL: Yeah, is 60 miles, like, in one day?
STEPHANIE: Yeah, exactly.
JOËL: That's a lot. Yeah, the friend who recommended biking shorts to me told me that pretty much anything over maybe 10 miles is worth getting shorts.
STEPHANIE: Wow, okay. I clearly have been suffering [laughs] for way too long, then. Tell me more about your cycling trip.
JOËL: So this is a bikes plus beer trip. Basically, I plotted a bunch of breweries in Belgium on a map and constructed an itinerary that could hit a bunch of them while keeping fairly short rides between towns. And the goal is to do maybe 30-35 miles in a day. And so I'll be going probably, like, cycling in the morning, and then exploring and drinking in the afternoon and evening.
STEPHANIE: That sounds amazing. That's really cool to do a little bit of a tour of the area and then also traveling by bike.
JOËL: Yeah, I'm excited because other modes of transport really just give you the origin and the destination, whereas cycling, you kind of get all of the in-between places. You get a much better feel for the area that you're in. And you can make all these unexpected stops if you want. You can make detours. So I feel like you get the sort of being in the moment, being in the place effect that you would have as a pedestrian but with a much longer range.
STEPHANIE: Yeah, absolutely. That's exactly what I was going to say. I love cycling. And there's something really special about being able to be present in your surroundings and seeing people on the street or a cool building as you're going. But also going at a speed where it feels very fun and very freeing to just be cycling through a town and making stops when you want to, and traveling greater distances than you could be able to on foot.
JOËL: So I just received these bike shorts yesterday in the mail. So today, at the end of the day, I'm going out for a bike ride, and I'm going to see if they perform as advertised.
STEPHANIE: That's exciting. Keep us posted [laughs] on if you end up liking them or not.
JOËL: Yeah, yeah. The next episode or two, I'll have to report bike shorts; yay or nay?
STEPHANIE: Yeah, The Bike Shed will now become bike gear reviews.
JOËL: The name will actually line up, then, with what people googling it might think it actually is. Stephanie, what's new in your world?
STEPHANIE: Speaking of vacation, I just got back from a two-and-a-half-week trip myself. I mentioned on the podcast a couple of episodes ago, I think, that I was traveling to Japan for RubyKaigi, an international Ruby Conference over in Japan. And then I spent another week in Taiwan, just on my own time. So, yeah, I had a really big, long trip, and it was really great. It was my first time going abroad in a really long time. It was my first time being somewhere where I didn't speak the language.
So, in Japan...I don't speak any Japanese. And it was both challenging and also, like, not too bad. I found my way around through a lot of gesturing and smiling, and nodding. [laughs] And, hopefully, people were able to understand what I was trying to communicate. Also, pointing at menus, I highly recommend going to places that have pictures of the food, and then you can just point when you want to order. [laughs]
JOËL: So, did you find that English was not particularly useful then in Japan as a tourist?
STEPHANIE: Yeah, I would say so. The nice thing was that most signs were translated. So we ended up taking public transportation a lot. And that was quite easy to navigate, especially since I have kind of navigated subways in other cities before, and reading the signs is no problem. But when you're trying to communicate with locals, that was a little harder.
JOËL: Did you use any, like, apps on your phone or anything like that to help navigate kind of the different language?
STEPHANIE: Yeah, the Google Translate Lens app. I can't remember exactly what it is. But this was my first time really using it. And I was really impressed by how it was able to translate things that you're using your camera to take pictures of, or just, like, having your camera view. I did feel a little silly, like, holding my phone up to everything and trying [laughs]...so I could understand what I was reading. But for menus that did not have pictures, that was my backup strategy. [laughs]
JOËL: Did you ever have to have your phone translate something and then just show your phone to someone else?
STEPHANIE: No, I didn't have to go that far. Though I do think that it has a feature where you can have someone speak into the phone, and it will translate that into your native language. And then you respond by speaking into it and then playing the sound for them, which, you know, I bet really works in a pinch. But I think that required a little more investment into the interaction [laughs] with the other person than I was ready for. Like I said, the gesturing served me quite well.
JOËL: I got the experience of being on the other side of that a while back. So, here in Boston, I was just walking down the street, and someone stopped me and just held up their phone. And they'd typed something in Chinese on there. And they hit a button, and it came out in English.
STEPHANIE: [laughs]
JOËL: And they're asking for directions. And I think I typed a sentence back on their phone in English, and then they hit the translate button and got it back in Chinese. We went back and forth a few times. And eventually, I think he got what he wanted, and we went our separate ways. And I was kind of amazed that this whole interaction happened.
STEPHANIE: Yeah, that's really cool.
JOËL: Yeah, kudos to that person for having the courage to stop someone on the street when you don't speak their language.
STEPHANIE: Yeah, absolutely. I think even when I was struggling to communicate with someone because of the language barrier, I could tell from their gesturing in return that we were, like, willing to help each other out. And that, like, there was still an ability to find some kind of connection, even though, you know, we didn't completely understand each other. And that was definitely one thing that I really enjoyed was being in a place with, you know, people different from me and having that exposure. It's been a really long time since I've got to experience that, and that was really valuable.
JOËL: So, other than the conference, what would you say are some highlights of the trip for you, maybe one from Japan and one from Taiwan?
STEPHANIE: So one of my favorite things about being in Tokyo was all the green space that was around. I ended up walking a lot just to explore the neighborhoods. And I always just stumbled across a local park or even a shrine that had really great nature around it, a lot of big trees. You know, some, like, water features, maybe like a pond, and a lot of really fun plants that I got to learn about.
And, yeah, that was really nice, especially in such a dense urban area, like, coming across green space to just sit for a little while. And it was such a nice relief from the density and busyness of a big city. That was just one thing that I was really impressed by being in Japan.
JOËL: That's really cool. I think that really speaks to the quality of their urban planning. I know that the stereotype of Tokyo that I have in my mind is that it's, like, you know, ultra-modern, ultra-urban, you know, it's the largest city in the world. So the idea that they've taken the time to set up all these little parks everywhere is really endearing.
Particularly, I think the idea of smaller parks at the neighborhood level where you don't need, you know, something massive like, let's say, New York's Central Park, which is, you know, really cool. But having just a little green space in your neighborhood where you can, like, stop by, I think it's a wonderful upgrade to local people's quality of life.
I was recently listening to a video on YouTube from a city planning channel talking about just all the thinking that goes behind city parks, and having them at different scales, and how that impacts the residents of different areas. So it's really cool to hear that Tokyo has done a great job with that.
STEPHANIE: Yeah, absolutely. I think part of the joy of just stumbling upon it was that you know, even when I wasn't seeking it out, it would just come along during my walks. And, yeah, it really was very refreshing.
JOËL: What about Taiwan?
STEPHANIE: So, in Taiwan, what I really enjoyed about it is that it's a bit of a smaller island. And so you can actually get to a lot of places within a few days. And a lot of folks take day trips out to the coast from Taipei. And I was able to do a two-day trip to another county that had some hot springs, and I got to enjoy an outdoor hot spring in the rain. And that was really nice because it was, like, surrounded by trees.
And it happened to be raining that morning, but, you know, we were all kind of already getting wet, so it didn't really matter. And it was just, like, this really serene and gorgeous experience being able to enjoy that. And I think that was another place where I was in a very urban area, and then being able to escape a little bit was really nice.
JOËL: That sounds like a magical moment. Have you visited hot springs before, or was this your first time going to a hot spring?
STEPHANIE: I have been to a few in the U.S. before. I like to take road trips to national parks. And there are some really great hot springs in the U.S. as well. And so this was kind of something that I really wanted to do somewhere else just to experience it elsewhere. And, yeah, I'm really glad to have checked that off my bucket list.
JOËL: That's really cool. I've never been to a hot spring, and it sounds like a fun thing to do. So it's on my kind of greater bucket list. It's maybe not a top-five thing to do, but definitely, something I want to do one day.
STEPHANIE: Cool. Love it. That was vacation talk from Joël and Stephanie. [laughs]
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So recently at thoughtbot, we've been having conversations around this really interesting data modeling exercise, where, let's say, there's a company, and you want to purchase T-shirts for everyone at the company. You already have some T-shirts on hand in a couple of different warehouses because you've done this kind of thing before. And you need to know how many new T-shirts you need to order in order to have enough for everyone.
So as long as you keep things simple, the math is pretty easy because you sum the number of people at your company, and then you sum the number of shirts across all of your warehouses, and that gives you the T-shirts that you need and the T-shirts that you have. You get the difference between those two numbers, and that tells you how many new T-shirts you need to order. Where things get more complicated is once you start introducing T-shirt sizes, and that's where the fun data modeling comes in.
Everyone at your company has a T-shirt size that they want, and then at your warehouses, the object that represents a warehouse stores a hash of sizes and how many of each size you have. Now, how do you do all this, like, summing across things? And it's not really just a single number that you want. Now you need to know how many smalls, mediums, and larges.
And, sometimes, you've got a hash. Sometimes you've got just symbols on a user, and you've got a sum across hashes. Maybe do some differences across hashes. And it gets kind of tricky to work with. So that's sort of the problem as it's initially presented. And we've been having a really interesting conversation around different ways to try to solve it in a way that's really kind of clean and nice.
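For readers following along, a minimal sketch of the setup Joël describes might look something like this in Ruby; the class names and sample data here are illustrative assumptions, not taken from the episode.

  # Illustrative data model for the T-shirt exercise (names are assumptions).
  User = Struct.new(:name, :shirt_size)
  Warehouse = Struct.new(:name, :stock)   # stock is a size => count hash

  users = [
    User.new("Ana", :small),
    User.new("Ben", :medium),
    User.new("Cam", :medium),
  ]

  warehouses = [
    Warehouse.new("East", { small: 1, medium: 0, large: 2 }),
    Warehouse.new("West", { small: 0, medium: 1, large: 0 }),
  ]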
STEPHANIE: Yeah, that's interesting because what you described sounds like the first iteration of solving the problem is, oh, the warehouse stores this information as a hash. So maybe I will create a new hash for the counts of T-shirt sizes that I need and then do the comparison on those two hashes. It sounds like maybe there was some unwieldiness or maybe even some duplicated code there. Is that what you think you all were trying to solve by modeling this differently?
JOËL: I think we kind of quickly hit some limitations with hashes. One thing that is fun before we start trying to combine a bunch of hashes is that some of the data exists as a hash on the warehouses. But to get the T-shirts that we need, all we have is an array of users with a size on each of them.
And we can use this fun method from Enumerable called tally to give us a kind of tally hash that is just a mapping of size to counts of that size in the array. And so that's a really fun method. You don't get to bring it out that often in Ruby. And it's nice because that hash format happens to match the same format as the hashes stored on the warehouse objects.
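As a quick illustration of the method Joël mentions, Enumerable#tally (available since Ruby 2.7) turns a list of values into a value-to-count hash; the sample sizes below are made up for the example.

  # tally counts occurrences and returns a hash of value => count,
  # which matches the shape of the warehouse stock hashes.
  sizes = [:small, :medium, :medium]
  sizes.tally
  # => { small: 1, medium: 2 }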
STEPHANIE: Right. So now you're comparing apples to apples. But it sounds like maybe this hash representation does hold some kind of significance.
JOËL: Yeah. I guess, for me, I tend to see anytime you're doing fancier operations on a hash more than just reading in and out; it probably wants to be some kind of value object. And, in this case, we kind of want to do math on hashes. I think the equation is kind of still the same thing. We're trying to get the difference between the two, between the want versus have, but you can't just subtract one hash from another directly.
There are some things that you can do with the hash merge method, which allows you to pass a custom block and do some things there. But we're going to have to do this sort of repeatedly. And now we're kind of leaking some of that knowledge a little bit. So it feels like something where you might want to actually name this concept and make it an object of its own that can then have its own kinds of domain operations as methods on it.
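The Hash#merge technique Joël alludes to looks roughly like this: when a key appears in both hashes, the block receives the key and both values and decides the merged result. The hashes here are invented for the example.

  east = { small: 1, medium: 0, large: 2 }
  west = { small: 0, medium: 1, large: 0 }

  # The block is only called for keys present in both hashes; other keys
  # are carried over unchanged.
  east.merge(west) { |_size, a, b| a + b }
  # => { small: 1, medium: 1, large: 2 }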
STEPHANIE: Yeah, I like that a lot. Because even just as I was thinking about it, when you are storing data like that in just a hash, what do you call it? Like, what do you name it? I think I've seen things like that named, like, T-shirt data, or, like, warehouse data, or warehouse T-shirt counts, or T-shirt counts. You know, that is when it starts to diverge, and you end up maybe seeing the same, like, data represented, but it being named different things in different parts of the code. And I, in my experience, have found that very painful.
JOËL: Yeah, because I guess you could have, like, T-shirts on hand from your warehouse; that's one hash. But the hash generated from the users might get called something like user preferences. And if you're reading through that code and you see a hash, and you're like, okay, do these two hashes that I'm looking at, maybe in a test, just kind of coincidentally have the same keys? Or are these kind of fundamentally the same thing? Or is the idea of, like, T-shirts on hand like a stock different from, like, a preference? And do they represent different things that just happen to be similar in this particular scenario?
STEPHANIE: Right. And especially if then there are methods where you're passing that data structure that really represents the same thing. But you're passing it as arguments, and then, suddenly, one variable name, user preferences, or user T-shirt preferences becomes, you know, T-shirt count. That has been really confusing for me before.
JOËL: One thing that does get, I think, clunky very quickly is that you have all of these warehouse objects that have that hash of, like, stock on hand on them. And what you really want is a kind of aggregate object that tells you not what's the stock on hand for one warehouse but across all warehouses. So you've got to go through, I guess, that array of warehouses and somehow kind of aggregate all of those hashes together. And because they're already tallies, you can't just do Enumerable Tally on it anymore. You've got to find some way to combine them together, and that gets tricky really quickly.
STEPHANIE: Right. I can see they're starting to be, like, nested loops, especially if you're just working with primitives.
JOËL: I think some initial implementations that we saw ended up doing either, like, some kind of reduce block or each_with_object, or something like that, which are, I think, fine solutions here. But what lives inside of those blocks is what gets complicated. And I don't know about you, but I feel like if I'm reading through some code and then all of a sudden I see a reduce block, and it's, like, ten lines of logic with maybe some, like, nested things, like, maybe some nested loops or some conditions inside of it, that's kind of intimidating. Reduce is not a super easy method to wrap your head around, especially when the block has got a lot of logic.
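For a sense of the shape Joël is describing, here is one plausible sketch of what such an inline aggregation might look like before any refactoring; this is not the actual client code, and the data is invented.

  stocks = [
    { small: 1, medium: 0, large: 2 },
    { small: 0, medium: 1, large: 0 },
  ]

  # All of the combining logic lives inside the reduce block, which is
  # what makes it harder to read at a glance.
  have = stocks.reduce(Hash.new(0)) do |totals, stock|
    stock.each { |size, count| totals[size] += count }
    totals
  end
  # => { small: 1, medium: 1, large: 2 }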
STEPHANIE: Yeah, that's a really good point. It definitely gives me pause. And I have to, like, you know, commit to reading the method in its entirety to fully understand [laughs] what's going on.
JOËL: Sometimes, like, really pause and, like, annotate with comments and all this stuff.
STEPHANIE: So, what did you end up thinking about in terms of solving that problem of aggregating the sums of all the different T-shirt sizes for each warehouse?
JOËL: So I think, for me, oftentimes, it's easier to make the problem a little bit smaller, solve that smaller problem, and then try to kind of scale up back up again and particularly when you're dealing with something like reducing or aggregating a large collection. Like, forget about dealing with a collection. Just how could I combine two items of this type? So if I had two of these hashes. And forget about fitting it for an array. But if I have two of these hashes, how could I combine them together?
And you could do this with hash merge. I wanted to do things a little bit more encapsulated. And because I also knew that we're building some more logic around these, I actually wrote a custom object. I called it a tally, maybe inspired by that Enumerable method, and implemented an operator plus on this tally object. So a tally object can plus another tally object. And the response from that is you get a third tally object that's gone through all of the keys and summed them together. So it's kind of an aggregate sum.
STEPHANIE: This is a cool example of a method that's a verb also representing a noun to name the return value, right? So the Tally method on Enumerable returns a hash, which we have been talking about for a while as, like, a data structure that's, you know, perfectly fine, but maybe we can leverage turning it into like you said, a value object to give it more meaning or to make it easier to work with. And it seems like the naming part just kind of fell into your lap.
JOËL: Yeah, tally is interesting in that it is both a noun and a verb in English. I'm not sure what the grammatical term for that kind of word is.
STEPHANIE: So, once you extracted this new class out, what insights or observations did you have about this problem?
JOËL: What becomes really cool about this is that once you have a way of combining two objects together, reduce is a way to just kind of scale that up to an arbitrary number. And so, just like you can sum an array of numbers by reducing plus over the array, because I have plus on my tally object, I can reduce the plus operator over an array of tally objects. And they all just kind of sum together into a single tally that's the combination of all of them. So this is really cool.
What used to be an intimidating reduce block, the intimidating logic gets moved into a plus method, which I think is much more approachable. Because I can go in the context of an object and say, okay, I've got this tally object, and I'm trying to add it to another tally object. And we're just going one key at a time, adding them together. Simple enough.
And then in the place where we're reducing, all we're saying is list of tallies reduce plus. And I know that pattern already because I do it with integers to sum them together. And so now I've just got this really simple one-line in the scary part. And the actual complex logic is much more approachable.
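Putting the earlier sketches together (the hypothetical Tally class plus the sample users and warehouses from above, all of which are assumptions), the aggregation and the original equation might read roughly like this.

  # Aggregate stock across warehouses with a single reduce over Tally objects.
  have = warehouses.map { |warehouse| Tally.new(warehouse.stock) }.reduce(:+)

  # Count what everyone wants, then the equation reads the same as it did
  # with plain integers.
  need = Tally.new(users.map(&:shirt_size).tally)
  to_order = need - have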
STEPHANIE: That is very cool. I found it really interesting that this came about because we were trying to do math on these two hashes. So it seems like, you know, a tally because it represents a score or, like, a number. Like, we were able to implement those plus operators and get to a simple solution because we're working with numbers.
JOËL: Yeah, I think it might be fair to describe it as a compound number; maybe that's the term that I'd use. I don't know if that's mathematically correct. Oftentimes, you're dealing with something that's represented numerically but that might have more than one number involved in it, and you still want to do math with this kind of compound, multi-number value anyway.
And one example that you might have is, let's say, a point in 2D space. You have an X coordinate and a Y coordinate. And you can do math on points. In fact, there's a whole field of math to deal with that kind of thing. That's an important thing that you have to do. You might want to be able to add or subtract points. You might want to do certain types of multiplication on them. And so just because something has more than one number associated to it doesn't mean that it can't be used for math. In fact, oftentimes, that's where the fancier math does come into play.
But when we treat them as primitives, and we just have, let's say, our XY pair was a hash, or, like, a two-element array, then we lose the ability to do math nicely. If we create, let's say, a point class that has an X and Y, and then we define plus, we define minus, we define scalar and vector multiplication, things like that, now we can do all those operations. And we can treat it like math, even though it's not just a simple integer anymore.
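A small sketch of that 2D point idea, with plus, minus, and a scalar multiplication method; the class and method names are just for illustration.

  class Point
    attr_reader :x, :y

    def initialize(x, y)
      @x = x
      @y = y
    end

    def +(other)
      Point.new(x + other.x, y + other.y)
    end

    def -(other)
      Point.new(x - other.x, y - other.y)
    end

    # Scalar multiplication: stretch the point away from the origin.
    def scale(factor)
      Point.new(x * factor, y * factor)
    end
  end

  Point.new(1, 2) + Point.new(3, 4)   # => a Point with x: 4, y: 6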
STEPHANIE: Yeah, I like that a lot because we do end up working with data, you know, maybe even from our database. But then, inevitably, we want to, like, learn something about it. And so I was thinking about how frequently I use GROUP BY in MySQL queries and how, oftentimes, I care about counts, or, like, number of records.
And perhaps this is why we see, like, the hash primitive used so frequently in codebases that then become pretty complicated once we're trying to, like I mentioned, like, learn something about it or, like, compare things or whatever logic that we need to do. And transforming them into objects that then know how to do math on themselves [laughs] is very cool.
JOËL: Hashes are interesting because they're pretty much just basic data structures. And I think, very often, they're sort of pre-objects. They're things that want to eventually become objects. And, oftentimes, what I find is that hashes get passed around a system. And various other classes or subsystems all have bits of logic that act on the hash because the hash can't own that.
And so you end up with the logic around the concept of whatever the hash represents kind of scattered and maybe duplicated across three or four places in the application. And then, all of a sudden, if you give that a name, if you create a class for it, you can pull all of that logic into one place. And, all of a sudden, it probably cleans up all of the surrounding places because now they don't have to care about the implementation of exactly what operating on the hash is.
But, also, it means that these operations generally have, like, nice domain names. And, in the case of a complex number, you might even have that represented through math operations, like, plus or minus. And that allows your code to read really nicely.
STEPHANIE: Right. Which gets me thinking about how I mentioned, like, tally as a noun, and, you know, you implemented your custom class. But do you think there's any value in the idea of a tally being specifically like a hash-like thing with a number as the value for each key, like, that existing as a more general class for people to use?
JOËL: Oh, that's interesting. So, in my personal implementation, I hard-coded values for small, medium, and large because those were the T-shirt sizes from the example. But you're talking about some sort of generic tally object that maybe would be a gem or something like that that people could use that represents counts of arbitrary things or multiple counts of arbitrary things that might then implement some common math operators so that you could add or subtract them.
STEPHANIE: Yeah, exactly. Because I was just thinking, you know, like I mentioned, I often represent that when I count the number of records in my database. Or even, I can recall a problem that I encountered previously where I had to figure out the number of orders for an e-commerce store based on the location. And I held that in a hash data structure, but really, it's a tally. [laughs] And so, yeah, I think that maybe we've kind of stumbled across a very useful representation of very common problems.
JOËL: Yeah, I can see there being use for a generic version of this. Maybe that's your chance to go out and create some open source, or maybe this already exists. We should maybe research that first.
STEPHANIE: Yeah, if any one of our listeners know, [laughs] send us an email.
JOËL: So something that was really interesting to me about all of these changes, introducing the value object, cleaning up the reduce, all that stuff, is that, in the end, once there was this object that represented the sort of aggregate compound value, the tally, the equation stayed the same. And I can just slot in those variables as before.
Whereas previously, when we switched from just a single count to this, like, we need to take into account sizes, that, like, broke the initial implementation of the code. So it's funny how you sort of go from a simple implementation and then a new requirement, which breaks it. But then just changing the hash to be an object all of a sudden meant the original code, which didn't really need to change, just worked again.
STEPHANIE: Hmm. That's really interesting because it makes me think about how maybe the primitives were perfectly fine, you know, in the first set of requirements, and not until, like, an additional complexity or something new emerged that we needed to reach for an object that could support the change.
JOËL: Yeah. And I think I'd argue that if you're doing just raw T-shirt count, an integer is probably the right value to use there. But if you're doing counts broken out by T-shirt size, then having an object that's a single thing that responds to plus and minus so that you can use it in the same equation where you're saying sum up all of these things from the warehouse, and then do a difference with the T-shirts that we need that becomes really nice.
STEPHANIE: Do you think there was some value in going through the hash implementation first, though, and then arriving at using a more custom object? I'm curious, kind of, like, what that journey was like.
JOËL: It's hard to say. I would say maybe yes. But I could also see someone who's done this a lot, who's built the sort of heuristics, the instincts around this could immediately be like, oh, wait, we're trying to sum hashes here. Clearly, these need to be objects. Clearly, what we need is something that implements a plus operator that we can reduce.
STEPHANIE: Yeah, I like that a lot. Because part of, you know, knowing what to reach for is having seen it enough times and seeing patterns, right?
JOËL: This reminds me of a particular pattern that comes from the world of functional programming. It has a kind of scary-sounding name. It's monoid, not monad, monoid. And the idea in the context of Ruby is it's some kind of object that implements a plus method. So two of these objects can combine each other. And typically, you also have some sort of empty version of this object or some sort of, like, zero value.
And there are a few rules around, like, kind of how this object has to behave. Like, you can't just put any implementation you want in that plus method. There are certain requirements that have to be met for it to be considered, like, a valid plus method in this pattern. But if you do meet those requirements, then arrays of this type of object are just inherently reducible because you can just reduce plus over them.
And so I think anytime you're trying to aggregate some sort of unwieldy data structure, that's probably a useful pattern to have because, you know, wait, as long as I have a way to combine two items together and potentially some way to generate an empty state, I can aggregate this whole list.
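In Ruby terms, the pattern amounts to a combine operation plus an "empty" value. Reusing the hypothetical Tally sketch from earlier, passing that empty value to reduce as the initial accumulator means even an empty list aggregates cleanly.

  empty = Tally.new
  tallies = warehouses.map { |warehouse| Tally.new(warehouse.stock) }

  # With an initial value, reduce returns `empty` rather than nil when the
  # list happens to be empty.
  tallies.reduce(empty, :+)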
STEPHANIE: I'm curious, does that also apply to non-numerical values?
JOËL: Yes, any kind of aggregation combination, whatever. So maybe what you're doing is you're combining strings together.
STEPHANIE: Got it.
JOËL: String concatenation is a form of combination. And so you could be reducing some kind of concatenation over an array of strings, and you end up with one aggregate string that's the combination of all of them. Sometimes, though, you're not just taking values and putting them next to each other so that what you have is kind of all of them at the same time. You might instead do some kind of comparison.
An example here might be Boolean values. You might say the way that I'm sort of, quote, unquote, "aggregating" two values, two Boolean values is with the operator AND. And so you have two Boolean values, and you get a new sort of combo value out of them, that is, are both of these values true?
STEPHANIE: Whoa, that's blowing my mind right now. Because I had never thought of the, like, AND operator on Booleans, essentially aggregating them into a single true or false value. [laughs]
JOËL: It's kind of weird, right? But I guess we do the same thing with numbers. One plus one doesn't give us 11 unless you're writing JavaScript.
STEPHANIE: [laughs]
JOËL: You know, we get a new number too, that is some sort of, like, combination of the two. So, similarly, it kind of makes sense that two Booleans might combine to create a new sort of third Boolean value. Where it gets really interesting, though, is that once you have this sort of combination, if you try to reduce AND over an array of Booleans, what you effectively have created is Ruby's Enumerable all method that checks to say, are all values in this array true?
STEPHANIE: Interesting. But really, the way that's implemented is just, like, a definition of what aggregate means for Booleans, right?
JOËL: Right. But it's taking that idea of aggregating two values and scaling it up to an array of many values. So we know Boolean AND. Another way to think about it is, are both of these values true? Is the question it's trying to answer. And then we're scaling that out to say, is both of these values true for everything? So are all of these values true? Because we're going from two to many.
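As a tiny illustration of scaling the two-value AND up to a whole array (just a sketch of the idea, not code from any particular project):

values = [true, true, false]

# Combining two Booleans at a time with AND, scaled out over the array...
values.reduce(true) { |result, value| result && value } # => false

# ...is effectively what Enumerable already gives you with all?
values.all? # => false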
STEPHANIE: Cool. So maybe the takeaway for some of our listeners could be, like, next time they find themselves having to deal with a collection or an Enumerable and, you know, using a reduce or, like, trying to break it down to compare two of those elements first, and figuring out how they want that interaction to work. Does that sound right?
JOËL: Yeah, absolutely. Once you have a way to combine two elements together, if you want to scale it up to n elements, you just plug it into reduce, and it does the rest of the work for you.
My big takeaways from this exercise were one: the value of creating custom objects. Wrapping primitives like hashes in an object and adding a few domain methods on them made such a difference in my final implementation.
Secondly, I think it's what you're saying, this whole thing about breaking down complex reduce problems by figuring out how to combine two items and then just using reduce to scale it to an array.
And then, finally, I think this is a point that we've mentioned on this podcast before, the value of specific vocabulary - being able to name things and patterns. And so knowing some of the details of this monoid pattern and having a name for it means that now I start seeing it in places. And so the moment I see, oh, wait, we're aggregating values; we're combining two values together and then doing this in a reduce, immediately, my mind goes, wait, that feels like monoid. And then, I can explore that with my custom object to try to make the code better.
STEPHANIE: Yeah. And even if you don't remember the monoid part specifically, the idea of Tally, like, that is something that I think is really cool and really applicable to a lot of codebases.
JOËL: So, for those who are interested in more practically what this code looks like, I've put this all in a Gist, and I'll link to it in the show notes. This was a really fun exercise for me because I used sort of two development techniques to help sort of build this out.
One, I went with a kind of literate programming approach, where I had just a Ruby file and put in some big comment blocks talking about what the setup was and what I was trying to do, then described how I'd like to use the code, and then tried to write code that made that happen. And then, for the actual objects that I was using under the hood, I used TDD to test-drive and build them out.
So you've got all of that in the Gist. We've got the tests and that sort of literate programming script that almost reads like a mini blog post, except it's executable Ruby. So, if you're curious to see about that, the link is in the show notes.
STEPHANIE: That's a very cool format. I'm excited to take a look.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël is joined by thoughtbot Software Developer and Dirt Jumper Daniel Nolan. Dirt jumping is BMX-style riding 🏍️ with really enormous dirt jumps.
But for a person who loves excitement in his spare time, for Daniel at work, it's not the new and shiny that interests him. When he dives into something, the "boring" parts of tech are what he finds most fulfilling. He wants to know the "why," and in this conversation, he explains how it sustains his career.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined with a guest, Daniel Nolan.
DANIEL: Hey.
JOËL: And, together, we're here to share a bit of what we've learned along the way. So, Daniel, what's new in your world?
DANIEL: So, recently, I just picked up a dirt jumper bicycle, and I've been learning to get better at dirt jumping. I ride mountain bikes quite a bit. But jumping is something that I haven't been super comfortable with.
JOËL: What is dirt jumping?
DANIEL: So, Dirt Jumping is kind of more like BMX-style riding with really huge dirt jumps. If you do it right, you don't pedal. So you should be jumping and pumping and making your way around the track or the course without the need to pedal. So it's actually pretty interesting. And it's supposed to level up your mountain bike skills if you get good at this.
JOËL: So the idea is you start up high somewhere, and you just kind of let the gravity bring you down?
DANIEL: Yep, that's the idea. So you start up on a platform; usually, you drop in. And then, from there, you start the series of jumps or rollers, pick up speed, and then kind of go into some bigger jumps, and berms, and stuff and make your way around the course. It's pretty fun.
JOËL: So you're coming down from a high, and then you hit a dirt ramp somewhere. You go up in the air. You fly off, and you're doing, like, a flip or something like that?
DANIEL: Yeah, not quite there yet. Some of the people I ride with can do flips, and no handers, and stuff; definitely not there, but just getting comfortable on big dirt jumps. I think the scariest thing is not being able to see the landing. So it's, like, if it's just a little jump, like, you know where you're going. But if it's like one of those big jumps with a huge lift, you just have no idea what's on the other side. And no matter how, you know, even if you've hit it ten times, it's still scary because you can't see it.
JOËL: How do you land safely when you can't see your landing place?
DANIEL: There's a technique where you kind of push the bike down. So, like, once you're in the air and you've kind of leveled the bike out, and you spot the landing, you force the bike down to kind of accentuate that movement and make the bike go down.
JOËL: Just so I get a better mental picture here, how high up are we talking about when you're flying off this ramp?
DANIEL: So some of these dirt jumps are probably...on the ones that I'm riding, they lift to probably, like, you know, eight, nine-feet high, and you're probably getting, like, three to four feet in the air over that to clear it.
JOËL: Wow. That's a little bit of elevation right there.
DANIEL: Yeah.
JOËL: I would probably be scared.
DANIEL: The safe jumps have what they call a table on top, so there's no risk. Like, if you land on top of the jump, you're not going to die. But, yeah, typically, they're flat on top. So you have to have enough air and enough momentum to clear that flat part and land on the downside.
JOËL: I like to do a lot of bouldering. In this case, I do it in a gym, so you're climbing up a wall that's maybe 15 feet high. Even at that height, I feel a little scared; not very good with heights. How do you feel when you're up 15-20 feet on a bicycle, and you don't know where you're going to land?
DANIEL: It's scary. I mean, just, there's no way to get around it. But that's the whole reason I started getting into the dirt jumping is just try to get it to where it's more second nature, and you're not so terrified.
JOËL: Kind of pushing some of your personal limits, then.
DANIEL: Yeah, for sure.
JOËL: So it sounds like you're introducing a lot of excitement and novelty in your personal life. And that contrasts to a recent conversation that we had where you'd mentioned that, at work, it's not the kind of shiny, new tech that excites you, or even kind of the scary parts. But you find that the boring parts of tech are what are most fulfilling to you.
DANIEL: Yeah, I actually really do like diving into the more boring parts. And I think to give just a little history about myself and maybe why that might be, I'm a second career programmer. My original career, or what I thought was going to be my lifelong career, was I was an auto mechanic. So I was a certified VW tech in my early 20s. And I've always kind of had this passion for, like, why things are. I want to know why something is. So, when I dive into something, it's like, I want to know the why. I don't want to just know what the fix is. I want to know why that thing fixes it or whatever.
So I find that getting into the more boring parts of programming, and especially in the Rails stack, allowed me to do this. So, for example, like, a gem that Dependabot can't upgrade, and it just sits there. The PR just sits there, and nobody wants to touch it. So then I come along, and I'm like, well, why won't it upgrade? Why can't we upgrade this thing? And I start diving into sort of breaking changes. Is there stuff like that?
So fixing things, for me, has been something...since I was just a little kid, my mom said I always used to take things apart and put them back together. I always want to know the why. Doing some of the more boring stuff, you get to do a lot more of that.
JOËL: So it sounds like really you're motivated by curiosity pretty strongly.
DANIEL: Yeah, for sure. I don't want to just know what a quick fix is or something like that. I want to actually get in. I want to read this, you know, like, an example, like, a gem that won't upgrade, like, I want to go dive into that source code. I want to see what the source code is doing. I want to figure out the why, you know. I don't want to just Google for, like, hey, I can't upgrade this gem. What do you think I should do? So I've always been super curious. That's how I've been able to sustain in software development and not really get burned out. It's what makes me tick.
JOËL: How do you feel about bug fixing or, like, chasing down bugs in general? Is that something that really scratches that itch?
DANIEL: It definitely does. I feel it's, you know, very similar to somebody comes to you, and they've got a broken car. And they're like, "Hey, this thing's making this noise when I'm going down the highway at 50 miles an hour, you know, what is it?" You know, it's very much the same thing. Like, you get an end user, and they're like, "Hey, when I click this button in the browser, and, you know, this thing doesn't load," or, you know, "I'm getting a 500 error." It's very relatable. I love diving into those types of things. Like, I love fixing bugs.
JOËL: It's interesting that you related that back to your work with cars because it sounds like you were doing sort of the mechanical version of debugging.
DANIEL: Definitely the mechanical version of debugging. But it's still...it's a lot of the same stuff. It's a lot of process of elimination and stuff like that, right? Like, you got a noise coming from the front left. It could be anything, you know, it could be the wheel. It could be brakes. It could be, I mean, there's a number of things it could be. So you kind of got to start going down the path of like, you know, well, it's not this, and it's not this, and it's not this.
And it's very similar when you have a bug, you know, and you start down the path of, like, oh, well, I can click the button. The post is getting sent to the server. But, for some reason, you know, the parameters aren't going past the controller or something like that. So, you know, you maybe go look for some primitive params or something, I don't know. But it's very similar as, you know, just going through the process of, like, checking things off and trying to get to the root cause.
JOËL: Yeah. So, when you joined software, you already had this skill kind of really built up pretty well.
DANIEL: Yeah, I definitely did. Being a mechanic, a lot of times, I would get, like, the problems that nobody else wanted to deal with. Because people were like, oh, he likes troubleshooting electrical issues and stuff like that, so give it to him, you know. Whereas the other mechanics were more, like, oh, I want to rebuild the engine, or I want to put a new trans...like, it visibly needs an engine.
Like, oh, there's a rod through the side of the block. It's leaking oil everywhere. Okay, yeah, like...versus, oh, it's got some electrical bug where, you know, one injector doesn't fire every 50th time, or something like that. And something that you just really have to, like, trace down and figure out why it's not happening. So I feel like I had a pretty good, no pun intended, toolbox coming into...
JOËL: Nice.
DANIEL: Software development as far as just problem-solving skills, I guess.
JOËL: It's interesting. A few years ago, I interviewed a bunch of people about debugging and the parts they liked, the parts they didn't like, hopes, and fears. And most people I talked to actually enjoyed debugging, except that, oftentimes, when bugs come up, it's because they're blocking something else. And there are time pressures.
And so all that extra context is what makes debugging stressful for most people that I talk to. But the actual act of debugging, that kind of process of elimination or trying to hunt down the source of a bug, many people I interviewed actually found that highly fulfilling. So it sounds like it taps into a lot of the same interests that you have.
DANIEL: That's super interesting, yeah. And it is fulfilling, too, right? Like, you go on this hunt, and you kind of, like, put on your detective hat and go try to, like, figure out what the breaking thing is. And then you get the payoff also of like, okay, well, you know, if you actually fix it, you resolved it, and you get that little bit of payoff.
And I think for any job for me to be fulfilling, I have to have that kind of payoff where you start with something broken, and you fix it. You know, you start with an empty editor, and then you build out a web application or something like that. So it's just, like, having that payoff is definitely huge. You know, I just find that part of software development super fulfilling.
JOËL: So you've mentioned debugging. You've talked a little bit about gnarly gem upgrades. What other types of work fit under that boring part of software heading for you?
DANIEL: Putting in some tools for best practices maybe, you know, like setting up linters and stuff like that, automated code review kind of things. It's stuff that you tend to see, like, teams and stuff want, but they just never have the time. They're always building, you know, new features and stuff. So I think a lot of that stuff, like, gets pushed by the wayside. Refactoring code that's good enough. It's good enough, and it's working, but it could be a little cleaner, a little easier to read; kind of enjoy that, too. I don't know, do you have any things that you would consider boring programming work?
JOËL: I think some types of features sometimes can feel boring, maybe a little bit beyond boring. It's scary or unpleasant to work on. Sometimes there are just parts of the code that are really gnarly to work with. I'm like, oh no, I've got the ticket that requires touching some of that gnarly code in a particular part of the app. There's one app, in particular, I'm thinking of that this was the wizard code, or the multi-step form processing code that had gotten really gnarly, and so nobody wanted to touch it. And if you had a ticket that required touching that code, it's like, oh no, you drew the short straw.
DANIEL: Yeah. I've definitely had experiences like that. I had a feature I worked on at a previous job where it was...the feature was referred to as the black box because nobody knew how it worked. Nobody knew what it actually did. But they knew that it didn't produce the results they wanted. And they knew it needed to be refactored, so that was definitely one. I don't even know if I would say that was boring, but definitely, a scary part that nobody wants to touch.
There's just all kinds of stuff that's boring. Like, if you're just constantly adding new features and doing new things, and adding to the app, there's code that's probably not used anymore. So using something like Coverband and going in and finding unused code and cleaning out that kind of stuff. Optimizing queries, again, you know, you build something, and it works. It's there. It's doing its thing. And nobody's complained that the endpoint's slow. But when you run it, you notice that there's like, you know, 70 N+1 queries. So you go, you know, you go touch that up a little bit.
I feel like a lot of people and a lot of programmers just don't want to do that work, or it may not even be that they don't want to do that work. It's just a lot of times; there's maybe no time for it. So that's no fault of anyone in particular. But I think we need to, you know, figure out a way to make some more of these things fun. Maybe more teams need to build in, like, gem upgrade day or something. And, you know, like, go upgrade the ones that are hard to upgrade. Upgrade the ones that Dependabot can't, that have breaking changes. Or, I don't know, there's got to be some way where we can make some more of this, like, the tasks that keep the car running more enjoyable, right?
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: Would it be fair to describe the types of work that you've been talking about here, you've been describing as the boring parts of development, would it be fair to put those under the heading of maintenance?
DANIEL: I think it would be fair to put it under maintenance, maybe even relating that back to cars. It's the same thing, right? Like, you can put a new paint job on the car, and you can get some new, shiny wheels. And you can, you know, put a turbocharger on it or something but, eventually, you know, you got to change your oil. You got to change your tires. You need to change your air filter, new windshield wipers, you know, so you can see when it's raining.
These things are all just things that need to be done. Otherwise, no matter how shiny your car is, it's just not going to go anymore, right? So I feel like maybe most of these tasks are maintenance. It's not the shiny, new thing. It's just keeping the thing running.
JOËL: And, I guess, traditionally, at thoughtbot, we've done engagements where we're either building new software or new features on existing software. Or we might be coming in and fixing some larger problems, maybe doing something like helping with a Rails upgrade or helping to backfill a test suite, some larger kind of chronic problems.
But we recently introduced a maintenance service that is, instead of having people full-time there to do a particular task, it's more of so many hours a month to just do a lot of those boring things where we're doing like you said, potentially gem upgrades, or fixing bugs, or things like that. Is that a team that you would be interested in joining?
DANIEL: So I actually got to work with Jeanine and [inaudible 16:05] on support and maintenance for about a month, month and a half maybe. I worked on an upgrade. And that's exactly what I did. It was upgrading a Rails 5.2 app to Rails 7. And yeah, it was not only super fun, but the other fun side of that, for me, is that a lot of times when I'm doing these things, and you find breaking changes, the gem is either, like, ten years old, and it can't be upgraded because there's nobody maintaining it anymore. So you maybe have to create a fork, or you maybe submit a patch or something.
So this is a way that I've been able to get, you know, my feet wet in open source without really contributing to a specific open-source project. So I have tons of little commits on different gems here and there fixing stuff up or something I found along the way that couldn't be upgraded or something like that. So yeah, the support and maintenance team is definitely something that I'm interested in, and I had a good time working with them for that rotation.
JOËL: And I think it's really interesting you're talking about the pattern of open-source contributions that you were having. And I think that's something that's really valuable to the community, just those little patches in various places because it's broken or is no longer compatible with other things. What you're doing not only helps unblock you and your client but also is probably unblocking a lot of other people in the community, so might have a larger impact towards other people than if you were putting all of your time into contributing to one more well-known gem.
DANIEL: Yeah, for sure. You know, I know for sure, like, some of them I have a commit on Honeybadger, something that broke recently, a Sidekiq upgrade that broke, and there was just a small change to the way the error handling worked. And it was, like, causing just this flood of errors. And it was just a simple change. But I'm sure not only did it fix it for us and the app I was working on, but, yeah, I'm sure quite a few people benefited from that one.
JOËL: So, for those listeners out there who are hearing you talk about some of this maintenance or boring work and maybe are feeling inspired to go and do that on their team, how would you recommend getting into that?
DANIEL: Well, I mentioned Dependabot. If your team's already using something like Dependabot for, like, minor gem upgrades, maybe there's a PR that's stuck that Dependabot can't upgrade because there's some breaking changes in one of the gems it's trying to upgrade. That's a great place to start. You could run, I believe, bundle outdated. And that will tell you which gems in your Gemfile are outdated and need to be updated.
So any of them that are going to be major version bumps, you know, going from, like, two to three, typically, you'll usually have breaking changes somewhere you can kind of jump in and go fix those breaking changes. Maybe there's even breaking changes in another gem that may be related or something that you're trying to upgrade. And, you know, you can't upgrade past version two because the new gem you're trying to upgrade depends on that gem-specific version or something like that. So I feel like that's a great way you could jump in.
Maybe some other ways would be if, you know, maybe you want to optimize queries or something like that. Maybe you have Sentry or some other type of software that reports on these things, New Relic, you know, so something like that you could go dive into and pick up an endpoint that's responding slow or something that has some N+1s being reported and go dive in, see if you can maybe touch those up.
JOËL: Those are all great suggestions. I know I once worked with a developer who would dedicate...I think it was the first hour of his day. So he'd come into work in the morning, and before jumping in on feature work, the first hour of his day, he would just do small improvements on things and not just, like, refactoring for the sake of refactoring. But they're things like you're describing, like, oh, do we have a gem that needs to handle an update?
Did one of our monitoring services highlight maybe some slow queries that I could tweak a little bit this morning? Or are there areas where we're feeling pain that we can make things better? And just by doing a little bit every day, he became known as the person on the team who is, like, having an impact, and making everybody's lives better, and making the codebase better, making the product better. And I really appreciated this person.
DANIEL: Yeah, sounds like an angel. Like I was saying, you know, I kind of hinted out a little bit before...I think these things...and it could be because they're boring, or it could just be because you have stakeholders that are, like, hey, we need to get this new feature out. And I just feel like a lot of this stuff definitely gets pushed to the back burner often, so figuring out a way to incorporate some stuff into your day like that, or automating some of it, you know, using things like Dependabot and stuff like that. I think they're all just great ways to keep the app or the project in good shape.
Another thing that I've done is adding custom RuboCop rules to enforce things the way that you want them. So, like, it comes with a standard set of rules, but you find some pattern that's being, you know, repeated, and we don't want that pattern repeated. You know, spend the time to write a RuboCop rule so that that pattern doesn't get repeated. And you don't have to constantly police this in PRs, you know, you let the automated tool do it for you. But I've never really heard anybody get super excited about writing a RuboCop rule.
JOËL: And they're valuable.
DANIEL: Yeah, they're definitely valuable.
JOËL: I think the most excited I've seen people get about RuboCop rules is typically as part of an incident report. So something went terribly wrong, and maybe production went down. And then you're doing a post-mortem, and then you realize, oh, in this way, some bad code made it through. And you decide how can we prevent this from happening again? And the consensus is, oh, maybe a RuboCop rule would have prevented this. So I think that's generally where people actually start caring about a RuboCop rule is after there's been some larger incident.
DANIEL: Sure. We had something where I think, like, we first started using system specs on an app I was working on, and some people were using the path helpers, and some people were using the URL helpers. And, for some reason, the ones that were using the path helpers would fail randomly. I don't really recall right off the top of my head why, but we wrote a RuboCop rule to just enforce using the URL helpers instead of the path helpers. So we didn't have to constantly police it, and it just made everybody's lives easier.
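For anyone curious what a cop like that might look like, here is a rough sketch of a custom RuboCop rule along those lines. The cop name, message, and matching logic are hypothetical, not the actual cop from that project; you would typically load it via the require list in .rubocop.yml and scope it to spec/system with an Include pattern:

module RuboCop
  module Cop
    module Project
      class PreferUrlHelpersInSystemSpecs < Base
        MSG = "Prefer `_url` helpers over `_path` helpers in system specs."

        # Flag any method call whose name ends in _path, e.g. users_path.
        def on_send(node)
          return unless node.method_name.to_s.end_with?("_path")

          add_offense(node)
        end
      end
    end
  end
end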
Figuring out a way to set some time aside for this stuff or automating this stuff is definitely beneficial because you may not always have somebody on the team that's interested or that wants to champion this stuff.
JOËL: Hey, you mentioned the word champion, and I like that word because it's the kind of thing that often doesn't get prioritized. And so you need somebody to advocate for that work getting done. And, generally, I've found this work is often cheaper to do sooner rather than later. If you postpone it too long, and now it's been ten years, you've not done a Rails upgrade, and your app is still running on Rails 3, it's going to be very expensive to do that work.
DANIEL: Yeah, "the biggest cost of software is maintenance" is definitely true.
JOËL: Maintenance is valuable work, and we should celebrate it more.
DANIEL: For sure.
JOËL: On that note, shall we wrap up?
DANIEL: I think so.
JOËL: Thanks for joining us, Daniel. Where can people find you online?
DANIEL: You can find me on Twitter or on GitHub. Both are danielnolan.
JOËL: All right, thank you very much for joining us.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël gives a recap after attending RailsConf 2023 in Atlanta, Georgia (and yes, there was karaoke! 🎤 🎶). Stephanie plugs The Tightly Coupled Book Club Podcast from friends and fellow thoughtboters Aji and Mina Slater, where they're reading The Rails Guides from cover to cover, treating it like a book club, and having discussions about the documentation as they read it together.
Stemming from a Twitter thread by Joël, their main topic focuses on not all numbers being numbers. So: if someone is submitting a phone number through a form, how would you store it in the database? As a string, or normalized into an integer?
Thoughts, Dear Listener?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've just returned from attending RailsConf 2023 in Atlanta, Georgia, and it was so much fun. I got a chance to attend some really good talks. I got a chance to connect with other people in the community. I always love the hallway track or any of the events that happen in the evenings after the conferences. It's a lot of fun.
STEPHANIE: Nice. I found it funny that you emphasized just returned because you literally walked through the door of your home and then got online to record this podcast with me. [laughs]
JOËL: Productivity.
STEPHANIE: What were some of your highlights from the conference?
JOËL: I really appreciated a talk by Elle Meredith about how to say no. And she gave...I think it was nine sort of different ways that you might want to have a conversation about saying no to a request. They're almost like design patterns but for interpersonal conversations. And so she covered situations where you might want to say no to maybe someone who's higher up in a hierarchy than you, so a manager.
But also, what it's like sometimes when it's the other way around if you're a manager and you have to say no to someone who reports to you, and how to handle some of those conversations. So I really appreciated the nuance that she had to share. And I think the strategies were just really practical.
STEPHANIE: That's awesome. Yeah, I know we, as developers, love some good patterns and frameworks. That actually reminds me of something else that I'd read recently, a newsletter called Culture Study by this journalist Anne Helen Petersen. She recently sent out a dispatch about how to say no or decline requests through email. And I was just thinking that it's such a challenging thing to do. But having a script or seeing examples of how other people do it is really helpful to just make it easier the next time that you want to say no, but then you're not sure how.
JOËL: It's not easy to say no. I think most people want to please, and it's much easier to say yes. And maybe even you want to believe that you can say yes, that you can do everything with limited resources and not have to prioritize. And then, of course, reality hits you, and you're in a worse situation than if you'd said no upfront or had at least an honest conversation about the limitations that we have and the prioritization that needs to happen.
STEPHANIE: Yeah, absolutely. So, over in the thoughtbot Slack, I saw a lot of really awesome praise and even photos from the conference, including stuff about a thoughtbot-sponsored event that we hosted over at RailsConf. What was that like?
JOËL: That was so much fun. So we hosted an event out in Centennial Park after day one and just had some lawn games, some snacks, some drinks. And people just came to hang out in the park and had a good time. And I got to chat with a lot of people. I think it somehow just felt really relaxed yet really social.
Sometimes when you're in the hallway in the convention center, you'll see groups of people talking, but also kind of people awkwardly walking around and struggling to connect. And there are just so many people around. And if you don't know anyone, it can be really hard to kind of break in. And I felt that this setting...I'm not sure exactly why; maybe it was a smaller amount of people. Maybe it's because it's a more relaxed atmosphere. You're outside. Everyone was kind of mingling and talking and seemed to be having a good time.
STEPHANIE: That's really cool. I'm so bummed to have missed it. Yeah, I hear you about the hallway track. It's like you're still kind of, you know, either in a convention center or a hotel, so there is just that vibe of formality. And taking it out of that venue and making it super casual, you know, I think that also almost allows people to not talk about tech, or the conference, or anything work-related and just have fun.
JOËL: Definitely. Although there's definitely fun that happens in all sorts of ways after the conference as well. One of the evenings, I went out with a few other thoughtboters and some other attendees at the conference; we went to a karaoke bar.
STEPHANIE: That sounds like a lot of fun. I think that's become almost a tradition for the thoughtbot crew whenever we do things out in the world, karaoke. And I'm trying to get escape rooms to be a conference tradition among my group of friends. So you and I, we participated in escape room the last conference that we were at together, and that was a lot of fun. So this is a thing that I hope to keep doing [laughs] next time I see you in person.
JOËL: We didn't just participate. We broke the record.
STEPHANIE: It's true. Yeah, I didn't want to brag on the podcast. But we did break the record. And we got to, like, write our names on a little poster to put in the office. And I hope it's still there, at that escape room company in Providence, Rhode Island.
JOËL: I'd be curious to hear from some of our listeners what some of their traditions are when they go to a conference. Do you have something that you like to do with your conference friends when you meet up? Let us know at [email protected]. And we'll give you a shout-out on one of the upcoming episodes. So, Stephanie, what's new in your world?
STEPHANIE: Speaking of friends and community, I have a podcast to plug today. So our friends and fellow thoughtboters and married couple Aji and Mina Slater, they just launched a podcast called The Tightly Coupled Book Club Podcast. And what they're doing is reading The Rails Guides from cover to cover and kind of treating it like a book club and having a discussion about the documentation as they read it together.
And I listened to the first two episodes this morning, and I really enjoyed it. I thought it was such a cool idea and a really great format as a person who enjoys talking about books and things I've read with you. I mean, sometimes we've joked that this is kind of like our two-person book club. But it's really cool to listen in on other people who are, you know, are really knowledgeable, or have a lot of experience about a thing, and then also share their experience reading something as they come across it. I thought that was really interesting too. There's a real-time aspect of it that I liked.
JOËL: I love the idea of taking the book club concept and then reading the Rails Guides. What a really original idea.
STEPHANIE: It's funny because the Rails Guides do kind of read like a book. I went on to the documentation today just to kind of give myself a refresher as I was listening. And you can download the guides on your Kindle. So, in some ways, they've kind of leaned into that format. And since a lot of it is also just, like, prose, it is more like paragraphs rather than quick bits or API reference.
JOËL: Right, right. It's something that's meant to be read in larger chunks rather than just found through a search and then referencing one entry in a list of methods or something like that.
STEPHANIE: Yeah. I think my other favorite part about doing an idea like this is I think we all kind of have our own different experiences with the Rails official docs and guides. And I really liked that they're just like, yeah, like, you know, the way that an individual developer approaches the documentation can be totally different. And they kind of talk about how they do it or how they don't do it. Kind of all with the intention of wanting to better understand Rails and also wanting to better support people who want to get into Rails and are kind of entering the universe from the documentation for the first time.
JOËL: Yeah, and I think the Rails Guides, in particular, are often that first point of contact for a lot of newer Rails developers. I know that, for myself, when I was first getting into Rails, I was on those guides all the time. Between that and cherry-picking examples from the Michael Hartl Rails tutorial, that's kind of how I learned Rails.
STEPHANIE: Yeah, I like that a lot. For other folks who want to hear more about just Mina and Aji's experience with the Rails Guides, you should all check out that podcast.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So this is a conversation we had the other day, and I'm curious what your take is on this. If someone is submitting a phone number through a form, how would you store that in the database? Would you store it as a string? Because sometimes it comes with some extra formatting. Would you normalize it and try to store it as an integer because it's a number? What's your take?
STEPHANIE: Okay, to answer that question, I think I'm going to gripe a little bit first because the thing with phone numbers is that there are so many ways to format them. So, when you were saying, like, oh, you know, how would you input a number? I'm like; I don't know. Even if I were to write down a phone number on any given day, would I add the parentheses? Would I include the dashes? Do I add the country code? It honestly really depends.
And it's just totally different just based on what the form is asking of me or what channel I'm doing it in. So I think phone numbers is an interesting example because there are so many ways of representing it. But I think that actually also answers the way that I would do it is saving it as a string. Because while a phone number is made up of numbers, it's not exactly a numerical value.
JOËL: You're hitting on something pretty deep here about the fundamental nature of what a phone number is. What do you mean by saying it's not numeric?
STEPHANIE: I guess it doesn't mean anything in mathematical terms, I suppose, is what I was trying to get at. Like, it happens to be in the U.S., at least, you know.
JOËL: Always gotta qualify that, right?
STEPHANIE: Right. It happens to be a 10-digit identifier, I suppose. I almost said number there, but I think I'm trying to avoid it for the sake of my argument. But it's not necessarily something we would try to do arithmetic on. It's not something that...
JOËL: I'm kind of laughing at the idea of trying to do math on a phone number, adding two phone numbers together, doing some sort of, I don't know, add 20% to your phone number. Like, what does that even mean?
STEPHANIE: Yeah, absolutely. It's something that mostly remains static, right? And when it changes, that means something different. It means the way that you would reach someone via a phone call or that kind of communication has changed; it's more of an event. The value transforming doesn't mean anything on its own.
JOËL: So you mentioned that this is not the kind of number that you can do arithmetic on. That kind of reminded me of an article from the UK's I want to say government design guidelines and how to design web forms that are going to be used by any service that's part of the UK Government. And they have this category of numbers that they call non-incrementable.
And their suggestion is that even though these things are represented as numbers, they should not use the HTML number input for them because you're never going to hit that little plus or minus arrow to go up and down and change the number. And that, instead, it should just be a regular text input.
STEPHANIE: Yeah, that's interesting because the idea of incrementing, for me, is definitely more attributed to the idea of a numerical value where it's a number and then what that number represents. So it could be weight. It could be money. It could be some other measurement. Another one could be quantity, right? If you have a shopping cart or something, and you're incrementing that value because you want to buy more of whatever item. That, to me, is where I see that type of input more frequently.
And so it would make sense that, you know, a phone number where the person is usually inputting a fixed thing that, you know, because that is not something that they can change, or they are wanting to change in that form, would be represented with a different HTML input.
JOËL: I don't know about you, but I feel like when I'm having a conversation with someone, occasionally, they'll just sort of mention something that my ears will kind of prick up because it feels like a keyword. And you mentioned earlier the keyword identifier, that a phone number to you feels more like an identifier than a number. And I want to dig into that because that feels really key to this conversation.
STEPHANIE: Yeah. I think there are a lot of things that happen to have numbers in them because that's the characters we use to denote some kind of unique or mostly unique value [laughs] of something. So, like, most people have a phone number, some people don't, some people may have multiple. But, usually, that phone number is tied to a person or a business or some kind of...it's like a record of how to reach someone.
JOËL: Yeah. I guess to a certain extent, a phone number is like a...it's not an ID for a person, but it's an ID for a phone, or for a SIM card, or a line, an account, something like that that's publicly shared out. But even though it's publicly shared out, it's effectively...I guess you might call it a third-party ID issued by the phone companies. If they're supposed to be globally unique, you might say it's the global telecoms consortium that has come up with this set of rules.
STEPHANIE: [laughs] Yes, we're clearly not telecommunications experts over here [laughs] or just have a, you know, regular human-level understanding of how telephones work. [laughs]
JOËL: But I think that's a really interesting idea because I think my instinct with phone numbers is I want to normalize them when I get them through a form. And yet I have very strong instincts that anytime I get an ID from a third-party service, let's say I'm interacting with an API, and I get a new user through there. The ID from the third-party service is there. When I save it into my database, I will create a new primary key for that row because I want to own my own primary keys.
But I will have a row for that third-party ID, and I will make sure to not modify it. I will store it in its raw form. So why do I feel the need to normalize phone numbers? If they are third-party IDs, then maybe I should just be storing them in raw format as entered by the user. At the very least, maybe I should store them as strings rather than trying to turn them into numbers.
STEPHANIE: Yeah, that's interesting. I mean, I do think phone numbers are a bit of an exception here in that you could think of it as a third-party ID. But it's also so ubiquitous in how our society functions. And the way people use it, it's like it's not a single service that this is owned by where you want to make sure that you're capturing it the way that that third party would. It's something that is so commonplace that it can end up having different forms because of the way that users interact with it.
JOËL: I think one thing that's really interesting with third-party IDs is that even though they might come back as numbers like you said earlier, we don't do math on them. The main thing you ever do with a third-party ID is use it to make requests back to that third-party service. So, if I have a user I've pulled from some other service and I ever want to talk to that service again, I would need to use that third-party ID that I have stored in order to make sure that everything stays in sync.
That's kind of what we do with a phone number, right? We store it. And then the reason we might want to store a phone number is because we might want to programmatically make a phone call or send a text message that's interacting back with that telecom's world. We might want a human to do that. But the effect is still kind of the same.
We're not doing any transformations or work on it internally within the software. It's always when we want to make some kind of phone call, either manually by a person or automatically with a computer. And the only way we can do that and reach the right person is by using the ID that was given to us.
STEPHANIE: Yeah, that's a really good point. I mean, I have seen many applications that do hand-roll their own validation or normalization on phone numbers. But I do think that there is a library for this published by Google I think. It's called Libphonenumber. And so it's an open-source library for parsing, validating, and formatting phone numbers for, I think, most countries. It's very comprehensive.
JOËL: I think the difference maybe with a phone number and a form where you might want to do a little bit of polishing on it is that it is manually entered by a user. It definitely needs validation because a user manually typing in an ID into a form absolutely could have an error in it. Normalizing it for storage purposes, maybe, but at the very least, yeah, it needs to be validated because someone hand-typing in an ID is not something you want to rely on too heavily.
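A minimal sketch of the "store it as a string, validate lightly" approach, assuming a hypothetical users table and a deliberately loose format check (real numbering rules vary a lot by country):

# In a migration: the phone number is just text, not an integer.
class AddPhoneNumberToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :phone_number, :string
  end
end

# In the model: a loose sanity check on manual entry, nothing more.
class User < ApplicationRecord
  validates :phone_number,
            format: { with: /\A\+?[\d\s().-]{7,20}\z/ },
            allow_blank: true
end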
STEPHANIE: Yeah. I mean, I'm curious if we can extrapolate this conversation further beyond phone numbers. It sounds like we're talking about this idea that values with numbers in them should not always be treated as integers.
JOËL: Yes. And I have a great example of that that has actually burned me before.
STEPHANIE: Ooh, what is it?
JOËL: Zip codes. So, again, prefixing this, in the U.S., zip codes are typically a five-digit number. There's a variant that has more digits in it. And you'd think that you can store that...it's a five-digit number; you can store that in an integer column. And that does not work because you can start with a zero. And so if you store that as an integer, then what you really have is a four-digit integer. And then, when you try to put that back into an address, things get messed up.
So it's really important to store U.S. zip codes as strings so that you can keep that leading zero if you need it. And, of course, the moment you introduce international zip codes, or I think postal codes is what most countries call it, now, all of a sudden, you've got letters in addition to numbers. I wanted to share one of the most delightful postcode bits of knowledge that I have.
STEPHANIE: Please.
JOËL: Which is that in Canada, postal codes alternate letters and numbers. And Canada decided that Santa Claus needed his own Canadian zip code. And his zip code is H0H 0H0.
STEPHANIE: [laughs]
JOËL: H0H 0H0
STEPHANIE: Of course it is. I like that a lot. Makes sense that he would reside in Canada, up in the frigid, chilly north. So you've mentioned that you were burned by this. Does that mean you were working with an application that did store postal codes as integers? And what was the impact or consequence of that?
JOËL: So I was building a feature that required interacting with a zip code. And, as one does, I tested it out in development. And, as one also often does, I put in my own address. Now, I happen to live in Boston, and my zip code starts with a zero. So I happen to live in the one place that has that weird edge case. And immediately when I saw in dev how things happen, I was like, wait a minute, that's broken. That's not going to work.
STEPHANIE: Yeah. Wow. I wonder if that has just been impacting users for a long time before that discovery.
JOËL: It probably depends on the application, right? I guess you could...if you introduce that as a problem, you could try to add hacks on hacks to make it better. So you store it as an integer, but then when you get the integer out of the database, and you need to use it as an address, you then reformat it back. So you left-pad the number with zeros.
STEPHANIE: Yeah, I can see some really interesting ways of trying to work around that.
JOËL: But yeah, I think the best practice is definitely store your zip/postal codes as a string, not as an integer.
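Here is the leading-zero problem in miniature, using a Boston-style zip code (just an illustration):

zip = "02134"

zip.to_i       # => 2134   -- the leading zero is gone
zip.to_i.to_s  # => "2134" -- and it doesn't come back on its own

# The left-padding hack mentioned above:
zip.to_i.to_s.rjust(5, "0") # => "02134"

# Storing it as a string in the first place (t.string :postal_code in a
# Rails migration) sidesteps all of this.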
STEPHANIE: I wonder what it is about these types of values that make us think that they are numbers or want to store them as integers. I think that the Rails default is to store primary keys in integers. And if you wanted to use UUIDs, for example, instead, you do have to have done that initial setup.
I'm curious about the origin of using ints. And I know that that's like a whole story [laughs] in terms of the bite-size of that value. But yeah, it's just one of those origin stories that I'm wondering if that has kind of impacted our understanding of what primary keys look like.
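The "initial setup" for UUID primary keys that Stephanie alludes to looks roughly like this in a Postgres-backed Rails app; this is a sketch, and the details vary by Rails version and database:

# config/initializers/generators.rb -- make new tables default to uuid ids.
Rails.application.config.generators do |g|
  g.orm :active_record, primary_key_type: :uuid
end

# In a migration: enable the extension that generates UUIDs and create a
# table with a uuid primary key.
class CreateWidgets < ActiveRecord::Migration[7.0]
  def change
    enable_extension "pgcrypto"

    create_table :widgets, id: :uuid do |t|
      t.string :name
      t.timestamps
    end
  end
end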
JOËL: At its core, I think this goes back to what we were talking about earlier. Primary keys: the default in Rails is an auto-incrementing integer. And we store it as an integer so that the database can do a plus-one on it every time we insert another record. We don't want to do that with zip codes or with phone numbers. Now, quite possibly, within the U.S. Postal Service or whatever the standard is for establishing phone numbers, they might, because these are numbers, be doing some kind of incrementing somewhere.
But there are patterns, definitely, that have been established. But they're not always necessarily incrementing. And they're not transparent to us in a way that we would care about them normally in an app. And we might care that, oh, we know that if it has a leading zero in the zip code, you're in this broad region or something like that. But, again, this is not really doing math so much as it's knowing the pattern in the ID.
I know some ID-like numbers have checksum logic built into them; credit card numbers are something like this. So I don't know if you've ever tried to type in your credit card number in a form. Like, the outline goes red. It'll tell you there's an error and you've not submitted it. Nobody's making a background request to your bank. And the bank is like, wait a minute; this is a bad card number.
The validation logic on the front end just immediately was able to tell the number is wrong. And that's because there are some checks and logic built in such that your number can be almost like self-verifying. We don't know that it's Stephanie's number, but we do know that it is a valid Mastercard.
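The "self-verifying" property Joël describes is usually the Luhn checksum. Here is a small Ruby sketch of the standard algorithm, not any particular payment library's implementation:

def luhn_valid?(number)
  digits = number.to_s.gsub(/\D/, "").chars.map(&:to_i).reverse

  sum = digits.each_with_index.sum do |digit, index|
    next digit if index.even?

    doubled = digit * 2
    doubled > 9 ? doubled - 9 : doubled
  end

  sum % 10 == 0
end

luhn_valid?("4242 4242 4242 4242") # => true  (a well-known test card number)
luhn_valid?("4242 4242 4242 4241") # => false (one digit off)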
STEPHANIE: Yeah, that's really interesting because, you know, I know that there are just various combinations of digits in a credit card number that are straight up and valid. And I think that's another example of something that is more like an identifier as well.
JOËL: You're right. I think so. It identifies a particular card, a piece of plastic that you have, and maybe line of credit or something like that. I'm not sure what the underlying modeling is exactly. But as a user of that credit card, [laughs] it represents a piece of plastic in my pocket. And that is a number that other services can use to talk to the card issuer. So it's an ID that you can use to make requests to Visa, or MasterCard, or whoever.
So I think you're right; that does fall under the umbrella of a third-party ID. I think I probably would tend to try to store that as a string based on this conversation rather than as an integer, even though it is all numbers. I say that, though, and, of course, now I'm going to get people tweeting at me saying, "Did you know that American Express sometimes put a letter in their credit card numbers?"
STEPHANIE: Oh, man. That would really throw a wrench in my understanding of how credit cards work.
JOËL: [laughs] But, again, if you store it as a string, it doesn't matter.
STEPHANIE: I'm really interested in the idea that a lot of these things we're talking about, you know, are often collected in forms and saved in our applications. Because we need to save information about our users in the context of whatever domain our application is working in. And I can kind of see it going a couple of ways where it's either, like, don't give that too much thought.
Or we try to introduce a library that does a lot to make sure that it's kind of covered all of the different cases. Or, like, it's really covered kind of how we've been talking about credit card numbers and phone numbers, like, a really wide breadth of logic that exists because of the way that they are very prevalent in the human world, at least.
And I'm curious, at what point do you think, you know, like, writing that first migration, how much energy would you put into making sure that those values are normalized correctly or that you're doing the right thing with those pieces of data?
JOËL: I think, based on this conversation, I would probably lean into doing minimal normalization. And I think the thing that's special about this type of number that we're talking about is that we don't need to do any logic on it internally. We typically only use it when calling out to some third-party system. And assuming that third-party system will accept that identifier in its raw form, non-normalized, then why do I need to care or put that effort in?
STEPHANIE: Yeah, that's a really interesting point. I think it definitely depends on what you're doing with it, right? Like, if you're a payment platform, obviously, you want to make sure that you get those credit card [laughs] values right and the way that you operate on them is as robust as possible. But for most applications, you might just be displaying the phone number, and that's about it. And minimal normalization and just formatting based on the way that your application handles it seems reasonable enough.
JOËL: I think the main thing you need to be able to do with that number is to make a call to that third-party system and have it work. So, for a phone number, you need to be able to call it and connect to the right person on the other end. With a credit card number, you need to be able to charge it successfully. That's really the main thing you're concerned with. If you can do that without normalizing, then you're fine.
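As a rough sketch of what that minimal approach could look like in a Rails migration (the table and column names here are hypothetical, purely for illustration), the ID-like values get stored as strings because no arithmetic will ever be done on them:

class AddContactIdentifiersToUsers < ActiveRecord::Migration[7.0]
  def change
    # Stored as strings, not integers: we never do math on these values,
    # we only hand them off to third-party services (a phone carrier,
    # a card network, a shipping API).
    add_column :users, :phone_number, :string
    add_column :users, :zip_code, :string
  end
end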
Now, you might need validation, which I think is a separate thing from normalization. And that's really interesting because you can get all this fancy validation logic to check that there's the correct number of digits in the phone number or in the credit card and all that. And that's great. But you can also sometimes just try it. So I think we see this really commonly with things like email validation, where we don't really trust that we can validate emails. Instead, we send you an email with a confirmation link. And the validation is that you were able to receive that email and click that confirmation link.
STEPHANIE: Oh yeah, that's an interesting approach. I've never thought about that, but it's a way that we've solved that problem without having to accommodate every single way a person might try to input their email.
JOËL: Similarly, you could do for a phone number, send a text with a confirmation number. And if you can receive that, then your phone number is good. And if you can't, then try to input your phone number again, assuming I'm dealing with a provider that can send to most numbers. I don't have to deal with all of the region-specific variations on how phone numbers work.
Payments are really interesting because, typically, charging the card is the main way you're trying to use the number. It's not even like a validation thing. It's that we're going to make a call to your bank right now, and if this card gets declined, you're going to have to put your number in again.
STEPHANIE: Yeah, I've certainly seen some differing trends over time around how those inputs are validated, though, right? Like, there are some websites that give you that real-time feedback. And, as a user, I think for me, it depends on whether or not I like it, right? It's, like, I've certainly encountered forms where I am like, oh, I'm appreciative that they're doing some input masking so that I don't accidentally type in a value that they are not willing to accept, and other times where the validation ends up getting in the way. When do you think that real-time feedback is important?
JOËL: So I think it's all about shortening the cycle of making a mistake and fixing it. So, you know, it's annoying if you typo your credit card number, and then you submit it. And it takes a few seconds to go back to the provider. And then, oh, it comes back up and says, "Sorry, that was wrong." And then you've got to figure out what went wrong.
If they had some logic that did that checksum math or something like that and said, "Wait a minute, this number is wrong," then you could fix it immediately without having to do a full-page submit and potentially lose, or have to re-enter, data that you filled in, if it's a larger form. So, to be functional, you don't need that. But it's a nice layer on top to have a shorter feedback loop where you immediately get feedback that tells you, wait a minute, that card number is not quite right. Maybe consider not hitting that submit button.
STEPHANIE: Yeah, I think it also definitely depends on the goals of your system. I'm remembering now that article you mentioned from the UK government's design system. I was perusing that. And I found that they have some very explicit guidelines around form inputs because part of their goal is to make this portal as accessible as possible for all of their citizens.
And one thing that I remember was really interesting about how they considered their users was how if you enter a phone number, you can use the phone keypad-style of input. And the justification was that a lot of people are using this on mobile, so we want to make sure that we make this as accessible as possible for them since not everyone has access to a computer. And I thought it was really cool that they justified their reasoning.
And I think they even have all of this stuff open to feedback. So I had found a GitHub issue, I think, called, like, telephone numbers. And they're like, hey, like, we want to hear your thoughts about how telephone numbers should be received as inputs and whether this is working for you. And I thought that was a really committed way to make sure that the way that the system is implemented really reflects the user's needs.
JOËL: Yeah. And what's really cool about HTML is that we now have the ability to kind of decouple some of these things. And so, you might be typing in a text input, but you want to limit certain characters. You only want to be able to type in number characters. Does that mean that you have to use a number input? Well, not necessarily. With HTML5, you can put a regular expression pattern to limit what can be typed in this input.
You can also have an input mode, which, for mobile, will control which keyboard shows up. Or it won't control; it tells the mobile operating system or the browser what you would like them to show, and it's up to them to implement that. And so, like you were saying, for a phone number, even if you're typing it into a, let's say, text input because you want to be able to...maybe the user wants to put in whitespace, or dashes, or whatever. But you still want that number pad to show up by putting an input mode numeric on it. You can get that keyboard, even though you're not in a numeric input.
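A rough ERB sketch of what Joël describes, assuming a Rails form builder in a local called form (the field name is hypothetical):

<%# A plain text input that still asks mobile browsers for the numeric %>
<%# keypad via inputmode, while the HTML pattern attribute flags %>
<%# disallowed characters when the form is validated. %>
<%= form.text_field :phone_number,
      inputmode: "numeric",
      pattern: "[0-9 ()-]*" %>

The input stays free-form, so users can type spaces or dashes, but the browser is nudged toward showing the right keyboard.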
STEPHANIE: Yeah, I think that's a blessing and a curse that we do have these separate layers of abstraction, right? Because, on one hand, it gives us more flexibility. And then I've also seen it just run amok [laughs] and cause a lot of confusion about what is the source of truth, but I think that's a conversation for another day.
JOËL: Yep. Form design, accessibility, keyboard inputs. So yeah, I think coming back to the core question of today, when is a number maybe not a number?
STEPHANIE: That's a big question that I think only the developer with the task at hand can answer.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Engineering manager at Vox Media and author Nicole Zhu joins Stephanie on today's episode to discuss her writing practice.
nicoledonut is a biweekly newsletter about the writing process and sustaining a creative life that features creative resources, occasional interviews with creative folks, short essays on writing and creativity, farm-to-table memes and TikToks, and features on what Nicole is currently writing, reading, and watching.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. And today, I'm joined by my friend and special guest, Nicole Zhu.
NICOLE: Hi, I'm so excited to be here. My name is Nicole, and I am an Engineering manager at Vox Media and a writer.
STEPHANIE: Amazing, I'm so thrilled to have you here. So, Nicole, we usually kick off the show by sharing a little bit about what's new in our world. And I can take us away and let you know about my very exciting weekend activities of taking down our Halloween skeleton. And yes, I know that it's April, but I feel like I've been seeing the 12-foot Home Depot skeletons everywhere. And it's becoming a thing for people to leave up just their Halloween decorations and, just as the other holidays keep rolling on, changing it up so that their skeleton is wearing like bunny ears for Easter or a leprechaun hat for St. Patrick's Day.
And we've been definitely on the weird skeleton in front of the house long past the Halloween train for a few years now. Our skeleton's name is Gary. And it's funny because he's like a science classroom skeleton, so not just plastic. He's actually quite heavy.
NICOLE: He's got some meat to the bones. [laughs]
STEPHANIE: Yeah, yeah, and physiologically correct. But we like to keep him out till spring because we got to put him away at some point so that people are excited again when he comes back out in October. And the kids on our block really love him. And yeah, that's what I did this weekend. [laughs]
NICOLE: I love it. I would love to meet Gary one day. Sounds very exciting. [laughs] I do get why you'd want to dress up the skeleton, especially if it's 12 feet tall because it's a lot of work to put up and take down for just one month, but that's fascinating. For me, something new in my world is the return of "Succession," the TV show.
STEPHANIE: Oh yes.
NICOLE: I did not watch yesterday's episode, so I'm already spoiled, but that's okay. But I've been getting a lot of Succession TikToks, and I've been learning a lot about the making of the show and the lives of the uber-rich. And in this one interview with Kieran Culkin, the interviewer asked him, "What's something that you learned in shooting the show about the uber-rich about billionaires that's maybe weird or unexpected?" And Kieran Culkin says that the uber-rich don't have coats because they're just shuttled everywhere in private jets and cars. They're not running to the grocery store, taking the subway, so they don't really wear coats, which I thought was fascinating. It makes a lot of sense.
And then there was this really interesting clip too that was talking about the cinematography of the show. And what is really interesting about it is that it resists the wealth porn kind of lens because it's filmed in this mockumentary style that doesn't linger or have sweeping gestures of how majestic these beautiful cities and buildings and apartments they're in.
Everything just seems very matter of fact because that is just the backdrop to their lives, which I think is so interesting how, yeah, I don't know, where I was like, I didn't ever really notice it. And now I can't stop seeing it when I watch the show where it's about miserable, rich people. And so I like that the visual language of the show reflects it too.
STEPHANIE: Wow, yeah, that makes a lot of sense. The coat thing really gets me because I'm just imagining if I could be perfectly climate controlled all the time. [laughs]
NICOLE: Right? Oh my gosh, especially you're based in Chicago [laughs], that is when you can retire the winter coat. That is always an important phase.
STEPHANIE: Yeah, seriously. I also am thinking now about just like the montages of showing a place, just movies or shows filmed in New York City or whatever, and it's such...so you know it's like the big city, right?
NICOLE: Mmm-hmm, mm-hmm.
STEPHANIE: And all of that setup. And it's really interesting to hear that stylistically, that is also different for a show like this where they're trying to convey a certain message.
NICOLE: Yeah, yeah, definitely.
STEPHANIE: So I'm really excited to have you on The Bike Shed because I have known you for a few years. And you write this really amazing newsletter called "nicoledonut" about your writing practice. And it's a newsletter that I open every other week when you send out a dispatch. And last year at RubyConf, they had a conference track called Bringing Your Backgrounds With You.
And there were talks that people gave about how the hobbies that they did outside of work or an identity that they held made them a better developer, like, affected how they showed up at work in a positive way. And as someone who has always been really impressed by the thoughtfulness that you apply to your writing practice, I was really curious about how that shows up for you as an engineering manager.
NICOLE: Definitely a great question. And to provide a bit of context for listeners, I feel like I have to explain the newsletter title because it's odd. But there's a writer who I really love named Jenny Zhang, and her handle across the Internet is jennybagel. And so I was like, oh, that would be so funny. I should be nicoledonut. I do love donuts. My Neopets username was donutfiend, so it was --
STEPHANIE: Hell yeah.
NICOLE: But anyway, so that was kind of...I was like, I need to come up with some fun title for this newsletter, and that is what I settled on. But yes, I've written personal essays and creative nonfiction. And my primary focus more recently these past few years has been fiction. And this newsletter was really kind of born out of a desire to learn in the open, provide resources, act as kind of a journal, and just process ideas about writing and what it means to kind of sustain a creative life.
So it has definitely made me more reflective and proactively, like you said, kind of think about what that means in terms of how that transfers into my day job in engineering. I recently moved into management a little over a year ago, and before that, I was a senior full-stack engineer working on a lot of our audience experiences and websites and, previously, more of our editorial tools.
So I think when it comes to obviously writing code and being more of an individual contributor, I think you had previously kind of touched on what does it mean to treat code as a craft? And I do think that there are a lot of similarities between those two things because I think there's creativity in engineering, of course. You have to think about going from something abstract to something concrete. In engineering, you're given generally, or you're defining, kind of requirements and features and functionality. You maybe make an engineering plan or something like that, an EDD, given those constraints.
And then I think writing is very similar. You outline, and then you have to actually write the thing and then revise. I do think writing is not necessarily as collaborative as coding is, perhaps, but still similar overall in terms of an author having a vision, dealing with different constraints, if that's word count, if it's form or structure, if it's point of view, things like that. And that all determines what the outcome will be.
You always learn something in the execution, the idea that planning can only take you so far. And at a certain point, you gather as much background knowledge and information and talk to as many people. Depending on the kinds of writing I do, I have or haven't done as much research. But at a certain point, the research becomes procrastination, and I know I need to actually just start writing.
And similarly, with engineering, I think that's the piece is that once you actually start implementation, you start to uncover roadblocks. You uncover questions or complications or things like that. And so I think that's always the exciting part is you can't really always know the road ahead of you until you start the journey. And I also think that in order to benefit from mentorship and feedback...we can talk more about this. I know that that's something that is kind of a larger topic.
And then another thing I think where the two are really similar is there's this endless learning that goes with each of them. I guess that's true of, I think, most crafts. Good practitioners of the craft, I think, take on that mindset. But I do think that obviously, in engineering, you have industry changes, new technologies emerging really frequently. But I do think that good writers think about that, too, in terms of what new novels are coming out. But also, how do you build a solid foundation?
And I do think it's that contrast that applies in any craft is, you know, you want to have a good solid foundation and learn the basics but then keep up to date with new things as well. So I think there was this...there's this meme I actually did include in the newsletter that was...it's the meme of these two guys looking at different windows of a bus, and one looks really sad, and one looks really happy. But the two of them have the same caption, which is there's always more to learn.
And so I think that is the two sides of the coin [laughs]. I think that is relevant in engineering and writing that I've kind of brought to both of those practices is trying to be optimistic [laughs] about the idea that there's always more to learn that that's kind of the thought of it.
And then certainly, when it comes to management, I do think that writing has proven really valuable in that very obvious sense of kind of practical communication where I just write a lot more. I write a lot more things that are not code, I should say, as a manager. And communication is really at the forefront of my job, and so is demonstrating curiosity and building empathy, fostering relationships with people.
And I do think that particularly writing fiction you have to be curious about people I think to be a writer. And I think that is true of managers as well. So I do think that has been a really interesting way that I didn't anticipate writing showing up in my day job but has been a really helpful thing and has made my work stronger and think about the people, the process, and kind of what we do and why a little differently.
STEPHANIE: Yeah, absolutely. Wow, you got into a lot of different things I'm excited to keep discussing further. But one thing that I was thinking about as you were talking was, have you heard of the adage, I guess, that code is read many more times than it's written?
NICOLE: Hmm, I think I have, yeah.
STEPHANIE: I was thinking about that as you were talking because, in some ways, in most ways, actually, if you subscribe to that adage, I suppose, we write code for others to read. And I think there's an aspect of code telling a story that is really interesting. I've heard a lot of people, thoughtbot included, advocate for writing your tests like they're telling a story.
And so when a future developer is trying to understand what's going on, they can read the tests, understand the setup, read what is being tested, and then read what the expected outcome is and have a complete picture of what's going on. The same goes for commit messages. You are writing little bits of documentation for people in the future.
And I've also been thinking about how legacy code is just this artifact as well of all of the changes that an organization might have gone through. And so when you see a bit of code that is really weird or gets your spidey senses tingling, it's almost like, oh, I wonder what happened here that led to this piece being left behind?
NICOLE: Yeah, definitely. Now that you're talking about it, I also think of pull requests as a great way to employ storytelling. I remember there definitely have been times where myself or other engineers are working on a really thorny problem, and we always joke that the PR description is longer than the change. And it's like, but you got to read the PR description in order to understand what change you're making and why. And here's the backstory, the context to kind of center people in that.
As a manager, I think about storytelling a lot in terms of defining purpose and providing clarity for teams. I was reading Julie Zhuo's "The Making of a Manager," and it was a really kind of foundational text for me when I first was exploring management. And she kind of boils it down to people, purpose, and process.
And so I do think the purpose part of that is really tied to clear communication. And can you tell a story of what we're doing from really high-level vision and then more tactically strategy? And then making sure that people have bought into that, they understand, can kind of repeat that without you being there to remind them necessarily. Because you really want that message to carry through in the work and that they have that understanding.
Vision is something I only recently have really started to realize how difficult it is to articulate. It's like you don't really understand the purpose of vision until you maybe don't have one, or you've been kind of just trying to keep your head afloat, and you don't have a Northstar to work towards. But I do think that is what plays into motivation, and team health, and, obviously, quality of the product. So yeah, that's kind of another dimension I've been thinking of.
And also our foes actually. Sorry, another one. Our foes, I think, like outages and incidents. I think that's always a fun opportunity to talk about stories. There was a period of time where every time we had an incident, you had to present that incident and a recap of it in an engineering all-hands every month. And they ended up being really fun. We turned something that is ostensibly very stressful into something that was very entertaining that people could really get on board with and would learn something from.
And we had the funniest one; I think was...we called it the Thanks Obama Outage because there was an outage that was caused by a photo of Barack Obama that had been uploaded in our content management system, as required no less, that had some malformed metadata or something that just broke everything. And so, again, it was a really difficult issue [laughs] and a long outage. And that was the result that I remember that presentation being really fun.
And again, kind of like mythmaking in a way where that is something that we remember. We pay attention to that part of the codebase a lot now. It's taught us a lot. So yeah, I do think storytelling isn't always necessarily the super serious thing, but it can also just be team building, and morale, and culture as well.
STEPHANIE: Yeah, absolutely. I think what you said about vision really resonates with me because if you don't have the vision, then you're also not making the best decisions you can be making even something as low-level as how you write the code. Because if you don't know are we going to be changing this feature a month from now, that might dictate how you go forth with implementation as opposed to if you know that it's not in the company's vision to really be doing anything else with this particular feature. And you then might feel a little more comfortable with a more rudimentary approach, right?
NICOLE: Yeah, totally. Whether or not it's, we've over-optimized or not or kind of optimized for speed. Like, it's all about trade-offs. And I do think, again, like you said, having a vision that always you can check your decision-making against and inform the path ahead I think is very, very helpful.
STEPHANIE: When you write, do you also keep that in mind? Like, do you write with that North Star? And is that really important to your process?
NICOLE: I think it depends. I think that writing can be a little more at a slant, I suppose, is how I think of it because I don't always...just similar to work, I don't always come in with a fully-fledged fleshed-out vision of what I want a piece to be. The most recent piece I've been working on actually I did have kind of a pretty, I think, solid foundation.
I've been working on this story about loneliness. And I knew that I wanted to base the structure on the UCLA...a UCLA clinic has this questionnaire that's 20 items long that is about measuring loneliness on a scale. And so I was like, okay, I knew that I wanted to examine dimensions of loneliness, and that would be the structure. It would be 20 questions, and it would be in that format. So that gave me a lot more to start with of, you know, here's where I want the piece to go. Here's what I want it to do.
And then there have definitely been other cases where it's more that the conceit seems interesting; a character comes to mind. I overhear a conversation on the subway, and I think it's funny, and that becomes the first thing that is put on the page. So I definitely have different entry points, I think, into a draft. But I will definitely say that revision is the phase where that always gets clarified. And it has to, I think, because as much as I'm sometimes just writing for vibes, it's not always like that.
And I do think that the purpose of revision is to clarify your goals so you can then really look at the piece and be like, is it doing what I want it to? Where is it lacking? Where's it really strong? Where's the pacing falling flat? And things like that. So I do think that sooner or later, that clarity comes, and that vision comes into focus. But it isn't always the first thing that happens, I think, because I do think the creative process is a little bit more mysterious, shall we say, than working on an engineering team. [laughs]
STEPHANIE: Yeah. Well, you started off responding to my question with it depends, which is a very engineering answer, but I suppose --
NICOLE: That is true. That is true. You got me. [laughs]
STEPHANIE: It applies to both.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: You mentioned revision. And so, I do want to talk about feedback because I think that is an important part of the revision process. And I have really loved what you've had to say about writing feedback and your experience with writing feedback, especially in writing workshops. And I have always been really curious about what we might be able to learn about receiving feedback in code review.
NICOLE: When it comes to receiving feedback, I think I wrote a two-part series of my newsletter, one that was about providing feedback, one that was about receiving it. I think on the side of receiving feedback, first and foremost, I think it's important to know when you're ready to share your work and know that you can share multiple times. In writing, that can be I show a very early draft to my partner who is the person who kind of reads everything and anything at any stage. It's something less polished, and I'm really just testing ideas.
But then obviously, if there's something that is more polished, that is something I would want to bring to a writing group, bring into a workshop, things like that. Similarly, as engineers, I think...thank God for GitHub drafts actually adopting literally the way in which I think of that, right?
STEPHANIE: Yeah.
NICOLE: You can share a branch or a GitHub PR in progress and just check the approach. I've done that so many times, and really that helped so much with my own learning and learning from mentors in my own organization was checking in early and trying to gut-check my work earlier as opposed to later. Because then you feel, I think, again, a bit more naturally receptive because you're already in that questioning phase. You're not like, oh, this is polished, and I've written all the tests, and the PR description is done. And now you want me to go back and change the whole approach from the ground up. That can feel tough. I get that.
And so I think, hand in hand, what goes with that is whose feedback are you interested in? Is that a peer? Is it a mentor? I think obviously leaning on your own team, on senior engineers, I do think that is one of the primary, I think, expectations of a senior engineer is kind of multiplying the effectiveness of their peers and helping them learn and grow. So I do think that that's a really valuable skill to develop on that end, but also, again, just approaching people.
And obviously, different teams have different processes for that, if it's daily stand-ups, if it's GitHub reminders, automated messages that get pulled up in your channel, things like that. But there are ways to build that into your day-to-day, which I think is really beneficial too.
And then there's also the phase of priming yourself to receive the feedback. And I think there's actually a lot of emotional work that I don't think we talk about when it comes to that. Because receiving feedback can always be vulnerable, and it can bring up unexpected emotions. And I think learning how to regulate the emotional response to that is really valuable for us as people but obviously within the workplace too.
So I've found it really helpful to reflect if I'm getting feedback that...well, first of all, it depends on the format. So I think some people prefer verbal feedback, some people will prefer written. I think getting it in the form of written feedback can be helpful because it provides you some distance. You don't have to respond in the moment. And so I've definitely had cases where I then kind of want to reflect on why certain suggestions might elicit certain reactions if I have a fight or flight response, if I'm feeling ashamed or frustrated, or indignant, all the range of emotions.
Emotions, to put the engineering hat on, are information. And so I think listening to that, not letting it rule you per se but letting it inform and help you figure out what is this telling me and how do I then respond, or what should I do next? Is really valuable. Because sometimes it's not, again, actually the feedback; maybe it's more about that, oh, it's a really radical idea. Maybe it's a really...it's an approach I didn't even consider, and it would take a lot of work.
But again, maybe if I sit and think about it, it is the scalable approach. It's the cleaner approach, things like that. Or are they just touching on something that I maybe haven't thought as deeply about? And so I think there is that piece too. Is it the delivery? Is it something about your context or history with the person giving the feedback too? I think all of those, the relationship building, the trust on a team, all plays into feedback.
And obviously, we can create better conditions for exchanging and receiving feedback. But I do think there's still that companion piece that is also just about, again, fostering team trust and culture overall because that is the thing that makes these conversations all the easier and less, I think, potentially fraught or high pressure.
STEPHANIE: 100%. Listeners can't see, but I was nodding very aggressively [laughs] this entire time.
NICOLE: Loved it.
STEPHANIE: And I love that you bring up interpersonal relationships, team culture, and feelings. Listeners of the show will know that I love talking about feelings. But I wanted to ask you this exact question because I think code review can be so fraught. And I've seen it be a source of conflict and tension. And I personally have always wanted more tools for giving better feedback. Because when I do give feedback, it's for the person to feel supported to help push their work to be better and for us to do good work as a team.
And I am really sensitive to the way that I give feedback because I know what it's like to receive feedback that doesn't land well. And when you were talking about investigating what kinds of feelings come up when you do receive a certain kind of comment on a code review or something, that was really interesting to me. Because I definitely know what it's like to have worked really, really hard on a pull request and for it to feel very precious to me and then to receive a lot of change requests or whatever. It can be really disappointing or really frustrating or whatever. And yeah, I wish that we, as an industry, could talk about this stuff more frequently.
NICOLE: Yeah, for sure. And I do think that you know, I think the longer you work with someone, ideally, again, the stronger relationship you form. You find your own ways of communicating that work for you. I think actually what I've learned in management is, yes, I have a communication style, but I also am flexible with how I work with each of my reports, who, again, have very different working styles, communication styles, learning styles.
I don't believe that the manager sets the standards. I think there is a balance there of meeting people where they are and giving them what they need while obviously maintaining your own values and practices. But yeah, certainly, again, I think that's why for perhaps more junior engineers, they might need more examples. They might not respond well to as terse a comment.
But certainly, with engineers, senior engineers that I've worked with, when I was starting out, the more we developed a relationship, they could just get a little bit more terse. For example, they could be like, "Fix this, fix that," and I would not take it personally because we had already gone through the phase where they were providing maybe some more detailed feedback, links to other examples or gists, or things like that, and our communication styles evolved.
And so I do think that's another thing to think about as well is that it doesn't have to be static. I think that's the value of a team, and having good team process, too, is ideally having arenas in which you can talk about how these kinds of things are going. Are we happy with the cadence? Are we happy with how people are treating each other and things like that? Are we getting timely feedback and things like that? That's a good opportunity for a retrospective and to talk about that in a kind of blameless context and approach that more holistically.
So I do think that, yeah, feedback can be very fraught. And I think what can be difficult in the world of engineering is that it can be very easy to then just be like, well, this is just the best way for the work. And feelings are, like you said, not really kind of considered. And, again, software development and engineering is a team sport. And so I do think fostering the environment in which everyone can be doing great work is really the imperative.
STEPHANIE: Yeah, I really like how you talked about the dynamic nature of relationships on a team and that the communication style can change there when you have built that trust and you understand where another person is coming from. I was also thinking about the question of whose feedback are you interested in?
And I certainly can remember times where I requested a review from someone in particular because maybe they had more context about this particular thing I was working on, and I wanted to make sure that I didn't miss anything, or someone else who maybe I had something to learn from them. And that is one way of making feedback work for me and being set up to receive it well.
Because as much as...like you said, it's really easy to fall back into the argument of like, oh, what's the best way for the work, or what is the cleanest code or whatever? I am still a person who wrote it. I produced a piece of work and have feelings about it. And so I have really enjoyed just learning more about how I react to feedback and trying to mitigate the stress that I feel in what is kind of inherently like a conflict-generating process.
NICOLE: Yeah, yeah, definitely. Another thing that kind of popped into my head to one of the earlier questions we were talking about is in terms of similarities between writing and engineering, style and structure are both really, really important. And even though in engineering, like you said, sometimes it can be, I mean, there is a point with engineering where you're like, this line of code works, or it doesn't.
There is a degree of correctness [laughs] that you do have to meet, obviously. But again, after that, it can be personal preference. It's why we have linters that have certain styles or things like that to try to eliminate some of these more divisive, shall we say, potentially discussions around, [laughs] God forbid, tabs or spaces, naming conventions, all this stuff.
But certainly, yeah, when it comes to structuring code, the style, or whatever else, like you said, there's a human lens to that. And so I think making sure that we are accounting for that in the process is really important, and not just whether or not the work gets done but also how the work gets done is really important. Because it predicts what do future projects...what does future collaboration look like? And again, you're not just ever optimizing for one thing in one point of time. You're always...you're building teams. You're building products. So there's a long kind of lifecycle to think about.
STEPHANIE: For sure. So after you get feedback and after you go through the revision process, I'm curious what you think about the idea of what is good enough in the context of your writing. And then also, if that has influenced when you think a feature is done or the code is as good as you want it to be.
NICOLE: Yeah, definitely. I think when it comes to my writing, how I think about what is good enough I think there is the kind of sentiment common in the writer community that you can edit yourself to death. You can revise forever if you wanted to. It's also kind of why I don't like to go back and read things I've already published because I'm always going to find something, you know, an errant comma or like, oh, man, I wish I had rephrased this here.
But I do think that, for me, I think about a couple of questions that help me get a sense of is this in a good place to, you know, for me generally, it's just to start submitting to places for publication. So one of those is, has someone else read it? That is always a really big question, whether it's a trusted reader, if I brought it to a workshop, or just my writing group, making sure I have a set of outside eyes, fresh eyes on the piece to give their reaction. And again, truly as a reader, sometimes just as a reader, not even as a fellow writer, because I do think different audiences will take different things and provide different types of feedback.
Another one is what kinds of changes am I making at this point in time? Am I still making really big structural edits? Or am I just kind of pushing words and commas around, and it feels like rearranging deck chairs on the Titanic? They're not massive changes to the piece.
And then the final question is always, if this were published in its current state right now, would I be happy with it? Would I be proud of it? And that's a very gut feeling that I think only an individual can kind of feel for themselves. And sometimes it's like, no, I don't like the way, like, I know it's 95% there, but I don't like the way this ends or something else. Again, those are all useful signals for me about whether a piece is complete or ready for submission or anything like that.
I think when it comes to engineering, I think there's a little bit less of the gut feeling, to be honest, because we have standards. We have processes in place generally on teams where it's like, is the feature working? Have you written tests? Have you written a QA plan if it needs one? If it's something that needs more extensive documentation or code comments or something like that, is that something you've done? That has a bit more of a clear runway for me in terms of figuring out when something is ready to be shown to others.
But certainly, as a manager, I've written a lot more types of documents I suppose, or types of communication where it's like organizational changes. I've written team announcements. I've written celebration posts. I've had to deliver bad news. Like, those are all things that you don't think about necessarily. But I've definitely had literally, you know, I have Google Docs of drafts of like, I need to draft the Slack message.
And even though it's just a Slack message, I will spend time trying to make sure I've credited all the right people, or provided all the context, got all the right answers. I run it by my director, my peers, and things like that if it's relevant. And again, I think there is still that piece that comes in of drafting, getting feedback, revising, and then feeling like, okay, have I done my due diligence here, and is it ready? That cycle is applicable in many, many situations. But yeah, I certainly think for direct IC work, it's probably a little bit more well-defined than some of the other processes.
STEPHANIE: Yeah, that makes sense. I really liked what you said about noticing the difference between making big structural changes and little word adjustments. I think you called it pushing commas around or something like that.
NICOLE: [laughs] Yeah.
STEPHANIE: I love that. Because I do think that with programming, there is definitely a big part of it that's just going on the journey and exploring different avenues. And so if you do suddenly think of, oh, I just thought of a completely different way to write this code, that is worth exploring even if you just end up going back to the original implementation. But at least you saw that thought through, and you're like, okay, this doesn't work because of X, Y, and Z, and I'm choosing to go this other route instead. And I think that, yeah, that is just a good practice to explore.
NICOLE: Another example of storytelling, too, where it's like, you can tell the story in the PR description or whatever, in stand-up, to be like, I also did go down this path, XYZ reason. Here's why it didn't work out, and here's what we're optimizing for. And there you go. So I do think we talk...I guess product managers think more about buy-in, but I think that's true of engineers too. It's like, how do you build consensus and provide context?
And so yeah, I think what you were saying, too, even if the path is circuitous or you're exploring other avenues, talking to other people, and just exploring what's out there, it all adds up to kind of the final decision and might provide, again, some useful information for other people to understand how you arrived there and get on board with it.
STEPHANIE: 100%. I remember when I worked with someone who we were writing a PR description together because we had paired on some code. And we had tried three different things. And he wrote paragraphs for each thing that we tried. And I was like, wow, I don't know if I would have done that on my own. But I just learned the value of doing that to, like you said, prime yourself for feedback as well, being like, I did try this, and this is what I thought. And other people can disagree with you, but then at least they have the information, right?
NICOLE: Definitely.
STEPHANIE: So before we wrap up, the last thing that I wanted to talk about, because I think it's super cool, is just how you have a totally separate hobby and skill and practice that you invest time and energy into that's not programming. And it's so refreshing for me to see you do that because I think, obviously, there's this false idea that programmers just code all the time in their free time, in their spare time, whatever. And I'm really curious about how writing fits into your life as something separate from your day job.
NICOLE: Yes, I've been thinking about this a ton. I think a lot of people, the last couple of years has forced a really big reckoning about work and life and how much we're giving to work, the boundaries that can be blurred, how capitalism butts its head into hobbies, and how we monetize them, or everything is a side hustle. And, oh, you should have a page running...oh, you should charge for a newsletter. And I think there's obviously the side of we should value our labor, but also, I don't want everything in my life to be labor. [laughs]
So I do think that is interesting. Writing to me, I actually do not see it as a hobby. I see it as another career of mine. I feel like I have two careers, but I have one job, [laughs] if that makes sense. I certainly have hobbies. But for me, what distinguishes that from my writing is that with hobbies, there's no expectation that you want to get better. You approach it with just...it's just pure enjoyment. And certainly, writing has part of that for me, but I have aspirations to publish. I love it when my work can reach readers and things like that.
But I do think that regardless having other interests, like you said, outside engineering, outside technology, it's a great break. And I do think also in technology, in particular, I notice...I think we're getting away from it, but certainly, there's an expectation, like you said, that you will have side projects that you code in your free time, that you're on Hacker News.
I think there is a little bit of that vibe in the tech industry that I don't see in other industries. You don't expect a teacher to want to teach in their free time, [laughs] you know what I mean? But we have almost that kind of implicit expectation of engineers to always be staying up to date on those things.
I think with writing and engineering, the two complement each other in some interesting ways. And they make me appreciate things about the other craft or practice that I may not have previously. And I think that with engineering, it is a team effort. It's really collaborative, and I really love working in that space. But on the flip side, too, with writing, I do love, you know, there's the ego part of it. You don't have individual authorship over code necessarily unless it's git blame level. But there's a reason why it's called git blame, [laughter] even the word is like git blame.
I've literally had cases where I'm like, oh, this thing is broken. Who wrote this? And then I was like, oh, surprise, it was you six years ago. But I do think with writing; it's an opportunity for me to really just explore and ask questions, and things don't have to be solved. It can just be play. And it is a place where I feel like everything that I accomplish is...obviously, I have people in my life who really support me, but it is a much more individual activity. So it is kind of the right-left brain piece.
But I've been reading this book called "Saving Time." It is what my microphone is currently propped on. But it's by Jenny Odell, who wrote: "How to Do Nothing." It's breaking my brain in a really, really, really good way. It talks a lot about the origin of productivity, how we think about time, and how it is so tied to colonialism, and racism, and capitalism, and neoliberalism, all these things. I think it has been really interesting.
And so thinking about boundaries between work and writing has been really, really helpful because I really love my job; I'm not only my job. And so I think having that clarity and then being like, well, what does that mean in terms of how I divide my time, how I set examples for others at work in terms of taking time off or leaving the office on time? And trying to make sure that I have a good emotional headspace so that I can transition to writing after work; all those things. I think it is really interesting.
And that also, ultimately, it's we're not just our productivity either. And I think writing can be very, again, inherently kind of unproductive. People joke that cleaning is writing, doing the dishes is writing, taking a walk is writing, showering is writing, but it is true. I think that the art doesn't talk about efficiency. You can't, I think, make art always more efficient in the same way you can do with engineering. We don't have those same kinds of conversations. And I really like having that kind of distinction.
Not that I don't like problem-solving with constraints and trade-offs and things like that, but I also really like that meandering quality of art and writing. So yeah, I've been thinking a lot more about collective time management, I guess, and what that means in terms of work, writing, and then yeah, hobbies and personal life. There are never enough hours in the day. But as this book is teaching me, again, maybe it's more about paradigm shifting and also collective policies we can be putting in place to help make that feeling go away.
STEPHANIE: For sure. Thank you for that distinction between hobby and career. I really liked that because it's a very generative mindset. It's like a both...and... rather than an either...or... And yeah, I completely agree with you wanting to make your life expansive, like, have all of the things. I'm also a big fan of Jenny Odell. I plugged "How to Do Nothing" on another episode. I am excited to read her second book as well.
NICOLE: I think you'll like it a lot. It's really excellent. She does such interesting things talking about ecology and geology and geologic time scales, which is really interesting that I don't know; it's nice to be reminded that we are small. [laughter] It's a book that kind of reminds you of your mortality in a good way, if that makes sense. But much like Gary on your porch reminds you of mortality too [laughs] and that you have to put Gary away for a little bit so that his time can come in October. [laughs]
STEPHANIE: Exactly, exactly. Cool. On that note, let's wrap up. Thank you so much for being on the show, Nicole.
NICOLE: Thank you so much for having me. This was a blast.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been integrating a third-party platform into a testing pipeline...and it has not been going well. Because it's not something she usually keeps up-to-date with, Stephanie is excited to learn about more of the open-source side of things in Ruby, what's new in the Ruby tooling world, and what folks are thinking about regarding the future of the language.
Today's topic is inspired by an internal thoughtbot Slack thread about writing a custom matcher for RSpec. Stephanie and Joël contrast DSLs vs. Object APIs, and more.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been integrating a third-party platform into our testing pipeline for my client. It has not been going well. We've been struggling a little bit, mostly because the tests just kind of crash. Our testing pipeline is pretty complex: one script, with some environment variables, does a few things and shells out to another script, which is in a different language, does a few more things, shells out to another script, maybe calls out to rake, calls out to a shell script. There are four or five of these in a chain, and it's a bit of a mess.
Somewhere along in there, something is not compatible with this third-party service that we're trying to integrate with. I was pairing this week with a colleague. And we were able to reproduce a situation where we were able to get a failure under some conditions and a success under other conditions. So these are basically, if we run the whole chain of scripts that call each other from the beginning, we know we get a failure. And if we skipped entirely the chain of scripts that set up things and then just manually try to invoke a third-party service, that works.
And so now we know that there's something in between that's incompatible, and now it's just about narrowing things down. There are a few different approaches we could take. We could try to sort of work our way forward. We know a known point where it breaks and then just try to start the chain one step further and see where it fails. We could try to get fancy and do a binary search, like split it in half and then half and half again.
We ended up doing it the other way, where we started at the end. We had our known good point and then just stepped one step back, saying, okay, now we introduce the last script in the chain. Does that work? Okay, that passes, great. Let's go one step further, two scripts up in the chain. And at some point, we find, okay, here's the one script that fails. Now, what is it within this script? And it was a really fun debugging session where we were just narrowing things down until we found the source of the bug.
STEPHANIE: Wow, that sounds pretty complicated. It just seems like there are so many layers going on. And it was really challenging to pinpoint where the source of the issue was.
JOËL: Definitely. I think all the layers made it really complicated. But having a process that we could follow and then kind of narrowing it down made it almost mechanical to figure out where the bug was once we got to a point where we had a known good point and a known bad point.
STEPHANIE: Yeah, that makes sense. Kind of sounds like if you are using git bisect or something like that to narrow down the scope of where the issue could be. I'm curious because this is like a bunch of shell scripts and rake tasks or commands or whatever. What would have made this debugging process easier?
JOËL: I think having fewer scripts in this chain.
STEPHANIE: [laughs] That's fair.
JOËL: We don't need so many scripts that call out to each other in different languages trying to share data via environment variables. So we've got a bit of a Rube Goldberg machine, and we're trying to patch in yet another piece in there.
STEPHANIE: Yeah, that's really tough. I was curious if there was, I don't know, any logging or any other clues that you were getting along the way because I know from experience how painful it is to debug that kind of code.
JOËL: It's interesting because I feel like normally logging is something that's really useful. In this particular case, we run into an exception at some point. So it's more of under what conditions does the exception happen? The important thing was to find that there is a point where it breaks, and there's a point where it doesn't, and realizing that if we ran some of these commands just directly without going through the whole pipeline, that things did work and that we were not triggering that exception.
So all of a sudden, now that tells us, okay, something in our pipeline is wrong. And then we can just start narrowing things down. So yeah, adventures in debugging. Sometimes it's really frustrating, but then when you have a good process, and you find the bug, it's incredibly satisfying.
STEPHANIE: I like that you used a process that can be applied to many different problems, in this particular case, debugging a testing pipeline. Maybe not something that we do every day, but certainly, it comes up, and now we have tools to address those kinds of issues as well.
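A minimal sketch of that "walk back from the known-good point" approach in Ruby, with entirely hypothetical step names: keep the tail of the chain, and pull in one earlier step at a time until the failure reappears.

    # Hypothetical ordered list of the scripts in the chain.
    STEPS = ["setup_env.sh", "bootstrap.rb", "prepare_fixtures.sh", "invoke_third_party.sh"]

    # Run only the tail of the chain, starting at the given index.
    def tail_passes_from?(index)
      STEPS[index..].all? { |command| system(command) }
    end

    # Start from the known-good end (just the last step) and add one
    # earlier step at a time until the failure shows up again.
    (STEPS.length - 1).downto(0) do |i|
      if tail_passes_from?(i)
        puts "Still passing when starting at #{STEPS[i]}"
      else
        puts "Failure first appears when #{STEPS[i]} is included"
        break
      end
    end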
JOËL: So my week has been up and down with all of this debugging. What's been new in your world?
STEPHANIE: I've been doing some travel planning because I'm going to RubyKaigi in Japan.
JOËL: Whoa.
STEPHANIE: This is actually going to be my first international conference, so I'm really looking forward to that. I just have never been compelled to travel abroad to go to a tech conference. But I'm really looking forward to going to RubyKaigi because now I've been to the U.S.-based conferences a few times. And I'm excited to see how things are different at an international conference and specifically a RubyKaigi because, obviously, there's a lot of really cool Ruby work happening over there in Japan.
So I'm excited to learn about more of the open-source side of things of Ruby, what's new in the Ruby tooling world, and just what folks are thinking about in terms of the future of the language. That's not something I normally keep super up-to-date on. But I'm excited to be around people who do think and talk about these things a lot and maybe get some new insights into my own work.
JOËL: Do you find that you tend to keep up more with some of the frameworks like Rails rather than the underlying language itself?
STEPHANIE: Yeah, that's a good question. I do think because the framework changes a little more frequently, new releases are kind of more applicable to the work that I'm doing. Whereas language updates or upgrades are a little bit less top of mind for me because the point is that it doesn't have to change [laughs] all that much, and we can continue to work with things as expected and not be disrupted.
So it is definitely like a whole new world for me, but I'm really looking forward to it. I think it will be really interesting and just kind of a whole other space to explore that I haven't really because I've usually been focused on more of the web development and industry work side of things.
JOËL: What's a Ruby feature that either is coming out in the future or that came out in the last couple of releases that got you really excited?
STEPHANIE: I think the conversation about typing in Ruby is something that has been on my radar but has also been ebbing and flowing over time. And I did see a few talks at RubyKaigi this year that are going to talk about how to introduce gradual typing in Ruby. And now that it has been out for a little bit and people have been using it, how people are feeling about it, pros and cons, and kind of where they're going to take it or not take it from there.
JOËL: Have you done much TypeScript?
STEPHANIE: I have been working more in TypeScript recently but did spend most of my front-end coding days in JavaScript. And so that transition itself was pretty challenging for me, where, with a language that I did know pretty well, I was suddenly having to be in somewhat of a beginner's mindset again. Even just reading the code itself, there were just so many new things to be looking at in terms of the syntax. And it was a difficult but ultimately pretty rewarding experience because the way I thought about JavaScript afterwards was much more refined, I think.
JOËL: Types definitely, I think, change the way you think about code; at least, that's been my experience.
STEPHANIE: Yeah, absolutely. I haven't gotten the pleasure to work with types in Ruby just yet, but I've just heard different experiences. And I'm excited to see what experts have to say about it.
JOËL: That's the fun of going to a conference.
STEPHANIE: Absolutely. So yeah, if any listeners are also headed to RubyKaigi, yeah, look out for me.
JOËL: I was recently having a conversation with someone about the fact that a lot of languages provide ways to sort of embed many languages within them. So the Lisp family of languages are really big into macros and metaprogramming. Some other languages are big into giving you the ability to build your own ASTs or have really strong parsing capabilities so that you can produce your own, again, mini-language.
And Ruby does this as well. It's pretty popular among the Ruby community to build DSLs, Domain-Specific Languages using some of Ruby's built-in abilities. But it seems to be a sort of universal need or at the very least a universal desire among programmers. Have you ever found yourself as a code author wanting to embed a sort of smaller language within your application?
STEPHANIE: I don't think I have, to be honest. It's a very interesting question. Because I think to build your own mini-language using Ruby, you'd have to have a really good reason for it, and in my experience, I haven't quite encountered that yet. Because, yeah, it seems like a lot of upfront work, a lot of overhead to introduce something like that, especially if it's not for a really, really particular domain that others might find a use for, or it just doesn't end up seeming worthwhile when I can just write regular old Ruby code.
JOËL: I think you're not alone. I think the Ruby community has been kind of a bit of a pendulum here where several years ago, everything that could be made into a DSL was. Now the pendulum kind of has been swinging the other way. And we see DSLs, but they're not quite as frequent. For those who maybe have not experienced a DSL or aren't quite familiar with the concept, how would you describe the idea?
STEPHANIE: I think I would describe domain-specific languages as a bit of a mini-language that is created with a very particular problem space in mind, to make development in that domain easier. Oftentimes, I've also kind of seen people describe the benefit of DSLs as being able to read that language as if it were plain English.
And so, in my head, I have kind of, at least in the Ruby world, right? We see that a lot in different gems. RSpec, for example, has its own internal DSL, and many people really enjoy it because it took the domain of testing. And the way you write it kind of is how you might read or understand it in English. And so it's a bit easier to talk about what you're expecting in your tests.
JOËL: Yeah, it's so high-level and minimal and domain-specific that it almost stops feeling like it's a programming language and can almost feel like it's a high-level configuration for this very particular domain, sometimes even to the point where the idea is that a non-programmer could read it and understand what's going on.
STEPHANIE: I think RSpec is actually one of the first Ruby DSLs that you might encounter when you're learning Ruby for the first time. And I've definitely seen developers who are new to Ruby, you know, they're writing code, and they're like, okay, I'm ready to write a test now. And the project uses RSpec because that's what most of us use in our Rails applications. And then they see, like you said, almost a configuration language, and they are really confused. They're not really sure what they're reading. They struggle with the syntax a lot. And it ends up being a point of frustration when they're first starting out if they're not just copying and pasting other existing RSpec tests. I'm curious if you've seen that before.
JOËL: I've definitely seen that. And it's a little bit ironic because oftentimes, an argument for a DSL is that it makes things so simple that you don't even have to know Ruby; you can just write it. It's simpler. It's easier to write. It's easier to understand. And to a certain extent, maybe that's true. But for someone who does know Ruby and doesn't know your particular little domain language, now they're encountering something that they don't know. And they're having to learn it, and they're having to struggle with it. And it might behave a little bit weirdly compared to how Ruby normally works. And so sometimes it doesn't make it easier for adoption. But it does look really good in a README.
STEPHANIE: That's totally fair. I think the other thing that's interesting about RSpec is that a lot of it is really just stylistic. I actually read a blog post by Jason Swett and the headline of it was "Mystified by RSpec's DSL? Some parentheses can add clarity." And he basically goes on to tell us that really RSpec is just leaning on some of Ruby's syntactic sugar of omitting parentheses for method calls. And if you just add the parentheses back in your it blocks or your describes, it can read a lot more like regular Ruby. And you might have a better time understanding what's going on when you realize that we're just passing our descriptors as arguments along with some blocks.
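To illustrate the point from that post, here is a small, hypothetical spec written the idiomatic way and then again with the parentheses added back in; the second version makes it plainer that describe, it, and expect are just Ruby method calls taking arguments and blocks.

    # Idiomatic RSpec, parentheses omitted:
    RSpec.describe Order do
      it "starts out empty" do
        expect(Order.new.items).to be_empty
      end
    end

    # The same spec with parentheses restored:
    RSpec.describe(Order) do
      it("starts out empty") do
        expect(Order.new.items).to(be_empty)
      end
    end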
JOËL: That's ironic given that oftentimes, the goal of these is to make it look like not Ruby.
STEPHANIE: I agree; it is ironic. [laughs]
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: I think another drawback that I've seen with DSLs is that they oftentimes are more limited in their capabilities. So if the designer of the gem didn't explicitly think of your use case, then it can be really hard to extend the language or to support edge cases it wasn't specifically designed for, whereas plain Ruby is often much more flexible.
STEPHANIE: Yeah, that's really interesting because when a gem does have some kind of DSL, a lot of effort probably went into making that the main interface that you would work with or you would use. And when that isn't working for your use case, the design of the underlying objects may or may not be helpful for the changes that you want to make.
JOËL: I think it's interesting that you mentioned the underlying objects because those are often sort of not meant for public consumption when you're building a gem that's DSL forward. I think, in many cases, my ideal gem would make those underlying objects the primary interface and then maybe offer DSL as a kind of nice-to-have layer on top for those situations that maybe aren't as complex where writing things in the domain language might actually be quite nice. But keeping those underlying objects as the interface, it's nice to use and well-documented for the majority of people.
STEPHANIE: Yeah, I like that too because then you can get the best of both worlds. So speaking of trying to make a DSL work for you, have you ever experienced having to kind of work around the DSL to get the functionality you were hoping to achieve?
JOËL: So I think we're talking about the idea of having both a DSL and the underlying objects. And RSpec is a great example of this with their custom matchers. RSpec itself is a DSL, but then they also offer a DSL to allow you to create custom matchers. And it's not super well documented. I always forget how to define them, and so I oftentimes don't bother. It's just kind of too much of a pain for something that doesn't always provide that much value. But if it were easy, I would probably do it more. Eventually, I realized that you could use just regular Ruby objects as custom matchers. And they just seemed to respond to certain methods, just regular old objects and polymorphism.
And all of a sudden, now I'm back into all of the tools and mechanisms that I am familiar with, like the back of my hand. I can write objects all day. I can TDD them. I can apply any patterns that I want to if I'm doing something really complicated. I can extract helpers. All of that works really well with the knowledge that I already have without having to sink a lot of time into trying to learn the built-in DSL.
So, for the most part, now, when I define custom matchers, I'll often jump directly to creating a regular object and making it conform to the matcher interface rather than relying on the DSL for that. So once we go back to the test, now we're back in DSL land. Now we're no longer talking in terms of objects so much. We'll have some nice methods and they will all kind of read like English. So to pull a recent example that I worked on, I might say something like expect this policy object method to conform to this truth table.
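A hedged sketch of what that can look like: a plain Ruby object that conforms to RSpec's matcher interface (matches?, failure_message, description) rather than going through the custom-matcher DSL. The truth-table idea and all the names here are hypothetical.

    class ConformToTruthTable
      def initialize(table)
        @table = table
      end

      # RSpec calls matches? with the object under test.
      def matches?(policy)
        @failures = @table.reject do |inputs, expected|
          policy.call(*inputs) == expected
        end
        @failures.empty?
      end

      def failure_message
        "expected policy to conform to truth table, but it differed for inputs: #{@failures.keys.inspect}"
      end

      def description
        "conform to the truth table"
      end
    end

    # A small helper so the spec still reads like the DSL:
    def conform_to_truth_table(table)
      ConformToTruthTable.new(table)
    end

    # expect(->(admin, owner) { admin || owner }).to conform_to_truth_table(
    #   [true, true]   => true,
    #   [true, false]  => true,
    #   [false, false] => false
    # )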
STEPHANIE: That's a really interesting example. It actually kind of sounds like it hits the sweet spot of what you were describing earlier in the sense that it has a really nice DSL, but also, you can create your own objects, and that has an interface that you can implement. And yes, have your cake and eat it too. [laughs] But the idea that then you're kind of converting it back to the DSL because that is just what we know, and it has become so normalized.
I was talking earlier about, okay, when is a DSL worthwhile? When is the use case a good reason to implement it? And especially for gems that are really popular, ones that we as a Ruby community have collectively used on most of our projects because we're often solving a lot of the same problems, it seems like this has become its own shared language, right?
JOËL: Yeah, there are definitely some DSLs that we all end up learning because they're just so prominent in the Ruby community, even Rails itself ships with several built-in DSLs.
STEPHANIE: Yeah, absolutely. FactoryBot is another one, too. It is a gem by thoughtbot. And actually, in preparation to talk about DSLs with you today, I scoured our blog and found a really great blog post, "Writing a Domain-Specific Language in Ruby" by Gabe Berke-Williams. And it is basically, here's how to write something like FactoryBot by creating your own little mini Ruby DSL for something that would be very similar to what FactoryBot does for fixtures.
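This is not the code from that blog post, but a rough sketch of the general shape such a mini DSL tends to take: blocks plus instance_eval and method_missing, with all the names here made up.

    class TinyFactory
      def self.definitions
        @definitions ||= {}
      end

      def self.define(name, &block)
        definitions[name] = block
      end

      def self.build(name)
        collector = AttributeCollector.new
        collector.instance_eval(&definitions.fetch(name))
        collector.attributes
      end

      class AttributeCollector
        attr_reader :attributes

        def initialize
          @attributes = {}
        end

        # Any bare "method call" in the definition block becomes an attribute.
        def method_missing(attribute_name, value)
          @attributes[attribute_name] = value
        end

        def respond_to_missing?(*)
          true
        end
      end
    end

    TinyFactory.define(:user) do
      name  "Stephanie"
      email "stephanie@example.com"
    end

    TinyFactory.build(:user) # => { name: "Stephanie", email: "stephanie@example.com" }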
JOËL: That's a great resource, and we'll make sure to link that in the show notes. We've been talking about some of the limitations of DSLs or some aspects of them maybe that we personally don't like. What are maybe examples of DSLs that you do enjoy working with?
STEPHANIE: Yeah, I have an example for this one. I really enjoy using Capybara's DSL for acceptance testing. I did have to go down the route of writing some custom selectors for...I just had some HTML elements within kind of a complicated table and was trying to figure out how to write some selectors so that I could write the test as if it were in, you know, quote, unquote, "plain English" like, within this table, expect some value.
And that was an interesting journey. But I think that it really helped me have a better understanding of accessibility of just the underlying building blocks of the page that I was working with. And, yeah, I really appreciate being able to read those tests from a user perspective and kind of know exactly what they're doing when they're interacting with this virtual browser without having to run it in headful mode and see it for myself.
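As a hedged example of that kind of custom selector (the table markup and names here are hypothetical, not from the actual project), Capybara lets you register a selector and then refer to it by name in the test:

    Capybara.add_selector(:order_row) do
      css { |order_number| "table#orders tr[data-order-number='#{order_number}']" }
    end

    # In the feature spec, the intent then reads in domain terms:
    within(:order_row, "1042") do
      expect(page).to have_content("Shipped")
    end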
JOËL: It's always great when a DSL can give you that experience of abstracting enough to where it makes the code delightful to work with while also not having too high a cost to learn or being too restrictive in what it allows you to do. Would you make a difference between something that's a DSL versus maybe just code that's written at a higher level of abstraction?
So maybe to get back to your example with Capybara, it's really nice to have these nice custom matchers and all of these things to work with HTML pages. If I'm writing, let's say, a helper method at the bottom of a test, I don't think that feels quite like it's a DSL yet. But it's definitely a higher level than specifying CSS selectors. So would you make a difference between those two things?
STEPHANIE: That's a good question. I think it's one of those you know it when you see it kind of questions because it just depends on the amount of abstraction, like you mentioned, and maybe even metaprogramming. That takes something from the core language to morph into what you could qualify as a separate language. What do you think about this?
JOËL: Yeah, part of me almost wonders if this exists kind of on a continuum, and the boundary might be a little bit fuzzy. I think there might be some other qualifications that come with it as well. Even though DSLs are typically higher-level helpers, it's usually more than just that. There are also sort of slightly different semantics in the way that you would tend to use them to the point where while they may be just Ruby methods, we don't use them like Ruby methods, and even to the point that we don't think of them as Ruby methods.
To go back to that article you mentioned from Jason, where just reminding people, hey, if you put parens on this, all of a sudden, it helps you remember, oh, it's just a Ruby method instead of being like, oh, this is a language keyword or something.
STEPHANIE: Yeah, I wonder if there's also something to the idea of domain specificity where it should be self-service within the domain that you're working. And then it has limitations once you are trying to do something separate from the domain.
JOËL: Right, there's an element of focus to this. And I think it's probably also that a language is not just one helper; it's typically a collection. So it's probably a series of high-level helpers, potentially. They might not be methods, even though that is ultimately one of the primary interfaces we use to run code in Ruby. So it's a collection of methods that are high-level, but the collection itself is focused. And oftentimes, they're meant to be used in a way where it's not just a traditional method call.
STEPHANIE: Right. There's some amount of you bringing to the table your own use case in how you use those methods.
JOËL: Yeah, so it might be mimicking a language keyword. It might be mimicking the idea of a configuration. We see that a little bit with ActiveRecord and some of the, let's say, the association and validation APIs. Those kind of feel like, yes, they're embedded in a class, but they feel like either keywords or even just straight-up configuration where you set key-value pairs of things to configure how a particular class is going to work.
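For example, the association and validation macros read much more like configuration than like ordinary method calls, even though that is what they are (model and attribute names here are made up):

    class Article < ApplicationRecord
      belongs_to :author
      has_many :comments, dependent: :destroy

      validates :title, presence: true
      validates :slug, uniqueness: true
    end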
STEPHANIE: Yeah, that's true for a lot of things in Rails, too, if we're talking about routes and initializers as well.
JOËL: So I've complained about some things I don't like about DSLs. I really like the routing DSL in Rails.
STEPHANIE: Why is that?
JOËL: I think it's very compact and readable. And that's an element that's really nice about DSLs is that it can make things feel very readable and, oftentimes, we read code more often than we write it. And routes have...I was going to say fewer edge cases, but I have seen some really gnarly route files that are pretty awful to work with, especially if you're mostly writing RESTful controllers, and I would recommend that people do. It's really nice to just be able to skim through a route file and be like, oh, these are the resources in my app and the actions I can do on each resource. And here are the ones that are nested.
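A small, hypothetical routes file in that spirit: compact, RESTful, and skimmable, with the nesting visible at a glance.

    Rails.application.routes.draw do
      resources :articles do
        resources :comments, only: [:create, :destroy]
      end

      resources :subscriptions, only: [:new, :create]
    end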
STEPHANIE: Yeah, it almost sounds like a DSL can provide guardrails towards the recommended way of tackling that particular domain. The routes DSL really discourages you from doing anything too complicated because they are encouraging you to follow the Rails convention. And so I think that goes back to the specificity piece of if you've written a DSL, it's because you've thought very deeply about this particular domain and how common problems show up and how you would want people to be empowered by the language rather than inhibited by it.
JOËL: I think, thinking more about that, the word that comes to mind is declarative. When you read code that's written with DSLs, typically, it's very declarative. It's more just describing a thing as opposed to either procedural, a series of commands to do, or even OO, where you're composing objects and sending messages to each other. And so problems that lend themselves to being implemented through more descriptive and declarative approaches probably are really good candidates for a DSL.
STEPHANIE: Yeah, I like that a lot because when we talk about domains, we're not necessarily talking about a business domain, which is kind of the other way that some folks think about that word. We're talking about a problem space. And the idea of the language being declarative to describe the problem space makes a lot of sense to me because you want it to be flexible enough for different use cases but all within the idea of testing or browser navigation or whatever.
JOËL: Yeah. I feel like there's a lot of... there are probably more problems that can be converted to declarative solutions than might initially kind of strike you. Sometimes the problem isn't quite as bounded. And so when you want customizations that are not supported by your DSL, then it kind of falls apart. So I think a classic situation that might feel like something declarative is authorization.
Authorization is a series of rules for who can access what, and it would seem like this is a great case for a DSL. Wouldn't it be great to have just one file you can kind of skim and see all of the access rules? Access rules are basically asking to be written declaratively. And we have gems like that. The original CanCan gem and then its successor CanCanCan try to follow that approach. Have you used either of those gems?
STEPHANIE: I did use the CanCanCan gem a while ago.
JOËL: What was your experience with that style of authorization?
STEPHANIE: It has been a while, but I do remember having to check that original file with all the different authorizations, kind of repeatedly coming back to it to remember, okay, for this rule, what should be allowed to happen here?
JOËL: So I think that's definitely one of the benefits is that you have all of your rules stored in one place, and you can kind of scan through the list. My experience, though, is that in practice, it often kind of balloons up and has all of these edge cases in it. And in some earlier versions, I don't know if that's still a problem today, it could even be difficult to accomplish certain things.
If you're going to say that access to this particular object depends not on properties of that object itself but on some custom join or association or something like that, that could be really clunky to do or sometimes impossible, depending on how esoteric it is or if there's some really complex custom logic to do. And once you're doing something like that, you don't really want to have that logic in your...in this case, it would be the abilities file, because that's not really something you express via the DSL anymore. Now you're dropping into the OO or procedural world.
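A hedged sketch of the kind of abilities file being described, including the custom block rule where procedural logic starts leaking into the declarative file (the models and rules are hypothetical):

    class Ability
      include CanCan::Ability

      def initialize(user)
        # Declarative rules based on the record's own attributes:
        can :read, Article, published: true

        return unless user

        can :manage, Article, author_id: user.id

        # Once a rule depends on joins or more complex logic, we end up
        # embedding plain Ruby inside the "declarative" file:
        can :update, Invoice do |invoice|
          invoice.team.members.include?(user) && !invoice.locked?
        end
      end
    end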
STEPHANIE: Right. It seems a bit far removed from where we do actually care about the different abilities, especially for one-off cases.
JOËL: That is interesting because I feel like there's a bit of a read-versus-write situation happening there as well. It's particularly nice to have, I think, everything in one abilities file for reading and for auditing. I've definitely been in code where there are like three or four ways to authorize, and they're all being used inconsistently, and that's not nice at all.
On the other hand, it can be hard with DSL sometimes to customize or to go beyond the rules that are built in. In the case of authorization, you've effectively built a little mini-rules engine. And if you don't have a good way for people to add custom rules without just embedding procedural code into your abilities file, it's going to quickly get out of hand.
STEPHANIE: Yeah, that makes sense. On the topic of authorization, you did mention an example earlier when you were writing a policy object.
JOËL: I've generally found that that's been my go-to pattern for authorization. I enjoy the Pundit gem that provides some kind of light scaffolding around working with policy objects, but it's a general pattern, and you can absolutely write your own. You don't need a gem for that. Now we're definitely not in the DSL world. We're not doing this declaratively. We're leaning very heavily on OO and saying we're just going to create objects. They talk to each other. They can do anything that any Ruby object can do and as simple or as complex as they need to be.
So you have the full power of Ruby and all the patterns that you're used to using. The downside is it is a little bit harder to read and to kind of just audit what's happening in terms of permission because there's no high-level overview anymore. Now you've just got to look through a bunch of classes. So maybe that's the trade-off, flexibility, extensibility versus more declarative style and easy overview.
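A minimal example of that pattern, written as a plain Ruby policy object in the style Pundit encourages (the class and rules are hypothetical):

    class ArticlePolicy
      attr_reader :user, :article

      def initialize(user, article)
        @user = user
        @article = article
      end

      def update?
        user.admin? || article.author == user
      end

      def destroy?
        user.admin?
      end
    end

    # Used directly, or via Pundit's authorize helper in a controller:
    #   ArticlePolicy.new(current_user, @article).update?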
STEPHANIE: That makes a lot of sense because we were talking earlier about guardrails. And because those boundaries do exist, that might not give us the flexibility we want compared to just writing regular Ruby objects. But yeah, we do get the benefit of, like you said, auditing, and at least if we don't try to do some really gnarly, custom stuff, [laughs] something that's easier to read and comprehend.
JOËL: And, again, maybe that's where in the best of both worlds situation, you say, hey, I'm creating some form of rules engine, whether it's for describing routes, or authorization, permissions, or users can build custom business rules for a product or something like that. And it's all object-based under the hood. And then, we provide a DSL to make it nice to work with these rules.
If a programmer using our gem wants to write a custom rule that just really extends what the ones we shipped can do, allow them to do that via the object API. We have all the objects available to you that underlie the DSL. Add more rules yourself. And then maybe those can be plugged back into the DSL like we saw with the RSpec and custom matchers. Or maybe you have to say, okay, if I have a custom rule object, now I have to just stay in the object space. And I think both of those solutions are okay. But now you've sort of kept those two worlds separate and still allowed people to extend.
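A rough sketch of that "objects underneath, DSL on top" shape, with every name here invented for illustration: the rule objects are the real interface, and the DSL is a thin convenience layer that builds them.

    class Rule
      def initialize(name, &check)
        @name = name
        @check = check
      end

      def satisfied_by?(record)
        @check.call(record)
      end
    end

    class RuleSet
      attr_reader :rules

      def initialize
        @rules = []
      end

      # Object API: anyone can construct and register their own Rule.
      def add(rule)
        @rules << rule
        self
      end

      # DSL layer: a nicer way to declare simple rules inline.
      def self.define(&block)
        new.tap { |set| set.instance_eval(&block) }
      end

      def rule(name, &check)
        add(Rule.new(name, &check))
      end

      def satisfied_by?(record)
        rules.all? { |rule| rule.satisfied_by?(record) }
      end
    end

    # The DSL covers the simple cases...
    rules = RuleSet.define do
      rule(:published) { |article| article.published? }
    end

    # ...and the object API covers anything the DSL doesn't.
    rules.add(Rule.new(:substantial) { |article| article.body.length > 500 })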
STEPHANIE: I like that as contributing to the language because language is never static. It changes over time. And that's a way that people can continue to evolve a language that may have been originally written at a certain time and place.
JOËL: Moving on from DSLs, we got some listener feedback recently from James, who was listening to our episode on discrete math. And James really appreciated the episode and wanted to share a resource with us. This is the book "Discrete Math and Functional Programming" by Thomas VanDrunen. It's an introduction to discrete math as a theoretical concept taught side by side with the very practical aspect of learning to use the language Standard ML, and both of those factor into each other.
So you're kind of learning a little bit of theory and some practice at the same time, getting to implement some discrete math concepts in Standard ML to get a feel for them. Yeah, I've not read this book, but I love the concept of pairing a theoretical piece and a practical piece. So I'll drop a link to it in the show notes as well. Thank you, James.
STEPHANIE: Yeah, thanks, James. And I guess this is just a little reminder that if our listeners have any feedback or questions they want to write in about, you can reach us at [email protected].
JOËL: On that note. Shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
It's gardening season! Stephanie swaps seeds with friends and talks about her Chicago garden. Joël recently started experimenting with a dedicated bookmark manager.
They discuss the aspirational (and sometimes dogmatic) sides of TDD and explore when to test: first or after.
How does that affect the tests?
How does that affect the code?
How does that affect workflow?
Are you a "better" programmer because you 100% TDD?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what is new in your world?
STEPHANIE: It's gardening season here in Chicago. So right now, it is like mid-April as we're recording this, and we are just starting to get some warm weather. And this is usually the time that I do my garden planning for the season. And the other week, I went over to a friend's place, and we did a bit of a seed share. So we just each have collected fruit and vegetable seeds and herbs and all that.
And a really fun way to collect more things to grow is to share with your friends. Seeds are super cheap, but I feel like you could just have like an infinite amount for all of the things that you might want to grow. And so it's really nice to be able to, yes, spread that gardening love around and share with your friends.
JOËL: I'm imagining something like people trading collectible trading cards but the plant version.
STEPHANIE: Yeah, exactly. The fun thing that we did, my friend and I, because, you know, you usually get a little envelope with between 10 and 50 or more seeds, and they're super tiny. Some of them are really teeny tiny, like with broccoli, for example, it's like I can't even explain. It's less than a millimeter, I swear. It's very easy to just lose them, so you want to keep them contained.
But because we are sharing, we don't have a second envelope for the other person to take home with them. And so we actually made our own little envelopes with some origami paper that she had. And we folded it and stapled it and made it very cute. And so I came home with a bunch of these very adorable handmade envelopes with all of my new seeds.
JOËL: Are you mostly doing vegetables, or are these flowers?
STEPHANIE: Yeah, so we mostly focus on vegetables for our garden. And we do like to sprinkle some flower seeds in our yard. But that is more just like throw some seeds out there, and whatever happens to them happens. But with the vegetables, we put a little bit more effort because we usually try to have a good yield.
So in past years, that has meant starting seeds indoors because, in Chicago, we have a shorter growing season than some warmer climate places. And the late summer vegetables like tomatoes, peppers those usually take a little bit longer. So if you want to get a good yield, you might want to start them inside a little early before it's warm enough for them to go outside.
JOËL: So, do you have a garden plot out in your yard, or do you have a community garden plot? How does that work?
STEPHANIE: I am really grateful to have a bit of backyard space. And we have three raised beds that we built that cover...I think each one is 3 feet by 10 feet, so quite a good amount of space. Yeah, we're able to grow a lot of food. Our highlights include shishito peppers. That's one that I really like to grow myself a lot because I usually don't see them in stores as frequently. We grow really great eggplants. Tomatoes, obviously, is a pretty popular beginner-friendly vegetable plant. And we like to grow a lot because then we can process it all and can some of it so we can have nice tomato sauce that's homegrown year round.
JOËL: Hmm, sounds delicious. Do you experiment with the different varieties?
STEPHANIE: We do. That's also a way that the seed sharing is really helpful because maybe I'll get some varieties of certain vegetables like cucumbers or whatever, but maybe my friend has a different kind. And I think we try to do a mix of growing the varieties that we know we like and then experimenting with some ones that are new to us.
JOËL: It's hard to beat fresh vegetables in the summer.
STEPHANIE: Yeah. I'm very excited, especially because during the fall and winter seasons here in Chicago, our local food is a little less exciting. It still can be good, but it's been a lot of root vegetables and the like when we try to eat seasonally in the other season. So I'm really looking forward to stuff that's just juicy and fresh, and it's just one of my biggest joys during the summer. What about you, Joël, what's new in your world?
JOËL: I've recently started experimenting with a dedicated bookmark manager. This is not because I have been to too many bookstores and have all the free bookmarks they give you. These are the digital bookmarks to websites, and I've been really bad at managing those. I mostly just memorize the keywords I need to Google to get access to that website, which is a terrible way of doing things.
And then I've got a mix of a few different browsers, which I don't sync, and have a couple of bookmarks. I use a little bit of Pocket, which is a tool by Mozilla. It's all right, but the search capabilities are not very good. So sometimes I'll know it's in there, but I can't find it.
STEPHANIE: I'm so glad you brought up this topic because I am in a similar boat where I read a lot of things on the internet and have just thrown them all into my top-level bookmarks hierarchy. And that has not really been working for me, either. So I'm really curious to find out how you've been solving this problem.
JOËL: So recently, I volunteered to be a mentor for first-time speakers at the upcoming RailsConf in Atlanta. And someone was asking me about designing slides, and we were talking a little bit about when should you use maybe a bulleted list on a slide versus when there are other options available. I knew that I had read years ago a fantastic resource on slide design. But try as I could, I could not Google this and get the page that I was looking for.
This was shared to me by somebody else as part of a conference preparation group years ago, and so I reached out to this person. I was like, "Hey, so do you happen to remember that link you shared with me five years ago?" And this person says, "I do remember it. I don't have the link either."
STEPHANIE: I've literally been in this exact same situation where I remembered that there was an article that I read, and I remembered exactly who shared it with me or who I talked about it with, and when I couldn't find it, trying to reach out to them and also not being able to find it through them.
JOËL: So the story ends well because I was able to log into an old Slack group...
STEPHANIE: Wow.
JOËL: That had been created for the speakers at this conference and dig through the history. And luckily, I still had access to the group. I was still in that private channel for the speakers. And I found the link, and I was able to share it with others. So that was great. But then I started thinking; I can't keep living this way. I need something better.
STEPHANIE: It's true. Even though we are expert Googlers as developers, sometimes the search just doesn't get you the thing you're looking for.
JOËL: So, about this time, I'm scrolling Twitter as one does. And I saw a tweet from Cassidy Williams talking about some of the productivity tools that she's been using this year, and she did a longer article about it. And I started reading it, and a tool that she mentioned there is Raindrop.io, which is an all-in-one bookmark manager. And I'm like, oh, that is exactly, I think, the missing piece of technology in my life right now. So I went and signed up for it, and so far, it's been pretty good. I'm experimenting with it.
But I've consolidated a lot of the links that were in my head or in some of these other places, put it in there, categorized them a little bit, tagged them. And hopefully, this becomes a better way so that when I want to reference a link for someone else either in a conversation or as a resource or even for myself, maybe when I'm writing an article, I'm like, oh, I know I read something that would act as a good resource here. I can go to Raindrop and get that article without any of these other shenanigans I had to do this time.
STEPHANIE: Amazing. What is special about Raindrop as opposed to just your native browser bookmark capabilities?
JOËL: It has some deeper structuring capabilities in terms of not everything has to be hierarchical. It has tags as well as categories. And I think most importantly, for me, it has search, which seems to be pretty good at surfacing things. It also has some somewhat smart capabilities where it will automatically figure out if the thing that you've linked is an article, or a document, or a video, something like that. So you can filter by these inferred types as well. It has the ability to sync across devices, which browsers can do if you're signed up for them.
STEPHANIE: Nice. I like that it has that search functionality that you mentioned because I think I'm definitely in the boat of just scrolling through all of my untagged, unorganized bookmarks. And it's really tough to find what I'm looking for, especially if the meta title also doesn't quite tell me exactly the keywords that I'm needing to be scanning for in that moment. So I will definitely have to give it a try.
JOËL: I believe you can get full-text search if you pay for the premium version (I'm currently trying the free version), which, in theory, could mean that it searches the contents of the article. I'm not clear on that. But I do know they save a snapshot of the text of the article.
STEPHANIE: That's really interesting because then it's almost like a search engine but scoped to the things that you have saved.
JOËL: Yes.
STEPHANIE: Nice.
JOËL: I'll see how that goes, and maybe six months from now, I can talk a little bit about what the experience has been using that.
STEPHANIE: Yeah, six months from now, you can tell us all about how you have no issues or qualms with how you've been managing bookmarks because everything is working perfectly well for you. [laughs]
JOËL: JK, I've dropped this whole bookmark thing.
STEPHANIE: That's true. That's also the flip side of trying out a new tool, [laughs], isn't it?
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: So our topic for today is something that we've had in our topic backlog for a while, and I'm excited to talk about it. It's TDD, which I think is a very well-known, potentially controversial topic of discussion in the world of software development. And specifically, we wanted to talk about when TDD is useful and when you actually might also have some value in writing tests afterwards. And in preparation for this topic today, I actually have been TDDing most of my client work this week.
JOËL: What? You're telling me you don't TDD 100% of the time? Are you even a real developer?
STEPHANIE: I am a real developer, and I do not TDD 100% of the time. I'm just going to say it. It's on record.
JOËL: You know what? Me too.
STEPHANIE: Wow, I'm glad we could clear the air on that one. [laughs]
JOËL: What percent of the time would you say that you do TDD? In this case, test first as opposed to maybe testing after you've written the code or maybe not testing at all.
STEPHANIE: Hmm, that's an interesting question. A part of me wants to answer it in my ideal workflow terms. But I think that is less interesting than reality, which is I will usually at least try to test first if I'm feeling like I am up for it. So maybe the percentage is, I don't know, I really couldn't tell you, but I'm just going to throw out 40% of the time [laughs] because that seems pretty, I don't know, reasonable. Sometimes you wake up, and you're just like, I'm not going to do it today. [laughs] And other days, you wake up, and you're like, you know? It sounds like a fun exercise to do for this particular feature.
So yeah, if I TDD 40% of the time, then I think maybe I write tests after another 40% or 50%. And then [laughs] I'm hesitant to say this on the air, but sometimes you code, and you don't write tests for it, and I would not recommend that for the majority of your work. But I'm just going to be real here that sometimes it happens.
JOËL: It's always a trade-off in terms of the work you put in versus the value you're getting out of it. And sometimes, you get very little value out of a test.
STEPHANIE: Yeah, that's real. It totally depends on what you're doing.
JOËL: I think one thing that's interesting for us, because we're consultants, so we move from one project to another, is that some projects are set up in a way that they're very test friendly. It's easy to have a testing workflow with them. And then others are just incredibly painful to test because of the way the system has been architected.
And I think a TDD purist would then tell us that this is a symptom of high coupling or other architectural problems; that's probably true. But also, you don't have time to re-architect the entire system, and so then it becomes a question of trade-offs. Can I test some things easily today? Can I refactor a few things that will make this local change somewhat easier to test? And then, where is it not worth the effort to make something testable?
STEPHANIE: Yeah, I've definitely struggled with that, where a part of me wanted to test something very thoroughly or even do test-driven development and then ran into some obstacles along the way and having to be realistic about that effort.
The other thing I was referring to around it depending is also the actual code you're working on. So maybe if you're just writing a script or something to automate some dev workflow, it's okay for that not to be tested. And I also do think that the decision to TDD is very dependent on whether you are writing net new code, or refactoring, or having to deal with legacy code.
JOËL: That definitely makes a difference. For me, when I'm refactoring in the purest sense, changing structure without changing behavior, in theory, I should not be writing tests for that because there should already be existing tests, and I'm not changing behaviors. So the test suite should prove that my changes did not change behavior. In practice, oftentimes, there is not the coverage that needs to be there. I don't know about you; I feel like I often don't trust the code enough, where I'm a little bit scared to do a refactor if there isn't test coverage. How about you?
STEPHANIE: I've been running into that issue a lot on my current client project where I've been making an intentional effort to add test coverage before I make any changes because that forces me to really understand how things work because either I read a piece of code and I just can't tell at all. Or I learn later on that I thought I understood something based on the class or the method names, but it turns out that there was actually some nuance in there or side effects or what have you that belied my understanding [laughs] of what it was doing.
And after a few times of that lack of trust that you talked about popping up, I was like, okay, I think, at least for me, the way that I can feel good about the work that I'm doing is to set myself up for success in that way.
JOËL: Do you ever find that the code that you write in a test-driven approach tends to end up different than code that you might write with a test-after approach?
STEPHANIE: Yeah, I think this can actually be answered at a few different levels but let me start with talking about how I like to practice TDD. If I'm given a user story, I usually try to work outside in. So I will write a feature or acceptance test and that involves testing how the user would interact with our application.
At that point, I will usually go with the most naive implementation to get the test passing, and so it probably won't look pretty. In a recent case, I was adding a new parameter to a controller, and I just put everything in the controller to get the test green. [laughs] And then, at that point, is when I gave that code a second pass and looked for areas to extract where I could.
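A hedged sketch of that outside-in flow, assuming RSpec system specs and FactoryBot, with all the domain names invented: first the high-level test for the user-facing behavior, then the most naive controller change that gets it green, to be refactored on the second pass.

    RSpec.describe "Filtering orders", type: :system do
      it "shows only shipped orders when the filter is applied" do
        create(:order, status: "shipped", number: "1001")
        create(:order, status: "pending", number: "1002")

        visit orders_path(status: "shipped")

        expect(page).to have_content("1001")
        expect(page).not_to have_content("1002")
      end
    end

    # The naive first pass: everything in the controller, just to get green.
    class OrdersController < ApplicationController
      def index
        @orders = Order.all
        @orders = @orders.where(status: params[:status]) if params[:status].present?
      end
    end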
JOËL: That refactor step and the red, green refactor cycle is really important.
STEPHANIE: Yeah, absolutely. I think that is where I find TDD to be the most valuable from that higher-level perspective. And I know that there are different schools of thought on this. But that helps ensure that at least I have written the code to make the feature work the way that I was hoping. And I use TDD less for driving design decisions just because I like to have something to react to that is helpful for me rather than having a blank slate of, okay, let me write a test with an idea about how an object's interface will look. And so that's what works for me.
So I do think it's kind of a mix of like from an acceptance test level; I am at least writing code that I know works, but the shape of the code for me is less determined by how I test.
JOËL: So when you're looking at code that you've written six months down the line, you generally can't tell the difference whether it was test-driven or written first and tested after.
STEPHANIE: That sounds right. That's just how my process works. In fact, I think recently, to go on a quick tangent, we talked about writing conference talks, and I think I even mentioned that, for me, the process is looking at the thing and then revising. And I think that the design-driving element of TDD that a lot of people like is a bit less effective for me personally.
JOËL: Hmm. Would you say that TDD does not impact the shape of the code that you end up creating in response to the tests? Or when you're talking about design, are you mostly thinking in terms of the interface that you would have in the test itself, like, what arguments the constructor takes or things like that?
STEPHANIE: I think I was talking more about the latter, the interface, the construction arguments. When I do test afterwards, I also will notice the way the setup of my test how that is feeling. And if it is feeling a bit unwieldy or is a bit complicated, that will cue me to maybe take another pass at the code itself. So that's actually one way that testing after can signal to me a way that I might want to change my code.
JOËL: Okay, so you're getting some of those pressures that you get from testing, but you respond to them in, like, a second pass?
STEPHANIE: Yeah, I think so. I'm curious how you TDD and whether you notice changes in how your code looks.
JOËL: I think there are a couple of things that TDD does in my workflow that are really nice. One is it keeps me focused in terms of getting the work done because you're just following from one failure to another. It also keeps me focused in terms of scope. It's really easy for my engineering brain to be like, oh, we could totally do this thing and all that, and it's like, no, that's not needed to solve the problem at hand.
Because in TDD, you try the smallest solution that will solve your problem, and then you will refactor it to make it maybe nicer to work with. But you try not to add new behavior that's not required in order to pass the test, and that can be a really helpful forcing function for me.
STEPHANIE: That's interesting because I was just thinking about how sometimes, at least with the outside-in approach that I was talking about, I will find that the scope of the ticket is too big as I make changes to get the desired quality of the code that I want.
Like I mentioned, the naive implementation, like, sure, maybe everything is in a controller, but as soon as I'm starting to do that second pass, and I want to maybe change another class and to make it work for my needs, I will notice it start to sprawl a little bit. And that is usually a signal to me that, like, oh, maybe what I need first is just refactoring the objects that I'm hoping to use to get the desired implementation. And that ends up being a separate PR that I do first to then set myself up for making the change.
JOËL: The classic make the change easy before you then go and make the easy change.
STEPHANIE: Right. But that does mean that that initial feature test that I wrote won't ever be green. So I do have to kind of like back out of making that change and just be like, okay, today is not the day [laughs] that I'm going to get this feature working.
JOËL: There are some times where I'm in a situation like that, and I will kind of recognize, oh, there's a refactor step that's happening right now as a sort of subtask. And so, I will make that refactor change that I need to and then commit only those files that were a part of that refactor and may be included as part of the PR with the feature change or maybe push it up and make it its own PR.
But depending on what the refactor is, oftentimes, I can kind of do it sort of all more or less continuously but decide once I've done that refactor step, okay, commit time but only those files for the smaller set of changes, and then keep moving with that outside-in approach.
One thing I have noticed about the style of code that I tend to produce when I TDD versus when I don't is how I will tend to decouple things. And so because coupled code is really annoying to test in isolation, TDD sort of forces me to do more dependency injection, passing objects to others. It will often force me or maybe not force me, but it gives me that wholesome pressure to maybe separate HTTP requests from more of the business logic in my code, which otherwise I might completely intermix because it's just so convenient. Even certain things like class methods, I might tend to overuse them or use them more if I'm not test driving than if I were.
STEPHANIE: When you talk about coupling, I'm curious, do you end up mocking a lot in the tests that you are writing to drive your development?
JOËL: No, but if I'm testing after, I probably will. Mocking, I think, is a sign of coupling generally. In tests where you're just passing objects to each other, generally, you can get away with passing in a test double or something, whereas if you're hard-coding dependencies, you often have to mock.
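A minimal sketch of the difference being described, with hypothetical class names: when the HTTP client is hard-coded, tests tend to reach for mocks; when it is injected, the test can just pass in a lightweight fake.

    # Hard-coded dependency -- testing this usually means stubbing the real client:
    class HardcodedInvoiceSync
      def call(invoice)
        response = BillingApi::Client.new.push(invoice.to_h)
        invoice.update!(synced_at: Time.now) if response.success?
      end
    end

    # Injected dependency -- the test can hand in any object with a #push method:
    class InvoiceSync
      def initialize(client: BillingApi::Client.new)
        @client = client
      end

      def call(invoice)
        response = @client.push(invoice.to_h)
        invoice.update!(synced_at: Time.now) if response.success?
      end
    end

    # In a spec:
    #   fake_client = double(push: double(success?: true))
    #   InvoiceSync.new(client: fake_client).call(invoice)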
STEPHANIE: Got it. That makes a lot more sense now. I think that does require a bit of thought upfront about what kinds of objects you might need and what they would provide for you in the thing that you're testing.
JOËL: Yes. There's definitely a phase where, let's say, I'm testing some kind of third-party integration; I'm just kind of trying to do it all in one object that has a mix of business logic and some HTTP request stuff. It gets really annoying as we're adding...maybe the first feature is okay. I use WebMock, and I stub out a request, and it's good. And then the second one, I feel like I'm kind of duplicating that. And then the third one, I've got to deal with retries.
So now I've got to go back to the first one and add some two or three WebMocks because now we've got exponential backoff code that's happening here. And this new feature broke the old tests. And it just becomes this really annoying thing to do. And then I might start thinking, okay, how do I separate these two things? I have one place where I test the HTTP logic, the exponential backoff, the what to do if I get a 404 from the API. And then, separately, I can just have the business logic and test all of those branches there without having to touch any of the HTTP stuff.
I think you could get there from a few different paths. So you could get there by sort of following a lot of classic design principles, things like SOLID, because they kind of converge on that general idea as well. You could even get there if you took more of a functional programming approach where you are really good at separating side-effectful code from, I'm going to use the term loosely here, pure functions.
I've heard some people make the distinction between IO versus non-IO in code and how that affects the types of tests that you write for them. And separating those two is a thing that you might do, even if you weren't writing tests at all, if that's a design principle that you know to follow.
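For a concrete picture of the WebMock stubbing Joël mentions (the URL and payloads are hypothetical), each test of the combined object ends up needing request stubs like this, including extra responses once retries and backoff exist:

    require "webmock/rspec"

    stub_request(:get, "https://api.example.com/orders/123")
      .to_return({ status: 500 }, { status: 200, body: '{"status":"shipped"}' })

Splitting the HTTP and retry concerns into one object and the business logic into another means only the former's tests need stubs like these; the latter can be tested with plain Ruby values.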
STEPHANIE: Yeah, that's a great point. I was thinking, as you were talking about your approach for handling that potential feature with talking to a third party, that I've heard that particular task or problem in software development used as an example for a lot of those different techniques or strategies that you mentioned. And I suppose TDD really is just a tool, and it doesn't replace your experience or intuition.
And earlier, when we were talking about times that you don't do TDD, I will have to say that if I am doing something that I've done many times before, I feel confident enough that I don't need to lean on that red, green refactor cycle. At that point, it's more muscle memory. And maybe I do forget a step along the way, but I have the experience to know how to debug that or to see the error and know exactly what it was that I did wrong. And in that case, I am tapping into something different than using TDD.
JOËL: I think definitely, for a lot of things now, there are patterns that I have learned where even if I weren't TDDing, I might do a third-party integration using this pattern because I've done it via TDD enough to know that this is a structure that I find works very well in terms of the coupling of things. And then maybe if I want to fill in some tests afterwards, then I'll thank my past self that I'm using a pattern that plays nicely with that. One thing that I do notice happens sometimes is that when people add tests after the fact, they will add tests that are green but that don't necessarily fail if the code breaks. Have you ever seen that?
STEPHANIE: I have seen that before. In fact, I just saw it recently where we had a false positive test. And I made a change expecting the test to fail, and it didn't, which is not great because the value that tests have is when they fail; you want to be alerted when something goes wrong. Just because they're green doesn't mean that everything works. It just means that they didn't detect a problem. And in this particular case, I don't know if the developer who wrote this test had TDDed or not. But I did notice that in the test, we were mocking a method, and that ended up being the cause of the false positive.
JOËL: I'm always a little bit skeptical of mocks because I feel like I've seen so many either brittle tests or tests that will succeed all the time come out of mocks. I don't know if you've ever heard the term tautological test or a test that is a tautology.
STEPHANIE: No, I haven't. What does that mean?
JOËL: In its sort of most basic sense, it's a test that is always green no matter what the output is. Some people think of it more in terms of self-referential tests, like, oh, a thing equals itself, which, yes, it does, and those tend to be always green. But it's not always self-referential. It can be some other subtle ways. Typically this happens when mocking or specifically if you mock the system under test. It's very easy to write a test that is now going to always be green, no matter how the code changes.
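A minimal RSpec-style sketch (hypothetical class) of that failure mode: by stubbing the very method under test, the expectation only ever checks the stub, so the test stays green no matter what the real code does.

class Order
  def initialize(prices)
    @prices = prices
  end

  def total
    @prices.sum
  end
end

RSpec.describe Order do
  it "calculates the total" do
    order = Order.new([10, 20])

    # Mocking the system under test: the real #total never runs.
    allow(order).to receive(:total).and_return(30)

    # This passes even if Order#total is completely broken: a tautology.
    expect(order.total).to eq(30)
  end
end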
A fun fact about the word tautology is it comes from discrete math, which is the topic of my RailsConf talk. If you write out a truth table that shows all the possible inputs and whether or not something will be true or false, depending on what the inputs are, the output column is all true in a tautology, which tells you that no matter what the inputs are, you're going to get true out of that method or function or equation. And so, if this was a Boolean expression in Ruby, you could replace that by hardcoding true and get the same result.
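For example, a quick Ruby sketch of that kind of truth table: for every possible input, the output is true.

[true, false].each do |p|
  puts "p=#{p} -> #{p || !p}" # the output column is all true, so "p or not p" is a tautology
end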
STEPHANIE: Yeah, that's what I was imagining, a function that just returns true. [laughs]
JOËL: And that's effectively what you can accidentally write when you're creating a tautological test: one where you could have just replaced the entire thing with expect true to be true, and it would have the same effect. And, like you said, tests only have value when they fail. And a test that never fails has no value. So TDD has this red, green, refactor cycle. I feel like you could probably come up with a cute slogan like that for a testing-after style. So maybe, I guess, you'd start off: you write some code, then you write a test that theoretically passes for it.
So you start green, but then you want to make sure you see that test fail, so you got to go red and then comment out the code or something. Then comment it back in to see that it goes back to green to make sure that not only does this test fail when the code is broken but also that bringing this test back is what makes it pass, which is an important distinction. So maybe it's a green, red, green, and then maybe refactor. Because one thing that I admired in the style that you were talking about earlier is that even when you test after, you include a refactor step. The test at the end is not the final step in your workflow.
STEPHANIE: Yeah, that's a really good point. When you said green, red, green, I was thinking of a Christmas garland [laughs] or something like that. But yeah, I do think that stuff gets skipped sometimes. If you are testing after, you're backfilling tests for code you wrote, and at that point, you think you know how it will work, and so you're writing your tests kind of colored with that in mind.
I like injecting that step of commenting something out or changing an input that you know should make the test fail, just so that you can confirm that you didn't just write a test that expects true to equal true or gives you a false positive like that, then going back to green. And as you were saying that, it did make me think, oh, well, that's like a whole extra step as opposed to TDD, where we do just have red, green, refactor. We don't have that extra step. But I think the effort is just put in at a different point in time.
JOËL: Agreed. It's important that you see the code fail and that you see it pass after the change. The order has changed a little bit, but those two kinds of core elements are present. Kind of by default, you have no choice when you're doing TDD. You have the ability to skip that if you're testing after, but ideally, you incorporate those in a robust test after workflow as well.
STEPHANIE: Yeah. And I know I mentioned times when I've done something enough or used a pattern enough that maybe I'll just go ahead and implement it and then backfill with tests. And I also recognize that in those moments, I could have done something wrong, that there is some amount of wanting to check that the test failed. And I imagine there is some kind of balance to achieve there between the speed that you get by having that experience and knowing the direction you want to take things and applying a pattern that you've done a lot with being like, oh, we're all human, and sometimes we make mistakes.
JOËL: In a situation where you feel like you're coding something that you've coded up 100 times before, you're very familiar with this. Do you find that a test after workflow is faster for you?
STEPHANIE: Hmm, that's an interesting question.
JOËL: Because I think that's often a motivation. It's like, I don't want to bother, like, I just have the idea. I know what to do. Let me just write that code and get it done.
STEPHANIE: I think if I were introducing a new route or controller action or whatever, I don't need to go through the cycle of writing a test and it failing because I haven't added the action to the controller yet. It's like I know that that is the next logical step, and so maybe I might skip it there. But if I'm at the point where I'm working with business or domain logic, I think that's where the value of writing tests first is, because it's like, I'm past the framework and past my tools. And now I'm working through the logic of the business problem itself.
JOËL: So you're working in maybe slightly larger iterations of that red, green refactor cycle.
STEPHANIE: Yeah, that's a good way to describe it.
JOËL: I was recently working on a gem and tried to TDD it from scratch and went with micro iterations. And it was actually really fun, and there was a flow to it. And this is a greenfield side project. And it helped me stay focused. I think it did give me a decent design. I really enjoyed it.
STEPHANIE: Nice. Was there something satisfying about seeing that green each time and kind of doing that bit of mechanical labor? And I can see how that can feel almost meditative.
JOËL: Yes. And I think also because this was a problem that I didn't fully know how I was going to solve, TDD helped really focus me on solving sub-parts of that problem, things that I can hold in my head and solve in a minimalistic way and then iterate on.
STEPHANIE: I like that a lot. You were using that technique, and that really helped for the task at hand, which was, in this case, a bit smaller in scope. I think the way you and I have been talking about TDD has been very realistic and very reasonable. And I'm curious what you think about people who kind of use it as the pinnacle of how you should write code.
JOËL: I think that's really interesting because TDD is a really wonderful technique, and I wish more people used it. But it's kind of taken on a mystique of its own where if you do TDD or claim to do TDD 100% of the time, now, all of a sudden, you've put yourself on another level. And I think people even who choose for pragmatic reasons not to TDD all the time maybe feel a little bit of guilt or at least feel the need to explain themselves to other people to say, "Hey, I didn't TDD this here. Well, let me explain to you why that's okay, and I'm not a bad programmer."
STEPHANIE: Yeah, I think we even alluded a little bit to that earlier in the show, and I could hear my hesitancy to be like, oh, I guess I'm going to say this and have all these people hear it. But I think that's a good point that it's okay for you not to do it 100% of the time. That doesn't make you any less of a programmer. Also, the way we've been talking about it also makes it sound like one of those things where it's like you do have to learn the rules before you can break them. And so there is value in learning it and doing it.
And then you also, after having done it enough, know when you want to use it or when you don't. My advice for folks who haven't really done it before or don't quite see the value of it is just to try it and then decide for yourself. I think at the end of the day, we should all feel empowered to be able to decide how we work best.
JOËL: It's also really valuable, I think, to maybe pair with someone who is really good at that and to get to see what their workflow is like. Oftentimes, there's almost a hump of getting into it where you are more productive without TDDing because you're not comfortable with the flow or with the techniques. And it takes a lot of expertise to get over that hump where maybe at the expert end of things, you are more productive with it. And on the less expert end, it just becomes a chore that takes up all of your time or ends up giving you results that aren't that great anyway. And so, how do you cross that chasm?
STEPHANIE: Yeah, that's a really, really great point because, in some ways, when you first get started, it will feel slow. You are unlearning the ways that you have known to code before and trying to do it in a different way. And I really like your advice about trying to pair with someone who has expertise or has been practicing it for a long time. That was my first real introduction to it too. At this point, I had been a few years into my career and hadn't really tried it because it seemed very daunting, and seeing someone else verbalize their process and seeing their workflow was really helpful for me to get on board.
JOËL: I think what I would like is for TDD to be a tool that is aspirational for a lot of people. If you're new to the technique and you've paired with somebody who's really good at it, and you see the flow that they have and be like, wow, that's really good. I would love to get that incorporated in the way that I work. Rather than a sort of measuring stick for how elite of a programmer you are. There's no sense in shaming people over the tools they use.
STEPHANIE: Right. Because it should also be accessible, and if you make people feel bad about it, then it's not accessible to folks.
JOËL: On that note. Shall we wrap up?
STEPHANIE: Yeah, let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been working on his RailsConf talk about various aspects of discrete math useful in day-to-day work as a developer and going deep on some concepts from propositional logic and Boolean algebra, particularly De Morgan's Laws, which explain how to negate a compound condition. Stephanie attended a meeting with a fun "Spicy Takes" topic. She gave a short talk on how frictionless technology may not be the best path forward and tried to argue in favor of more friction in our software.
Together, they talk about ways they've made remote work work for them and things they'd like to try/do differently.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've recently been accepted to speak at RailsConf. And I've been working on my talk about various aspects of discrete math that are useful in day-to-day work as a developer and going really deep on some concepts from propositional logic and Boolean algebra, particularly De Morgan's Laws, which explain how to negate a compound condition.
So if condition one or condition two, if you want to negate that thing as a whole, you can't just negate both of the conditions individually. You will get a totally different result, and that's a really easy mistake to make. I don't always memorize exactly what to do. But I know enough in the back of my head when it comes up on a pull request to check it out and be like, oh, there's a negating of a compound condition here. Pay closer attention. There might be a bug.
STEPHANIE: So are you saying that when you negate each condition individually, you get the opposite result that you want?
JOËL: It's not opposite, just different.
STEPHANIE: Just different, okay.
JOËL: So De Morgan's Laws tell us that if you want to negate the compound condition as a whole, you negate the individual clauses but then also have to flip the sign in the middle. So if you're trying to negate condition one and condition two, it becomes not condition one or not condition two.
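A small Ruby sketch of the law Joël is describing, enumerating every combination of inputs: negating only the clauses gives a different result, while also flipping the AND to an OR matches the true negation.

[true, false].product([true, false]).each do |a, b|
  negated   = !(a && b) # the compound condition, negated as a whole
  de_morgan = !a || !b  # negate each clause AND flip && to ||
  naive     = !a && !b  # negate each clause only: not equivalent

  puts "a=#{a} b=#{b} negated=#{negated} de_morgan=#{de_morgan} naive=#{naive}"
end
# de_morgan matches negated on every row; naive differs whenever exactly one of a, b is true.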
STEPHANIE: I see. Wow, that's confusing because you'd think that there are just two outcomes, but really there are a lot more.
JOËL: Yes.
STEPHANIE: And that reminds me of when we've talked about on the show combinatorial explosions, which I know is a favorite topic of yours.
JOËL: Combinatorics will definitely come up in the talk as well. It's sometimes hard to hold all the possibilities in your mind. And so I'm a big fan of truth tables to visualize what's happening and to be like, oh, when I make this thing negative, now all these things flipped into false when I want them to be true and vice versa. Okay, I've got a weird inverse going on here or something like that.
STEPHANIE: I have a funny thing to share with you. Joël, have you ever heard of the show "Taskmaster"?
JOËL: No, I'm not familiar with this.
STEPHANIE: Okay, it's a British reality competition comedy show where the contestants are usually famous British actors or comedians. And they have to do just really insane, silly tasks. And usually, one of the more iconic ones is to eat as much watermelon as you can in a minute. But they're just presented with a whole watermelon without any tools or anything [chuckles] for cutting it up. And it's just very funny and very delightful.
And one of the tasks that I watched recently was a situation where they had to follow these instructions, and the instructions were to do the opposite of the following statement: "You must under no circumstances not avoid not making the bell not ring." And they had a bell right in front of them. And so they had to figure out if they were supposed to ring the bell or not ring the bell based on those instructions and within a certain time limit. If they had the math skills that you were talking about, [chuckles] perhaps they would have been able to figure it out.
JOËL: I would absolutely want to write that out as a more formal logic thing. Otherwise, it becomes...you just mess with your head. You get in almost a recursive space where like, wait, not not, does that cancel? Does it stay? And yeah, it gets really messy.
STEPHANIE: Yeah, it was very funny to watch them try to figure that out on the spot. And I think there's a clip of it on YouTube that we can link [laughs] for our listeners.
JOËL: That's amazing. What's new in your world?
STEPHANIE: So last Friday...you and I are on the same team at thoughtbot called Boost, and every two weeks, we get together as a team, and we have a meeting where anyone can propose a topic. It's just a nice space for people to see each other and hang out. And one of our co-workers hosted that meeting and he chose the topic of spicy takes and asked for volunteers to sign up and give a quick couple of minutes lightning talk on the spicy take that they had. And it was so fun.
We got some takes on how REST is not the best. We got some opposing opinions about Tailwind. And I ended up giving a short, little talk on how frictionless technology may not be the best path forward and was trying to argue in favor of a little more friction in our software.
JOËL: What would friction look like in this scenario?
STEPHANIE: I was really interested in exploring how, by making our software so easy for users, we eliminate some amount of the attention and mindfulness that goes into using technology. So I think, for me, friction would be presenting the user with more autonomy and choice rather than making decisions on their behalf.
I don't totally know what that looks like, but I do know that things like one-click ordering or autoplay have made me bristle a little bit in certain contexts, and I wonder what other options we have available to us to provide the features we want to provide to our users, but maybe not in a way that is so convenient and easy to use that we lose that aspect of knowing what we're doing with our technology.
JOËL: I feel like knowing you, you've probably read a couple of articles and some books on this topic. And if I wanted to dig more into this idea of a little bit more mindfulness or introducing a little bit of friction into my software world, where would you recommend I go to read?
STEPHANIE: Yeah, that's a great question. When I was preparing the talk, I referenced a few articles that I'll link in the show notes, one from The Atlantic and one from The New York Times. And I liked them because one of them presented what I was getting at, the more philosophical approach of like, what does it mean for our attention to be? And what does it mean for our technology to be too easy? And the other one had more practical use cases for security and technology misinformation and abuse. So I liked that those two things complemented each other equally.
And then I also would plug a book called "How to Do Nothing: Resisting the Attention Economy" by Jenny Odell. I read that book last year and really enjoyed it. And she talks a lot about just the current technology landscape and what we, as consumers and users, can do to reframe our relationship with it. And I think that book is for people who use technology in general. But as developers, I think we are in a unique position to extend that train of thought right into the things that we develop.
JOËL: You know, a place where I do appreciate friction is in the physical world. If there weren't any friction, my chair would not stay put on the ground. My fingers would not press on the keyboard. So we need friction to be able to do our jobs. So you work from home; I work from home because thoughtbot is now fully remote. How has that been for you setting up a work environment in your home?
STEPHANIE: So I've actually been working from home since 2019. So about a year before the pandemic, I had moved to Chicago and was still working for a company in New York. And so that was when I started working from home, and then have just been doing that ever since. So I think I have now really figured out a setup that works for me. I've been doing it for four years now, which is pretty wild to me when I think about it. It's interesting because I actually really enjoyed going into an office. And there are parts of that that I really miss. But I think I have just gotten used to it and have a setup that works well for me.
JOËL: Are there any things that you like to do for your environment to help get yourself into maybe the zone a little bit more easily?
STEPHANIE: Yeah. So my workspace is a separate room from the rest of my apartment, which is also really just one big room. [laughs] It's kind of like a loft-style situation, so I don't really have doors. But I am in what we call the sunroom, and it's actually kind of like an enclosed porch with a big window and lots of plants. And it's in the back of the apartment.
And so whenever I'm in this space, it's because I'm working. And I think having that separation of home and work is really helpful. Because when I step into this space, I'm like, okay, now I'm at work, and I don't have as many distractions as I would if I were working in a different space like a bedroom or the living room.
JOËL: I have to say whenever you're on a video call, the plants around you are iconic.
STEPHANIE: Oh, thank you. Yeah, it's been a nice conversation starter. When I'm meeting a new person, they usually comment on the plants, and I can give them a little show and tell. And that's been really nice.
JOËL: I feel like a lot of people who work from home have put a lot of work into creating fun backgrounds for their video calls. Maybe they're setting up a cool bookcase behind them or plants. People like to put something behind them that will make things interesting on a video call in a way that maybe we didn't need to when it was just a conference room and in an office.
STEPHANIE: Yeah, absolutely. I was just on a meeting with someone who had a big pile of tiny rubber ducks. So he was also a developer and, I guess, had just amassed this very delightful rubber duck collection, and it was just in the background. And we got to joke about it for a little bit, and that was really fun.
JOËL: Are these rubber ducks meant to be used during debugging sessions?
STEPHANIE: Yeah, exactly.
JOËL: So I'm in a somewhat different situation from you in that I don't have a separate room to set up a home office. I've resisted doing anything in my bedroom. Like you said, it's good to have that separation. So I work more in my kind of living room-dining room space. And something that I found is really valuable for me has been movement. So say I work an hour in one part of the room, and then I switch to a different place. And it's going to be maybe a different posture.
So I'm working in a solid chair table for a while, and then maybe I switch to more of an easy chair situation. That I think has been really helpful for me ergonomically during the day is just making sure that I'm not always in the same position constantly all the time but actually incorporating change in movement throughout my day.
STEPHANIE: I like that a lot. I actually do also end up sitting at my dining room table sometimes for a change of scenery. It's funny because there was a while when...when I'm at my office desk, I have a standing desk. And so usually if I'm in a meeting, I would be at my desk and people would see me standing. And I think someone at some point mentioned like, "Wow, you seem to stand all day." And I was like, "Oh, well, when I'm not in a meeting, that's when I'm sitting on the couch or a lounge chair or something." [laughs] I'm curious, though, because you are working in your dining living space if it's been harder to separate work and home life.
JOËL: I think it was definitely an adjustment, but it's a thing that I learned to do. And I still try to keep some amount of separation, which is why I don't set up an office space in my room. But I've also gotten to the point where now that I work from home, I find myself leaving home much more frequently after the workday ends. I was surprised just how much social interaction you get just by default being in an office around people all the time. When you're at home all day, even if you're on calls, it's not the same.
And so I've found myself more and more to stay in a healthy emotional, mental space, leaving the home in the evening to go do things with friends or with other people. And so even though I am an introvert who prior to working from home preferred to stay at home more evenings than not, I've started living almost more of what people would assume is an extrovert lifestyle where I'm out every evening.
STEPHANIE: Wow, that's so interesting because I'm the opposite; where when I was commuting and going to an office, I found it much easier to stay out. I would just go to a bar or a restaurant after work. Whereas now it's a bit harder because I'm not already out and about in the world, and also I am in my comfy pants, and I'm just like, oh, I have to go out? I don't know if I'm up for that. [laughs]
Though I also really...I think the downside is that I have been really missing some of that human contact. And there are weeks where I'm like, dang, I really didn't talk to people in the world very much. So it's actually been a bit of a bigger obstacle for me to find the energy to see people in the evenings after work.
JOËL: It helps to make plans.
STEPHANIE: Yeah, that's a good idea.
JOËL: Or you can have people come to you. You mentioned you were doing that soup club.
STEPHANIE: I did, yeah, back when the winter was first starting. I mentioned on the show that I was having people over for soup on Friday nights, and that was really great. That was nice because then I was like, okay, I have to sign off by 5:00 p.m. so I can start making the soup. [laughs]
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So you mentioned that sometimes it's hard to leave the home because you're kind of in your comfy clothes, and you don't want to kind of get dressed to go out. Has working from home kind of changed the way you tend to dress? Do you ever do the thing where it's like, oh, I've got the formal top and then just sweats?
STEPHANIE: Yeah. Like business on top and party in the bottom [laughter] or something like that is the phrase. My habits around getting ready in the morning have definitely changed; where I don't put as much energy or effort, or time into it as I did when I was working in an office. And that has been nice because I get that time back, and that is really valuable to me.
Yeah, I'm also usually just in soft pants. [laughs] That has definitely been a very positive impact on my life. And I do try to make an effort to go out for coffee. And when I do that, I'm just like, yeah, how I go out is how I go out. I don't really mind. I'm very comfortable going out however I'm feeling that day. But I think getting the time back actually has been really important to me.
JOËL: Hmm, I think for me, interestingly, that's become an interesting way to build a little bit of separation from personal life and work life. So I'm making a point to put on...I don't know how you describe it. I was going to say real pants, but it's not like sweats are not real pants. But yeah, I will put on the kind of thing that I would put on to go in the office. And for me, that's kind of a...it's a start to the day. It's a start to being more serious and transitioning to more of a work mindset.
STEPHANIE: Yeah, absolutely.
JOËL: As opposed to on the weekend, if I'm just hanging around in the same space, but I'm dressed differently, I don't feel like I'm in work mode.
STEPHANIE: Yeah, yeah, that's fair. I've definitely noticed your fun sweaters that you wear in video calls and stuff. So I really appreciate that; yeah, you are just putting on clothes that make you feel like you're ready to dive into the work week. I'm really curious, do you find yourself being more productive working from home than you were working in an office?
JOËL: I would say it's about even on average. There are probably days where more or less on one side or the other, but I would say it's similar.
STEPHANIE: I think I'm actually much more focused at home. And I know that this is not true for everyone because I was chatting with a friend, and she was asking, like, "How do you stay focused at home?" She was telling me that she gets so distracted by all the things that she could be doing in her home life. And for me, because I really enjoyed the social aspect of being in an office, I found myself wandering into the kitchen not infrequently to go get some snacks, and oh, running into this person and having a little chat.
And I think my presence also, I was available for other people to come to me and start a conversation or ask to go on a walk. And I think I actually really needed that external push to take breaks. Because now that I'm working from home by myself, I definitely just get into some rabbit holes, and it's tough for me to resurface.
JOËL: Let me fix one more error, and then maybe the test will be green. Oh, that didn't fix it, but I'll bet one more would fix it. And keep doing that until it's like, oh, well, I'm going to push off my break for another 30 minutes, oh, another hour. And it's like, you know what? I'm just going to finish my day.
STEPHANIE: That literally happens to me all the time. The lunchtime break is tough because I definitely will delay that by 15 minutes and then 30 minutes, and then oh no, it's like 2:00 p.m. Okay, let me just eat a snack, then. And then keep going until I finish whatever task, and then end up wishing that I had made a little more of an effort to take a real break.
JOËL: Yeah, I was having a conversation recently with someone about how it's often easier to make space for other people than for yourself. So if somebody is like, "Hey, I want to take a break. Do you want to go take a walk?" You might be like, "Sure." Maybe I wasn't quite in a place where I wanted to take a break, but I'll make the time for you.
Whereas when it's like, you know what? My body or my mind is telling me I need to take a break, but this test isn't green yet. So I'm going to almost deny myself here for the, I don't know, the good of the mission, whatever. It's not really a noble sacrifice. It ends up hurting you and the project in the longer run, but it's so much easier to do that.
STEPHANIE: Wow. Okay, yeah, that really resonated with me because I find myself in situations where I don't think that I can take a break because I'm like, oh, I have all these red tests, and I need to get them in a place where I feel comfortable stepping away. But if someone asked me like, "Hey, I'm at your door. Let's go for a walk," I could just put it away and go for a walk and have a great time. And I would like to be able to do that for myself when I don't have someone prompting me.
JOËL: There's something I really appreciated that someone who used to be at thoughtbot would do is this person would go for a walk every afternoon without fail and would drop a line in the Slack channel being like, "Hey, I'm stepping away for a walk." And, I mean, yeah, it's nice to know that, okay, this person's not reachable for the next 15 minutes or whatever. But that's not really, I think, the value that I got from it. It was more of seeing somebody else taking a break and it being a reminder for me too to be like, you know what? Maybe I should take a walk as well, like, it might be time for a break.
STEPHANIE: Yeah, I like that a lot. I think it's kind of ironic that I have, quote, unquote, "optimized" my setup so much that I don't get distracted that I now miss out on the friction [laughs] (a little callback to earlier) that would, yeah, call more mindfulness to how I'm physically feeling, not even just physically but also emotionally and intellectually, and being prompted, like I said, externally, because I am realizing now that I really need that.
JOËL: And at least for us here in North America, it's now starting to be spring. And so I think sometimes winter can be its own barrier to be like, you know what? I should go and take a walk. I don't know if I want to put on all the layers and my boots and all of that and deal with the snow. Whereas now it's like, just walk out and there will be flowers and trees covered in blooms. And it's going to be amazing.
STEPHANIE: Yeah, I'm really looking forward to that. I agree; I think when the weather is nice, that is definitely a bigger motivator for me because there's more to enjoy and more to look at. And I love being outside. When you do step away to take a break, what do you do in your home or outside your home?
JOËL: So I'm a big fan of taking a walk. I live in a dense, walkable neighborhood, Downtown Boston. And so just walking around a few blocks is a great way to get a little bit of fresh air, just get some motion going because I've been sitting around for a long time. It's a lot of natural beauty as well. A lot of people have gardens, and there are a lot of trees planted along the roads. So it's just a really pleasant way to, in some ways, connect with a little bit of nature and be outside and reset. Do you find yourself when you're looking for a break gravitating outwards or inwards in your space?
STEPHANIE: I like to make myself a snack, a cup of tea. Sometimes if I'm reading a good book, I'll get into the book for 20 minutes. And sometimes, if there's nothing to pick up, maybe I'll find myself on YouTube and watch a short little thing just to reset and have some fun. Sometimes I'll try to tackle some dishes. I think the other thing with working from home is that I now create more mess in my home. [laughs] I don't know if it's the same with you. But I, yeah, try to keep on top of that so that I don't have to do it later in the evening.
JOËL: I think one of the things that's really nice about working from home is the ability to cook more because you're in that space. So I've found myself oftentimes more on my lunch break, maybe prepping some things for a stew or something that's going to braise, and then just having it sit on the stove all afternoon. And like I said, maybe a really quick break is just you get up, go check the pot on the stove, and you turn the heat down or stir it a little bit and then get back to work.
STEPHANIE: Yeah, I like that a lot. I do that, too, with a pot of rice or beans or something like that. I also am definitely making my own food for lunch a lot more just because, being at home, you have your whole kitchen and fridge available to you, and I feel less pressure to get all that done the night before.
JOËL: Right. I think I've been trying to incorporate a little bit more physicality to my breaks recently. And one thing that I've done for shorter breaks...if it is a longer break, it is nice to go out and take a walk. But for shorter breaks, I set up a pull-up bar, and I just try to go and do a set of pull-ups there. And I'm not great at it, so it's not like I'm there for 10 minutes doing 100 pull-ups. But it's a nice way to go from a very static mental mode to a quick break that just totally resets you into this active physical space.
STEPHANIE: Yeah, I like that a lot because something like that requires your full attention and physical effort in that moment. So it's not like you can still really be thinking about work while you're in the middle of doing a pull-up, at least [laughs] that's my interpretation of [laughs] how you take those breaks.
JOËL: I'm curious, are there any other kinds of lifestyle elements that you've changed or customized to help you have a better working-from-home experience?
STEPHANIE: There was a past Bike Shed episode hosted by Steph Viccari and Chris Toomey, and I can't remember exactly what it was that they were talking about. It must have been working from home-related because Chris had mentioned a ritual that he had when he was finishing his workday where he would close his laptop and say, "Schedule shutdown complete." And I've been thinking about that a lot and trying to do a similar thing of just verbalizing, "I'm done with work now," to make it true. [chuckles]
Otherwise, if I don't, I can find myself gravitating towards my laptop when I have a thought. Like, I have an idea like, oh, I just thought of a way to try to debug that test or whatever. And then I'll want to go back just really quickly to write it down on my work computer so it's there for me when I come back. But if I've said, "I am done with work today," that means I'm trying not to reopen the work laptop, and then I'll try to jot it down somewhere else. And that has been really helpful.
JOËL: So, setting like an emotional boundary.
STEPHANIE: Yeah, an emotional boundary that almost becomes physical in a way because when I was working in an office, I would never take my work stuff home with me, so I physically could not access it. And since I can't do that now, by verbalizing it, it's almost as if I've created a boundary in my head.
JOËL: That's really powerful, the impact that you can have just by sort of verbalizing something.
STEPHANIE: I will say that I also don't keep any work stuff on my personal devices and that was true even when I worked in an office, but I think it has actually been more helpful and important working remotely. It sounds like you've experimented with a lot of different ways to make remote working work for you. And I'm curious if there's anything else that you really want to change or anything that you would like to try or do differently.
JOËL: I think an element that I've been experimenting with recently is actually working outside of the home, so something like going to the library or going to a coffee shop. Interestingly, I've tended to use those mostly for when I want to work on personal projects that are not work. So strangely enough, now I work in my home, and when I do things for myself that I previously would have maybe done in my home, now it's always at a coffee shop, at the library, something like that. So I still keep that separation, but it's inverted.
STEPHANIE: Wow, that's really interesting. I also like to be in a more public space as well with my work. And just being surrounded by other people and busyness is very comforting for me. And it actually also helps with the rabbit hole because I think I am more present in my environment when I do have cues of people getting up around me or whatever. Though ironically, my wanting to be around other people does not really work well with meetings and collaborating and pairing with other people. [chuckles] And so when I have to do those things, even though I'm also socializing just in a different way, I usually have to be in a more quiet, private space.
JOËL: Have you ever tried to maybe group your meetings on a particular day so that you have, let's say, an afternoon of uninterrupted time that you know you can just go to a coffee shop and be heads down and not have to take a call there?
STEPHANIE: I haven't tried that. But I think that would be helpful because then it's kind of like the best of both worlds, right? Where I can say, "Hey, I can meet once I'm moved back into my private space," and also have that physical environment of being around other people. And I think I had previously thought just those things were mutually exclusive, but there are certainly ways that I'd love to try injecting that into my home-work setup.
I'm really glad that we ended up talking about this because I think this will just be our future for a while. And it's always worth revisiting it and thinking about it and thinking if it's working for us or not. I'm really excited to try some of the new things that you mentioned. Like, we've been doing this for several years now, but there's always room for improvement and room to inject more fun and joy, and creativity in how we choose to do our work.
JOËL: On that note. Shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël submitted a last-minute proposal to RailsConf on discrete math, which got picked up! 🎉 He'll be speaking at RailsConf 2023 in Atlanta at the end of April about why it's relevant to developers and all the different practical ways he uses it daily.
Stephanie recommends headlamps for in-bed reading sessions and sets up the feature flags topic for today based on a project that must be released to the public in one go.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So a few episodes ago, we had a guest, Sara Jackson, on and she was talking about some cool elements of discrete math and how those can be really practically useful to us as developers in our day-to-day work. I was really inspired by that conversation. And the day we recorded that, I think, was the last day that the RailsConf CFP was open.
So I went and submitted a last-minute submission to RailsConf for this idea, and it got picked up. So I'm going to be speaking at RailsConf 2023 in Atlanta at the end of April about discrete math and why it's relevant to us as developers, and all the different practical ways that I use it on a daily basis.
STEPHANIE: That's awesome. Congrats, Joël. I'm so excited for this talk.
JOËL: Thanks.
STEPHANIE: Was this an 11:59 p.m. submission?
JOËL: Aaah, very close to that, yes. I don't recommend people to do this. But if inspiration hits you on the last day of the CFP, do it; go for it. I'd actually submitted two talk proposals. And I think maybe a little bit of the excitement in the energy for this last-minute one came through because that's the one that the committee picked.
STEPHANIE: How did you know that you would want to turn this topic into a talk?
JOËL: I think because I was excited to share about this to other people, and we'd kind of already done that on the show. But this is the sort of thing that's like, you know what? I'm kind of feeling a little bit of fire of this idea. I think more people should know about this. I think there's value in sharing this idea more broadly. What are some areas that I could do? I could maybe write a blog post. Oh, RailsConf, that's open until tonight. Let me put together a proposal because I'm excited about sharing this idea.
STEPHANIE: Yeah, I think we chatted a little bit about discrete math as potentially a fundamental pillar of information or a skill set for developers back in our episode on just the fundamentals of what developers need to know. And I was just thinking as you were talking about it that that was an audio medium, but it's obviously kind of an academic topic. And so to have slides for it and almost make it kind of like a little mini-lecture but more fun [laughs] than sitting in a classroom, I think, could be a really cool application of this topic.
JOËL: Yeah, I'm excited to take a very practical look at this. I've got 30 minutes to talk about it. I'm not going to give you the deep mathematical fundamentals. But if there are little bits of discrete math that I can show that are actually practically useful in a day-to-day situation, I think that would be incredibly valuable. So currently working on the talk, but the way I have it structured right now is very scenario-based. So we're going to look at some problems, look at just a tiny little bit of discrete math theory, and then see how that allows us to solve the underlying problem.
STEPHANIE: Nice. I won't be attending RailsConf this year, but I really look forward to watching the video when it comes out.
JOËL: Thank you. So, Stephanie, what's new in your world?
STEPHANIE: So last night I was reading in bed, and I usually stay up a little bit later than my partner. And so he was going to bed, and at that point, I usually turn off the bedside table lamp and either move to the couch to keep reading or just go to bed as well. But I was really engrossed in my book, and I wanted to keep going. But I was so cozy in bed. I really didn't want to move to the couch because then you risk falling asleep on the couch, and nobody likes that.
And so I was like, oh man, you know what I would really love? One of those book lights. And I got this really vivid memory of the clip-on ones I used to have as a kid that would clip onto the pages of my book and shine a light onto the page. And I was like, I feel like that technology must have gotten better [laughs] since I was a child. And I was like, okay, now I need to look up some doodad and buy a thing to help me solve this problem of reading in bed after hours.
And I was scrolling through my phone for a little bit, but then I was like, well, I want to read right now, not after [chuckles] I've purchased this gadget. And it hit me that we have headlamps that we use for our camping trips and also just if we ever are stuck without electricity. And I grabbed one of those from the junk drawer, and I wore it to bed and was reading my book. And it was a pretty good experience. So I'm actually pretty happy that I didn't need to buy a new thing to solve this problem of mine. And yeah, it turns out that you can just use something that you have at home.
JOËL: That's really cool. I feel like when I've used the headlamps before; normally, they're incredibly bright. Did you find that to be a problem for you?
STEPHANIE: I was able to adjust the brightness on mine, so it was definitely on the lower end of the setting and, overall, just better than having the nightstand lamp on, which I think just totally brightens the room. And at least this was a more targeted area of light, and it's working so far. He didn't wake up or anything, so I would call that a success. And now I have it in my bedside drawer. And I think I look really silly, but that's okay. No one can see me anyway. [laughs]
JOËL: That's kind of the whole point of it, right?
STEPHANIE: Exactly. [laughs] So, aside from my bedtime routine, another thing that's new in my world is something project-related. So I wanted to bring this up to you and get your thoughts on the situation. So I want to talk about feature flags because I'm currently working on a pretty big project that has to be released to the public in one go. And so naturally, we reached for using feature flags to be able to release our work to production but not to make it accessible to users so that we could be working on this thing incrementally and not have a huge release where all of this code goes out.
But as we've been building the feature, I am realizing that we are having to plug conditional code to check for this feature flag in a lot of different places. So, so far, we've been putting it in controllers. We've been putting it in a menu builder to show the user where they can navigate to. And then we've even been putting it inside of methods to change behavior based on whether the flag is on. And so that was kind of, I think, getting my spider senses a little bit tingling.
And then recently, one of the bigger issues that our team had a discussion about was whether to include this conditional check for the flag in queries. So we are building on top of an existing model. But once the feature flag is on and customers are using it, the application will be able to create new records for that model. But they're of a different category with a different value for one of the attributes. And we end up querying for this model in a lot of one-off places.
And so once this thing is on, all of the records that are specific to this new feature will be included whenever we query for this model. And as we've been developing, it's been less of an issue because customers can't access the flow to create these new records. But someone brought up what if we release this, and it turns out that we have an issue and we want to turn it off; we want to roll it back? And at that point, those records will still have been created in the database and will then be included in those queries and what do we do then?
And so we started getting into the weeds a little bit of, like, do we want to have some conditional query situation going on? We haven't quite landed on an answer, but things are getting a little hairy. And I am curious if any of this sounds familiar to you. Any thoughts?
JOËL: That's a really interesting problem to have. You mentioned having all of these conditionals kind of sprinkled everywhere for the feature flag kind of triggered your spidey senses a little bit. Do you have a sense a little bit of maybe why that feels wrong or maybe why you're feeling uncomfortable about this code?
STEPHANIE: Yeah, I was thinking about it, and, in my opinion, the ideal world for feature flags would be you have the checks at the boundaries of your code or where a customer could interact with the application. And then you are able to, I guess, branch a little earlier so that they don't go down the flow at all where the changes are being made. And this seems to be a little bit of the opposite, where we end up having to check in a million different places because we aren't keeping that separation as explicit as I think it should be.
JOËL: Hmm, so almost like it's not the feature flag that's the problem. The feature flag is just a symptom of tight coupling in the wider system.
STEPHANIE: Yeah, that's definitely a smell that has emerged. But I also don't think that we have the luxury necessarily to decouple all of the places in the code as we are trying to add this new feature. Have you been in a situation like this before?
JOËL: I think tight coupling, in general, is a thing I've seen in a lot of projects. I can't immediately think of a moment where it was highlighted by introducing a feature flag. Sometimes I think what can be tricky is if you have a feature that has a lot of cross-cutting concerns, then it's easy for that to kind of bleed into other things. There are a variety of techniques to try to, like you said, isolate the new code such that you don't need to conditionally branch on it everywhere.
I wonder if there might be maybe a few important inflection points that you could introduce some sort of wrapper or push a conditional higher up the decision tree or something like that that might get 50%-80% of the way there and at least eliminate a lot of the pain.
STEPHANIE: Yeah. I like the use of inflection points because I think that, right now, our strategy has been, everywhere we're making changes, just putting the flag in as a guard, when there's probably some higher-level thinking to do about, okay, we're now changing a bunch of internals of this class, but maybe the change that we're making is its own concept. It could be a separate class, and that class is what we choose to use as an inflection point.
JOËL: I have this vague mental model in my mind of building a mechanical system and wanting to know where do I want it to be rigid and where do I want some sort of joints so that it can flex? And I don't want the entire system to be made out of joints because now it's a pile of spaghetti. It's important to be rigid in a lot of places. But in some places, I do need it to flex.
And it's identifying what are those right places where it needs to bend to flex to different scenarios? And to me, that's a metaphor for when do we need abstractions, the ability to choose between different paths, the ability to maybe have some polymorphism? And when do we want to force everything to say, "No, all the code is going to behave in this one way?"
STEPHANIE: Yeah, that's a great point. I am also kind of stuck on what you said earlier, where the feature is very cross-cutting, and I think that is true for this project at hand. And so that has been very challenging too. There's also a lack of trust that it will just work with the other parts that it's touching because we're actually extending something that was already a bit of a unique flow or a kind of a one-off situation. And it was built very particularly to support that one thing.
And now here we are introducing something else for it to support and having to go in and change all of those individual places where we were making those one-off exceptions to now also be able to handle this new thing we're introducing.
JOËL: I think, in many cases, I appreciate when people write code in that way and that they didn't try to abstract everything on that first difference. But now that I'm the one coming in doing the second one now, it's time to think, okay, clearly, we're trying to flex here. We probably need some sort of abstraction. Unfortunately, that means it does have to come into the time budget for this feature and say, look, we already have a special case here. Now we're going to add a second special case. We need to take the time to do a little bit of refactoring or a little bit of abstracting in order to make that work properly.
STEPHANIE: Yeah, absolutely. I think the logic there originally was that, oh, we already have a special case for this thing. So it should be easy [laughs] to now add this other special case that's kind of based off of the original exception. And I actually think that worked against us a little bit because I'm mentioning we're introducing this new feature flag, and I'm also seeing remnants of that old feature flag that didn't get quite cleaned up as well, so just a lot more complexity in the different ways that the flow is going.
JOËL: In some ways, you kind of see earlier failures with this exact same approach that you're trying to take. So it's a bit of a warning.
STEPHANIE: Yeah. I want to avoid leaving those traces for future developers. There's definitely some degree of regret that I feel for all the times that I've introduced a feature flag and never cleaned [laughs] it up. So seeing just remnants of this same approach, like you mentioned, and wanting to do something differently to make sure that we aren't creating as much trouble for the next time around.
JOËL: So you'd brought up cross-cutting concerns earlier. And I think some common situations for that is like, oh, we've got some code that changed in the model where we have to put conditions. But then we also have a parallel condition in the controller and a parallel condition in the view, and maybe even in the routing file, and it quickly becomes a mess. Sometimes you're able to sort of take all of those, and kind of put them in a different part of the app.
So you might have a view file that is really complex and has all these conditionals. And maybe you kind of give up trying to make it all work in one file and say, you know what? For this special case, we're going to do it all in a separate view file. And maybe there's one top-level condition that says, if this thing is true, render the second view file; otherwise, render the first one. And now you've only got one condition instead of having it kind of sprinkled all over some other piece.
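As a rough sketch of that single top-level condition, assuming a hypothetical Rails controller and a made-up FeatureFlags helper (none of these names come from the project being discussed):

```ruby
class OrdersController < ApplicationController
  def show
    @order = Order.find(params[:id])

    # One top-level condition decides which template renders; everything else
    # about the special case lives in the second view file.
    if FeatureFlags.enabled?(:new_checkout, current_user)
      render "orders/show_new_checkout"
    else
      render "orders/show"
    end
  end
end
```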
STEPHANIE: Yeah, that actually is the approach we ended up taking for this project. And in this case, we have React on the front end. So we're rendering a completely different React component based on the flag. And that did solve a lot of the front-end-specific complexity. And still, I think we are seeing a lot of the coupling be an issue on the back end, which is interesting.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So you'd mentioned queries and how...it sounds like there's maybe an enum in the database that you're adding a new state to, but you want to exclude that new state from any existing queries. I almost wonder if this is a situation where you want to create a sort of no-op commit that could go out without the feature flag, and that would basically prevent all the existing queries from using this new enum. And maybe it introduces the new enum but behind something that blocks them from being used in the existing queries.
Depending on how your queries are structured, maybe it's just in your main query: you add an extra where.not line that excludes the values in that enum. If it's not nicely scoped, maybe you need some sort of default scope on the model to say, exclude anything that has this enum value. And now you can introduce that enum, and you can have all those records. And even if it's not behind a feature flag, everything will continue to work exactly as it does today, and that's your goal.
And that is a fairly small, scoped change. It's almost like a refactor in the classic sense: make the change easy before you make the easy change. Although, in your case, it's maybe not that easy of a change. But then, after that, you can build on top of this with your work that might be behind a feature flag.
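As a rough sketch of that no-op step, assuming a hypothetical Order model whose status enum is gaining a new value (the model, column, and scope names are made up for illustration):

```ruby
class Order < ApplicationRecord
  enum status: {
    pending: 0,
    shipped: 1,
    flagged_for_review: 2 # the new value being introduced
  }

  # Existing queries exclude the new state explicitly, so adding the enum value
  # is a no-op for current behavior. A default_scope is another option, but it
  # applies everywhere and can surprise you later.
  scope :without_flagged, -> { where.not(status: :flagged_for_review) }
end

# Existing call sites switch to the scope before any feature work begins:
Order.without_flagged
```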
STEPHANIE: Yeah, I like that approach a lot because that ensures that the existing behavior will continue to work as expected. And then that also is a good way to audit, I guess, all the places that we will need to consider when we are making the change. So we already have in mind all the places that things are touching, and it's not a surprise when we find out, oh, we missed this query, or whatever.
I was also thinking about...I mentioned that we are querying one-off quite a bit with different filters. I was also thinking that maybe a query object could be a good use case here and wrapping existing business logic in a meaningful query. And then perhaps the new enum that we're introducing would have its own conceptual meaning.
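A query object along those lines might look something like this; the class name, columns, and filters are hypothetical, just to show the shape of the idea:

```ruby
class ReviewableOrdersQuery
  def initialize(relation = Order.all)
    @relation = relation
  end

  # Returns an ActiveRecord::Relation, so callers can still chain onto it.
  def call
    @relation
      .where.not(status: :flagged_for_review)
      .where(archived_at: nil)
  end
end

# Usage in a controller or service class:
orders = ReviewableOrdersQuery.new(current_account.orders).call
```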
JOËL: When you talk about one-off queries, are these like custom queries built out directly in the view or in the controller? Or where are you seeing these one-off queries?
STEPHANIE: Unfortunately, it's both: in the controller and in service classes. [chuckles]
JOËL: Okay. I was afraid you were going to say the view, which brings me back to my old PHP days because I definitely wrote queries in...I guess my entire app was basically the view at that time. I had just one PHP file, and HTML and SQL all went in there. The page submitted to itself. So there was a giant conditional that split the file, like if POST versus if GET.
STEPHANIE: Oof, that sounds pretty gnarly. And I feel like, in that case, you don't have the tools for the flexibility that you would have liked.
JOËL: It turns out the frameworks are nice. It's a good thing to have.
STEPHANIE: Good take.
JOËL: Going back to something we were talking about earlier, how to incrementally ship parts of this: the sort of no-op approach I was describing, I'm a huge fan of. I think it's similar to what's sometimes called the strangler fig pattern, which lets you incrementally change over from one system to another. But a lot of the ideas from that pattern can apply to adding new behavior, where you might start by introducing some sort of no-op system, then conditionally branching within that and keeping things separate, and then evolving toward your final piece.
And there was an article that I am a huge fan of by Adrianna Chang on the Shopify Engineering blog about this pattern. We'll link it in the show notes. But for those who are interested in digging into this pattern more, kind of wondering how it applies, I recommend reading this article.
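A bare-bones sketch of that kind of single branching point, with made-up class and flag names (this is not the Shopify article's example, just an illustration of the idea):

```ruby
# Callers talk to one entry point; the branch between old and new code lives in
# exactly one place, so the new path can grow behind it incrementally.
class InvoiceGenerator
  def initialize(order)
    @order = order
  end

  def generate
    if FeatureFlags.enabled?(:new_invoice_pipeline)
      NewInvoicePipeline.new(@order).generate
    else
      LegacyInvoiceBuilder.new(@order).build
    end
  end
end
```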
STEPHANIE: I'm really glad you mentioned that because the other thing I was thinking about that feels like such a trap with feature flags is that it kind of conflates two different things, at least in the way that I've seen it used on different teams where we are using it to hide work that we are incrementally shipping.
And then it's also used as a deployment strategy. So maybe you might turn it on for X percent of customers, or you might just turn it on to 100% for everyone. But if something goes wrong, you can quickly turn it off. And for some reason, I'm thinking that those two things...maybe I'm not sure that one tool is the best thing for both of those concerns.
JOËL: Yes. Sometimes feature flags are used to actually gate a feature. I tend to bucket the turning it on and turning it off as the same thing as turning it on for 50% of people. But sometimes, it's really used more for a kind of CI/CD process where you want everything that gets merged from a PR to immediately go out to the production server. But you might not be ready to have that particular feature get turned on just yet. But a lot of those things might get turned on pretty quickly, and then the feature flag might get removed fairly quickly as well.
So they're not really features in the sense of product people are going to want to be like, okay, now we're ready to turn on this new feature for the big release day. They're very logistical. And they're really almost like another way to do version control but at the release level rather than via Git. Does that sound like a reasonable distinction to you, or is that different than what you're talking about?
STEPHANIE: That's really interesting. I think you may have found a third [laughs] way that people use them because I have definitely seen that as well. I think for me, what you said about if you turn it on to 50% of people, that is essentially turning it on to 100% in the sense that if you are trying to mitigate risk and things go wrong, things are still wrong, and you have to figure out what to do then.
And I'm glad you mentioned the strangler fig pattern because I think people on teams that I've been on at least have been using feature flags as a crutch for managing risk when they deploy and thinking that, oh, if something goes wrong, then at least we have this mechanism to stop the bleeding. Whereas if you were making sure that the code you are writing is in production but doesn't change the existing behavior because of the way you wrote it, that is a lot more resilient, I think, than just opening the gates and hoping for the best and having a safety net of turning it off.
JOËL: I see what you're saying now. And I think you've hit on a really important thing. So when you're trying to build software incrementally, there's a bit of an anti-pattern where you go off on a long-running branch, and you build, and you build, and you build. And hopefully, you're keeping it in sync with the main branch, but maybe you're not. And at some point, there's a big merge, and it's scary and potentially dangerous. And then there's a release, and hopefully, everything goes well. And we generally agree that that style of development generally does not lead to the best outcomes. So we try to work incrementally. We try to merge back into the main branch as much as possible.
But I think that many developers are kind of wired to think in that long-running branch approach. And so they kind of try to recreate it with feature flags. So it's like, oh yeah, technically, we're merging, like, we're on small branches, and we're merging back into main, but it's all gated behind this one feature flag, and then it's going to be a big bang release. And it may or may not break, and it's going to be scary. And it's going to be a lot of risk that we're trying to mitigate. But the more we're piling on behind this feature flag, the higher the risk becomes. So it ends up being kind of a long-running branch.
STEPHANIE: Yeah, that's a really great connection. I think that is where some of the fear is coming from too. And we are thinking that, oh, we're relatively safe because we have this mechanism that at least we can react with very quickly rather than having to revert a change or something like that. But the way that we're using it on this project...and in general, I've seen other people do it this way, and that could be why this technique has proliferated a little bit. It is a bit concerning, I think, because it still is just, okay, we're going to release it and then hope for the best.
JOËL: I think there are valid use cases for that kind of strategy, but I would like to see those come from a product perspective rather than a development perspective. So maybe as a business strategy, we decide that this series of features, we want them all to kind of show up at the same time. They're part of a larger package of changes that we want to release all at once. Now you're getting into product strategy. Oftentimes, it's better to release small things incrementally. But there are some times where you want to release a big thing that has multiple kind of sub-features all at once, and I think that's totally valid.
Where I'm less comfortable with it is when it's coming from the developer side because it's more about, oh, we just want to protect all of these things and kind of release a massive change all at once. And that, I think, ideally is better managed by truly working incrementally with something like a strangler fig pattern. I think that manages to mitigate the risks while also avoiding some of the messiness that comes with the feature flag here.
STEPHANIE: Yeah, I think the other way of working that I've noticed with feature flags gating larger features is having a QA team member manually going through all of the features behind the feature flag to make sure things are working as expected. But that is still different from real users using it. I think that is another form of reassurance that folks think we're getting with this strategy. But I think it's still limited in covering a production use case which is ultimately giving you the truest [laughs] reassurance that your application is working as expected for the people that are using it.
JOËL: I think I'm fine with having a QA person turn a feature flag on on a staging server, try something out as part of the workflow as long as these features are generally kept small. And then we ship it, and we turn the feature flag on, and then maybe we remove it as part of the process later on. Where I get less comfortable is where you've got a team of people working for weeks behind a feature flag. And, again, I do understand that for product reasons, there are times where that's a valid thing, where there's a big set of changes that you don't want your customers to see.
But if it's for developer reasons because we're concerned about coupling or we're concerned that something could break, then I get really uncomfortable with saying, okay, we're going to have a team of people working for several weeks behind a feature flag, and then we're going to need multiple QA people to test a ton of work that's all behind a feature flag. Now you're just kind of creating new risk while also trying to mitigate it and it kind of turns into a sort of weird zero-sum game.
STEPHANIE: Yeah, I know what you mean. I think in those cases, too, you're only testing what you know. So you're only testing the edge cases and the different flows that you know about. And inevitably, customers are probably using the app [chuckles] in ways that are completely unexpected, and no amount of testing all the different cases that you think are comprehensive will account for the way that it's used by customers.
JOËL: To be fair, I think a good QA person will catch a lot of weird edge cases. They know a lot of the ways that customers will try weird things. They often are really good at not being boxed into product or development's ideas of how the product should be used rather than how it can be used. That being said, real customers are a surprising and creative bunch, and they will absolutely do the unexpected.
STEPHANIE: Yeah, that's a great point. I'm glad you mentioned that.
JOËL: We've talked about a few different ways where maybe you're uncertain or uncomfortable about feature flags. Are there ways of using feature flags that you are 100% in favor of or that you're excited about, like, yes, feature flags are a great use in this scenario?
STEPHANIE: First of all, I think one of the most delightful times that I've had with a feature flag is when we use the Flipper gem. At a previous company, I remember really enjoying that one over...I've seen more hand-rolled implementations of feature flags, but I do want to give that gem a little shout-out. I think they are most effective when you do have a new page or something that has a very clear boundary, and you want to make sure that no one can go to the URL and see the new features that way, at least that's like the most clear cut use case for them.
And then, ideally, you have implemented everything. You have tested behind the feature flag. And then, the day of the release, we can just turn that thing on, and all is well. I hate to be that person to say simpler is better, but sometimes I think that is true. What about you? Do you have any instances where you felt a feature flag was really effective?
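For reference, a minimal sketch of the Flipper calls being described here, assuming Flipper is already configured in the app; the feature name is hypothetical:

```ruby
require "flipper"

# `user` is assumed to respond to #flipper_id (Flipper's actor interface).
user = User.find(42)

Flipper.enabled?(:new_reports_page, user) # => false until released

# Release day: turn it on for everyone...
Flipper.enable(:new_reports_page)

# ...or roll it out gradually to a percentage of actors:
Flipper.enable_percentage_of_actors(:new_reports_page, 25)

# Quick off switch if something goes wrong:
Flipper.disable(:new_reports_page)
```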
JOËL: I appreciate feature flags in more traditional CD, continuous deployment-style systems, particularly once the team grows a little bit and you've got a lot of PRs going out. And you want to be able to maybe undo things without necessarily having to revert and redeploy code, or if you've got things where you don't want to have Git be the thing that's gating when customers get to see features.
So maybe a person implements it on a particular day, but you don't want customers to see it until next week because that's when marketing has said you're going to turn it on. Do you really want that to sit in a PR for a week and then have hitting the merge button be what ships it out to people? Like, that gets kind of clunky. I think you can get away with it when you're a small, fast-moving team. It doesn't work once you start scaling a little bit. And so I find that at that point, starting to introduce feature flags has been really helpful on teams I've worked on.
And I've been part of that transition where we're feeling a lot of pain, and then we introduce feature flags and can move to more of a continuous deployment where you get your code reviewed; you merge it; it's good. And then it goes out, and at some point, we turn the feature on, and then maybe at some point, we come back and clean up the feature flag and say this is now permanently in. I've seen good success with that style of approach.
STEPHANIE: Yeah, you bring up a good point about incorporating it into the CI/CD process. And that also makes me think, ideally, we would have some information as developers about how long we want this thing to be in that feature flag state because if you know, then you can claim ownership over it and make sure that it gets cleaned up. I think right now, what I've seen is we are using it in the development process, but then we're not necessarily communicating with product about how long it's supposed to stay this way and what are we supposed to do with it afterwards. A lot of that gets lost.
JOËL: It is interesting that I think some feature flags are mainly product-focused, and then others are entirely developer-focused. In fact, maybe the product team doesn't even need to know that this is internally gated by a feature flag because it's more about internally how you manage the release process and continuous deployment.
The fact that maybe physically pushing the code onto the server and having a feature be available to users is not exactly the same action might be a distinction that doesn't really matter to your product team. And if that's not a thing that they care about toggling for users, then that can be something that just stays internal to the dev team. And you say, okay, we're using that to manage risk or to allow us to merge code earlier, whatever the thing is, but then we clean them up ourselves, and nobody needs to know.
Whereas some things are very much product-focused, and we say, okay, the product team wants this to be a thing they can turn on and off, or they want to run an experiment, and then they very much need to be in the know. And I think you were alluding to some feature flags that might be kind of halfway in between that. They're not clear-cut product ones, but they're not entirely just mechanical deployment flags. They kind of sit in that weird zone between the two.
STEPHANIE: Yeah, absolutely. I think maybe there is a case to be made about this distinction, and people can then better use this tool and apply it to the actual problem that they're having.
JOËL: I think it's helped me get a better feel for them by having a bit of that distinction in my mind, thinking of flags as these are either sort of mechanical developer flags, or these are product flags. And maybe there's a richer nomenclature that can be developed. But I think at least having that distinction has helped me think about these in a more structured fashion.
STEPHANIE: Yeah, and even communicating that in the name of the flag, too, I think could be really valuable for other developers who come across it in the codebase. As for cleaning them up, I was just thinking about how it can be such a pain to then unwind the logic of the feature flag [chuckles] when you're removing them to figure out, okay, now that this thing is done, how do we actually want it to work? I think sometimes that ends up being a bit of a pain point and is what leaves them hanging around.
JOËL: And then it's not obvious which of the two branches in a conditional should be kept.
STEPHANIE: Right. And maybe that also depends on that distinction where you're talking about between a product release flag and a more internal development tool.
JOËL: Yeah, I could definitely see a more product-focused one where it's possible that you might want to keep both and say, no, this is a thing that we want to be able to toggle on. In general, for the foreseeable future, don't remove this flag. I want to control it from a dashboard. Maybe we want this to become maybe not gated by a feature flag in the traditional sense, but we want to tie it to a subscription level on a user's profile. So keep all the conditionals in place, but now just branch off the user's subscription status rather than a global flag.
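A small sketch of that idea, with hypothetical class names: the conditional stays in place, but it branches on the user's subscription status rather than a global flag:

```ruby
class ReportExport
  def initialize(user)
    @user = user
  end

  def call
    # Same two code paths as before; the branch is now keyed off the user's
    # subscription level instead of a feature flag.
    if @user.subscription.premium?
      PremiumExporter.new(@user).export
    else
      BasicExporter.new(@user).export
    end
  end
end
```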
STEPHANIE: Yeah, it turns out that feature flags can be really complicated. It's not just kind of a binary is it on or off [laughs] situation. It almost makes me wonder if some metadata would be helpful with flags to signal more information about them in the codebase. Like, are they a part of your application domain or not? It kind of gets a little fuzzy.
JOËL: Definitely. Something could be experimental; something could be like we said, mechanical; something could be deeply integrated into the vision of the product where you know that you're going to want to branch in the future here for different types of customers. I've even been on projects that were multi-tenant, where we had just standard feature flagging setup, but different tenants would have different sets of flags turned on. So they were global configuration for their instance of the app, but each instance had their own unique config of flags turned on and off.
STEPHANIE: That's really interesting because, in some ways, flags are a bit of a configuration. But then, when you are combining a lot of different flags to configure something, then maybe that is better done with a different approach.
JOËL: I like doing that kind of thing with flags rather than configuration because then it can be done via an admin dashboard rather than having to make changes to a YAML or JSON file and having to commit it. And in this particular project's case, it was really nice because it meant that we could sort of, again, find those flex points and say, okay, we know that some clients want things to behave in two or three different ways in this one particular place. We'll gate that behind a flag, and then you can enable the variant that you want for each client. And now the customer service reps can manage that, and not the dev team.
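A sketch of what that per-client setup can look like with Flipper actors; the Tenant model and flag name are assumptions, not details from the project being described:

```ruby
class Tenant < ApplicationRecord
  # Flipper identifies actors by #flipper_id.
  def flipper_id
    "Tenant;#{id}"
  end
end

tenant = Tenant.find_by!(slug: "acme")

# Enable one behavior variant for just this tenant, e.g., from an admin dashboard:
Flipper.enable_actor(:alternate_checkout_flow, tenant)

# At runtime, branch per tenant instead of on a global setting:
Flipper.enabled?(:alternate_checkout_flow, tenant)
```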
STEPHANIE: Got it. Yeah, that is pretty cool.
JOËL: So, in a way, feature flags can be a great way to empower other departments within the business instead of kind of centralizing control within the dev team.
STEPHANIE: Yeah. I suppose you just provided another use case for them. [laughs]
JOËL: I feel like there are probably some really interesting elements to dig into around the interaction between product design and the actual development of things and teams that are doing either business development or customer support. Or maybe just some kind of rep for a big client is sort of where a lot of the power to actually execute things is. That is not this episode today, but I think that could be a great follow-up episode to do sometime.
STEPHANIE: Yeah, that's very interesting. I would love to talk about that in the future.
JOËL: The short version is I'm a fan of empowering other people.
STEPHANIE: There you have it, folks. [laughs] You heard it here first. Joël is a fan of empowering other people. On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Today's episode is "Old News"! Stephanie shares her ergonomic desk setup. Joël talks about the pyramids.
Another old thing is the Bike Shed episode two weeks ago about success and fulfillment. Stephanie and Joël realized off-mic that one area they didn't really talk about so much is impact, and that is something that is very fulfilling for both of them. Today, they talk about impact and leadership as individual contributors because leadership is typically associated with management. But they believe that as ICs, at any level, you can be displaying attributes of leadership and show up in that way on teams.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's old in your world?
STEPHANIE: I'm glad you asked that question because I don't think we get a chance to talk about things that are exactly the same as they've always been. And so today, I'd like to share my ergonomic desk setup, [laughs] which has been old for about a year or so. And back then, I was having some issues with some back pain and some wrist pain, and I made a few upgrades and since then have not had any issues.
And I feel like it's one of those things that I just forgot about because when it stops being a problem, you don't really notice it. And today, I am able to reflect on my old problem of bodily pain while working. And I'm happy to say that things have been much better for a while now.
JOËL: Oh, that's amazing. What's one thing you think had the most impact in your setup?
STEPHANIE: Oh, I picked up one of those vertical mice for my wrist. I was having some wrist pain, like I mentioned. And I actually solicited some input from other thoughtboters for the best mouse to replace the Apple Magic Mouse that I was using, which I really wanted it to work for me because I liked the way it looked, but nevertheless, that was causing me issues. So I ended up with the Logitech MX vertical, and that has really solved my wrist pain. It is very not cute. [laughs] It kind of looks like a weird big, gray snail. But you know what? You got to do what you got to do.
JOËL: That sounds like an art project waiting to happen.
STEPHANIE: Yeah. I would love to see; I don't know, a way to make these vertical mice look a little more cute. Maybe I will stick some googly eyes or something on it and then just be like, this is my pet snail [laughs] that works with me every day.
JOËL: Do you have a name?
STEPHANIE: Not yet. Maybe I'll save it for what's new next week.
[laughter]
JOËL: Homework assignment. Years ago, I was also having some wrist pain. And I think one of the most impactful things I did was remapping some keys on my keyboard. I'm a pretty heavy Vim user, and just reaching with that pinky for the Escape key all the time was putting a lot of strain on my wrist, as was reaching down with the pinky for the Control key. So I remapped Caps Lock to Control and remapped Escape to hitting J twice.
So now I can do those two very common things, Control for some kind of common chord and then Escape because you're always dropping in and out of modes, all from the Home row. And now, both my hands feel great, and I can be happy writing Vim.
STEPHANIE: That's really nice. I think when I had asked in Slack about mouse recommendations, someone had trolled me a little bit and said that if I just use my keyboard for everything, then I won't need to use [laughs] a mouse at all. [laughs] So there's also that option too for listeners out there.
JOËL: It's true. You go to tmux and Vim, and on a Mac, maybe something like Alfred and a few OS shortcuts, and you can get 90% of the way to keyboard only.
STEPHANIE: What about you, Joël? What's old in your world?
JOËL: So you know what, something that's really old? Pyramids.
STEPHANIE: Wow. [laughter] I should have known that this is where we were headed.
JOËL: Long-term listeners of the show will know I'm a huge history nerd. And we think of the pyramids as being old, but they are ridiculously old. A fun fact that I have not learned recently because this is something that is old in my world, but that I learned a while back is that if we look back to Cleopatra, the last Pharaoh, she is closer to us in time than she was to the building of the Great Pyramid.
STEPHANIE: No. What? Wow. Okay, yeah, that definitely just messed with my brain a little bit. And now, I have to rethink my understanding of time.
JOËL: I think the way the timeline sort of works in my mind is it tends to get compressed the further back you go. So it's like, yeah, I think of modern-ish times, like, yeah, there's like a lot of stuff, and I'm thinking in terms of decades until maybe like the 1900s. And now I start to think in terms of centuries. And they're kind of more or less equivalent, you know, the Victorian Age. It fills about the same amount of space in my mind as like the '60s. And then you get to the point where it's just like millennia.
STEPHANIE: Mm-hmm. When you think of Ancient Egypt, do you think Cleopatra and also pyramids, so you kind of conflate? At least I do. I conflate the two a little bit. But yeah, I guess a lot of time passed in between that. [laughs]
JOËL: The pyramids are also really cool because they were one of The Seven Wonders of the ancient world, which is sort of, I want to say, like a tourist circuit created by the ancient Greeks, sort of like monuments that they thought were particularly impressive. But they're also the only ones that are still standing; all of the others have been lost to time.
STEPHANIE: Wow, it's the real wonder then [laughs] for being able to stand the test of time.
JOËL: It's also the oldest of the seven and has managed to survive until today, so very impressive.
STEPHANIE: I love that. Just now, when you were talking about thinking about time periods kind of compressed, I definitely fall victim to thinking that the '70s or whatever was just 30 years ago, even though we are solidly in the 2020s and, in reality, it's obviously like 50. But yeah, I think that always freaks me out a little bit.
JOËL: Yes, it's no longer the year 2000.
STEPHANIE: Turns out. [laughs] So, in case our listeners didn't know. [laughs]
JOËL: I think when we were close-ish to the turn of the millennium, it just made mental math so easy because you're at that nice zero point. And then you get to the early 2010s, and it's close enough within a rounding error. And now we just can't pretend about that anymore.
STEPHANIE: No, we really can't.
JOËL: We need a new anchor point to do that mental math.
STEPHANIE: I love that we're talking about what's old in our world because I love a chance to just repeat something that I've said before that I still think is really cool, but I feel like that doesn't get invited as frequently. It's just like, oh, how are you doing? What's new? So yeah, highly recommend asking people what's old in their world?
JOËL: Yeah. And beyond that, not just like, what are some new things you're trying? But kind of like what you were talking about earlier, what's something that's stayed stable in your life, something that you've been doing for a while that works for you?
STEPHANIE: Yeah, I love it. So another thing that's old is our episode from a couple of weeks ago about success and fulfillment. And you and I realized off-mic that one area we didn't really talk about so much is impact, and that being something that is very fulfilling for both of us. And that kind of got me thinking about impact and leadership. And I especially am interested in this topic as individual contributors because I think that leadership is typically associated with management. But I really believe that as ICs, at any level, really, you can be displaying attributes of leadership and showing up in that way on teams.
JOËL: Definitely. I think you can have an impact at every level of the career ladder, not just an impact on a project but an impact on other people. I remember the first internship I did. I was maybe two weeks in, and I had a brand new intern join. It's day two, and I'm already pairing with him and being like, "Hey, I barely know anything about Rails. But if you want help with understanding instance variables, that's the one thing I know, and I can help you."
STEPHANIE: Yeah, that's awesome. I mean, everyone knows something that another person doesn't. And just having that mindset of injecting leadership into things that you do at work, no matter how big or how small, I think is really important.
JOËL: I think there's maybe a lie that we tell ourselves, which is that we need to wait to be an expert before we can help other people.
STEPHANIE: Yeah, I've certainly fallen into that trap a little bit where I think it's held me back from sharing something because I assumed that the other person would know already or the thing I'm thinking is something I learned but not necessarily something that someone else would find interesting or new.
JOËL: Right. Or even somebody's looking for help, and you feel like maybe you're not qualified to help on that problem, even though you probably are.
STEPHANIE: One thing that I was really curious about is, can you remember a time when an IC on your team demonstrated leadership, and you were really impressed by it? Like, you thought, like, wow, that was really great leadership on their part, and I'm really glad that they did that.
JOËL: Yeah. So I think one way that I really appreciate seeing leadership demonstrated is in client communication. Typically, the teams we have at thoughtbot are structured on a particular project where there's like a team lead who is in charge of the project. It's usually a couple of consultants working together as peers. Depending on the situation, one or the other might take leadership where it's necessary.
But I've really appreciated situations where a colleague will just really knock it out of the park with some communication with the client or when they are maybe helping talk through a difficult situation. Or maybe even we realize that there's a risk coming down the pipeline for the project and raising it early and making sure that we de-risk that properly. Those are all things that I really appreciate seeing.
STEPHANIE: Yeah. I think the way folks engage in channels of communication can have a really big impact. One thing that comes to mind for me as really great leadership is when more experienced or senior folks ask questions in public spaces, because that cultivates a space where asking questions is okay. Even people with whatever title or however many years of experience still have questions, and seeing that signals to other folks on the team that this is okay to do.
And the same thing goes for sharing mistakes as well. Also, just signaling that, like, yeah, we mess up, and that's totally normal and okay. And the consequences aren't so scary that people feel a lot of pressure not to make mistakes or share when they happen.
JOËL: Yeah. The concept you're describing is very similar to the idea of vulnerability.
STEPHANIE: Yeah, that sounds right.
JOËL: So kind of modeling that from more senior people helps create a safer environment for the more junior people.
STEPHANIE: I think another thing that I really love that others do for me, and something that I want to get better at doing for others, is speaking up when something is a little off because, again, with power dynamics, for people who are newer or less experienced, they might be noticing things, but they don't feel encouraged to speak up about it in a public space or even with their manager. But they might confide in another IC who is maybe a little more senior.
And one thing that I really liked that happened on my client project recently is a senior engineer said in Slack, "Hey, I noticed some sentiment from our daily sync meeting that we're cutting it close to our deadline." And he asked like, "Should we shift some priorities around? Or what is more important to make sure that we focus on in the next few weeks before the end of the quarter?" And I was just really glad he said that because I certainly had been feeling it. But I don't know if I necessarily had a pulse on whether other people were also feeling it.
And so having someone keeping an eye on those things and being receptive to hearing that from folks and then being like, okay, I want to make sure that I bring it up to the manager because it's important. I thought that was really cool.
JOËL: Yeah. Now we're almost dialing into sort of emotional awareness of what other people on the team might be feeling and also the ability to think in terms of risks and being proactive about managing those.
STEPHANIE: I like your use of the word risks because that definitely feels like something that, in general, people are scared to bring up. But ultimately, it is the signal of someone who is experienced enough to know that it's important to make transparent and then adjust accordingly.
Even beyond noticing what folks are feeling, there are also more concrete things that can be noticed as well, like if team members are complaining about CI build time being really long and that being a repeating issue in getting their work done. Or any other development or tooling thing that is causing people issues, having someone notice how frequently that happens and then being like, hey, this is a problem. And here's what I think we should do about it.
JOËL: So not only the awareness but also the initiative to try to enact change.
STEPHANIE: Yeah, absolutely.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: So you and I are actually working on the same client but on different project teams. And you've been involved with making improvements to CI to reduce the kind of problem I was just talking about, where it takes a while for us to develop. And you are working on limiting how many days a branch can fall behind master before you're allowed to hit the merge button, to make sure that feature branches have incorporated the latest changes from master.
And one thing that I really like that you did was you solicited folks' input for what that time range should be. So I think you were playing around with the idea of giving people three days to merge, or else they'd have to rebase.
JOËL: I thought I was being really comprehensive here with three days because, you know what? You solicited feedback, you got review, but maybe it's the end of the day, or maybe someone's in a different time zone. So we definitely want to cover at least a 24-hour period, and three days gives you an extra day on top of that. It should be safe. Is there any common situation where you might want a PR to be open for more than three days but you wouldn't have rebased the latest master changes?
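As a sketch of the kind of check being discussed, here's a hypothetical CI script that fails when a branch's merge base with master is older than the threshold; the three-day limit comes from the conversation, but everything else is illustrative and may differ from the real implementation:

```ruby
#!/usr/bin/env ruby
# Hypothetical CI check: fail the build if this branch's merge base with master
# is older than the allowed number of days, i.e., the branch hasn't picked up
# recent master changes.

MAX_AGE_DAYS = 3

base_sha = `git merge-base origin/master HEAD`.strip
base_time = Time.at(`git show -s --format=%ct #{base_sha}`.strip.to_i)
age_in_days = (Time.now - base_time) / (60 * 60 * 24)

if age_in_days > MAX_AGE_DAYS
  warn "Branch is #{age_in_days.round(1)} days behind master's merge base; " \
       "rebase or merge master before merging this PR."
  exit 1
end
```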
STEPHANIE: Yeah. I can see how you thought about it from a few different angles too. Like, you're thinking about time zones and folks working in other regions. And I ended up responding to you, and I was like, oh, what about the weekend? [laughs]
JOËL: Oops.
STEPHANIE: Because three days seems a little short if two of those days are eaten up by Saturday and Sunday. But what I liked was that you said, "Hey, I'm thinking about doing this. What do other people think?" Because you didn't claim to know what works best for everyone. And I think that's a really important skill, honestly: soliciting feedback from others, knowing who to ask, and making sure you're not negatively affecting someone's work by making a change or a decision.
JOËL: And in this case, it helped me realize that I had skipped over the most obvious edge case while thinking I'd covered all the really niche ones.
STEPHANIE: We got there in the end, [laughs] and I think made the most informed decision.
JOËL: I guess that's just good product design in general. Talk to your users, get early feedback, put a prototype out where necessary. You don't always want your users to dictate what you will do, but it's good to get their feedback. And similarly, I think that applies when working with dev-facing things; you want feedback from developers. If I asked everybody at the company, I would have gotten a lot of different answers. And I might not have gotten one that satisfied everybody. But having some of that feedback helps me make a more informed decision.
STEPHANIE: Yeah, and to take it to the next step, I think there's also accountability for those decisions that you have to have. So if the decision that you made ends up being like a huge pain for some unforeseen reasons, I imagine you'd be on top of that as well and would want to figure out how to adjust if the experiment doesn't work as well as you would have liked.
JOËL: Right. I think we often talk about failing early. In fact, we have a recent episode about dealing with failure. And we mostly talked about it from a technical perspective, catching errors or making code more resilient to failure. But there is also a human component to it, which is if you catch errors or design problems, and I'm using design here as in product design, not visual design, at a prototype phase or maybe a user interview phase, you've saved yourself a lot of potentially unnecessary work that you would have had if you had gone all the way to shipping it to your entire customer base.
I guess, in a sense, it's worth thinking about other developers, the engineering team as customers sometimes. And a lot of the internal facing parts of your project are effectively a product geared towards them. They are the users. And so, throwing in a little bit of product development and design skills into building internally-facing software can have a huge impact.
So beyond just thinking of developers as a sort of internal customer base, occasionally, we work on projects where you are building internal tooling for other teams; maybe it's business development, maybe it's the marketing team, maybe it's some form of customer support. And that can often have a really large level of impact. Have you ever been on a project like that?
STEPHANIE: I have. One of my first jobs was for an e-commerce company. And I built tools for the customer support team for dealing with customers and getting their orders correct and fixed and whatnot. So I did work on an admin dashboard to make their jobs easier as well as the company also had its own internal software for dealing with warehouse logistics. And so, I also built a little bit of tooling for our logistics and fulfillment team.
And I really liked that work a lot because I could just go over and talk to the folks internally and be like, "Hey, what did you mean by this?" Or like, "What do you want here, and what would make your life easier?" And I felt a much more tangible impact than I did sometimes working on customer-facing features because I would deliver, and that goes out in the world. And I don't get to see how it's being used, and the feedback loop is much longer. So I really liked working on the internal tooling.
JOËL: In my experience, those teams are often really underserved when it comes to software. And so it's possible to make a huge impact on their quality of life with relatively little work. Sometimes you can just take an afternoon and eliminate a thing that's causing them to pull out their hair.
STEPHANIE: Yeah, absolutely. And you get the satisfaction of knowing that you built something exactly as they wanted it. Whereas sometimes, with user or customer-facing features, we are guessing or experimenting a little bit. And yeah, I think having someone who then is very grateful for, I don't know, the button that you added that makes them have to click less buttons [laughs] when they do their work in an internal dashboard can feel really good.
JOËL: Having that direct access can be really nice where you get to just go over and talk to them or shadow them for a day, see how their work happens, get to hear their frustrations real-time. It's often a smaller group as well than you would have for our customers, which might be thousands of people, and so you sample a few for user testing. But for an internal team, you can get them all in a Zoom call. I don't necessarily recommend doing a giant Zoom call for this kind of thing, but it's a small enough group that you could.
STEPHANIE: I'd like to flip that around to you. Have you ever been on the receiving end of an improvement or someone else making your life a little easier, and if you could share what that was and how it made you feel?
JOËL: I think pretty early on in my career, one of my first projects for thoughtbot, we were building a small kind of greenfield app for a startup. And another member on the team took a couple of hours one afternoon to just write a few small abstractions for the test suite that; just made it so much nicer to write tests. And we're pretty scrappy. We've got a tight deadline, and we're trying to iterate very quickly. But that quality of life difference was significant to the point I still remember this ten years later. I think we were rotating this developer off, and this was kind of a farewell present, so...
STEPHANIE: That's really sweet.
JOËL: You know what? I love that idea of saying when you rotate off a project, do a little something extra for the people you're leaving behind.
STEPHANIE: Yeah, I love that too. It's your kind of like last chance to make a small impact in that world.
JOËL: Especially because on your last couple days, you're probably not expected to pick up a ticket and get it halfway done. So as you're kind of ramping down, you might have a little bit of time to do some sort of refactoring task or something that needs to get done but hasn't been prioritized that will have a positive impact on the team.
STEPHANIE: Yeah, or even writing a script to automate something that you have kind of developed the muscle memory for, like, oh, I run these three commands in succession. And if you could just wrap it up in a little script and hand it off to someone else, it is a very sweet parting gift as well.
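A tiny example of that kind of parting-gift script; the commands themselves are placeholders, not anything from the episode:

```ruby
#!/usr/bin/env ruby
# Wrap the few commands you always run in sequence so the next person doesn't
# have to memorize them. Stops at the first failure.

COMMANDS = [
  "bin/rails db:migrate",
  "bin/rails db:seed",
  "bin/rails data:backfill_reports"
].freeze

COMMANDS.each do |command|
  puts "==> #{command}"
  system(command) || abort("#{command} failed; stopping.")
end
```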
JOËL: Absolutely. So I'm curious, we opened the topic talking about impact, and you immediately connected that to leadership, and I want to explore that idea a little bit. Do you think impact has to be connected to leadership? Or are there ways to have impact, maybe outside of a leadership role?
STEPHANIE: I think they kind of go hand in hand, don't you? Because if you are wanting to make an impact, then in some ways, you are demonstrating that you care about other people. And at least for me, that is kind of my definition of leadership is enabling other folks to do better work. And you and I talk about attending and speaking at conferences pretty frequently on the podcast. And that is a very clear way that you are making an impact on the community.
But I also think that it is also a demonstration of leadership that you care enough about something that you want to share it with others and leave them with something that you've learned or something that you would like to see be done differently.
JOËL: And just to be clear here, the way you're talking about leadership is not a title; it's an action that you do. You're demonstrating leadership, even if you don't have any form of leadership title.
STEPHANIE: Yeah, absolutely. I think that because software development is a collaborative job, in some ways, in most things we do, there is some form of leadership component, even if you're not managing people or you don't have a particular title.
JOËL: Like you said, it's about the things that you're doing to enable other people or to act as a sort of force multiplier on your team rather than how many people report to you in the org chart.
STEPHANIE: Yeah, absolutely.
JOËL: So if everybody aspires to enable each other and to be impactful, is it possible to have a team where every person on the team is a leader?
STEPHANIE: Whoa, [laughs] asking the big questions, Joël. I mean, logically, the answer seems to be no based on our traditional understandings of leadership and being a leader or follower. But I also kind of disagree because, as developers, we have to make choices all of the time, and that can be at the level of the code that we write, the commit messages we write, what we communicate in our daily sync.
And those are all opportunities, I think, to inject those skills that we're talking about. And so, yeah, everyone on the team is making decisions about their work. And inherently, to me, at least, the way you make those decisions and the impact of those decisions imply some form of leadership. What about you? What do you think about this?
JOËL: It's tough because you can get into bikeshedding the definition.
STEPHANIE: [laughs]
JOËL: Which, hey, it's all about that, right? You know, is leadership about authority or decision-making capacity? Is it about impact? Is it about maybe even responsibility if things go wrong? Who's responsible for the consequences? It could be about position in the org tree and relative depth on that tree, to use some data structure terminology. But I liked your emphasis on the idea of impact and enabling others. So now it's a thing that you do.
And so any member at any moment can be demonstrating leadership or acting in some leadership capacity, and they're contributing to the team in that way. And in the next moment, somebody else stands up and does the same thing. And it doesn't necessarily have to be in conflict. You can actually be in a beautiful harmony.
STEPHANIE: Yeah, I really like the way you said that. I love a good beautiful harmony. [laughs] I think part of what has shaped my view on this is a keynote talk from RubyConf Mini back in November by Rose Wiegley. And her talk was called "Lead From Where You Are." And I think perhaps I've kind of internalized that a little bit to be like, oh yeah, everything we do, we can make a decision that can have a positive impact on others.
So that has helped me at least feel like I have a lot more agency in what I do as a developer, even if I don't have the concrete responsibility of being a mentor to a particular person or having a direct report. It injects meaning into my work, and that goes back to the fulfillment piece that we were talking in, knowing that, like, okay, like, here's how I can make an impact. And that's all just wrapped up together.
JOËL: So you kind of defined earlier the idea of leadership as work that has impact on others or that enables the work of others. And I think that there are some forms of that work which are kind of highly respected and will get you noticed and will be kind of called out as like, oh, you're performing leadership here. You stood up in that meeting, and you said the hard thing that needed to be said.
And there are other forms of supporting or enabling the team that almost get viewed as the opposite of leadership: they don't get recognized, and you're almost seen as less of a leader if you spend a lot of your time doing them. That can sometimes be more administrative work. How does that fit into this model where we're talking about leadership as something that has an impact on others?
STEPHANIE: Yeah, I'm glad you mentioned that because I have a lot of gripes [laughs] and thoughts, I suppose, about what work is visible and not visible and valued more or less. And I do think some more traditional signals of leadership, like talking the most in a meeting, like, that I don't necessarily think is my definition of leadership; in fact, the opposite. A true leader, in my opinion, is someone who makes space for others and makes sure that all voices are heard.
And yeah, I guess it just speaks to like what I was saying about soliciting other people for feedback as well. It's like someone to me who demonstrates leadership is not someone who thinks that they have all the right answers but actively seeks out more information to invalidate what they think is right and find the right solution for the folks on their team.
Similarly, in Rose's talk, she also mentions the idea of being a problem finder, so not just being tasked with solving a problem but looking around and being like, okay, like, what aren't we talking about and that we should be? And obviously, also contributing to making that better and not just being like, "Here's a bunch of problems, [laughs] and you have to deal with it," but that proactive work. Ideally, we are addressing those things before they become a huge problem. And I really liked that aspect of what leadership looks like as well.
JOËL: Yeah, I think something that I've noticed that I do more as I've built more experience over time is that when I started off earlier in my career, it was a lot of here's a problem that needs to be solved, go and solve it. And then over time, it's what are the problems that need to be solved? You have to sort of figure out those problems before you go and solve them. And then sometimes it's even one level above that; what questions should we be asking so that we can find the problem so that we can solve them?
And that will happen...it could be internally, so some of the things that I'm doing currently around improving the experience of a test suite is like, okay, we know sort of that it's slow in certain ways. How can we make that faster? We know that the experience is not great. But what are the actual problems that are happening here, the root causes? Or we're getting some complaints, but we don't really know what the underlying problem is. Let's go and search that out.
STEPHANIE: Yeah, that brings to mind an issue that I think I see a lot on client projects where perhaps stakeholders or an engineering manager is seeing that we are slow to merge our PRs, and they kind of start reaching for solutions like, okay, well, people should spend more time doing code reviews or whatever, thinking that that's what the issue is.
But in reality, maybe it's, I don't know, it can even be something as low-level as having to re-request reviews every single time you push a new commit because the GitHub settings are such that they require additional approvals for every new change. And that is something that they would not know about unless someone spoke up and said, "Actually, this is what's causing us friction: having to go back and do these manual tasks. Maybe we should explore a different alternative."
JOËL: Yeah, instead of just jumping in with a solution of we need to throw more dev hours at this problem, it can be useful to step back and ask, okay, well, why do we have this problem in the first place? Is it a process issue that we have? Is there some sort of social or organizational problem that we need to address? And if it's not that, then what are the questions that we're missing? What questions should we be asking here to understand this problem?
STEPHANIE: Right. And even speaking up about it too and going against someone's assumption and saying, "Here's what I've been seeing, and this is what I think about it," that takes a lot of courage. And I do think it is something that is especially important for folks who are more experienced and have more responsibility or a higher-level title, but ideally is something that anyone could do. I would love to know for you, Joël, what is the most important way that you want to make an impact as a developer?
JOËL: I think the human element is the most important. I want to have an impact on my colleagues, on the dev teams with my clients. I want to ship good work. But I think the most valuable thing to invest in is other people.
STEPHANIE: Yeah, I agree. I think, for me, it's like making a good work experience for the people that I work with. And it's also a little bit selfish because then that means I am having a good work experience, and I'm in a good culture and environment. But that is definitely an area that I spend a lot of time thinking about and wanting to start conversations about.
JOËL: It's a win-win, right? You make it better for everybody else and better for you in the process.
STEPHANIE: Exactly.
JOËL: And it's okay for it to be somewhat selfishly motivated. Like, it doesn't always have to be every day super altruistic, like, I just want to make the world a better place.
STEPHANIE: [laughs]
JOËL: Like, you know what? I want my corner of the world to be better, and in doing so, I'm going to make it better for everyone else.
STEPHANIE: What's that phrase? The tide rising all the ships. [laughs] That is extremely not correct, but I think you know what I'm trying to say.
JOËL: I think a rising tide lifts all boats.
STEPHANIE: Yeah, something like that. I love a good rising tide. [laughs] On that note, shall we wrap up?
JOËL: Let's wrap up. Or let's rise up.
STEPHANIE: [laughs]
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël is a mentor for RailsConf and got matched with a speaker. Stephanie has been having trouble stepping away from her work. It's frustrating when you're chasing down a bug because something's gone wrong and you spend a whole afternoon figuring out where it is. Joël and Stephanie discuss error handling as a possible solution.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So recently, RailsConf has closed out their CFP, and they've started sending out acceptances and rejections for proposals. And one thing that they do that I think is really nice is that they offer first-time speakers the ability to get matched with a speaker-mentor, somebody else who has given talks before that can help them prep their talk, listen to them rehearse, that kind of thing. And so they had put out a call for mentors last week. I responded to that, and I got matched with a speaker today.
STEPHANIE: Cool. Is this your first time being a speaker-mentor?
JOËL: First time for RailsConf. I've done it for another conference before.
STEPHANIE: That's really exciting. What do you like about playing that role?
JOËL: So I very much like prepping and giving talks myself. And if there's something that I'm excited about, I really value sharing it and helping others build up that skill as well. So I think it's a great opportunity. I also remember what it was like when I was a first-time speaker and just how very nervous and unsure I was.
So I think having someone who can play that role is an opportunity to have a really powerful impact in what's oftentimes, I want to say, a monumental moment. But it's kind of like a milestone marker moment in someone's career, the first time they give a talk at a conference. So you get to help them make that moment the best it can be.
STEPHANIE: I love that, yeah. You make a really great point that after you've been speaking for a while, you maybe might forget what it felt like to give your first talk and how big of a deal it is. And in general, I think one thing I really love about Ruby Central conferences is how supportive they are of first-time speakers. So even in the CFP, they mentioned that they welcome first-time speakers and want to make sure to accept talks from those folks and then provide them support through this mentor program. And yeah, it just makes me feel really happy.
JOËL: Do you remember your first talk?
STEPHANIE: I do. So my very first talk I gave virtually at RubyConf in 2021. And then last year was actually my first in-person talk. And I remember, even though it was technically my second talk, it was really my first talk in front of an audience. And I saw speakers in the Slack workspace asking questions about the AV setup, and I didn't even think to consider that in my preparation. So it was nice. Even though I didn't get set up with a mentor, to share a space with other experienced speakers and see what kinds of things they were asking about or what kinds of things they were sharing in that Slack space was helpful for me.
JOËL: So when you do a proposal, do you typically have an outline already built out, or is it mostly a concept that you're pitching, and then you maybe start with an outline? Or where do you go next after a proposal has been accepted?
STEPHANIE: That's a great question. I think first, I procrastinate for several months, [laughs], but I do try to write an outline in the proposal when I submit it so I do have a starting point. And I think that actually helps the CFP committee, too, when they are evaluating proposals to kind of get a better idea of what the talk will be about. And so, in my ideal world, I already have some structure, so by the time I've procrastinated to the point where it's a month or so before the conference itself, [laughs] I have an outline.
And I end up writing words, like, I will just write my talk as if it were an essay with this bullet point outline already. And I find that helpful for me because I definitely have a bit of a stream-of-consciousness productivity energy. And so if I just put it all out there, I will then go back and be very ruthless, I suppose, in my editing, and I think that's where the magic happens.
So I kind of let myself just word vomit all over the page. And then the real work comes in the editing process and organizing and making sure it sounds the way I would want it to sound when I'm speaking. And yeah, that's how it has worked for me so far.
JOËL: So you have a sort of a separate phase for sort of just stream of consciousness dumping and then separately editing. And having those two separate is an important part of your process.
STEPHANIE: Yeah, I think so. I don't do as well trying to imagine the structure and everything perfectly the first time around and then filling things in. I find that just putting everything out there and, you know, a lot of things get cut. But that works well for me. What about you? What is your typical conference talk writing process?
JOËL: I think mine is a little bit more iterative. I tend to put in some pieces that I like and then try to connect them together, try to make sure it's telling a story. I think a lot about the pedagogical side of things, where people are going to be confused, where they're going to have questions, where they might check out.
And then very early, start doing kind of draft rehearsals where I'm starting to work on the talk. And I will stop halfway through because, in my mind, I'm trying to seat myself in the audience and be a person who's listening. And there might be a moment where I'm like, wait a minute, you just jump from one thing to another, and I don't get the logical connection here. And I might pause right there in the rehearsal and add in, say, okay, we need a transitional point, or we need to explain a concept between these two.
And I keep doing that until I can get through the whole thing and then realize it's way too long and start cutting. And I cut aggressively, and now it's too short. And now I go through it again. And again, people have questions in the audience, hypothetical audience; I am the audience. And so I really kind of inflate it and then cut it down and re-inflate it and cut it down a bunch of times until I'm happy.
STEPHANIE: I like that a lot. That sounds right. That sounds very you to work even on a conference talk iteratively.
JOËL: It's very time-consuming. So I don't know that it's the most efficient way to build a talk, but it's a process that works for me.
STEPHANIE: Yeah, that's true. And then there's value in the journey, even if the talk ends up changing from the very beginning to the end product.
JOËL: So the approach that you described for yourself, I think, where you have a rough draft, and you're separating the editing from almost like a creative process, reminds me a lot of an article that I read called "Mise en Place Writing" by...I'm not sure what their full name is. They go by the handle Swyx. This is an article about their process for writing, but I think it applies to conference talks as well. Have you seen this article?
STEPHANIE: I haven't. But that, I think, is similar to how I've thought about it or I've seen or heard other authors talk about their writing process and it being kind of similar where the creative work...they give themselves a lot of grace and just letting it be. And then the, like I mentioned, real work is in the editing process. It's kind of two different mindsets, I think.
JOËL: We'll link the article in the show notes.
STEPHANIE: I'm curious then how you incorporate visuals into your process because I think that's where my workflow is a little less successful because I'm not really thinking about visuals along with the words, and they do feel more like an afterthought. And I've always been really impressed when people who give talks can have a really visual and dynamic slide presentation. How does that work for you?
JOËL: So I think I try to avoid slides that are just three bullet points, where I then talk about each bullet point for three or four minutes. People read those quickly and then check out. I'll oftentimes try to, like, turn each of those bullet points into a full-on slide. And maybe it's just a title and a fun picture or something like that. What this ends up doing is really inflating my slide deck. I'm going through maybe 80 or 100 slides in a 30-minute presentation.
So it's multiple slides a minute. They move by really quickly. So I usually have either just an image or a header. I will usually start by just sketching it out with headers and then, where it makes sense, using an image. An image can be just for fun, or it can be something like a diagram where it is trying to illustrate a point.
STEPHANIE: Yeah, I like that. I think talks with a lot of slides that are mostly just images or something that you can grasp in a few seconds are really engaging because you're keeping it moving, and you don't really let people get bored. And so you show a new slide, and they look at it, but then they are able to direct their attention back to what you're saying.
JOËL: It's fun too with images because you can reuse them, and then they become a way to connect people back to a theme or let them know that you're making the same point again. A lot of talks, I will have a central theme that gets repeated. I'll often have a slide with some fun image with my key point on it. And then that slide will show up three or four times in my presentation oftentimes because each of the main points I'm trying to make kind of culminates at that same takeaway.
And so, for example, in the talk I gave at RubyConf Mini last fall, I had a slide about writing Ruby code being delightful. I think it had some happy children and just a big title, like, "Oh, delightful," or something like that. And after each of my examples where we went from code that was less good to something that was more idiomatic, Ruby that was really fun to work with, I would finish on that slide and be like, hey, our code is now delightful.
And hopefully, that helped people with the takeaway of, like, we want to write delightful code. Ruby has tools to do that. And then, hopefully, they either remember the things they can do to get to that point or can look it up and find a talk online.
STEPHANIE: Yeah, I watched that talk, and I really vividly remember that slide and the theme that you were trying to hone in on. So I thought it was pretty effective. I think this makes me realize that speaking is obviously a skill, but even the process of creating a talk in that particular medium is also a vast skill, and there are so many different styles and flavors. But I really think that what you said will get me thinking next time I'm writing my talk about how I can better incorporate that kind of engagement with the audience and making sure that the way I deliver the talk is just as thoughtful as the content itself.
JOËL: Yeah, I've been putting a lot of thought into what makes a good talk and what elements are unique to my process, what elements can be useful to others because now I have to coach someone else on their process and say, "Hey, here's the thing that worked for me. Maybe this will be helpful for you." Or maybe it's just, "Have you tried this?" Or "I think audiences will be asking this question at this moment, what do you think of this?" So that's definitely been top of mind in a whole other dimension for me recently. How about you? What's new in your world?
STEPHANIE: So before we started recording, I was heads down deep in the muck of trying to write some tests, some RSpec tests on my client project. And the domain for this client project is really big. There are a lot of models. And I was starting to go deep into the factory setups for our test fixtures. And it was hairy. And I was just going further and further down the rabbit hole to the point where I was skipping lunch.
JOËL: Ooooh.
STEPHANIE: Yeah, I was like, I couldn't pull myself away from it, and I kind of regret it a little bit. [laughs] And so I was just thinking about, like, how can I incorporate taking breaks a little bit more and feeling better about stepping away from the work when I'm really deep in it? You and I had this standing appointment to record [laughs] a podcast, so that was kind of the signal to me that it was time to try to set it aside.
And I did end up taking the dog for a walk around the block beforehand to get some fresh air, but yeah, it was a little rough, I don't know. How do you deal with just being so deep in the code that you don't really want to resurface?
JOËL: That's hard because sometimes I'm feeling productive, and I don't want to stop because I feel like I might not get back into the flow quite as easily. Sometimes it's just out of frustration. It's like, oh, I'm just so close to getting this bug done. If I get this one more test to pass, then I'll be good. And I keep doing one more thing. And the next thing I know, I have skipped lunch, and it's late in the afternoon. And it's just like; it's been a frustrating day.
STEPHANIE: And you're cranky, yeah, yep. I know that feeling.
JOËL: I've stopped being productive for the past hour. But I'll be like, one more thing, one more thing.
STEPHANIE: I think I was in that place because I was starting to get deep into the internals of models completely unrelated to the test that I was writing, but that was just where the rabbit hole led me. And I think after this, I will go and ask in Slack for a pair because I think that would be really helpful right now. I've just reached the limits of what I know. And I'm almost positive that someone knows how to do this more efficiently than I do. So that was a bit of a signal to me, but it was very challenging untangling myself out of that headspace.
JOËL: Have you ever played the video game Civilization?
STEPHANIE: No, I haven't.
JOËL: It's a turn-based historical strategy game. The running joke about it is that people get really pulled into it. And they're always just saying, "I'm going to play one more turn, and then I'm going to be done for the evening." And the next thing you know, it's 4:00 a.m. And I think that sometimes applies to fixing one more failure, just getting one more file in that chain of figuring out what the bug is in code. It's a very similar feeling.
STEPHANIE: Yeah, I know exactly what you're talking about.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So it can be really frustrating when you're kind of chasing down a bug because something's gone wrong, and now you're spending a whole afternoon figuring out where it is. Do you ever find yourself maybe acting preemptively to try to prevent those sorts of things from happening in the first place? So maybe putting in some sort of guards or error handling or something like that so that your future self won't have to spend that afternoon.
STEPHANIE: That's a great point because the bug that I was facing just now was definitely something I think could have been avoided. It was a classic no method [laughs] on nil class error. And I am still unsure how that happened, and I hope to come back to it after this. But yeah, that certainly is a great topic to get into, error handling. I think it's been on my mind a little bit lately because I'm working on a full-stack feature that has user-facing errors and things we want to make sure that we communicate to the user so that they could hopefully do something about it or just contact customer support on this app.
But there are also some API calls that are kicked off in the process of the user submitting the form, and those can lead to a bunch of different failures. And we may or may not have already discovered what those failures could be, and there may or may not have been designs created for those different failure states. And I feel like I haven't quite gotten a handle on how to deal with all of the possible errors that can happen when implementing a full-stack feature or a vertical slice. Yeah, that has tripped me up a lot lately.
JOËL: I think my time working in Elm has really made me much more aware of the different ways that things can fail just because Elm's type system is very robust. It's very complete. And so it will point out to you every potential place that could have a failure and ask you to handle it because it doesn't want to get to a point where it doesn't know what to do and there's a runtime error for something like no method or something like that.
So if you've got a potential nullable value and you're trying to say, okay, take this and render it, the compiler will say, wait a minute, you did not handle the null case. Give me something to do with the null case, or I refuse to compile. And now you've got to handle that. If there's something that might fail, like an HTTP request, again, the compiler would be like, well, but what about the failure case? You didn't tell me what to do on the failure case. This is an incomplete piece of code. I refuse to compile that.
So I think I've built now a little bit of anticipation because I know the compiler is going to tell me to do this. Now even when I write code that's not compiled like Ruby, my brain compiler is still like, oh, there's a nullable value here. You didn't check the null case. What are you going to do about that?
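A rough sketch of what that explicit nil handling looks like in Ruby terms, where nothing forces you to cover the nil case the way the Elm compiler does; the names and data here are invented for illustration:

  USERS = { 1 => "Ada", 2 => "Grace" }

  def greeting_for(user_id)
    name = USERS[user_id] # may be nil

    if name.nil?
      "Hello, guest!" # the "null case" handled up front instead of blowing up later
    else
      "Hello, #{name}!"
    end
  end

  puts greeting_for(1) # => "Hello, Ada!"
  puts greeting_for(9) # => "Hello, guest!"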
STEPHANIE: Yeah, that's a great point. I think the more experience I gain, the more possible errors I see in the world or out in the wild. When I think about developing on the web, you know, you mentioned HTTP requests, but also, if we fail to connect to the database or a job fails to enqueue, there are just so many places where things can go wrong. And it's almost like the more I learn about all those possible failures, the more anxious I am [laughs] to make sure that I've covered everything though I think there is some amount of just that being impossible.
And I'm particularly interested in figuring out what is enough because one thing that really I find quite painful is when you don't think through things enough and you just cross your fingers and hope it works and you ship it, and then your team is dealing with a lot of bugs or a lot of noisy error monitoring notifications afterwards. And so that's kind of what I'm trying to adjust for, I think.
JOËL: I think there's like two general classes of approach you can use to deal with that; one is to try to prevent errors altogether, and there's a variety of tools you could use for that. I'm thinking of either something like a type system or maybe test-driven development or even some sort of analysis tool. That could be diagramming, that could be decision tables, something like that. All those, I think, fall under better understanding of the edges of your system. Whereas sometimes you want to do the opposite and sort of really lean into, okay, errors will happen. How do we recover from them? How do we make them easy to diagnose in the future?
STEPHANIE: Yeah, that second bit is really interesting to me because I've started to try to think about the errors and who we want to notify about the errors. And so I feel like there are a few different categories of errors where if it's a validation error and it's something that the user can fix, you know, that we want to make sure to surface and tell them how they can fix it. If it's like a programming error, there's no value in showing that to the user.
And I'm sure that we've all seen a website that responded with a 500, but then we actually saw the error message itself, and we're like, ooh, this is kind of weird [laughs] to be seeing this. And so realizing, okay, that's not valuable to the user. But what should I be doing with it instead? And maybe that is hooking it up to whatever error monitoring service you use to make sure someone is alerted. Or, I don't know, even in the third case, like, what should a customer support team be notified about? And that kind of sits in between not quite a user-facing error but also not a bug, and that's a different category.
JOËL: So, something that is not necessarily a problem in the code, but you might want somebody in the company to know about and be notified about.
STEPHANIE: Yeah, exactly. Maybe not something that is so urgent that it needs to be flagged in real-time but goes somewhere, and someone will check on it at some point. [laughs] So you were mentioning that you now have a better sense of what could go wrong. How much time do you spend writing code to cover all of those different possibilities?
JOËL: Hmm, I don't know that I've ever put in the time to quantify it. I would say a decent amount because you've got to think about...sometimes they're not even things that can go wrong per se. But they're off that very simple, linear happy path that you're thinking of. So you might think even of rendering some kind of view where you've got some search results you're trying to display.
Have you considered an empty state? Is there a difference between initially loading the page, or not having performed a search yet, and having searched but not found any results? Those are things that are not necessarily errors, but they're not things you have in mind when you're just writing that first happy path of, like, oh, load page, show results. I assume there's always a result set. And so those are things that are important for the user experience that you need to have, but that are kind of edge cases that you have to add in afterwards, or you have to think about.
And so I think that, for me, tends to fall under a similar category as, okay, what if an error happened? Especially when you're dealing with kind of a full-stack situation where on the front end maybe you're making a request to a back end to pull down...let's say you're making a search and the back end is doing the actual search. You send up a query. Now you get back a failure.
Is that the same thing as getting no results back? Like, a success with no results, versus an error code, versus not making a query yet. So you've got like four or five states you've got to think about on the front end to display and how you're going to handle those. So I think thinking about those upfront is often really helpful.
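One way to make those four or five states explicit, sketched in Ruby with hypothetical names (the episode doesn't prescribe any particular implementation):

  SearchState = Struct.new(:status, :results, :error)

  def message_for(state)
    case state.status
    when :not_asked then "Type something to search."
    when :loading   then "Searching..."
    when :failure   then "Search is unavailable: #{state.error}"
    when :success   then state.results.empty? ? "No results found." : "#{state.results.size} results"
    end
  end

  puts message_for(SearchState.new(:success, [], nil))       # empty state, distinct from an error
  puts message_for(SearchState.new(:failure, nil, "timeout")) # error state, distinct from no results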
STEPHANIE: As you were talking about that, I suppose I asked the question because I have experienced when those things are not thought about upfront, and then you discover them as you're implementing. And how much time do you spend kind of going on a little detour trying to make sure that you have all of the edge cases covered, and at what point do you stop? Because you're like, I've covered what I can. And this ticket was supposed to only be three points [laughs] or whatever, you know.
JOËL: Yeah. And how do you keep a feature from ballooning when there are all these edge cases?
STEPHANIE: Yeah, exactly. It's a balance.
JOËL: Are there any techniques that you like to do when you...so you pick up a ticket that looks easy, but that might have a lot of these hidden edge cases in it. Are there techniques you like to use that might help you figure out those edge cases and maybe give you some follow-up questions that you might reach out to the product person to clarify? Or maybe it's mostly intuition and experience as a developer that you kind of figure out that, oh, these are the things we need to ask about. It's like, have you thought about an error state?
STEPHANIE: Yeah, that's a good question. In general, I'm a little suspicious of any ticket that doesn't include some kind of acceptance criteria about the unhappy path. And I certainly think it's a lot easier once you are embedded into this domain, and you have that expertise, and you are able to see the possible issues you'll run into. I do think that I like to do a little bit of coding just to kind of explore the space, and then that does give me more insight into how I might be able to follow up on the ticket.
So you mentioned techniques. Especially if they're written as user stories, I don't think they necessarily incorporate the flow or the procedure of how things are kicked off. And so when you're thinking about implementing it, you're like, oh, this actually needs to happen in the background, or this should be synchronous or not. And those are a lot of error states that I find are missing when I pick up a ticket.
And I think it also depends on which way you want to implement it what implementation is viable. And then maybe you bring it to a product person, and they are actually like, "No, we don't want it to work like that." And then you have to kind of rethink things a little bit. But yeah, certainly, the process of taking a user story and then doing an initial think-through of what approach you want to take definitely surfaces some potential unhappy paths.
JOËL: It's almost like prototyping it in your mind.
STEPHANIE: Yeah, I think so. I think it also depends a little bit on the team because if the engineer wrote the ticket, then there likely has been some thought about unhappy paths. But on other teams that I've been on when implementation is up to the person who picked it up rather than kind of spelled out for you by someone else who did that thinking, that's definitely an opportunity to pause, I think, and document which way you might want to go so that you can make sure that you account for the possible things that could go wrong that likely the user story didn't cover.
JOËL: Sometimes there are some edge cases or failure states that are just sort of built into the problem that you're solving. If you're having to make a background request, there's always a chance that that might fail because the network is not trustworthy. Sometimes though, those things just kind of come out of our implementation, the fact that we implemented it in a particular way.
And that's not something that you'd expect a product person to have to think about. That's more on us as developers to be like, oh yeah, well, I'm indexing into a hash and didn't think to check is a nested hash even present? Maybe that key isn't there. And now I've got a weird nil error, an undefined method. That's kind of on us rather than on, like, oh, a specific kind of thing that we can think about upfront.
STEPHANIE: Yeah, that's fair. And I think that is just an important part of the development process. Though you make a good point because I think that just kind of speaks to all of the different layers of things that can go wrong [laughs] and figuring out which ones are specific to your role as developer to account for, and then which are ones that you need to bring in or pull in a designer to chat about. It can be a little overwhelming. I'm overwhelmed just thinking about it.
[laughter]
JOËL: Yeah, errors are not a sort of monolithic class of things. They can't be an afterthought. But they're also not just a thing where it's like, oh yeah, do the error handling, and then you're good. We kind of lump a lot of things under the concept of errors, even if they might all eventually manifest as some kind of exception. I guess a true solution is just one giant top-level rescue nil.
STEPHANIE: [laughs] Very funny.
JOËL: So we've talked about a few different dimensions of errors where they might be sort of user-visible or not, or something that's more implementation-based versus inherent to the problem. One thing that we haven't looked at is the dimension of errors that might be recoverable versus not. Have you ever built a system where you had errors that could be recovered from and didn't crash the program?
STEPHANIE: Ooh, yes. That makes me think about retrying and especially what you're saying if things are happening in the background. Maybe there is an ephemeral error where the network timed out or something. But if it is given another shot, it might succeed on the second go. And I think there's a whole process of thinking about what happens when a process has to be retried: maybe there were side effects that already got committed the first time around, you know, but then something else that was supposed to happen didn't, and when the process happens again, things are very broken. So making sure that you are keeping things idempotent so that by doing it again, there are not any unforeseen issues.
JOËL: I heard you say that word commit here, and that's kind of a keyword to my mind. I immediately think database transactions. Is that the sense that you're thinking about this term here? Or does it have another meaning for you in this context?
STEPHANIE: Yeah. I do think I used that word specifically because when I've run into this in the past, it has been around making database changes. I'm trying to think if there is another way that this might show up. I think even in something like sending an email, too, though it is a bit lower stakes. I've certainly, as a user, experienced when that goes wrong and just been [laughs] flooded with emails and being like, wow, this is annoying. And that's, I think, something valid to consider as well.
JOËL: Yeah. You don't want that email job to be a thing that gets retried and just keeps failing because there's a nil error after the email gets sent. And so we just re-enqueue it, re-enqueue it, re-enqueue it, and the person ends up receiving 500 emails.
STEPHANIE: What about you? Any thoughts about recoverable errors?
JOËL: Yeah. I think really common for me is thinking about that in the context of a background job because those are things that I think are specifically designed to be retried if they fail; at least, a lot of job enqueuing systems assume that. When we write them, we don't always take that into account, but that's the system that we're working in. So that can be something as simple as marking somewhere that you have sent that email so that you don't resend it if that job ever re-executes.
I think that goes to your point about idempotency earlier that you often want to write code that can get executed multiple times but doesn't necessarily do the action multiple times. It will do it at most once. And that's probably an interesting distinction to have is knowing what elements of your code need to execute at most once, versus as many times as the code is called, versus things that might get tried and then rolled back like a database transaction. And so then that will...I guess you could say it's at most once because you're writing it but unwriting it. But that plays out a little bit differently than something like an email where you can't undo sending an email outside of Gmail.
STEPHANIE: Yeah, I love that undo button. [laughs]
JOËL: You need some other mechanisms for that.
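A minimal sketch of that "mark that you sent it" idea in plain Ruby; the names are hypothetical, and a real app would persist the marker in the database rather than in memory:

  require "set"

  SENT_WELCOME_EMAILS = Set.new # stand-in for a persisted "sent" marker

  def deliver_welcome_email(user_id)
    return if SENT_WELCOME_EMAILS.include?(user_id) # already sent; a retry becomes a no-op

    puts "Sending welcome email to user #{user_id}"
    SENT_WELCOME_EMAILS.add(user_id) # record the side effect so re-executions don't repeat it
  end

  # Even if a job system retries this three times, the email goes out at most once.
  3.times { deliver_welcome_email(42) }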
STEPHANIE: Yeah. As you were talking about that, I was also thinking about the idea of failing gracefully, which I think also ties into the idea of recoverable. So this is not a development-specific example. But the idea of an escalator no longer working well; at least you can use it as stairs. So that doesn't mean that everything is totally broken and people are unable to get from one floor to another. So maybe if there is a network request that's touching data and that fails, you can at least fall back on something that's cached. That mindset, I think, really is important to think about at all the different levels we are talking about.
JOËL: Yeah, or hopefully, even maybe some amount of graceful degradation. On a front-end app, you might not want to just crash the whole thing if one background request failed. So you can try again. You can be told, okay, try again in so much time. Maybe we automatically retry to make that same request with some sort of exponential backoff strategy. Or maybe we say, "Look, search is down for now. Here's a link if you want to go check a status page. Until then, other parts of the site are still working."
I feel like we're getting back into what makes great product design and how great product designers have to make failure conditions. It has to be at the forefront of the thinking that comes to designing that product.
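A tiny sketch of the exponential backoff idea mentioned above, assuming a plain Ruby helper rather than any particular library; the usage line is hypothetical:

  def with_backoff(max_attempts: 3)
    attempts = 0
    begin
      attempts += 1
      yield
    rescue StandardError
      raise if attempts >= max_attempts
      sleep(2**attempts) # wait 2, 4, 8... seconds before trying again
      retry
    end
  end

  # Hypothetical usage: with_backoff { search_api.query(term) }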
STEPHANIE: Yeah, that's a good point. I think my initial feelings of being overwhelmed and stressed about dealing with errors may be because a lot of it falls on the developer if those things aren't accounted for. And we spoke a little bit earlier about, okay, what is within our realm or domain of actually being responsible for, and what can we loop in others for help with?
JOËL: So we've been talking a lot about different ways of preventing errors, different ways of recovering, generally trying to make the whole experience really smooth. A slightly different philosophy around errors is rather than preventing them early is to fail early, like, fail early and loudly. And maybe you recover, maybe you don't. That depends on the context. But instead of putting so much effort into preventing errors upfront, it's better to just crash a lot or to fail loudly and deal with the consequences, or have a strategy for dealing with failure because failure is inevitable. How do you feel about that philosophy?
STEPHANIE: Oh, I think it has a time and a place. One example I'm thinking of is if you don't want your application to be deployed if some configuration is not exactly how it needs to be for the app to run effectively. And so there is a matter of, like, okay, I really want to make sure that the DevOps team or the development team knows that something is very wrong because if this were to be deployed, the app would be unusable. And so that's an example to me of failing loudly but, ideally, not letting it affect end users because they're still using [laughs] the site on a different version. [laughs]
JOËL: Right. I guess the classic example of that is for a Rails app, doing a Hash#fetch() on the environment to load up your environment variables instead of using the square bracket syntax so that as the app is booting and executing those initializers, it will crash if it encounters one of those and then fail to deploy if you're doing a deploy via something like Heroku. I've even sometimes when I'm adding environment variables, purposefully had them loaded in an initializer rather than maybe like in a class later on, specifically so that it would crash the app and prevent deployment if that environment variable was not set.
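Spelled out as a sketch, the difference in a Rails initializer looks like this (the variable name is just an example):

  # config/initializers/api.rb
  API_BASE_URL = ENV.fetch("API_BASE_URL") # raises KeyError at boot if the variable is unset

  # The quiet alternative lets nil slip through and fail somewhere far downstream:
  # API_BASE_URL = ENV["API_BASE_URL"]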
STEPHANIE: Yeah, that's what I was thinking too, environment variables. Though I think even with that kind of mindset, you're either just delegating that responsibility to someone else down the line to figure out how to accommodate or account for it in a graceful manner. Or you are creating an environment where everyone is very stressed out [laughs] and having to fight fires. So I think it also requires a little bit of thought and isn't necessarily a strategy to just completely embody. [laughs]
JOËL: I've noticed that bugs and errors often accumulate at the boundaries of systems or even subsystems or modules within a program. Maybe the place to apply the strategy of failing early and loudly is particularly valuable at those boundaries. But internally, within a subsystem or a module, maybe it's nicer to use other strategies for error handling. Does that sound like maybe a useful distinction to you?
STEPHANIE: Yeah, I think subsystems was the keyword to me there because you don't want it to be such a catastrophe that it affects the usability of the app entirely. But that does still require some systems in place, I think, to respond to when that thing is failing loudly.
JOËL: I think an example that came to mind for me is like you were mentioning earlier, a full-stack application. And if you've got the back end that's providing an API and something is wrong, I don't want that API to give me back garbage data and try to pretend that everything's okay. Let that API give me back some kind of error code. And in the front end, I already know that the network is inherently unreliable. I'm planning to handle errors at that point. So it's fine for the API back end to fail loudly in this case. In fact, I think that's the optimal solution.
STEPHANIE: Yeah, I think that's true because ideally, that error clues you into what kind of thing failed, and then maybe you can use that information more meaningfully than trying to guess at what happened with this bad data and then having to define some kind of error message in your app when ideally someone else who had more knowledge about it could have told you what went wrong.
JOËL: And I guess the problem with not failing loudly or with an explicit failure is that if you try to just pass on some sort of value that will pretend to be like what you initially asked for, whoever's consuming that doesn't know that something went wrong. So then you use this garbage data, and you do some things and pass it along to the next person. And eventually, it may cause a failure three or four steps down the line.
And now, trying to trace that, like, why did this fail? And it's not because something was wrong in Module D, or C, or B. It all comes back to oh, A had a problem but didn't crash or give us an error. It tried to pass its sort of best guess, like, this is probably okay. And then it just kind of moved all the way down the line.
STEPHANIE: I'm imagining external API developers everywhere just nodding their heads in agreement. [laughs]
JOËL: I've fought this on a local level as well. I was working on some kind of code for a JavaScript date picker plugin, and this was back in the jQuery days. It was some kind of...it was not a date picker. I think it was a typeahead drop-down thing. In some situations, I forget exactly how it would happen, but the input from there might be empty, but then that would get converted into an undefined value, which then JavaScript would convert to the string undefined, which would then get passed to something else that if it saw a string, it thought that was the thing that the user typed, and they would pass it through.
And I think maybe in the end, I was looking at a crash ten functions away in the front-end code that had to deal with the input from this typeahead and being like, why am I getting these undefined? Or maybe it was a string NaN or something like that. Like, why am I dealing with these weird strings that should never have come out of this? And it turned out it was just kind of an edge case. It wasn't addressed in this component further on, and then it was kind of leaking strings that everybody else thought were sensible up until three or four jumps further down the stack.
STEPHANIE: Yeah, that's a great point. I think it does go back to the idea of there being preventable errors. And then there are things that are truly not preventable because we live in a physical world [laughs] where computers talk to each other over the wire. And that distinction is, you know, perhaps the first being avoidable errors by writing resilient code. And the second being like, okay, in reality, there will be things that go wrong, and this is what we really have to watch out for.
On that note, shall we wrap up?
JOËL: Let's wrap up. [laughs]
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie has a win and a gripe from her client project this week. In a previous episode, Joël talked about his work exploring how to model dependent side effects, particularly D&D dice rolls. He went from the theoretical to the practical and wrote up a miniature D&D damage dice roll app where you put in a few inputs, and it will roll all the dice necessary and tell you whether you successfully hit your target and, if so, how much damage you did.
Together, they discuss how they think about fulfillment at work and what brings them fulfillment as developers.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So I have a win, I suppose, and a gripe from my client project this week that I would love to share. So my win is that I've been working in React lately. And I might have mentioned this on a previous episode, but it's been a few years for me. So I'm kind of catching up on the new, hot tooling, you know, whatever is popular in that world these days, and having to read a lot of documentation to figure out how to use it and just in general, I think being a little bit outside my comfort zone.
And I was working on an existing React component that was untested, and I had to change and extend some functionality in it. And we're also a little bit on a deadline. So there's like a little bit of pressure on the team to be delivering. And so when I got this ticket, I was like, okay, I am seeing this existing component that looks also a few years outdated. It's using some of the older technology that we've kind of moved on from.
And I was just like, oh, I really should write tests for this before I go in and change some things just to feel confident that my changes don't break anything because it was pretty gnarly. But I was not in the mood for it. [laughs] And this was like two or three days ago. I was just very grumpy. And I was like, oh man, why do I have to do it? [laughs] I kind of wanted to just get into making the changes so I could deliver on this work.
So, spoiler, I did not write the tests that day and just kind of went ahead with the changes. But then, the next morning, I woke up, and I was feeling inspired. I was like, I made those changes, but I'm actually not feeling that confident about it. So let me go back and try to write some tests. And I got to use the new tools I had been looking into, and that was part of my hesitation too. I was like, oh man, this is like a really old component.
And I don't want to use the older ones that we're using for testing. But how is it going to play with the newer testing tools that we're using? And so there was just like a lot of, I think, barriers to me feeling excited about writing those tests. But with my renewed energy, I did it. And I feel very happy about it and proud of myself. Yeah, that's my little win.
JOËL: That's a roller coaster of a journey there. That sort of disappointment when you find out that there are no tests for this and somebody else's problem has kind of become your problem. But then you decide you don't want it to be your problem, you know, kick it down the road for somebody else. And then you feel good about yourself, and you decide to backfill the tests anyway. And you get that confidence, and now everything's better for everybody. That is quite the journey.
STEPHANIE: Exactly. I listened to another podcast recently where they coined this term called tantrum logic, which is basically the idea that when you're kind of grumpy or something happens, and you're like, man, I don't want to do any of this, like, if I can't do it my way, then I don't want to do it at all. [laughs]
And just the idea that the way you're thinking about the issue at hand may not be totally grounded in reality. And I think I needed that reset and just a good night's sleep and going to do something else to come back and be like, actually, I do want to write those tests, even if it will be challenging. I'm in a better mind space for it. Mind space? Headspace? [laughs] Headspace for it. And I overcame the tantrum logic.
JOËL: A good night's sleep is just such a powerful tool for resetting.
STEPHANIE: Yeah, I agree. Shout out to sleep. [laughs] It turns out that it can really have a positive effect on how you feel.
JOËL: By the way, this is not an advertisement. We are not sponsored by sleep. We just both love it and recommend it.
STEPHANIE: [laughs] To get into my gripe a little bit, so you and I are on the same client project we've mentioned before on the show. And I think I even talked a little bit about receiving a new computer from our client to do our client work on. So now I have many devices at home. And we had also chatted previously about a note-taking app that we both use called Obsidian.
And one of the reasons that I really like it is because it's all local storage. So your notes are not being uploaded to the cloud or whatever. But that does make it hard to use on multiple, I mean, not just hard, impossible to use [laughs] on multiple devices unless you pay for it. They have a sync offering where you can use it on multiple devices. And I think it's also encrypted in a certain way.
Anyway, sometimes I'll be working on my client laptop and have some idea or thought that I really want to note down, but I don't have Obsidian installed on this machine, and it's not synced to my other Obsidian. And I have just been kind of annoyed about having to go open another computer to write a thought down if I want to document it. And I'm curious how you deal with this problem.
JOËL: So the downside of Obsidian not being a cloud product is that you don't just get that sync for free. The upside of it just being markdown files on your hard drive is that you can use any other product or tool that you want to manipulate these files. So I have my Obsidian vault, which is just the term for the directory where it keeps all of these files in a Dropbox directory. And so I have it sync across multiple machines just by being signed into my Dropbox account.
STEPHANIE: That's smart. And that sync is pretty smooth for you? You don't have any issues with updating it in one spot and seeing those changes in another?
JOËL: I have not had issues with that. Of course, I'm not jumping between machines within 30 seconds of each other. Generally, I'm also connected to the internet. So I haven't had a situation where I make a change to a machine not connected to the internet, and then later on, I edit an old version on a different machine that is connected to the internet, and now we have conflict. I've not run into that problem.
STEPHANIE: Okay, cool. That sounds good. It's funny you mentioned that because it's just the other day, off-mic; you and I were on a call doing a little bit of pairing. And you were on both machines at the same time [laughs] because we had to use one for our call. And then you were looking something up on your client computer as well. And the thought of you just using two computers at once was very amusing to me.
JOËL: It's the ultimate hacker move in...I was going to say bad, but that's maybe a little bit too judgmental, but yeah, in classic, I feel like police shows, things like that.
STEPHANIE: I do have one more thought about note-taking that we haven't talked about before. But I'm really curious, how do you deal with thoughts you have on the road during a time you don't have a device on you? Do you go and write that down somewhere, or what do you do with those?
JOËL: I have an absolutely awful solution, which is I add it to my mental stack and hope it doesn't overflow before I get to a computer.
STEPHANIE: That's really funny because I used to do something similar where if I had a to-do list or something like that in my head, I would remember the number of items on my list to try to cue me into remembering what those items were. The worst thing that would happen is I would remember that I had three things on my to-do list but could only remember two. And so I had to just [laughs] deal with my existential anxiety about knowing that there was something else that I had forgotten about but could not remember [laughs] for the life of me what it was.
JOËL: So I do that trick sometimes for my grocery list if I don't want to write it down. I'll just be like, oh yeah, go to the grocery store, make sure there are five items in my basket when I check out. And similar to you, sometimes I have that problem. I had a light-bulb moment the other day, which is that this trick is actually an example of hashing content.
STEPHANIE: [laughs]
JOËL: So if you're ever hashing the contents of a file and then wanting to compare if another file is the same, you check whether the hashes are the same. In a sense, you're kind of hashing your grocery list and your shopping cart and trying to see, do they both hash to the same value? Now, a good hashing algorithm has an infinitesimally low chance of a collision. Counting the number of items in your list or cart has a fairly high chance of a collision. You could have a cart and a list that both have five items, but they're not the same items. Yet this comparison would still make you think that they're the same.
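The grocery-list trick, expressed as two "hash functions" in a short Ruby sketch just for illustration:

  require "digest"

  list = ["milk", "eggs", "bread", "apples", "coffee"]
  cart = ["milk", "eggs", "bread", "apples", "tea"] # oops, wrong fifth item

  # The count "hash" collides easily: both have five items, so they look the same.
  puts list.size == cart.size # => true

  # A real content hash catches the difference.
  puts Digest::SHA256.hexdigest(list.join(",")) == Digest::SHA256.hexdigest(cart.join(",")) # => false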
STEPHANIE: This is a very funny metaphor to me. I think the other issue is that as a human and not a computer, I do not have the mental storage space to then also remember what algorithm [laughs] I'm using to hash my to-do list.
JOËL: The algorithm is the count function.
STEPHANIE: [laughs] True, true, a more sophisticated algorithm then. [laughs]
JOËL: Yes, which is why I keep using this; it's not very safe, but it's good enough.
STEPHANIE: Sometimes, we just need to be good enough. So, Joël, what's new in your world?
JOËL: So, in a previous episode, I think we talked about some work I was doing exploring how to model dependent side effects, particularly D&D dice rolls. So this week, I went from the theoretical to the practical and wrote up a miniature D&D damage dice roll app where you put in a few inputs, and then it will roll all the dice necessary and tell you whether you successfully hit your target, and if so, how much damage you did. And it takes into account all these edge cases.
STEPHANIE: Cool. That's so exciting because I think we mentioned last time how that would be a really interesting exercise to write up that code. Did you get any insight from doing that?
JOËL: I think a lot of the insight that I got came from the initial diagramming phase. And I think coding it out really solidified the things that I had learned from the diagramming. Of interest here is that there are effectively or potentially four separate dice-rolling phases that can happen. First, you're rolling to see can you hit your target? And depending on the situation, you're rolling one or two dice.
And then after that, you're rolling to see if you do hit, how much damage you do. And you're either rolling one set of dice, or you might be rolling two sets of dice if you happen to do a critical hit. So I think that the diagram that I had clearly showed these are four sets of randomness that have to happen and then how they relate to each other. These two are dependent on each other; these two are independent.
I think one thing that was really interesting that I learned from the code is that for something like a dice roller, you usually don't want to see just the result. Because if I just have a button that says how much damage did I do, and then I get a number back that says, "You did zero damage," or "You did three damage," as a person, that's not very satisfying. And I don't know that I fully trust it. I want to see all the intermediate results. So I want to see, oh, did I roll two different dice for that initial to-hit roll? What were the numbers?
And then I can say, okay, well, I need to roll above a five or roll above a 10. And I rolled these two dice, and they were both under 10. That makes sense why I didn't hit. Or I rolled one of them above and one of them below, but I was rolling with disadvantage, which means I have to take the lower of the two numbers. So I could have hit, but I didn't. So I think that is really fun as a user to see the intermediate steps. But also, as a developer, it helps me to be confident that the code I wrote works the way I expect it to.
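Here is a rough Ruby sketch of the flow Joël describes, keeping the intermediate rolls visible rather than returning only the final number. The method name, the simplified advantage/disadvantage and critical-hit rules, and the inputs are illustrative assumptions, not the actual app's code.

    # Roll to hit with one d20 (normal) or two (advantage takes the higher,
    # disadvantage the lower), then roll damage, doubling the dice on a critical.
    def roll_attack(target:, damage_dice: 2, damage_sides: 6, mode: :normal)
      d20s = mode == :normal ? [rand(1..20)] : [rand(1..20), rand(1..20)]
      to_hit = mode == :disadvantage ? d20s.min : d20s.max
      critical = to_hit == 20
      result = { to_hit_rolls: d20s, to_hit: to_hit, hit: to_hit >= target, critical: critical }
      return result.merge(damage_rolls: [], damage: 0) unless result[:hit]

      dice_count = critical ? damage_dice * 2 : damage_dice
      damage_rolls = Array.new(dice_count) { rand(1..damage_sides) }
      result.merge(damage_rolls: damage_rolls, damage: damage_rolls.sum)
    end

    p roll_attack(target: 12, mode: :advantage)
    # => e.g. {:to_hit_rolls=>[7, 15], :to_hit=>15, :hit=>true, :critical=>false,
    #          :damage_rolls=>[4, 2], :damage=>6}

Returning the two d20s and every damage die, not just the total, is what lets a user (or the developer) sanity-check the result, as discussed above.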
STEPHANIE: Yeah, that's really neat. I think what I love about this is that you took something that, in some ways, could be really simple, right? And the implementation could have been just the first thing that you thought of, but you thought very deeply about it and made the dice roller that you wanted in the world. [laughs] I'm curious. Can anyone go check out this repo on the internet?
JOËL: Yes. So we can link to the repo in the show notes. And also, the dice roller itself is up online at dnd-damage-roller.netlify.app. And we can link that as well for anybody who wants to go and check it out.
STEPHANIE: Awesome.
JOËL: I think my goal in this is it's more of a learning exercise. I don't think the world needs another D&D dice roller. There are better ones built into more comprehensive tools. But it was fun for me to work on this, to explore some ideas, and to dig into randomness. I've always had a fascination with random rolls.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: So it sounds like the dice roller app really scratched an itch for you and was a fulfilling exercise, both in exploring randomness, like you mentioned, and in testing a theory that you had about writing good code. I'm curious about how you think about fulfillment at work in general and what brings you fulfillment as a developer.
JOËL: Fulfillment is really interesting because I think it's a really kind of personal question. It probably varies a little bit from person to person. But there are probably also some aspects that are global to everyone. I know we've talked about things like psychological safety in the past. And if you don't have things like that, that baseline, it's going to be hard to feel fulfilled.
STEPHANIE: Yeah, I agree. I am thinking of Maslow's hierarchy of needs, and in some ways, fulfillment is kind of the tip of the pyramid. If you are feeling safe and like you belong and get enough sleep, like we mentioned earlier, you can reach towards getting into what really feels fulfilling and gives you purpose in life.
JOËL: I love that you brought up Maslow's pyramid because like you said, that top part is self-actualization. So you need all those lower layers before you can actually reach the point of true fulfillment on the job. One thing I recently realized about myself is how I tend to approach projects that are in a difficult place. I find a lot of fulfillment in sort of relative change. It doesn't matter if a project is in a bad place as long as the project on a week-by-week basis is moving in the right direction. It might still be in a bad place, but is it better than last week? And was I a part of making that better? That makes me feel good.
STEPHANIE: Yes. I have always really admired your optimism around that and how you share even small wins. You're really good about that, actually, and celebrating that. And it's interesting to learn that it's like that process itself that has a lot of meaning for you. Because I think I'm a little bit different in the sense that I have an ideal version of working in my head, and if we're not there, even if we are making some incremental progress week to week, I think I struggle.
Sometimes I feel frustrated or stressed because I think that we're just not where I want to be. And I've definitely been thinking about harnessing some of that optimism and celebration that you have around, just making things better a little bit at a time.
JOËL: And I think we should be clear that this is not the way one has to be; this is just how I tend to feel on projects.
STEPHANIE: Yeah, absolutely.
JOËL: I know there are plenty of people who feel most fulfilled when they're on projects where things are mostly good. And then it's not about incremental improvement in the product, but maybe it's shipping a lot of features and feeling like they're moving very quickly. Maybe it's that feeling of speed that gives them fulfillment rather than the feeling of incremental progress.
STEPHANIE: Yeah, absolutely. I think what is helpful for me in hearing about this from you and just from others (I love talking to other people and learning about what motivates them.) is seeing what else is possible outside of my own little universe inside my head and doing the self-reflection to be like, okay cool, this works for Joël, but maybe this doesn't work for me. But having the input from other people lets me discover more about myself in that way.
JOËL: That is incredibly powerful. I love that. I think that applies in a variety of aspects of my life, but especially when it comes to fulfillment in software and at work: talking to other people, seeing how they relate to a project or to a particular task, and, like you said, getting to see perspectives that are sometimes totally different from mine.
STEPHANIE: Yeah, absolutely. So you just mentioned one aspect of how you find fulfillment when a project is maybe in a tougher spot than usual. I'm curious if you can recall a time that you've been the most fulfilled at work.
JOËL: Most fulfilled. I think one of the most fulfilling projects I did was several years ago. We built a dashboard for just exploring a lot of data from medical studies. And so the researchers would upload some time series data for things like heart rate, or skin electro-sensitivity, a bunch of other things, along with a video. It was kind of an interview-style situation. They were doing a session with a patient.
And we would then sync all of these data streams up. We would sync it up to the video, and then you could kind of explore the data. There were scrubbers, so you could kind of scrub through the video, and it would scrub through the time series data all at the same time in sync. You could scrub through the time series data. It would sync the video kind of like bidirectional. You could zoom in on the data.
The idea is this is a high-level kind of exploratory tool. And you could then find the interesting bits of data that you could then do more quantitative analysis on. So you could then find a part of the stream and say, this is the interesting part. Clip from 10:55 to 11:10 in the stream, on all streams, and then export just that data in a zip file. And then I'm going to put that through a bunch of math and figure out, oh, is there a correlation between these moments?
STEPHANIE: So what about that project was really exciting or fun for you?
JOËL: I think the client was incredibly fun to work with. There was like an energy and excitement. This was part of their, I think, Ph.D. thesis. And they were really excited. They were incredibly knowledgeable, just delightful to work with. I think this was a fun...so we built this from scratch. It was a greenfield app. I think it had a lot of interactivity. It had a lot of visuals. It was one of the first projects I got to work on that used Elm. I think all those things combined to just make it a really fun project to work on.
It was also a fairly short project. So we had a very kind of tight deadline. We were very pragmatic with absolutely everything on there. Like, what can we do to get this done quickly? Is this feature worth the time? It was kind of a classic MVP product. And I think it was one of the most fun things I've built.
STEPHANIE: Cool. I'm also hearing there was probably some creative aspect of it that was really fulfilling for you, like exploring a lot of new things. Like, you said, you were working with Elm for the first time. And the project itself sounds very different from some of our other more typical consulting engagements and also the collaboration aspect. Like, you mentioned the tight deadline, which compelled you all to work really closely together to make this really cool thing in that short amount of time.
JOËL: Exactly. Yeah, it was like a three or four-week project that I look back on really fondly. Like, oh, that was a good time with those two colleagues and that client, and we did a thing. It was really cool.
STEPHANIE: That's awesome.
JOËL: I think it's really interesting that just hearing that story, you're immediately picking up on, like, oh, I see elements of creativity and exploration. Do you have kind of an internal system that you use to analyze projects that you're on to be like, oh, this is a project I'm enjoying because of this element or that? Because you seem very self-aware around these types of things.
STEPHANIE: I'm glad you asked that because I think I was trying to reflect back to you some of the things that I picked up about what you were sharing. I have been reading a book, surprise, surprise.
JOËL: What? You read?
STEPHANIE: I read. [laughs] It's called "Engineering Management for the Rest of Us" by Sarah Drasner. And I am not an engineering manager, and I don't necessarily know if I even want to be. But I really enjoy reading management books to better understand how to manage myself or how to be a person who is managed.
And one of the things she talks about is understanding an individual's values and how those things end up being what motivates them and also likely what brings fulfillment. And so after I learned about the value of values, I started thinking, okay, what is it that I am motivated by? And really reflecting on when I have felt really good about work and also when I felt challenged or unhappy at work and what things were missing during that time.
So the things that I have realized that I am very motivated by are human connection. I love spending quality time with people, and that is probably why I enjoy pairing so much. But also, in my one on ones with my manager, I really enjoy that time just being time for us to share space and get to know each other and talk. It doesn't necessarily need to be going through agenda items or a status report or even necessarily talking about my project.
JOËL: So you mentioned that you value quality time with others. Is that a reference to "The Five Love Languages" concept?
STEPHANIE: It is. It is. I think I also made a bit of a connection there too because what I like in my personal relationships also obviously applies to work.
JOËL: Yeah, it's how you feel appreciated, how you feel fulfilled. And just for our listeners who may not have read this book, I think the concept is that there are five ways that people like to receive appreciation.
STEPHANIE: Yeah, I think receive and both express appreciation and love. And quality time is one of them.
JOËL: Yeah, yeah. And the other four, if I remember correctly, are acts of service, words of affirmation, physical touch.
STEPHANIE: Gift-giving is the last one. Yeah, so that was a fun reflection on my part in being able to just know what makes me feel good. And then it also helps me communicate with other people how to work with me. I think that is super important. I love when people share with me what, I mean, I mentioned this earlier, just what drives them and how they like to be appreciated so that I can do my best to try to offer them that.
And I guess this actually is a good transition into the next value of mine that really drives me. I was thinking about this because I mentioned just now that I was learning some new React tools, new to me, anyway. And I'm like, yeah, I like learning. But then I was like; I don't know if I like learning the way other people like learning in the sense that it's not the knowledge itself or the process of learning itself that drives me but learning as a tool to better understand myself. So I think personal development is very important to me. And that feels different from how other people might value learning.
JOËL: Interesting. So you might be excited to learn a new React testing tool but not because you're chasing the latest, shiny tech but more because you feel like the process of learning this testing tool helps you learn something new about yourself.
STEPHANIE: Yeah, I think that sounds right. One of the tools specifically...we're using MSW (Mock Service Worker) for mocking network requests in Jest. And I was able to use information about testing in Rails and Ruby and apply that to this new tool. And I got to kind of revel in the fact that I could use previous learnings to apply in this new context, and that was really cool to me. So it wasn't necessarily the tool itself or even the process of learning but kind of realizing that I was capable of applying one thing to this less familiar thing.
JOËL: So kind of that realization that, hey, you're now far enough in your career, and you have enough experience. You have a broad base of knowledge that all of a sudden, you realize, wait a minute, I'm not starting from scratch anymore. I can apply lessons learned in the past to learn this new thing and make that easier. And that's a really validating feeling.
STEPHANIE: Exactly. That was really cool to me, and I felt really good afterwards. I think this week at work has been very uplifting because I've been having all these little mini-revelations if you will.
JOËL: I love that. I love that so much.
STEPHANIE: So, one thing that I think is very easily conflated with fulfillment is the idea of success. And I kind of want to talk about the distinction between success and fulfillment. Does that bring up any thoughts for you?
JOËL: Yes. I think the two are often entangled, but they're definitely not the same thing. It is possible to be fulfilled on a project that is not successful. And it's also possible to be on a successful project and yet not feel fulfilled. But oftentimes, the two go together because when things are going well on a project, they're probably also going well in a lot of other ways, and you might be feeling fulfilled as long as general parameters fit in, right? If values line up, things like that.
I know for me I value quality and excellence and doing work that I'm proud of. So I think if I were working at a place that was doing kind of low-quality, low-cost work where it's just like, you know what? You want cheap and low-quality? Come to us. We'll just get it done quick and cheap. And yeah, it's not going to be great, but you get what you pay for.
There's a reason this part of the market exists, and it's a totally valid way to build software. But I would not feel fulfilled there, even though maybe the clients are absolutely happy with the work that's being done. So I think that would be a situation where there is success, but I might not feel personally fulfilled.
STEPHANIE: Yeah, I'm glad you brought that up because I think I really struggled in the beginning of my consulting career with equating client happiness with success. And I'm now just starting to kind of unlearn that a little bit and realizing that success means different things to different people. So even if we talk about thoughtbot for just a second, one of thoughtbot's values as a company is seeking fulfillment in everything that we do.
And so even though, like you said, the client might be totally happy, for thoughtbot, that may not be a successful client engagement if you, Joël, as the developer staffed on that project, didn't find fulfillment. Because what's success for us here is that we are fulfilled in the project itself. And that was really helpful because, in some ways, I'm like, well, who cares? Who else cares besides me that I'm fulfilled? And to be like, oh, yeah, actually, what our collective success means is that I'm fulfilled, and you're fulfilled. That was really important to me and one thing that I really appreciate about working here.
JOËL: Fulfillment comes partly from our environment, from maybe the project that we're working on, our colleagues, but also comes to a certain extent from ourselves. And to a certain extent, we can drive that ourselves as well. And I think that first step is a certain amount of self-awareness and self-understanding. You are clearly a master at this. What are some things that you do to drive that self-understanding, to build maybe a sense of how you become fulfilled, and identifying those values that make you feel fulfilled on a project?
STEPHANIE: Listen, [laughs] I don't know if I would call myself a master at this, only that I'm very actively working on it in my life right now, in therapy, but also in talking to other people about this because, yeah, sometimes it has caused me a lot of turmoil. I'll be really stuck in a rut or feeling a lot of burnout, and that, ironically, actually motivates me to be like, how can this be different? And oftentimes, that means I have to look inward.
But you and I had a conversation last week off-mic that was really helpful for me because I was feeling really bummed about my client work and it not going the way that I thought it would. And your insight helped me think about the project a little differently and think about metrics of success differently. For that project, I could not expect it to look exactly like all of my past experiences. And success for those projects was not the same as success for this project. So yeah, talking to others, I highly recommend that.
JOËL: I guess you mentioned that you read a lot of management books and a lot of books geared towards managers for discussing things like how to set up a one-on-one. Those are almost like...they're not really therapy, but they kind of lean a little bit towards that sometimes and trying to create fulfillment for your direct reports. So maybe seeing it from the other side helps you build understanding.
STEPHANIE: Yeah, actually, that's totally a great call out because I highly recommend reading books about [laughs] management, even if you're not interested in management. Only because there's no guarantee that you'll have a good manager who can do all those things for you, so if you can equip yourself for doing those things, then you are likely to have a better workplace experience, in my opinion.
JOËL: And I guess the obvious one that we have not talked about is if you do have a good manager, have these conversations with them.
STEPHANIE: Yeah, absolutely.
JOËL: Part of their job is to help you be more fulfilled. And they should be having conversations to maybe help you discover those ways that you are feeling fulfilled at work and how to get there. Here's one aspect that we have not talked about that I'm curious to explore a little bit: recognition.
STEPHANIE: Ooh. Yeah, that's a good one.
JOËL: How important is it for you to feel recognized, either by your colleagues or by the more official org structure?
STEPHANIE: This is a great question. I do value recognition from people I trust. So I think we were talking about sometimes client projects are not successful, but you tried your best, and you did do valuable work. And you might not hear that from the client. They might think differently. But if a trusted co-worker can provide that validation for you, oftentimes, I find that more helpful.
JOËL: That's an interesting distinction. And I think recognition has a very different weight depending on the source it's coming from. If it's somebody you look up to, and they just give you a shout-out or something, I'm riding that high all day long.
STEPHANIE: Yeah, yeah, that's a great point. How do you like to receive recognition?
JOËL: Hmm. So at thoughtbot, we have an internal system where we can give shout-outs to each other. They're called high fives. And they get shared directly to the team Slack channel. And it's a small thing, but I really appreciate it when somebody calls out like, "Hey, I appreciated this thing that you did," or "This is the thing that had an impact on me," or "I appreciated the thing that you shared."
Those things make me feel really great. It's a small thing. It takes 30 seconds to do. But I really appreciate that. And it's something that I am looking to more intentionally do more of because it's fun to receive recognition, but it's also really valuable to give recognition.
STEPHANIE: Yeah, I'm with you. I am also trying to be intentional about being even more generous with my positive feedback for others. And I think there's also some degree of recognition and validation to give to yourself.
JOËL: Self-validation.
STEPHANIE: Yeah, yeah. I mean, I'm definitely trying to do more of that. Because if I'm doing work that lines up with my values, I want to be able to pat myself on the back for it, even if no one else will do it for me. [laughs]
JOËL: What does that look like? You're like standing in the mirror and saying, "Good job?"
STEPHANIE: [laughs]
JOËL: Do you have maybe a document where you kind of list the things that you feel proud of, even if nobody else has noticed? What does that look like for you?
STEPHANIE: Ooh, yeah, a brag document. I think some folks at thoughtbot have recommended doing that. For me, it's going and getting myself a treat.
JOËL: Oh, I like that.
STEPHANIE: So maybe like a latte the next morning or going to get just a sweet thing. Yeah, that's my way of doing it.
JOËL: So we've talked about self-recognition, recognition from colleagues. I think another element is recognition from management or the company that you're working at. That can be just praise. But oftentimes, I think when you're looking at recognition from something a little more corporate, it has a more kind of concrete aspect to it. And maybe that is come yearly evaluation time; there's a raise that recognizes the fact that you've done good work.
I know for me, last year, I got a big promotion. And I felt like I had been performing at a level that was kind of pretty far above and beyond the title that I had. And getting that promotion, in some ways, was very much kind of validation and recognition of the fact that I had been performing at that high level.
STEPHANIE: Yeah, it sounds like the acknowledgment for the expanded work that you've been doing was really motivating for you.
JOËL: Yeah. It's interesting you mentioned that acknowledgment is really motivating because it really is, and sometimes the reverse is also true. You feel discouraged or unmotivated because the good work that you're doing is not recognized. Are you familiar with the idea of intrinsic versus extrinsic motivation?
STEPHANIE: Yeah, I am. Being motivated by something externally, like someone offering a promotion, or a raise, or whatever, versus it coming from yourself.
JOËL: Yeah. And I think for many people, you're probably not purely motivated by one or the other. There are some things where you're motivated by your own internal values, as we mentioned earlier, and some things where you're motivated by incentives offered at work. And that balance will probably shift over time and in different moments. But having a little bit of both can be really, really powerful. If you can be living up to your values and then get rewarded for it, that's kind of peak fulfillment right there.
STEPHANIE: Yeah, that's the sweet spot. Yeah, I wish that for everyone in the world. [laughs] On that note, shall we wrap up?
JOËL: Let's wrap up. [laughs]
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
thoughtbot had an in-person Summit in the UK! Joël recalls highlights. Stephanie is loving daily sync meetings on a new project.
The idea of deleting code has been swimming around in Stephanie's brain recently because she's been feeling nervous about it. Together, Joël and Stephanie explore ways of gaining confidence to delete code while feeling good about it.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And today, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I just got back from a few days in the UK, where thoughtbot has been having an in-person Summit, where we've brought people from all over the company together to spend a few days just spending time with each other, getting to know each other, getting to connect in person.
STEPHANIE: That sounds like it was a lot of fun. I've been hearing really great things about it from folks who've come back. Unfortunately, I couldn't make it this year. I got sick a little bit beforehand and then ended up not being able to go. But it sounds like it was a lot of fun just to get together, especially since we're now a remote company.
JOËL: Yeah, I'm really sorry you weren't able to make it there. It would have been amazing to do a Bike Shed co-hosts get-together.
STEPHANIE: I know. In the same room, maybe even record. What a concept. [laughs]
JOËL: So thoughtbot is a fully remote company, and so that means that getting a chance to have people to come together and build those in-person connections that you don't get, I think, is incredibly valuable. I was really excited to meet both the people that I work with and that I see on my screen every day and people who I don't talk to as often because they're working on different teams or different departments even.
STEPHANIE: What was one highlight of the time you spent together?
JOËL: I'll give a couple of highlights, one I think is more on the activity side. We went bouldering as a group. This was a really popular activity. We were trying to sign up people for it, and it was so popular we had to make two groups because there were too many people who were interested. And it was really fun. There are people with a whole variety of skill levels. Some people, it was their first time, some people had been doing it for a while. And just getting together and solving problems was a lot of fun.
STEPHANIE: Yes, I saw that. That was one of the things I was really looking forward to doing when I was still thinking that I was going to go. And it's cool that it had opportunities for both beginners and people who have been doing it before, which I think, if I recall correctly, Joël, you are a boulderer yourself back home. So that's pretty neat that you were able to, yeah, I don't know, maybe share some of that experience IRL too.
JOËL: Yeah, yeah, I think it's great because people were able to help each other. Sometimes you have a different perspective down on the ground than you do up on the wall. And then, in my case, because I've done it a lot, I know a little bit of actual climbing technique. And so I can give some tips on, like, oh, if you're stuck and you don't know how to get past a particular point, or you don't know how to start a particular climb, or your arms are getting tired halfway up, here's maybe a small change you can make that would make things easier for you.
STEPHANIE: Honestly, that also sounds like a really good metaphor for pair programming, [laughs] like, looking at things from different perspectives, you know, someone who's on the wall? I don't know what the lingo is. But it's the equivalent of someone driving in coding, the navigator having a little more perspective and being able to point out things that they might not see that's right in front of them.
JOËL: I love that metaphor. Now I'm going to think of that both when I pair and the next time I climb.
STEPHANIE: I love it.
JOËL: I think climbing, when I do it, it's always more fun with a friend, specifically for what you were saying. I climb alone sometimes, but as much as possible, I'll reach out to another friend who climbs and say, "Hey, let's climb together." And then we can alternate on the same route even.
STEPHANIE: That's cool. I didn't realize that it could be such a social activity.
JOËL: It is very much a social activity, and I think that's part of the fun of it. It's challenging physically but also mentally because it's a puzzle that you solve. But then also, it's a thing that you do with friends.
I think another aspect that was a highlight for me was getting a chance to connect with people from other teams, other departments within thoughtbot. I think one thing that was really nice when we were located in an office is that over lunch, or just at the water cooler, or whatever, you would connect with people who were in other teams and who were in different departments.
So I might talk to people in People Ops, or in marketing, in operations just sort of in the natural course of the day in a way that I think I don't do quite as much of now that we're more remote. And I tend to talk more with other developers and designers on my team. So I think that was really great to connect with people from other teams and other departments within the company.
STEPHANIE: Yeah, I know what you mean. I think I really miss the spontaneous, organic social interaction that you get from working in an office. And I think we've maybe talked about remote work on the podcast before, or previous co-hosts Steph and Chris have also talked about remote work. But it definitely requires a lot more intention to manifest those connections that otherwise would have been a little more organic in person.
And so, while you all were at an in-person summit in the UK, there was also a virtual summit hosted for folks who weren't able to travel this time around, and I really appreciated that. I got to spend a day just connecting with other people in Gather Town, which is a web app that's like a virtual space where you have little avatars, and you can run around and meet up with people in virtual meeting rooms on this map. [laughs] I'm not really sure I'm describing it well, but it's very cute. It is almost like a little video game.
It's like a cross between a video game and video conferencing [chuckles] software. But yeah, I think I just really appreciated how inclusive thoughtbot has been doing remote work where, like, yes, we really value these in-person gatherings, and we understand that there is a bit of magic that comes from that, but also making sure that no one's left out. And at the end of the day, not everyone can make it, but we were still able to hang out and socialize amongst ourselves in a different way.
JOËL: Agreed. I think that inclusivity is part of what makes thoughtbot such a great place to work at.
STEPHANIE: Speaking of inclusivity, I mentioned a few weeks ago that I joined a new project recently and had been going through the onboarding and hopping into all these new meetings. One thing that I've really enjoyed about this new client team that I'm on is that in their daily sync meetings, we all share what we're working on. But we also all share something that's new to us, which is a little bit meta because we do that on this podcast. [laughs]
But each person just shares maybe something they learned at work but also usually something just totally not work related like a new show that they're watching. There's another person on my team who learns a lot of things from YouTube videos. And so he's always telling us about the new thing he learned about, I don't know, like mushrooms or whatever, or AI [laughs] through YouTube. And yeah, someone else might show a sweater that they just knit themselves. And it's been a very easy way to get to know people, especially when you're meeting a whole new team. And yeah, I've been enjoying it a lot. It's made me feel very welcome and like I know them as people outside of work.
JOËL: I love that. Yeah, they're more than just people you're shipping code with. You're able to build that connection. And it sounds like that helps smooth the...maybe we can say the social aspect of onboarding. Because when you onboard onto a project, you're not just onboarding onto a series of codebases and tools; you're also onboarding onto a team, and you need to get to know people and build relationships.
STEPHANIE: Yeah, absolutely.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So you've been...is it two weeks in a new codebase? Have you gone and deleted any code yet?
STEPHANIE: I wish. I am glad you asked this question because this has been a topic that has been swimming around in my head a little bit lately because this new client codebase it's very big and it's quite old. Like, I've been seeing code from 10 years ago. And it's been a really challenging codebase to get onboarded into, actually, because there's so much stuff.
In fact, I recently learned that some of their model specs are so big that they have been split out into up to seven different files to cover specs for one model. [laughs] So that has been a lot to grapple with. And I think in my journeys working on a starter ticket, I've just stumbled upon stuff that is very confusing. And then I might follow that thread only to realize that, like, oh, this method that I spent 20 minutes trying to grok turns out it's not actually used anywhere.
JOËL: That's a lot of dead code.
STEPHANIE: It is a lot of dead code, but I am also not quite feeling confident enough to delete it because I'm new, because I have no idea what consequences that might have. So, yeah, the idea of deleting code has just kind of been swimming around in here because ideally, we would be able to, but, for some reason, I don't know, at least for me, I feel very nervous about it. So it hasn't been something that I've reached for.
JOËL: That's a great question because I think in maybe Ruby, in particular, it's not always obvious if code is being used or not. When you do find yourself deleting code, how do you gain the confidence that it was safe to delete that?
STEPHANIE: Yeah, that's a good question. In the past, when I've done it successfully, I'll probably post a Slack message or something and be like, hey, I noticed this code is not being used anywhere, and I'd like to delete it because, I don't know, it's been misleading me, or it's just not providing any value. And then kind of give it like a day or two, and if no one speaks up about it, then I will usually go ahead.
And obviously, get some code review, hopefully, get some other eyes on it just to make sure that whatever assumptions I made were valid, and then go for it. And then just watch [laughs] the deployment afterwards and make sure that there are no new errors, you know, no new complaints or anything like that. And, yeah, I think that has been my process, and I've definitely found success doing that.
But I have also experienced a bad result [laughs] from doing that where one time, on my last client project, we were refactoring the signup flow. And we realized that after you signed up, you were redirected to this blank page for like 10 seconds or something. It was completely empty. There was nothing on it except a spinner, I think. [laughs] And then it would then redirect you to the dashboard of the app. And we were like, oh, we can definitely delete this. We have no idea what this is doing. We don't want to try to refactor this as part of the effort that we were doing.
And so we deleted it, only to find out later from the marketing team that they had been using that page for something Google Analytics related, and we had to revert that change. And it was a real bummer because I think when we removed it, we felt good about that. We were like, oh yes, deleting code, awesome. And then having to bring it back without a clear plan of how to actually fix the problem that we were trying to solve was a bit of a bummer.
JOËL: So, as programmers, we're hired to write code. Why does it feel so good to do the opposite of that, to delete code?
STEPHANIE: That's a great question. I actually want to know what you think about this, but before that, I wanted to plug this Slack channel that we have at thoughtbot called Dead Code Society, where people can post their PR diffs showing more red than green, so more lines removed than lines added. And I have been really enjoying that Slack channel. It's very delightful. [laughs] But, Joël, do you have any thoughts about why it feels so good to delete code?
JOËL: There are probably a few different reasons. Especially when it's not your own code, you're often not attached to it. There's often, I think, the sense when you go into an existing codebase you're just like, oh, everything's just bad, and I don't understand it. And those other coders who wrote this didn't know how to do their job and kind of be the curmudgeon character. So it just kind of feels good to remove that and maybe rewrite it yourself. I would say that's not a good mindset to go in for deleting code. I think there are positive ways where it is actually a good thing.
STEPHANIE: That's fair. Just removing code because you would write it differently is not necessarily a net positive. [laughs]
JOËL: But I think...so when I initially asked the question, I said, "We're hired to write code." And I think that's a bit of a false assumption built into the question. We're not hired to write code. We're hired to solve problems, to build solutions. And as much as code can be an asset in solving problems, it's also a liability. And code has varying maintenance costs that are typically not low. They vary from expensive to very expensive. And so any chance we get to remove some of that, we're removing some of the carrying costs, to use a term that we discussed a few episodes back when we talked about sustainable Rails.
STEPHANIE: Yeah, absolutely. One thing that I remember you sharing about the client project that we're both on in the past is they have a very cumbersome test suite. And in some situations, you have wanted to advocate for deleting some of those tests.
JOËL: Deleting tests is a really, I think, spicy take because you're trying to get better test coverage. And if your test coverage isn't great, you don't want to lose any of that. So there's definitely a loss aversion there, and we might need it later. At the same time, tests have a cost, cost to run, cost to maintain. And if they're not providing a lot of value, then the cost of keeping them around might be higher than any kind of benefit they're giving you.
And I think a classic case of this is tests that have either been marked pending in the codebase with an xit or something like that or that have been marked in your CI server as muted; just ignore failures from this test. Because now you're still having to maintain, still having to execute these tests. They're costing you time, but they're giving you zero benefit. And they're just taking up space in your codebase, making it harder to read. So if you can't get these tests back to the point where they're actually executing, and you're caring about the output, then you probably don't need those tests, and they can be removed.
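As a concrete (and hypothetical) RSpec illustration of the kind of test Joël means: an example parked with xit keeps showing up as pending on every run while verifying nothing, so it carries cost with no benefit. The model and spec below are made up for illustration.

    # spec/models/order_spec.rb -- hypothetical example, not from a real codebase
    require "rails_helper"

    RSpec.describe Order do
      # Marked pending months ago; it still clutters every test run and CI
      # report as "pending" while asserting nothing about the system.
      xit "applies the legacy discount" do
        expect(Order.new(total: 100).discounted_total).to eq(90)
      end
    end

If nobody is realistically coming back to fix it, deleting the example removes that carrying cost without losing any real coverage.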
STEPHANIE: Yeah, that's fair. I'm thinking about the perspective of someone who does not want to delete those tests. I think in the past, I've seen it and even felt it myself as someone who probably wrote the tests, kind of hoping for some ideal world where I will finally have time to go back to that test. And I already put a lot of effort into trying to make it work, and I want to make it work. I want to have the value of that test.
And it's kind of like a sunk cost fallacy a little bit where it's like, I already spent however much time on it, so it must have some kind of value. Because just hearing that someone else wants to delete the test can kind of hurt a little bit. [laughs] And it's tough. I do think that it's easier for someone with an outside perspective to be like, "Hey, this test is costing more than the value that it's providing." But yeah, I can see why people might have a little bit of pushback.
JOËL: Sometimes, the value of a test is also in the journey rather than the destination.
STEPHANIE: Yeah, that's a good point.
JOËL: So if you're practicing TDD, maybe you use some tests to help you drive out some functionality, help you come up with a design that you want to do. But maybe once you've actually created the design, the test that helped you get there is not actually that useful. I've heard some people will do this by writing a lot of system-test-like tests that are very integration-heavy, that have a lot of edge cases that you might not care to test at that level, at that granularity. And so they use those to help drive a little bit of the implementation and then remove them because they're not providing that much value relative to their cost anymore.
STEPHANIE: I think that's a really good point. The tests that you write for implementation can have value to you as a developer, but that's different from those tests having value to the business when you commit them to a codebase and incorporate them as part of CI and a CI that everyone else has to run as well. So yeah, I think in that case, the context definitely matters. And hopefully, you can feel good about the value that it provided but then also have that eye towards, okay, what about the business, and what values does the business have?
JOËL: Yeah, and accept that the test did the job that it was supposed to do. It got you to where you needed to be, and it completed its purpose. And now it's ready to move on.
STEPHANIE: Another thing that I recently read about deleting code...and this was from Chelsea Troy. She advocates for regularly evaluating features in an app and deciding whether they're providing enough value to justify keeping around and maintaining for developers as well. And I thought that was really interesting because I don't know if that's something that I'd really considered before that sometimes an app might outgrow some features, or they might not be worth keeping around because of the problems or the maintenance costs that they carry into the future.
JOËL: That's fascinating because I think you're taking the same analysis we were talking about tests and then kind of like bringing it up now to the product level. Because now, we're not just talking about deleting code; we're talking about deleting functionality that a product might have.
STEPHANIE: I think the challenge there is that the effects of the carrying cost of a feature are not necessarily felt by the business stakeholders, or product folks, or people operating at a higher level, but they are felt by developers. If there's a bug that's come up from this old feature, and oh, I have never seen this feature before, and now I have to spend a day learning about what this thing is before I can fix the bug. It did feel like a radical idea that maybe developers can play a part in advocating for some features to be retired, that is, you know, maybe separate from how product thinks about those things.
JOËL: I think in order to be able to make those decisions or really just to be part of those conversations, the dev side needs to be really integrated with the product team and with larger business objectives. And so then you can say, look, if we take a week of one developer's time to provide the support this feature needs and we have one customer paying $20 a month for it, that's not a good business prospect.
Now, is this strategically an area that we're trying to grow? And so yeah, we're doing it for one customer, but we're hoping to get 100 by the end of the year, and then it will be worth it. Then yes, maybe we keep that feature around. If this is the thing, like, we experimented for a few weeks five years ago, and then it's just kind of hung around as a legacy thing that this one person knows about and uses, then maybe it's worth saying, look, this has a high business cost. It might be worth sunsetting that feature. But it's a conversation that everybody needs to be involved in.
STEPHANIE: Yeah, yeah. I like the idea of it being something more proactive versus, I don't know, something that I think I've seen at other orgs and just in general as a person who uses digital products, like, a feature or a product, just kind of dying. And probably the organization just wasn't able to find a team to continue to support it, and it just kind of kept being this burden. And then, eventually, it just was something that they had to let go. But then, at that point, you had already spent all of that time, and effort, and energy into figuring out what to do with this thing.
Whereas the approach that Chelsea is advocating for is more realistic, I think, about the fate of [laughs] software products and features. And as a developer, I would get that feeling of deleting [laughs] code that is so satisfying. And I'm just not burdened by having to deal with something that is not providing value, like cumbersome tests. [laughs]
JOËL: I think it's always the fundamental thing that you have to go back to when you're talking about deleting code, or features, or anything is that sort of cost-benefit analysis. Does this thing provide us any value? And if so, does that value outweigh the cost of the work we need to do to maintain it? And in the case of dead code, well, it's probably providing zero value, but it's imposing a cost, and so we want to remove it.
In the case of a test that is not muted or pending, then maybe it does provide some value. But if it's really brittle and constantly breaking, and it's costing us many hours of fixing time, then maybe it's not. If we can't find a way to fix it and make it more valuable because sometimes it's the other option, then it might be worth considering deleting it. Have you ever, on a codebase, taken some time to actually seek out code that could be deleted as opposed to just sort of stumbling onto it yourself?
STEPHANIE: That's a good question. I think I have not just explored a codebase just looking for stuff to delete, but I have...maybe if you had something under a feature flag and you no longer needed the flag because it was released to everyone, you know, going back to delete it because you specifically made a ticket to make sure that you went back and cleaned that up. I do really appreciate the tracking of that work in that way and just making sure you're like, hey, I want to avoid a situation where this becomes dead code. And even just making a card for it is putting that intention out there. And hopefully, someone, if not yourself, we'll take that on because it's important.
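A hypothetical sketch of what that cleanup ticket might look like in Ruby, assuming a made-up FeatureFlags.enabled? helper and dashboard classes; the point is that once the rollout is complete, the conditional and its dead branch can both be deleted.

    # Before: the flag is on for everyone, but the branching code remains.
    def dashboard_for(user)
      if FeatureFlags.enabled?(:new_dashboard, user) # hypothetical flag helper
        NewDashboard.new(user)
      else
        LegacyDashboard.new(user) # dead branch once rollout is complete
      end
    end

    # After the cleanup ticket: the check, the dead branch, and eventually
    # LegacyDashboard itself are removed in a small, easy-to-review PR.
    def dashboard_for(user)
      NewDashboard.new(user)
    end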
JOËL: Yeah, kind of proactively trying to make sure that the work that you've done doesn't become dead code, that it gets pruned at the appropriate moment.
STEPHANIE: What about you? I'm curious from your perspective as an individual contributor when you are just moving through a codebase, and you see something suspiciously [laughs] looking like dead code what you do with it.
JOËL: I often like to split out a small PR just to remove that if it's not too much work and it's semi-related to what I'm doing. I'd like to give a shout-out to two tools that can help detect or confirm that something is dead code. One is Unused, written by former thoughtboter Josh Clayton.
It uses, I think, Ctags under the hood to track all the tokens in an app and then tries to determine are there tokens that are orphaned, that are isolated, and are not used? And it can then build you a report. And that can be good if you're doing a code audit of a codebase or if you're looking to confirm that a piece of code that you're working on might not be, like, is it actually used or not?
Another one is elm-review-unused, which is a plugin for elm-review, which is Elm's linter, kind of like RuboCop. And what's really nice there is that because it reads the AST, and Elm functions don't have side effects, you know that if something is not reachable from the main function, it is completely safe to remove. You run the script, and it will delete a bunch of functions for you that are unused, and it's 100% safe.
And it is very thorough. It finds all of the dead code and just removes it. It's practically just a...it's not a button because it's a script that you run but that you can automate to run on commit or whatever on the CI. But yeah, that's an amazing experience to just have it auto clean-up for you all the time.
STEPHANIE: That's really cool. I like that a lot. I think that would be really nice to incorporate into your development workflow, like you said, that it's part of the linting system and just keeping things tidy.
JOËL: Yeah, I think it's a little bit harder to have something that's quite as thorough for a Ruby or Rails app just because it's so dynamic, and we've got all this metaprogramming. But yeah, maybe this would be a thing where you would want to run something like Unused or some other linting tool every now and then to just check; hey, do we have any dead code that can be removed?
STEPHANIE: Yeah, absolutely. And I think this is totally a little bit different because we're just talking about tools, but I'm also thinking of red flags on a team level where I have definitely asked in a Slack channel, "Hey, I've never seen this feature before. What does it do?" and just crickets. [laughs] And even the product folks that I'm working with, they're like, "I don't know. It predates me," that being a bit of a smell, [laughs] if you will, to reevaluate some of those things. And those flags can exist on many different levels.
JOËL: That's always terrifying because you're like 80% sure that this is dead code, but there's like a 20% chance that this powers the core of the app, but nobody's touched it in 10 years.
STEPHANIE: Yeah, it is very scary. [laughs]
JOËL: Hopefully, your test suite is good enough that if you comment out that function and then you run your test suite, that it just all goes red, and you know that that's actually needed for something.
STEPHANIE: Yeah, though I think sometimes you might remove a piece of dead code, and there are some issues afterwards, and you find out, and you just revert it, and it's fine. At the end of the day, there are a lot of safeguards in place, and we've all done it. And so I think normalizing it is also very important in that it's okay if sometimes you make a mistake there.
JOËL: Stephanie is giving you permission to go and delete that code today. Ship it to production, and if something breaks, it's okay.
STEPHANIE: [laughs]
JOËL: You can revert it. Hopefully, your company is set up where reverting commits from production is a cheap and easy thing to do, and life goes on. So I'm curious, Stephanie, have you ever gone into GitHub and checked your stats on a project to see if you're more red than green or what that ratio is for you on a given project?
STEPHANIE: I have. Actually, someone else did on my behalf because I was posting a lot in that Dead Code Society Slack channel. And they then shared a screenshot of my overall contributions to a repo, and it was more red than green. I felt pretty good about myself. [laughs]
JOËL: All right. Net negative but in a positive kind of way.
STEPHANIE: In a positive way. [laughter]
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. [laughs] Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël is joined by a very special guest, Sara Jackson, a fellow Software Developer at thoughtbot.
A few episodes ago, Stephanie and Joël talked about "The Fundamentals" and how many of the fundamentals of web development line up with a Computer Science degree. Joël made a comment during that episode that his pick for the most underrated CS class that he thinks would benefit most devs is a class called "Discrete Math." Sara weighs in!
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by a special guest, Sara Jackson, who is a fellow developer here at thoughtbot.
SARA: Hello.
JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world?
SARA: Actually, I recently picked up crocheting.
JOËL: That's exciting. What is the first project that you've started working on?
SARA: I don't know if you happen to be a fan of animation or cartoons, but I love "Gravity Falls." And there's a character, Mabel, who wears many sweaters. I'm working on a sweater.
JOËL: Inspired by this character.
SARA: Yes. It is a Herculean endeavor for my first crochet project, but we're in it now.
JOËL: That does sound like jumping into it and picking a pretty hard project. Is that the way you typically approach new hobbies or new things, you just kind of jump in and pick up something challenging?
SARA: Yeah. I definitely think that's a good description of how I approach hobbies. How about you?
JOËL: I think I like to ease into things. I'm the kind of person who, if I pick up a video game, I will play the tutorial.
SARA: It's so funny you say that because I'm definitely the type of person who also reads manuals. [chuckles]
JOËL: [laughs] I'm sure you've probably, at this point, read many sections of the Unix manual. Longtime listeners might recognize you from a previous episode we did on the history of operating systems.
SARA: Yes, I am an avid reader of the man pages. In fact, I wish every command-line tool had man pages or at least more detailed man pages. Reading man pages, reading technical documentation, really, I feel like goes right in line with things like needlework, knitting, crocheting. You're following a very technical pattern description of what you should be doing, how many stitches. It's almost algorithmic.
JOËL: Do you feel like the fact that you've read a lot of man pages and now that you're getting into reading crochet patterns, do you feel like that's helped you maybe become a better technical writer when you write documentation?
SARA: Definitely. Yes. [laughs] There's a common meme going around on the internet of how to make a peanut butter and jelly sandwich: open jar, put knife in jar. And you see somebody putting the knife in handle first because it wasn't specific enough. When you're looking at a crochet pattern, it needs to be written very explicitly, and in the same way, technical documentation needs to be like that too. It needs to be accessible for every audience, well, most audiences.
JOËL: That's a big challenge because you want to give enough detail that, like you said, you don't accidentally use the wrong end of the knife to spread your peanut butter. But at the same time, if you give all the little details, you lose the forest for the trees. And people who know how to use a knife are going to struggle to use your documentation.
SARA: That is true. That's why I think it is very valuable to do something that you recommend very often, especially when writing blog posts or calls for papers, which is defining the audience. Who's this for?
JOËL: Yeah, knowing your audience is so important when it comes to any kind of media, even if it's a talk or an article, or I guess, a crochet pattern.
SARA: Precisely.
JOËL: Does the crochet world have sort of the concept of patterns aimed at beginners versus patterns aimed at a more advanced audience?
SARA: I would definitely say that is the case. There are more advanced stitches and techniques that you would generally not see in a more beginner pattern. And in more advanced patterns, at least speaking from a knitting perspective...I'm pretty new to crocheting, but I've been knitting for a while. In knitting patterns, simpler techniques might not be described in such detail in a more advanced pattern.
JOËL: So a couple of weeks ago, Stephanie and I were discussing the fundamentals, how much of the fundamentals of web development line up with a computer science degree. I had made this comment on that episode that my pick for most underrated CS class that I think would benefit most developers is a class called discrete math.
SARA: I remember this class. It was a love-hate relationship. I am a big fan.
JOËL: Would you describe yourself as a math person?
SARA: I don't think so. No.
JOËL: Because I know I hated math for the longest time. And I don't really find that math, in general, has been that helpful for software. There's kind of the stereotype that I'll sometimes hear from people when they find out that I write code for a living. They'll say things like, "Oh, you must be so good at math." And it's like, no, calculus was really hard for me, and I struggled and did not like it.
SARA: I feel like that's a big reason why folks go into programming; the computer can do the math for you.
JOËL: Right? It is a computer. It is a math machine.
SARA: I mean, how many folks in computer-related fields got their start on a TI-83, programming in that thing?
JOËL: A lot of people. Someday it might be fun to do an episode on the sort of common origin stories that you hear from people in the software industry, a lot of people programming a calculator, a lot of people I hear coming from Neopets.
SARA: Yeah, Neopets and MySpace, editing the profile pages with CSS, HTML.
JOËL: But that's an episode for another time. I think, in my experience, discrete math was not like all the other math that I did. It felt so practical, like, this is math for programmers, is how I felt about it, even though that's not how it's sold in university. What was your experience?
SARA: My concept was very much like, this is logic. This is very hard. And by hard, I mean a firm way of looking at the world and defining the logic behind things, when you think about proofs and set theory.
JOËL: So we've been throwing around the term discrete math, and many of our listeners might not be familiar with what it is. If you had to describe discrete math to someone who is not familiar with it, what would you say?
SARA: Math that's discrete. [laughter] Sorry, sorry.
JOËL: What does discrete mean?
SARA: When I think of discrete math, I think of logic, definitions, how data relates to each other, that sort of thing, as opposed to ones and zeros.
JOËL: Yeah, discrete math; it felt like it was very much like a grab-bag class. It just involved so many different branches of math, and you kind of get a little bit of an intro of like ten different topics, all of which apply and are helpful when you're writing software. So I got a little intro to a couple of different forms of logic, propositional logic, and predicate logic. I got an intro to Boolean algebra.
I got an intro to set theory, an intro to combinatorics, talked about recursive functions from a mathematical perspective, an intro to graph theory. Probably like a few more. There are like ten different things. You just got a little intro to them, spent a couple of weeks on each topic. But I felt like that was enough to give me a lot of value that I still reference on a daily basis in my work.
SARA: Absolutely. One of the parts of discrete math that really stuck with me is computational models like Turing machines, pushdown automata, finite-state machines. Learning about those and analyzing them really helped me break down algorithms and break down my code and look at, okay, for this specific input that I have for each of these variables, what are we doing?
JOËL: So what does that look like in your daily work? You've got a complex card, and you see that it's a difficult feature to implement. And in your mind, you say, okay, let me try to describe this as a finite-state machine, and maybe you draw a diagram or something like that.
SARA: Yeah, I will, actually. I'll draw a diagram, or I'll write out some pseudocode on paper. I'll think about all the different kinds of inputs that I would expect or not expect, which itself is not finite, but we try. And then what is the output that I would expect? What is the outcome that I would expect from, say, a user enters one, a user enters Sara, a user enters purple? What would the outcome be? Do I have those vectors captured in my code? And that also goes into TDD.
JOËL: Do you feel like knowing about Turing machines or finite-state machines has made it easier for you to TDD? That's a connection I haven't heard before.
SARA: Yeah, I think so because a Turing complete computational model is deterministic. That means that for every possible state you could get into from where you're at, a path exists between the two. Sometimes it might mean rejection or an error, but the path has been defined. And thinking about that when it comes to tests, I feel like it has been so helpful for me: like, I can't just think about the happy path. I can't just think about it's exactly what it needs to be. It's also what if it's not there? What if it doesn't exist? What if it's 0? What if it's empty? What if it's a different data structure?
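A rough Ruby sketch of the kind of model Sara is describing, with a made-up set of states and events, where every transition is defined up front and unexpected input lands on an explicit error path rather than a silent failure:

# A tiny finite-state machine for a hypothetical order flow.
class OrderStateMachine
  TRANSITIONS = {
    pending:   { pay: :paid, cancel: :cancelled },
    paid:      { ship: :shipped, cancel: :cancelled },
    shipped:   {},
    cancelled: {}
  }.freeze

  attr_reader :state

  def initialize
    @state = :pending
  end

  def fire(event)
    next_state = TRANSITIONS.fetch(state, {})[event]
    # Unexpected input is still a defined path: it raises rather than
    # leaving the machine in a strange state.
    raise ArgumentError, "no transition for #{event} from #{state}" unless next_state

    @state = next_state
  end
end

machine = OrderStateMachine.new
machine.fire(:pay)   # state is now :paid
machine.fire(:ship)  # state is now :shipped

Enumerating the states and events like this also makes the test cases fall out naturally: one example per transition, plus one for the rejected input.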
JOËL: That's really fascinating to me because I feel like I encountered some of these practical applications of it much later when I was learning about types and learning about Elm and sort of that community's approach to designing data structures. And one thing that they say a lot is that you should make impossible states impossible when you design a type, and the way that they tend to approach that is thinking of types as if they were sets.
And so you think of a set of...the Boolean type is a set that has two elements because there are true and false. An enum might have, you know, if it's a three-element enum, that is, three elements. But then you start having things like records which are kind of like a hash in Ruby, which might have, let's say, two elements in them. And if it has a Boolean and an enum value, now those two multiply times each other. And so now you have two times three, six possible states. And maybe the problem you're trying to model only has five, and so you've sort of inadvertently added an extra state.
They tend to talk about it a little bit more through the lens of sets and the lens of combinatorics, which are other elements of discrete math that give you mental models to deal with this. And so talking about all the different possibilities, that's combinatorics. Thinking of a type as a set and talking about its cardinality, that's set theory. So those were things that I would do when I was writing Elm programs on a daily basis, but I never made the connection back to finite-state machines.
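A small Ruby illustration of the counting Joël is doing here; the field names and values are made up:

# The possible states of a record are the Cartesian product of its fields.
booleans = [true, false]                  # cardinality 2
statuses = [:pending, :active, :archived] # a hypothetical three-value enum

all_states = booleans.product(statuses)
all_states.size # => 6

# If the domain only has five legal combinations, one of these six is an
# "impossible state" that the data model allows but the business does not.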
SARA: I feel like those marry so well together, those concepts. You can see combinatorics and set theory of objects and of where they can go. And that goes right into graph theory.
JOËL: Oooh, I love me some graphs.
SARA: [laughs]
JOËL: Listeners of the show will know that I am a huge fan of dependency graphs and as a tool and as a model that can be applied to a lot of things, so thinking in terms of maybe the dependencies of your program like packages. But it can also be in terms of tasks to be done and so thinking in terms of a larger feature, breaking it down into smaller features, all of which depend on each other. And depending on how that dependency graph is structured, what order do you need to complete them in order to ship them independently?
SARA: I love that. And it reminds me of graphs that represent state, like, finite-state machines sort of things where you can actually infer where you're going to end up based on where you are for certain types of graphs. And I feel like you can use that in programming. You can use that in proofs where you have the, okay, you've solved for the zero case. You've solved for the one case. Now let's solve for N+1 anytime in the future. This all feels very full circle in my mind. [chuckles]
JOËL: I think that's very apt. And a really powerful thing that I've noticed is having different mental models to approach the same problem or different logical or analysis techniques to interact with the same problem. And so when you look at something through the lens of a finite-state machine, or through the lens of a graph, or through the lens of a set, or through the lens of combinatorics, you might be looking at the same problem. But by having different perspectives to look at it, you gain different insight, and hopefully that helps you come to a better solution.
SARA: Absolutely. And I love that discrete math gives us those different tools to be better programmers. It's something that I enjoy. And I enjoyed the classes as much as they were extremely difficult. And I love the idea of being able to share those tools with other people that might not have learned about them.
JOËL: You were talking about seeing things from different perspectives and how they kind of line up. There are some equivalences that I found were really fun between, let's say, sets and Boolean algebra, the operations that you can do. So things like ANDing two values is similar to doing an intersection on two sets, and ORing two values is similar to doing a union. Interestingly, we have preserved that in Ruby. Array has operators where you can combine arrays using set operations, and it has the single pipe, which we typically read as OR, to union two arrays. I want to say it has a single ampersand that you can use. It's used to intersect two arrays.
SARA: I actually used that sometime within the last year, I remember.
JOËL: So, if you've ever wondered why those two particular operators to do set operations instead of a union method, now you know.
SARA: I love set operations. I recently made an update to thoughtbot's internal tool hub, and I used set unions there. [laughs]
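For anyone following along at home, here's roughly what those Array operators look like in Ruby:

a = [1, 2, 3]
b = [3, 4, 5]

a | b # => [1, 2, 3, 4, 5]  union: everything from both, duplicates removed
a & b # => [3]              intersection: only elements present in both
a - b # => [1, 2]           difference, while we're at it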
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: If you had to sell a colleague on the value of discrete math, what would be the example that you would use?
SARA: What if I told you that you would never have to wonder what the results might be in a given situation of true and false?
JOËL: That's deep. Do you want to know all the secrets of the universe?
SARA: Let me introduce to you truth tables.
JOËL: Oh, I love a good truth table. Yes, such a simple tool, but it pays so much.
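A throwaway Ruby sketch of the idea: enumerate every combination of inputs and you never have to guess what a compound condition does. The helper name here is made up:

# Print a truth table for any two-variable Boolean expression.
def truth_table(label)
  puts label
  [true, false].product([true, false]).each do |a, b|
    puts "  a=#{a}, b=#{b} => #{yield(a, b)}"
  end
end

truth_table("a && !b") { |a, b| a && !b }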
SARA: Absolutely, especially in a world where we have unless as an operator.
JOËL: Unless gets me so much in Ruby, especially when there are compound expressions. So you say do something unless condition one or condition two, and then I have to think, wait, when does this happen?
SARA: I have to read it to myself in English, not this and not that. [chuckles]
JOËL: So that's interesting because when you translated that in English, you changed the operator that's being used.
SARA: I totally did.
JOËL: Unless a condition or other condition. And your brain was smart enough to flip that; mine is not.
SARA: [laughs]
JOËL: But what's happening here is, and you would learn this in a discrete math class, De Morgan's Laws that say what happens when you negate compound conditions. And you have to negate each of the individual conditions and also flip all the operators, so all the ANDs turn into ORs and the ORs turn into ANDs. And so I always have to remember to do that in my mind when I see an unless or when I see someone negating a compound condition.
So now, in my mind, every time I'm reviewing code on a pull request and I see negating a compound condition, it's just a sort of red flag that there's quite possibly a bug here. And maybe I'll leave a comment asking the author, "Did you really mean to do this?" And like you said, maybe even write out a truth table just so that I myself know that the correct behavior is happening.
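In Ruby terms (the predicate names here are invented), De Morgan's Laws are what make these two lines equivalent:

# De Morgan's Laws: negating a compound condition negates each part
# and flips the operator:
#   !(a || b)  is the same as  !a && !b
#   !(a && b)  is the same as  !a || !b

send_reminder unless completed? || cancelled?
# ...is equivalent to:
send_reminder if !completed? && !cancelled?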
SARA: It is a good example of a code smell because if it's hard for you to understand or me to understand, sure, it made sense when it was written, but code is read more than it's written. It should be easy to read and understand. So it's definitely easy to introduce a bug at that point like you were saying, worth commenting on.
JOËL: You log on to your machine at the beginning of the day, open up a PR, and you're just like, oh yes, I love the smell of De Morgan in the morning.
SARA: [laughs] Nothing like De Morgan in your cup in the morning.
JOËL: [laughs] Yes. Oh, now I really want to --
SARA: A DeMorgan in the morgen.
[laughter]
JOËL: Now I really want to see a spoof of that Folgers ad.
SARA: [laughs] For some reason, the jingle is escaping me, but it's there.
JOËL: It's an ad for a brand of American coffee.
SARA: Yes, for those that were not in America during the '90s to see the commercial, [singing] the best part of waking up is De Morgan in your cup.
JOËL: [chuckles] That was amazing.
SARA: [laughs] Hopefully, we don't get a copyright strike for that.
[laughter]
JOËL: You know what? That is the sell for why you should learn discrete math.
SARA: Yes. What are some other ways you find discrete math around in your day-to-day life?
JOËL: I think the most practical part is working with Booleans because writing conditional code writing Boolean expressions is something that I do multiple times every day. And I think anybody who's done programming for any length of time gets some amount of intuition around working with Boolean expressions. Having spent a little bit of time studying them, you learn some patterns. You learn ways of working with them.
And a common thing that I will often see in Ruby code is people will overuse the if expression when you could have used a Boolean expression instead. So I've seen things like if condition return true, else return false, which is just identity. I've also seen more complex things which will say, "If value one is true and value two is true, return true; otherwise, return false," or some fancy things with early returns that, in the end, are just reimplementing Boolean AND.
So knowing about a little bit of basic Boolean algebra, being comfortable with combining things using AND and OR rather than just writing early returns, I think, gives a much richer toolkit and something that is much more scalable. And, of course, for those situations where there are complex conditional code, having truth tables as a tool in your back pocket is just absolutely invaluable.
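A couple of before-and-after sketches of the patterns Joël describes, with hypothetical predicates:

# Reimplementing identity with an if expression...
def active?(record)
  if record.status == :active
    true
  else
    false
  end
end

# ...when the comparison is already a Boolean:
def active?(record)
  record.status == :active
end

# Reimplementing Boolean AND with early returns...
def eligible?(user)
  return false unless user.verified?
  return false unless user.adult?
  true
end

# ...when && says the same thing:
def eligible?(user)
  user.verified? && user.adult?
end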
SARA: 100%. When those get so complex, definitely realizing it's worth maybe breaking up a chain of Boolean logic into separate mini-methods if you need to. There's nothing like seeing a whole bunch of stuff ANDed together that are only kind of related. [laughs]
JOËL: There's a form of logic that you dig into as well called predicate logic, and there's a whole set of things you can do with it. But two things that stood out to me were these two operators that apply a condition to a whole set of values. And they either claim that a certain thing is true for at least one of the elements in a set or for every value in a set.
And the interesting thing is that if you claim that something is true for all elements, in order to falsify that claim, you only need to find one counterexample. You don't need to check every item. If I can find one, and maybe it's the first item in this set that is wrong or that contradicts the logical statement that I'm trying to make, then I've immediately disproved your entire statement because you claimed that this was true for every element.
SARA: And it's hard learning these sorts of fundamentals from computer science; it's hard to not apply that to real life and hear somebody using a statement, "Every this, all of that." I immediately come back with, "Well, some of them." [laughs] I'm that guy, yep.
JOËL: The person at the end of a conference talk who puts up their hand and says, "So this is not really a question. It's more of a statement."
SARA: [laughs] I found this one example. Yeah, I'm a stickler for specificity, for sure. Thanks, discrete math.
JOËL: It definitely helped me be much more nuanced in the way that I speak. I tend to not speak in absolutes or superlatives because of that class.
SARA: Yeah, I very frequently use the term a non-zero amount of times to describe, for example, there exists one in a set.
JOËL: There's also another interesting aspect of this, which is when you see a chain of ANDs, so condition, and condition, and condition, and condition, and condition, you're effectively making the assertion that something is true for all elements or that all these conditions are true. Therefore, it only takes one for the whole thing to evaluate to false. And I want to say the fancy name for this is annihilation, where you can have a giant chain of conditions that are ANDed together, and they're all true, but if any single one of them is false, then the whole chain evaluates to false.
SARA: And this is where you can get a little clever with the order in which you put those in your AND where you have the least heavy lifting checks first so that they fail first. Or if you have things that need to check for nil, check them after. Check the basic stuff first. Let it almost short circuit; let it fail fast, as they say.
JOËL: Yeah, these are all performance tricks that I think, even if you don't have a discrete math background, you might have picked up. You know about short-circuiting. You know about trying the cheap checks first. And now you know a little bit of the theoretical background of why.
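A sketch of that ordering in Ruby, with made-up method names: because && short-circuits, the first false stops evaluation, so the cheap guards go first and the expensive work goes last.

# The nil guard runs first, the cheap predicate second, and the slow
# check only runs when everything cheaper has already passed.
if user && user.active? && expensive_fraud_check?(user)
  grant_access(user)
end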
SARA: [singing] Where do we go from here? [laughs]
JOËL: So we have these sort of logical operators that will claim that something is true for all elements of a set or at least one element of a set, and those are kind of theoretical. They're useful if we're trying to set up a logical proposition. But these exist in code, in Ruby, as part of the enumerable module. Enumerable has two methods; they are any and all. And you can use those methods to claim that all items in an array will evaluate to true when the given block runs or that at least one evaluates to true for items in that array.
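In Ruby those are Enumerable#all? and Enumerable#any?, and the counterexample behavior Joël described earlier falls right out of them:

numbers = [2, 4, 6, 7]

numbers.all?(&:even?)      # => false; the single counterexample (7) falsifies "all are even"
numbers.any? { |n| n > 5 } # => true;  at least one element satisfies the claim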
SARA: What's the word where you're taking out some of a set? Slice but not slice. There's intersection [crosstalk 26:46] union, so not a set theory one, no.
JOËL: Like getting the inverse?
SARA: Maybe. I don't know.
JOËL: I feel like there's a term for getting the inverse of a set.
SARA: Not the inverse.
JOËL: Because you can get the inverse of the intersection or something.
SARA: Yeah. I think I'm just going to go along the lines of being able to slice out what you want with select and how you can then chain an enumerable on that.
JOËL: Okay. Okay, I see. So you're making a connection from enumerable to set theory.
SARA: Mm-hmm.
JOËL: Excellent.
SARA: Even if you don't necessarily want every item in your enumerable, your array, your hash, you can use things like select and reject to get a subset for a certain condition, and you can slice out based on a condition. And then you can then apply any or all to that. And so I want all of the even numbers, and now for all of these even numbers, such and such should be true for the set.
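Something like Sara's even-numbers example might look like this in Ruby:

numbers = (1..10).to_a

evens = numbers.select(&:even?) # carve out the subset you care about
evens.all? { |n| n < 20 }       # => true; a claim about every member of the subset
evens.any? { |n| n > 8 }        # => true; a claim that at least one member qualifies

numbers.reject(&:even?)         # => [1, 3, 5, 7, 9], the complement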
JOËL: So now we've made a connection between enumerable and predicate logic. And we've also made a connection to set theory.
SARA: It's coming full circle again. [laughs] Discrete math is everywhere.
JOËL: So if you use the enumerable module in Ruby, which you should be (It's one of the best parts of the language.), you're doing discrete math every day, and you didn't know it.
SARA: You're welcome.
JOËL: So we've seen that a lot of us are interacting with elements of discrete math every day and that learning a little bit about it more formally can help us be a bit more mindful in how we code every day. It can give us the mental models to solve and analyze problems that we encounter daily. For those listeners who might want to dig a little bit more deeply into discrete math, do you have any resources there that you recommend?
SARA: Well, not sponsored, but brilliant.org is a pretty good resource for things like math and computer science, at the very least. I'm sure it has other courses, but those are the ones that I've kind of looked at on some YouTubers' free trial. [chuckles] And I liked their approach to teaching, and I think it has got a low barrier to entry for learning these topics. I would definitely recommend that, so brilliant.org.
JOËL: It's funny you mentioned that they sponsor a lot of technology, science, and math YouTubers. So for those listeners who are interested in checking it out, maybe look up some YouTubers and see if they have a free sign-up code.
SARA: Mayuko is a good YouTuber for that. I believe she gets sponsored by Brilliant occasionally. She's a software engineer out in California.
JOËL: Clearly, we're not sponsored because we don't have a code to give out.
SARA: [laughs] Sponsor us, Brilliant.
JOËL: [laughs] Host at bikeshed.fm
SARA: [laughs]
JOËL: All right. Well, with that, shall we wrap up?
SARA: Yeah, let's do.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie is joined today by a very special guest, Andrea Goulet. Andrea founded Empathy In Tech as part of writing her book Empathy-Driven Software Development. She's also the founder of the community Legacy Code Rocks and the Chief Vision Officer of two companies: Corgibytes and Heartware (which provides financial support to keep Empathy In Tech running).
Stephanie has strong opinions about the concept of "Makers and Menders" that the Corgibytes folks have written/spoken about, especially around those personas and gender stereotypes. Andrea joins Steph to evolve the conversation and add nuance to the discussion about legacy code/maintenance in our community.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn. And today I'm joined by a very special guest, Andrea Goulet. Hi, Andrea.
ANDREA: Hello, thanks for having me.
STEPHANIE: So here on The Bike Shed, we like to start by sharing something new in our world. Could you tell us a bit about yourself and anything new going on for you?
ANDREA: Yeah, so I have a background in strategic communications, and then kind of made a windy journey over to software. And so, for the past 13 years, I've been focused on modernizing legacy systems. And legacy is kind of a loose term; something you write today can be legacy. But essentially, we kind of help modernize any kind of software, any language, any platform, any framework.
And so, over the course of doing that, in the work that I did before I came to software, I had a very technical understanding of empathy and communications and had just done a lot of that. And I just noticed how much that mattered in creating healthy and sustainable codebases. So now I'm kind of taking that experience, and I've got a book contract called "Empathy-Driven Software Development." So I've been working on just diving into a lot of the really deep research. So that's been kind of my focus for the past two years.
And it's been really surprising because there were things that were positioned as truths, and then it's like, wait a second, neuroscience is completely upending everything. So it's been a fun learning journey. And I'm excited to share some of the things that I've learned over the years, especially [laughs] in the past two years with this book. So that is the new thing with me. And it's...I was telling you before it just feels like a constant new thing. Anybody who's written a book...it's the hardest thing I've ever done, so... [laughs]
STEPHANIE: Yeah, that sounds tough but also kind of exciting because you're learning so many new things that then kind of shape how you view the world, it sounds like.
ANDREA: Yeah. Yeah, it really does. And I think I really like diving into the details. And I think what started this was...my business partner, Scott, at the time, really embodied the stereotypical 2010 software developer down to the scruffy beard and dark-rimmed glasses. And what I found incredibly interesting was he had this belief of I'm good with machines, but I'm bad with people. And he just had this really deeply ingrained. On the flip side, I had this belief of, oh, I'm good with people, but I'm bad with machines. I'll never learn how to code.
And I found that really interesting. And personally, I had to go through a journey because we went on...it was the first time either of us had ever been on a podcast. So this was about ten years ago. And at the end of the podcast, Scott was the only one on there. And he said, you know, the person asked about his origin story and about our company Corgibytes. And he was like, "Yeah, you know, Andrea is amazing. She's our non-technical founder."
And by that time, I had been coding next to him for like three years. And I was like, why the heck would you call me non-technical? And I just felt this...what is it that I have to do to prove it to you? Do I have to actually go get a CS degree? I know I'm self-taught, but does that mean that I'm not good enough? What certificates do I need? Do I need to sit down next to you? Do I need to change my lifestyle? Do I need to look like you? So I was really upset [laughing] and just thinking through, how dare you? How dare you label me as non-technical?
And Scott is very quiet and patient, great with people, I think. [laughs] And he listened and said, "I use the words that you use to describe yourself. When we were in a sales meeting right before that phone call, I paid attention to how you introduced yourself, and I pretty much used the same words. So when you call yourself technical, I will too." That shattered my world. It shattered my identity because then it put the responsibility of belonging on me. I couldn't blame other people for my feeling like I didn't belong.
That journey has just been so profound. This is what I see a lot of times with empathy is that we have these kinds of self-identities, but then we're afraid to open up and share. And we make these assumptions of other people, but, at the same time, there's real-world evidence. And so, how do we interpret that? In addition to this, Scott...like, part of the reason I called myself non-technical was because all of the people I saw who were like me or had my background, that's the word that was used to describe someone like me.
And when I would go to a conference, you know, I have a feminine presentation. And this was ten years ago. My very first conference was 300 software developers, and there were probably about 295 men. And I was one of five women in the room. And because I looked so different and because I stood out, the first question that anybody would ask me, and this was about 30% to 40% of introductions, was, "Are you technical or non-technical?" And I had to choose between this binary.
And I was like; I don't know. Am I technical? Like, is it a CEO that can code? I don't know. But then I have this background. And so I would just default to, "No, I guess I'm non-technical," because that's what felt safe because that's what they assumed. And I just didn't know, and I didn't realize that I was then building in this identity.
And so then, as part of trying to create a warm and inclusive organization, we did one of the unconscious bias surveys from Harvard. And what astonished me when I did that myself was that I didn't have a whole lot of bias, like, there was some. But the most profound bias was against women in the workplace, and it stood out a big one. I was like, how is it that I can be someone who's a fierce advocate, but then that's my own bias against people like me? What the heck is going on? So really exploring all of this.
And I think Scott and I have had so many different conversations over the years. We actually ended up getting married. And so we have a personal reason to figure a lot of this stuff out too. And when we start to have those conversations about who am I and what's important to me, then all of a sudden, we can start creating better code. We can start working together better as a team. We can start advocating for our needs. Other people know what we need ahead of time. And we're not operating out of defensiveness; we're operating out of collaboration and creativity.
So the book and kind of everything is inspired by my background and my lived experience but then also seeing Scott and his struggles, too, because he had been told like, "You're a geek. Stay in the computers. Stay in the code. You're not allowed to talk to customers because you're bad at it," and flat out was told that.
So how do we overcome these labels that people have put on us, and then we've made part of our own identity? And which ones are useful, and then which ones are not? Because sometimes labels can create a sense of community and affinity and so how do we know? And it's complicated, but the same thing, software is complicated. We can take skills like empathy and communication. We can look at them schematically and operationalize them when we look at them in kind of detail. So that's what I enjoy doing is looking under the hood and figuring out how does all this stuff work? So... [laughs]
STEPHANIE: I did want to respond to a few things that I heard you say when you were talking about going to a conference and feeling very much in the minority. I went to my first RailsConf in 2022, my first RailsConf in person, and I was shocked at the gender imbalance. And I feel like every time I used the women's restroom, I was looking around and trying to make a connection with someone and have a bit of a kinship and be like, oh yes, you are here with me in this space. And then we would have a conversation and walk out together, and that felt very meaningful because the rest of the space, you know, I wasn't finding my people. And so I feel that very hard.
I think this is also a good time to transition into the idea of makers and menders, especially because we have been talking about labels. So you all talked about this distinction between the different types of work in software development. So we have greenfield work, and that is writing code from scratch, making all the decisions about how to set up an application, exploring a whole new domain that hasn't been codified yet. And that is one type of work.
But there's also mender-type work, which is working in existing applications, legacy code, refactoring, and dealing with the complexity of something that has stood the test of time but may or may not have gotten a lot of investment or care and bringing that codebase back to life if you will. And when I first heard about that distinction, I was like, yes, I'm a mender. This is what I like to do. But the more I thought about it, I started to also feel conflicted because I felt pain doing that work as well.
ANDREA: Oh, interesting, yeah.
STEPHANIE: Especially in the context of teams that I've been on when that work was not valued. And I was doing maintenance work and fixing bugs and either specifically being assigned to do that work or just doing it because I knew it needed to be done and no one else was doing it. And that had caused me a lot of frustration before because I would look around and be on a team with mostly White men and be like, why aren't they picking up any of this work as well?
And so I was thinking about how I both felt very seen by the acknowledgment that this is work, and this is valid work, and it's important work, but also a little bit confused because I'm like, how did I get here? Did I pigeonhole myself into doing this work? Because the more I did it, the better I got at it, the more comfortable and, to whatever degree, enjoyed it. But at the same time, I'm not totally sure I was given the opportunity to do greenfield work earlier in my career. That could have changed where my interests lie.
ANDREA: Yeah, it is. And it's funny that you mentioned this because I actually I'm a maker. But yeah, I created this community, and I'm known for this thing. And I had a very similar experience to how do I exist as someone who's different in this kind of community? And I think part of it is, you know, there's a great quote by George Box, who is a statistician, and he says, "All models are wrong; some are useful."
And I think that's kind of the whole idea with the maker-mender is that it is a signal to be like, hey, if you like fixing stuff...because there is so much shame, like, that's what we were responding to. And Scott had the opposite problem of what you have experienced, where he was only allowed to work on greenfield work. They were like, "No, you're a good developer. So we want you working on features. We won't let you fix the bugs. We won't let you do the work that you like doing." And so that's why he wanted to create Corgibytes because he's like, "This work needs to be done." I am so personally passionate about this.
And when we were having these conversations 13 years ago, I was talking to him about product/market fit and stuff like that. And I was like, "You like fixing software, and there's a lot of software out there to be fixed." I just was very, very confused as to why this kind of existed. And we had been told flat out, "You're never going to find anybody else like Scott. You're never going to be able to build a company around people who find a lot of joy in doing this work."
And I think that this comes down to identity and kind of the way that Legacy Code Rocks was built too. A lot of the signaling that we put out there and the messaging and stuff really came from Scott's feeling of, like, I want to find more people like me. So being in the women's bathroom and like, how do I find more menders? Or how do I find people...because we were walking through a Barnes & Noble, and it was like a maker fest, maker everything. And he's like, "I don't have a community. There's nowhere for me to go to create these meaningful connections," exactly like you were saying. "I have maybe two people in my network."
And then we were at a conference in 2015. We were at the large agile conference. And it was one of the first ones that I've been to that had a software craft track. And we met like 20 people who were really, like, I just saw Scott light up in a way that I hadn't seen him light up because he could geek out on this level that I hadn't seen him do before. And so when I asked, like, "How do you guys stay in touch afterwards?" And they're like, "Oh no, we don't. We don't know how to build a community." And it's like, well, okay, well, we can get that started.
To your response of like, how do you operate when it is presented as a binary? And it's like, am I this, or am I this? This kind of gets down to the idea of identity-wise, is it a binary, or is it a spectrum? I tend to think of it kind of like an introvert-extrovert spectrum where it's like there is no wrong or right, and you can move in different places.
And I think being able to explain the nuances of the modeling around how we came up with this messaging can get lost a lot of times. But I'm with you, like, how...and that's kind of something now where it's like, okay, maybe my role was to just start this conversation, but then everybody's having these ideas. But there are people who genuinely feel seen, you know.
STEPHANIE: Yeah, that's really interesting because what I'm hearing is that when there's this dominant narrative of what a developer should be, and should be good at, and what they should do, it's kind of like what you were saying earlier about how hard it was for you to claim that identity yourself. People who feel differently aren't seen, and that's, I think, the problem.
And I'm very, very interested in the gender aspect of it because one thing that I've noticed is that a lot of my female developer friends do do more of that mending work. So when you talk about feeling like there was no community out there, it just wasn't represented at the time, you know, a decade ago for sure. And still, even now, I think we're just starting to elevate those voices and that work.
I wanted to share that at thoughtbot; we have different teams for different business verticals. And so we do have a rapid validation prototyping team. We do have a greenfield like MVP, V1 product team. And then we also have a team, Boost, the team that I'm on. That is more team augmentation, working with legacy code and existing systems. And it was not lost on me that Boost has the most women. [laughs]
ANDREA: Yeah, because you have the concept of cognitive load and mental load.
STEPHANIE: Yes.
ANDREA: Women at home end up taking a lot more of this invisible labor that's behind the scenes. Like, you're picking the kids up from school, or you're doing the laundry, or all these things that are just behind the scenes. And this was actually something...so when Scott and I also got married, that's when I first became aware of this, and it was very similar. And it was, okay, how do I...because Scott and I, both in our business and in our personal partnership, we wanted it to be based on equity. And then also, like, how do I show up?
And for me, the hardest thing with that was letting go of control where it's like, it has to be a certain way. It's hard for me to comment on the broader enterprise level because what I see at Corgibytes is we have gender parity. That's been pretty balanced over the course of our..., and we're a small boutique company, so it's different. But then, in the larger community of Legacy Code Rocks, it tends to be more male. There are actually fewer women in there.
And I think, too, like there's this idea of testers and QA, like, I think that falls in there as well, and that's heavily dominant. And I think sometimes it's like, oh...and I think this kind of comes to the problem of it, like, it's the way that we think about the work in general. And this might be useful just to think about kind of the way that it came about was, you know, makers and menders was we were putting together [laughs] actually this talk for this conference that we went to.
And my background in marketing, I was trying to wrap my brain around when is it appropriate for mending? And I had my marketing degree. It's like, oh, the product lifecycle. And Scott's retort was, "It needs to be a circle. We're agile, so it needs to be a circle." And I was like, this doesn't make any sense. Because look, if you have maturity and then you have it...oh my gosh, it'll link back to innovation, and then you can do new stuff.
And so yeah, I think when we describe makers and menders, and this is true with any label, the idea in the broader model is that makers and menders aren't necessarily distinct, and your team should 1,000%...everyone should be contributing. And if you only have one person who's doing this work, you're at a detriment. That's not healthy for your codebase like; this should be baked in. And the mender is more of like, this is where I get my joy. It's more of an opt-in. But I think that your observation about the invisible labor and how that gets translated to maintenance work is accurate.
A lot of times, like when Scott was describing his thing, it's like, there's the movie "Office Space." I might be dating myself. But there's this guy, Milton, and it's like, "Just go to the basement." He was told maintenance is where good software careers go to die. [laughs] And so over the years, it's like, how do we celebrate this and make it more part of the maker work?
And it's similar to how introverts and extroverts...it's like, we all work together, and you need all of it. But there is an extrovert bias. And extroverts are seen more as, oh, they have leadership traits and stuff. But increasingly, we're starting to see, no, actually, that's not the only way that you can be effective. So I think it's hard. And I think it does come down to belonging. And I think that there are also different cultural impacts there. And it comes down to just a lot of different lived experiences.
And I so appreciate you sharing your point of view. And I'm curious, what would help you feel more like you belong? Is it the work and the environment that you're in that's kind of contributing to this feeling? Or is it other things in general or?
STEPHANIE: Okay, so I did want to address real quick what you were saying about mental load and household labor because I think I really only started thinking about this after I read a book called "Equal Partners" by Kate Mangino, where she talks about how to improve gender equality at home, and I loved that book so much. And I suddenly started to see it everywhere in life and obviously at work too. And that's kind of what really drove my thinking around this conversation, maintenance work being considered less skilled labor or things that get offloaded to someone else. I think that really frustrates me because I just don't believe that's true.
And to get back to what you were asking about what would make me feel more seen or valued, I think it's systemic. But I also think that organizations can make change within their cultures around incentives especially. When you are only promoted if you do greenfield work and write thousands of lines of code, [laughs] that's what people will want to do. [laughs] And not even just promotions, but who gets a kudos in Slack? Or when do you get positive encouragement?
As a consultant, I've worked on different client teams that had different values, and that was when I really struggled to be in those environments. I have a really strong memory of working on a greenfield project, but there was another male developer who was just cranking out features and doing all of this work and then demoing it to stakeholders. But then there was one feature that he had implemented but had faked the data. So he hadn't finished the backend part of it but just used fake data to demo the user interface to stakeholders. And then he moved on to something else.
And I was like, wait; this isn't done. [laughs] But at that point, stakeholders thought it was done. They thought that it was complete. They gave him positive feedback for finishing it. And then I had to come in and be like, "This isn't done. Someone needs to work on this." And that person ended up being me. And that was really frustrating because I was doing that behind-the-scenes work, the under-the-hood work for something that had already been attributed to someone else. And yeah, I think about that a lot and what systems or what the environment was that led to that particular dynamic.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: Do you have any advice for leaders who want to make sure there's more equity for people who like to do mending and legacy code work?
ANDREA: Yeah, absolutely. I am so grateful for your questions and your perspective because this is not something that's talked about a lot, and it is so important. I wrote an article for First Round Review. This was in 2016 or 2017. And it was called "Forget Technical Debt — Here's How to Build Technical Wealth," and so if you want to link to it in the show notes. It's a really long article and that goes into some of the specifics around it, but it's meant for CEOs. It really is meant for CEOs.
And I do think that you're right; some of it is that we have lionized this culture of making and the work that is more visible. And it's like, oh, okay, great, here's all the visual design stuff. That's fantastic, but then recognizing there's a lot of stuff that's behind the scenes too. So in terms of leaders, I think some of it is you have to think about long-term thinking instead of just the short-term. Don't just chase the new shiny.
Also, you need to be really aware of what your return on investment is. Because the developers that are working on maintaining and making sure that your mission-critical systems don't fail, those are the ones that have the highest value in your organization, because if that system goes down, your company doesn't make money. Greenfield work, yes, it's very...and I'm not downplaying greenfield work for sure. I'm definitely, [laughs] like, I love doing that stuff. I love doing the generating phase.
And at the same time, if we only look towards kind of more the future bias...there's a great book that we were featured in called "The Innovation Delusion" that talks about this more in general. But if we only look at the visible work that's coming, then we forget what's important now. And so for leaders, if you're running a software company, know where your mission-critical systems are and recognize the importance of maintaining them. That's the very first step.
The second step is to recognize the complexities of a situation, like, to think about things in terms of complex systems instead of complicated systems. And I'll describe the difference. So when I came to software, I had been working in the creative field, like in advertising, and branding, and copywriting, and all that. And we got inputs. We kind of ran it through this process, and then we delivered. And we did a demo and all of that stuff.
It was when is the timeline? When is it done? Big air quotes. And we were pretty predictably able to deliver on our delivery day. Sometimes things would go wrong, but we kind of had a sense because we had done the same pattern over and over again. You don't get that in legacy code because the variables are so immense that you cannot predict in the same way. You have to adopt a new strategy for how do you measure effectiveness.
And the idea of measuring software productivity in terms of new features or lines of code, like, that's something that goes all the way back to Dijkstra [laughs] in the 1970s around, is that the right way? Well, a lot of people who code are like, "No, that's not." This is a debate that goes back to the earliest days of computing. But I think that the companies that are able to build resilient systems have a competitive advantage.
If a leader wants to look at their systems, whether that is a social system and the people in their organization or whether or not it's their software if you look at it from a systems thinking, like, there are interactions that I need to pay attention to not just process, that is super key as well. And then the last one is to recognize, like, one of our core values is communication is just as important as code.
I would be remiss to neglect empathy and communication in part of this, but that really is so important. Because when we position things in terms of...and I don't know as much about thoughtbot and kind of the overall strategy, but kind of an anti-pattern I have seen just in general in organizational behavior is that when you structure teams functionally and silo them, you're not getting that diversity of thought.
So the way that we approach it is, like, put a mender on a maker team because they're going to have a different perspective. And then, you can work together to get things out the door faster and value each other's perspectives and recognize strengths and shadows. So, for me, as a maker, I'm like, I've got a huge optimism bias, and we can go through all this stuff. And for Scott, it's like he struggles to know when he's done. Like, for me, I'm like, cool, we're 80% done. I got it. We're good to go. And for Scott, he'll work on something, and then it's like, I have to stop him.
So recognizing that we help each other, that kind of thought diversity and experience diversity goes across so many different vectors, not just makers and menders. But I think, to me, it's about reframing value so that you're not just thinking about what it is right now in this moment. And I think a lot of this comes down to investor strategy too. Because if you've got an investor that you're trying to appease and they're just trying to make short-term monetary gains, it's much harder to think in terms of long term.
And I think it's developers understanding business, business understanding the struggles of developers and how they need lots of focus time, and how estimating is really freaking hard, and why if you demand something, it's going to be probably not right. And then coming up with frameworks together where...how can I describe this in a way? So to me, it really is about empathy and communication at the end of the day when we're talking about interactions and how do we operationalize it.
STEPHANIE: I like what you said about reframing value because I do believe that it starts from the top. When you value sustainability...my co-host, Joël, had an episode about sustainability as a value in software development. But then that changes, like I mentioned before, the incentive structures and who gets rewarded for what type of work. And I also think that it's not only diverse types of people who like doing different types of work, but there is value in doing both.
And I know we talked about it being a spectrum earlier, but I strongly believe that doing the legacy code work and experiencing what it's like to try to change a system that you are like, I have no idea why this decision was made or like, why is the code like this? That will help inform you. If you do do greenfield work, those are really important skills, I think, to bring to that other type of work as well. Because then you're thinking about, okay, how can I make decisions that will help the developers down the line when I'm no longer on this project?
ANDREA: Exactly, which is a form of empathy. [laughs]
STEPHANIE: Yeah, it is a form of empathy, exactly. And the reverse is also true too. I was thinking about, okay, how can working in greenfield code help inform working with legacy code? And I was like, oh, you have so much energy when the world is completely open to you, and you can make whatever decisions to deliver value. And I've really struggled working in legacy code, feeling like I don't have any options and that I have to repeat a pattern that's already been set or that I'm just kind of stuck with what I've been given. But I think that there is some value in injecting more of that agency into working with legacy code as well.
ANDREA: Well, and I think, too, I think you hit it on the head because, like I said, with the mental load at home, it was like, I had to be okay with things failing where it's like, it wasn't exactly the way I would do it, and I had to be okay with that. Like, oh, the dishes aren't put in the dishwasher exactly the same way I would do it. I'm not going behind it. And like, okay, it's not perfect. That's...whoo, it's going to be okay.
And I think that's kind of what we experience, too, is this idea of we have to figure out how we work together in a way that is sustainable. And I think that, similar to my experience with the technical, non-technical piece, there is an onus. Now, granted, I want to be very careful here to not...there is trauma, and there is absolutely horrific discrimination and abuse. And that is not what I'm talking about here in terms of power dynamics.
I am talking more about self-identity and self-expression. And I think that if you are in a community like makers and menders, yeah, we're less represented. There is a little bit of an onus, the technical, non-technical, like the onus of understanding what non-technical means and where I can push back is really important work for me to do. Because what I was surprised with was everyone there, like, when I started asking...so my response ended up being, "Help me understand, why did you ask that question?" And I took ownership of the narrative.
And it was like, oh, well, what I found was that most of the people were like, if you're a recruiter, I don't want to waste your time with a bunch of stuff that you don't want to talk about. And then being able to say, "Oh, okay, I can see that, and you assumed that I was a recruiter because of the way I looked. And I understand the intention here. Next time, if I'm at a software conference, assume that I know how to code and assume that I'm here for a reason."
And a great opening question is, "What brought you here?" I'm like, oh, okay, when we ask a close-ended question, we position things as a binary, like, are you technical or non-technical? That creates a lot of cognitive dissonance, and it's hard. But if I open it up and say, "What brought you here?" Then I can create my own narrative. There is an aspect of setting boundaries and pushing back a little bit like you said, agency. And that can be really hard because it gets at the core of who you are, and then you have to really explore it.
And what I found, at least, is in the majority, there have been exceptions, but in the majority of the male-dominated groups that I've been in in my career in software, the majority are very welcoming and want me to be there. But I feel inadequate, and it's more impostor syndrome than I think it is people being discriminatory. Learning about the differences between that and where is my responsibility and where's your responsibility in this that's a tough tension to play.
STEPHANIE: Absolutely. And I think that's why it's really important that we're having a conversation like this. I think what you're getting at is just the harm of the default assumption that is chronic, [laughs] at least for me sometimes. And you mentioned earlier the history of computing a little bit. And I was really excited about that because I did a little bit of digging and learned about women's history in computing and how after World War II, programming, you know, there were so many women.
In fact, I think by 1960, more than one in four programmers were women, and they were working on mission-critical work, like for NASA and, during World War II, for code-breaking. And I read that at the time, that work was deemed boring and tedious, and that's why men didn't want to do it. They wanted to work on hardware, which was the cool, creative, interesting work. And the computing work was just second class. That's changed, but in some ways, I'm thinking about, okay, where are we now? And to what degree are we kind of continuing this legacy? And how can we evolve or move beyond it?
ANDREA: Yeah, you're absolutely right. And in some of the research for the book, one of the things I learned is a lot of people know the name, John von Neumann. He created the von Neumann architecture, that is the foundation of all the hardware that most of us use today. And the very first kind of general purpose digital computer, ENIAC, all...I think it was eight of the people who were programmers for that were women. That team was led by John von Neumann's wife, Klára, and you never hear about Klára. You have to go digging for that.
And The Smithsonian actually, just about 8 or 10 years ago, did a big anniversary and then realized none of those women were invited to the press conferences. They were not invited. And so there is kind of this...similar to generational wealth, it's the thing that gets passed down. Like, if you're in the rooms in the early days...there was a quote by John Backus, who created FORTRAN and the Backus–Naur form, where he talked about programming in the 1950s.
He has an essay, and he was like, yeah, I mean, an idea was anybody who claims it, and we never cited our sources. And so it was whoever had the biggest ego was the one who got credit. And everyone's like, great; you're a hero. And so I think that's kind of the beginning of it. And so if you weren't invited into the room, because in the 1950s, in addition to gender, there was legislation that prevented...we weren't even allowed to use the same bathrooms. You had White bathrooms and Black bathrooms.
So you had very serious barriers for many different people getting into that room, and I think that gets to the idea of intersectionality as well. So the more barriers that you had, the harder it was going to be. And so then you get the stereotypes, and then you get the media who promotes the stereotypes. And so that is what happened to me. So I grew up in the '80s and '90s, and just every movie I watched, every TV show portrayed somebody who was, quote, "good" with computers in a very specific way. I didn't see myself in it. So I was like, oh, I'm not there.
But then, when I talk to Scott, he's like, "Oh, I never saw that. I never saw the discrimination. I just saw this stuff." That's part of it: if you were in a position where discrimination, or difficulties, or stereotypes have been invisible to you, the onus is on you to learn and to listen. If you are in a situation where you feel like you have been in the minority, the onus is on you to find ways to become more empowered.
And a lot of times, that is setting boundaries. It's advocating for yourself. It's recognizing your self-worth. And those are all things that are really hard. And saying, hey, if we want to be sustainable, everyone needs to contribute. I'm happy to train everyone, but this is not going to work. And being able to frame it, too, in terms of value, like, why? Why is it a benefit for everyone building that empathy?
And you're right, I mean, there are absolutely cultures where...who was it? I think it was Edward Deming. And he said, "A single person is powerless in the face of a bad system." And so if you're in a system that isn't going to work, recognizing that and can you move into a different system? Or can you change it from within? And those are all different questions that you've got to ask based on your own fortitude, your own interests, your own resources, your own situation. There is no easy question. But it's always work. And no matter who you are, it's always work. [laughs]
STEPHANIE: Yeah, yeah. I joined as co-host of this podcast just a few months ago. And I had to do a lot of reflecting on what I wanted to get out of it and what my goals were. And that's why I'm really excited to have you on here and to be using this platform to talk about things that are important to me and things that I think more people should know about or think about. So before we wrap up, Andrea, do you have anything else you want to say?
ANDREA: I want to reinforce that if you feel joy from mending, it's awesome. And there are communities like legacycode.rocks. We have MenderCon, and it's a celebration of software maintenance. So it can be really great. We have a virtual meetup every Wednesday. And there's kind of a core group of people who come, and they're like, it's like therapy because there are a lot of people who are in your situation where it's like, I'm the only person on my team who cares about automated tests, and I have no idea like...and just having people who kind of share in that struggle can be really helpful, so finding your community.
And then I think software maintenance is really, really critical and really important, and I think we see it in the news every day, in terms of these larger systems going down. Just recently, Southwest Airlines had all of these flights get canceled. The maintenance work is so, so valuable. If you feel like a mender and you feel like that fits your identity, just know that there is a lot of worth in the work that you are doing, an immense amount of worth in the work that you are doing, and continue to advocate for that.
If you are a maker, yes, there is absolutely worth in the work you're doing, but learn about menders. Learn how to work together. And if you are a leader of an organization, recognize that all of these different perspectives can work together. And, again, reframe the value.
So I am so grateful that you framed the conversation this way. It's so important. I'm very, very grateful to hear from you and your point of view. And I hope that you continue to push the narrative like this because it's really important.
STEPHANIE: Aww, thanks. And thank you so much for being on the podcast.
ANDREA: Yeah, yeah, absolutely. Thanks for having me.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie raves about more software development-related zines by Julia Evans. Joël has been thinking about the mechanics of rolling dice.
Stephanie also started on a new client project that Joël has already been working on for many months. They talk about onboarding.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: So I got a very exciting package in the mail the other day that I wanted to share with you. So I think I've mentioned her on the pod before, but I got a package of software development-related zines by Julia Evans, and I'm going to share a few of the titles that I got.
So I picked up, "Oh shit, git!" [laughs] Can I swear on this podcast? I don't know. I guess we're going to find out. Or maybe we can just make the executive decision that it's fine. [laughs] I also got "Hell Yes! CSS!", "The Pocket Guide to Debugging," which I think I mentioned previously. I had seen the PDF version before, but now I have this cute, little, I don't know, six-inch book that I can carry around for all of my debugging needs. Who knows? Maybe I'll be out in the world and just need to pull it out [laughs] and debug something while I'm on the train; who's to say? And then I also picked up "HTTP: Learn Your Browser's Language!"
So I'm really excited to have these little illustrated digest-sized resources. I think they'll look really cute on my shelf next to my more intense hardcore technical books like "Design Patterns" and "Practical Object-Oriented Design in Ruby" or whatever. I'm really excited about the more creative endeavors people have done with creating educational resources about software development. In fact, I think last time when we talked about creativity and creative expression, we totally missed the world of side projects. And I've really just enjoyed when people illustrate things and make stuff a lot more accessible to a wider audience than a traditional textbook or more text-based heavy resources.
JOËL: I love when people go for a bit more of the playful or quirky when dealing with technical topics. And this is a great example. I love Julia Evans' work. But I'm also reminded of things like "Why's (poignant) Guide to Ruby," "Learn You a Haskell for Great Good!" or even...I forget the title of it. But there's a book by...I think it's Jamis Buck on mazes. And it's told in this sort of quirky style in a narrative. But it's all about maze-solving algorithms but told through the eyes of characters who are wandering through a maze, and it's just delightful.
STEPHANIE: Aww, that's so cute. I love that. I also just had the thought that these things would make great gifts for a fledgling developer or a developer in your life who, if you don't want to get them something super specialized or technical or whatever. There are so many, like you said, quirky and fun things out there that I'm sure they'll appreciate. So, Joël, what's new in your world?
JOËL: I play D&D regularly with some colleagues at thoughtbot. And recently, I got to thinking about the mechanics of rolling dice. Specifically, what dice can be rolled together? Like, can I roll multiple dice at the same time? And which one do you have to wait for the outcome of a previous roll before it makes sense to roll it?
That was really interesting to me because I think that connects to a lot of other things that we do in software, where sometimes some things are independent. You can do them at the same time. And then, other times, you have to wait for the outcome of the first thing before you can even start doing the second thing. So I think, in many ways, it's a great metaphor for the difference between parallel versus series operations.
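To make that distinction concrete, here is one way it might look in Ruby; the roll method, the die sizes, and the thresholds below are purely illustrative.

```ruby
# Independent rolls: neither result affects the other, so the order doesn't
# matter and they could just as well happen at the same time.
def roll(sides)
  rand(1..sides)
end

strength_check = roll(20)
stealth_check  = roll(20)

# Dependent rolls: the damage roll only makes sense once we know the outcome
# of the attack roll, so these have to happen in series.
attack_roll = roll(20)
damage = attack_roll >= 15 ? roll(8) : 0
```

The same shape shows up in day-to-day code: work that doesn't depend on anything else can be batched or run concurrently, while anything that consumes an earlier result has to wait for it.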
STEPHANIE: I think it's very funny that you found a way to connect D&D to software development. I'm just imagining you rolling your die and then while you're doing that, having some revelation like the math lady meme or whatever, just thinking about, whoa, if this outcome happens, then [laughs] what happens?
I have not joined in on our company's D&D campaign, but I do like that y'all post little updates about the story in a public space for the whole company to check out. So sometimes I've been searching for some message in our company's knowledge base, and I have stumbled upon a post about the campaign so far and what happened in last night's session, you know, how all the adventurers fought the big bird, [laughs] and it is very delightful to me.
JOËL: It's a really fun way, I think to be creative. I think I enjoy the role-playing side of it a little bit more than just the mechanics of rolling dice, even though the thing I was excited to share today is rolling dice is fun. It is kind of like doing improv, where you're trying to figure out what would your character do and how do they respond to what other people say? It's fun, but it's hard.
STEPHANIE: One burning question I have is, does anyone do voices for their characters?
JOËL: Absolutely. Aji Slater, who was on a previous episode of this podcast, is part of this campaign, and their character has some really fun voices.
STEPHANIE: That's awesome. I'm really interested in joining as a guest or something. But yeah, the improv aspect of it kind of freaks me out. I bet it's a really welcoming group. And if other people are getting into it, then I can get into it too.
JOËL: Yeah, this group is very, very low-key. Most people playing, I think, are fairly new to the game. So it's very friendly, very kind of tolerant of, oh, you didn't know this rule existed, that's totally fine. We'll make it work, things like that.
STEPHANIE: Nice. So another recent development in my world is that I started a new client project, actually the same client that you've been working on for many months, Joël.
JOËL: Yes, the same client but different teams within the client. So we don't get to necessarily interact with each other day to day. But it is interesting that now we get to share knowledge about how this application works with each other.
STEPHANIE: Yeah, yeah. And I don't think we've gotten a chance to work together even in the same world like this before. So that's kind of exciting.
JOËL: How has the onboarding been for you?
STEPHANIE: So, one onboarding development that was surprisingly easy and felt good was setting up a new laptop. So the client company shipped a laptop to me to use for all of their work. And I had to set up just the laptop from scratch, so I could develop on it. And I was able to do that pretty painlessly with the help of the dotfiles that I had previously put together and all of the configurations that I had exported and uploaded to like a cloud drive.
And so I was able to have that up and running within a day with all of my favorite keyboard shortcuts, applications, all my little preferences, and that felt really good. So I'm going to pat myself on the back [laughs] for past Stephanie's efforts in making current Stephanie's life easier.
JOËL: I'm curious, do you use thoughtbot's dotfiles as the base for your development environment, or do you use something custom?
STEPHANIE: I have my own personal dotfiles that I have in a GitHub repo. But I think I did, at one point, go through thoughtbot's dotfiles for inspiration. I found that it has just a lot of extra stuff that I don't really need, but I do like that it's out there. So if any folks want a place to start with having a laptop setup configuration, you should definitely check that out. And we can link that in the show notes.
JOËL: I really like the tool rcm, which is also by thoughtbot that allows you to have a modular system of dotfiles that you can pull from a few different sources and combine together.
STEPHANIE: Oh, that's neat. I hadn't known about that one. That's cool.
JOËL: It's a suite of command-line tools that lets you pull dotfiles from a git repo, and it might be several repos, and then link them all into the right place on your machine. So, in my case, I have the thoughtbot dotfiles and then also some personal ones. And it just kind of merges them together based on some rules and creates all the dotfiles in my home directory for that.
STEPHANIE: Nice. I think the one thing that I do need to keep up on is pushing updates to the dotfiles when I make changes locally because I did have to pull in a few things that I had adjusted or made tweaks to that didn't make it to the source that I was pulling from on this new machine. This is actually my fifth MacBook that I own [laughs] just from remnants of jobs and clients' past. And one day...I keep telling myself that I'll have to return one of the older ones that I'm not using anymore, but as of now, I am an owner of five computers. [laughs]
JOËL: Just start mining Bitcoin on the idle ones.
STEPHANIE: Oh. [laughs] That's genius. I guess that's definitely a better use than them just sitting in my drawers.
JOËL: I guess you're paying for power, and that's kind of the whole point, so...
STEPHANIE: That's fair.
JOËL: What are some things that you like to do when you onboard onto a new project?
STEPHANIE: So, aside from my laptop adventures, when I joined this new project, I had a few things in mind that I wanted to achieve during this onboarding process. One of the things I think I want to get better at is understanding the business when I'm onboarding onto a new client. I think this is an area that previously I hadn't really focused on, but I'm now understanding is actually really important to being set up for success on a team.
And so, as consultants, we're dropped into a client project oftentimes when things are already moving. And they kind of clearly have some things that they were hoping we could help with. But I am hoping to also use this time to just take a bit of a step back and ask questions about, like, what is the product? And what are its core features? And who are its users? And also, what's the direction of the business? Can I get some more context on how things are right now?
We're so frequently brought in and being like, okay, like, you're going to work on this project but without the context of is the business scaling right now, or what are its struggles? We aren't quite able to make as informed decisions as we could if we had been at the company for longer and had just seen things change and had more of a feel of why we're doing what we're doing.
JOËL: I love that you're asking all those questions upfront. I feel like coming in onto a new project, and that can be as a consultant, or it could be just starting a new job, is the perfect time to just be asking all of those questions. And people, I think, appreciate when we ask those questions. Sometimes I think as consultants; we can sometimes be afraid that, oh, if we're asking these sorts of basic questions, people might think less of us.
But I think the opposite happens where because we're asking those foundational questions about the business model, about the future of the product, about how the technical architecture works, people really appreciate that we're asking those foundational questions where other people might not. So it actually helps build credibility rather than hurting credibility.
STEPHANIE: Yeah, and I think they are really important in making the right technical decision, too, because it can help inform where you spend your time refactoring or evaluating whether this shortcut is worth it to meet this deadline or if it's not because of the bigger picture and where things are headed. If anything, I've learned that being a developer really isn't just about being in the code but having as much information as possible so that there is less ambiguity and you have more clarity to make the right choices when you do have to write the code.
Another key aspect that I have become a lot more observational about, I think, is understanding the team that I'm joining, especially what their process is and how they communicate. One thing that's kind of funny about seeing a lot of different companies and how they work as consultants is they might claim to use agile, but in reality, it is a little bit different than that. And you can have that perspective as an outsider. Things like pointing and estimation are kind of all over the place in the industry. So I really like to make sure I fully understand how the team does that and what points mean to them.
I think another thing that I want to do during my onboarding time this week and as I'm getting to know developers on the client side is learning about the pain points that they're feeling. And, yeah, just getting more of a feel about what's top of mind for them and where is a good space to invest my time and my energy.
Lastly, some more basic stuff is communication. Another thing about being a contractor that's challenging is that we don't normally get the full onboarding experience that full-time hires do. And so we may or may not have an onboarding mentor or a buddy and finding out, okay, who is the right person that I should be asking questions to? Or where's the right space for that? When you join new teams, are there any other things that you like to take into consideration?
JOËL: I like that you talked about understanding the team's process. One thing that I often like to do pretty early on is make some kind of small code change but then have it go through the full process of coding on my machine to deploy it in production. And so just find some small change in the code that needs to be done, and maybe it's an easy bug fix or something. But just so I can walk through all the steps and find out what the team's process is.
What are some sort of weird things that this team does that other people might not that I need to know about? Where does review happen? Is there a staging environment, or unexpected ways in which my change might get rejected? Things like that. So walking through the entire, I guess you could say, software development lifecycle, kind of speedrunning it, is, I think, a really valuable exercise to do really early on a new project.
STEPHANIE: Yeah, that's a great point. Like I mentioned, I think that looks so different for every team. And I'm now learning about new tools and SaaS products that I have never seen before. And even though I have an understanding of the software development lifecycle in general, just learning those quirks is very valuable so that you can be a contributor as soon as possible.
JOËL: I like to contribute on day one, if possible, so kind of in order of...I don't want to say order of priority. But the order of things that I often do on a new project is one, clone the repo, try to run the setup script, or manually step through instructions in the README. Depending on the repo, that might be 10 minutes. That might be all of my first day.
Number two, try to run the test suite.
STEPHANIE: Yes.
JOËL: Number three is figure out what went wrong for me in step one or two, make a fix for it, commit it, and open up a PR for it, and that's my contribution. If I can do those three things on day one, I feel like that is a solid first day.
STEPHANIE: That's great. I love that. What can you do to help improve this process and make it just a little bit better for someone else? I think another good first-day task might be automating a part of that process that is currently manual and kind of annoying.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: So once you've cloned the repo and you're poking around the codebase, what are some things that you notice when you're looking at the code?
JOËL: Ooh, that's always fun. In a Rails application, there are a few files I almost always open first in a new project just to get a feel for it. Number one is the routes file. What does that look like? Is it huge? Is it small? Are there a lot of non-standard routes in there, not just standard RESTful resources? That's going to tell me a lot about how things are structured. I can probably even get a sense of what controllers are large, what controllers have 20 non-RESTful actions in them just by looking at the routing file.
The other place I like to look at is the user model. Generally, that just collects so many methods. And so I can also often get a feel about the app just by looking at that. And then from there, it's pulling on connections and trying to say, okay, well, what seems to be the core model of this app that everything coalesces around? And maybe for an e-commerce app, it's some kind of product, or maybe for an insurance product, it might be some kind of policy object. And so you find that, and then you find all of the core business logic around there. And that can often give you a really good picture of what the app is like.
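As a sketch of the kind of signal that first skim can give, here is a hypothetical routes file; every resource and action name in it is invented for illustration.

```ruby
# config/routes.rb -- hypothetical example
Rails.application.routes.draw do
  # Plain RESTful resources: predictable controllers that are easy to navigate.
  resources :products
  resources :orders, only: [:index, :show, :create]

  # A cluster of non-RESTful actions hints that PoliciesController has grown
  # a lot of custom behavior and is probably worth reading early on.
  resources :policies do
    member do
      post :renew
      post :cancel
    end
    collection do
      get :expiring_soon
    end
  end
end
```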
STEPHANIE: Yeah, a few other things I would add to that list of things to check out is the Gemfile. I like to look at that to see what gems are familiar to me. Do they have authentication, common authentication gems that I've used before? Or is there a lot of stuff that's new to me? And it also kind of tells you, are they more likely to reach for a library or try to build something themselves?
I liked that you mentioned that you try to run the test suite early on. I think test coverage is a good place to investigate as well if they have any metrics, you know, that also tells you that it is or isn't something they value. And then seeing like, okay, what parts are well-tested and what parts are a little less tested?
I'm really glad that you pointed out how much information you can glean about controllers because then, once you're poking around in there, that can tell you a lot about where are the scary parts of the app? I've found that to be really interesting. You know, sometimes you can just open up a file and be like, whoa, [laughs] and have kind of a gut reaction. Other times, you might pick it up from other developers, and you might start hearing about areas of the app that they are a little nervous to touch.
JOËL: I definitely connect with that. I feel like many products have a particular file that is kind of scary and that people don't want to touch. And sometimes, people will tell you upfront, sometimes, you just discover it yourself. And I've been on projects where it's like, oh no, we have a ticket that's come up. It's fairly straightforward, except we know whoever picks it up is going to have to touch the scary file, and I'm not it.
STEPHANIE: Yeah, absolutely.
JOËL: I'm curious if you run any kind of automated tooling to try to understand a little bit more about the code. So I'm thinking things like maybe Flog or Flay or some of those tools to get a feel for maybe what are the hotspots in the application, anything like that that you like to look for?
STEPHANIE: That's a great point. I think the only times I have invested energy into doing that has been more when I'm doing a code audit for a client, which, in some cases, is a separate service that clients can pay consultants for. But I can see the value of doing it when you're joining a team for the first time.
JOËL: In a sense, I almost feel like we do a kind of abbreviated code audit for ourselves as part of onboarding.
STEPHANIE: That's fair. I wonder if you can use those tools and scope it in a way to the particular team or areas in the codebase that you know that you'll be working on.
JOËL: You mentioned the Gemfile earlier. And one thing that maybe seems super obvious is checking version numbers for things like Rails and Ruby because that will significantly impact how development is going to work. Is this a Rails 3 app, or is this a Rails 7 application?
STEPHANIE: Yeah, yeah, that's a great point. I am glad you mentioned that because I think that's probably the very first thing [laughs] that I would do just to set my expectations around what I'm working with.
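As a quick, made-up illustration of what that first look at a Gemfile might surface (every name and version below is invented):

```ruby
# Gemfile -- hypothetical example
source "https://rubygems.org"

ruby "2.5.8"          # an old Ruby pin is an early hint that upgrade work is coming

gem "rails", "~> 4.2" # the Rails constraint sets expectations for the whole codebase
gem "devise"          # a familiar authentication library rather than a hand-rolled one
gem "sidekiq"         # background jobs delegated to a gem instead of built in-house
```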
JOËL: I feel like it's one of those things that's often just told to you when somebody helps you onboard. It's like, "Okay, you can clone the repo. It's over here. By the way, this is a Rails 3 app. We're kind of behind the times. Here are some weird things we've had to do to keep it alive. We have this other team. They're in this back room over there, slowly working on a Rails 4 upgrade. It's been in progress for four months, but we think we're pretty close. Can't wait for Rails 4."
STEPHANIE: Oh God. [laughs] I think the alternative is a developer being like, "Oh yeah, we just upgraded to Rails 7," and they're all really excited and feeling really good about it, [laughs] as they should be, because I think that Rails upgrades are an important thing to stay on top of. And it is really great when you are working on a project that gets to be up to date there.
JOËL: Yeah, Rails upgrades are interesting because I feel like when you're proactive about them, they're not that bad, especially more modern versions. I think Rails has gotten a lot better about making those upgrades smoother today than they were ten years ago. But when you're not up to date about them, when you've just kind of procrastinated on doing the updates, every month or year that you wait to do the update makes it so much harder to do that update when the time comes.
Because now more gems have fallen out of date, more things have now been abandoned that you just can't use. A lot of community knowledge is just not around as much anymore. Because Rails 3...I forget when Rails 4 came out, probably about ten years ago. So people who remember how things were done idiomatically ten years ago, some of that knowledge has kind of passed on. It's not as prevalent as knowledge around Rails 6 or Rails 7 is.
STEPHANIE: 100%. I think I heard someone at thoughtbot identify themselves as a post-Rails 5 generation developer. And I loved that because it really tells you a lot about just their experience. And it's kind of fun. I can imagine some kind of BuzzFeed quiz or something that's like, what Rails generation are you? But yeah, I've certainly seen pro-con lists about joining different projects, and a con might be the app is still on Rails 3. And then, if the app is on a very new version of Rails, that's usually in the pro column because folks are excited about getting to have all that good, new stuff.
What do you look out for in terms of design patterns in a codebase? Is that something that kind of sets off your radar at all?
JOËL: One thing that will definitely make me raise an eyebrow is heavy use of metaprogramming. I've been bitten by that a lot on projects. Some things are way too clever by half. So a lot of metaprogramming typically means it's going to be difficult to read and follow the flow of logic in the code. And also, there might be some unexpected bugs. Or I found once a memory leak that happened because of some weird metaprogramming. So that definitely makes me a little bit skeptical of part of the code.
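A small, contrived example of the kind of metaprogramming that raises that eyebrow; none of this comes from a real project.

```ruby
# Methods like #pending_invoices exist only at runtime, so grep and
# "jump to definition" both come up empty when you try to trace them.
class ReportFinder
  def method_missing(name, *args)
    if (match = name.to_s.match(/\A(\w+)_(invoices|reports)\z/))
      find(match[2], status: match[1])
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.match?(/\A\w+_(invoices|reports)\z/) || super
  end

  private

  def find(kind, status:)
    "looking up #{status} #{kind}" # stand-in for a real query
  end
end

ReportFinder.new.pending_invoices # => "looking up pending invoices"
```

It works, but every one of these generated methods is something you can only discover by running the code.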
STEPHANIE: Yeah, that's fair. And it also just makes it hard to understand the domain when you have no idea where things go. And you have to just find out later when you are debugging and are in the middle of desperately trying to figure out how this app works. So I can see how that is a little suspicious. I think one thing that I am reevaluating for myself when I notice design patterns is trying to figure out, do I want to perpetuate them? Do I want to follow them? And in the past, I have been more likely to just follow an existing pattern in the codebase.
But one thing that I'm hoping to do moving forward is to simply ask, how do decisions get made around patterns? Who gets to introduce them? Are they documented? What does that process look like? Do you have a conversation with the team about it? Just so that I have more tools in my toolbox, I think if I ever do find something that I feel really strongly about, that should be different than what I'm seeing in the codebase. So kind of expanding my skill set there.
JOËL: I think that's a fantastic question to ask, and I've done this on previous projects. And sometimes, the answers are just absolutely illuminating. So you see a weird pattern, and you ask, like, "Oh, where does that come from? Why do we do that?" And some will say," Oh yeah, that was Bob back in, you know, 2017. He read an article and was really a fan of this thing, and he put it everywhere. Nobody else really understood the pattern, but we haven't really been able to change it. And he's no longer with the company, and now we just kind of...it's there."
Or sometimes it's like, "Oh, great question because you see, we have this subtle business problem. And we've got to reconcile these two pieces of technology with also this expectation that our customers have. And so we came across this pattern, and we decided to use it." And it's these things where just looking at the code with no context, you're like, that's weird. Why would you want to do that? And then, when you understand the underlying problem, it makes so much sense.
It's like, okay, I don't love this pattern, but it's the correct solution here, and I fully support having that here. It's a tricky problem at the intersection of technological problems and business problems, and this was the best way we could solve it. I'm not always super happy, but it is the right choice.
STEPHANIE: Yeah, I've heard someone describe that as code archaeology in a way that all codebases have a story to tell about how they got to the current state that they're in. And I have certainly struggled with this but trying to approach joining a new team and working on a new codebase, especially if it's legacy code, from a place of curiosity rather than being combative about it. And just going through the git commits or just simply asking members of the team, like, "Hey, what's going on here?" and getting to hear some of those fun stories.
JOËL: Yeah, most code exists for a reason. It's not just people writing things just because, particularly code that, you walk in as an outsider and think, oh, that's bad code or looks weird. It's usually for a reason. People aren't just purposefully writing this to trigger you two years down the road.
It's also important...as a new person onboarding onto a project, people care about your perspective. As an outsider, oftentimes, it's really rich to bring in an outside perspective. But it's also not a great look to come in and just immediately be like, "Oh, we need to tear this thing down," or "This is so bad." It's important to build trust with the team. And as with so many things in life, seek to understand before running your mouth.
STEPHANIE: Wow, how insightful, Joël. [laughs] Speaking of building trust, can we talk a little bit about different strategies we have for doing that?
JOËL: Yeah. As a new person on the team, you really want to build a strong connection with the client and to build that trust because then you can be more effective in doing your job. You can bring more value to the client. What are some ways that you like to get that moving in a positive direction early on a new project?
STEPHANIE: I think setting up channels of communication is really important, so, ideally, having a one-on-one with a manager or a team lead because that is a great place to make sure that the work you're doing is aligned with what they think you should be doing. So figuring out what their expectations are, like, what do you expect me to get done in my first week? And then what do you want me to be doing by the first month?
That is important because we might think about all the things we would love to improve about this codebase or like influence on the team. But if that is not lined up with their views of what success looks like, then we're not quite delivering on the value that we [laughs] had hoped that we would.
Another thing that I'm starting to notice a lot more, and we talked a little bit about this previously when we talked about the value of sustainability in web development, but learning what the team's values are and also what the organization's values are because that will really inform the behavior of folks on the team and the decisions that they make.
So some values that come to mind are transparency, or collaboration, or growth, or speed. Like, if you find out those underlying foundational pillars, that can really help you orient yourself in your work and being like, okay, I know that this organization really focuses on these kinds of things, so I would like to try to make decisions that uphold or are in line with the things that are important to them.
JOËL: I want to really second your comment about good communication. That is one of the most powerful things you can do to build credibility to build trust with another human being, and that can happen in a lot of ways. Like you're saying, some of it is setting up actual communication channels with a manager. Some of that can be the things we mentioned earlier, like asking questions about the architecture, trying to learn all about the product and the business.
That can also be being active in that particular team's Slack channel. Sometimes new people come on to a team, and they're a little bit more timid, and they're just kind of not present. And so kind of coming in and...like, you don't want to take over the channel but being active in the channel, asking your questions in that channel, even just talking about your onboarding experience being like, "Hey, I'm running through...I got stuck on this thing. Here's the thing I did to get unstuck." People love seeing that. And it helps them to feel like you're actively participating from day one.
STEPHANIE: Yes, that is a great transition to what I wanted to make sure to say at the end of this is that your onboarding experience matters. I know that when you're joining a new team, you might feel a lot of pressure to start contributing and make sure that you are providing value. But your onboarding experience should be inclusive, and you should advocate for your needs.
Like, if you don't have access to credentials or there are just various blockers to your onboarding, that's a big deal, and it should not be a gatekeep-y process. Everyone wants you to be able to do your job, and so if you're running into those issues, it's definitely important to raise those concerns for yourself and also for anyone else who comes along the way.
Also, everything is new, and will probably feel uncomfortable. If you're anything like me, I feel a lot of pressure to prove myself when I join a new team and start contributing left and right. But it's just important to remember that when all this stuff is new, feeling uncertain or feeling confused and just being in that beginner's mindset again can be uncomfortable, but that is totally normal.
JOËL: I feel like something I sometimes do that ties all of these ideas together is when I'm encountering some new code or a new problem, to help myself understand it, I will diagram it. But oftentimes, it can be nice to share that diagram in the team's Slack channel and to say, "Hey, I'm new to the project, and I was exploring this area, and I kind of diagrammed it." Just talk a little bit about the thing that you're doing and maybe what you learned about it. People love that.
Visuals are a really powerful tool. And you might be surprised that there might be some team members that have been on the project for a while who never really understood that part of the code. And so they will latch on to what you've shared and be like, "Oh, thank you, because now I finally have a feel for that part." Or maybe you didn't get it quite right, and somebody will follow up and say, "Hey, I love your diagram, but you have a misconception here. There's actually a different piece that connects here." And then you can have a conversation, and you just revealed a blind spot. And so I've found that that can be a really positive way to get started.
STEPHANIE: Yeah, absolutely. Joël Quenneville, professional diagrammer. But even if you don't draw a diagram, putting your assumptions out into the world and how you understand things I think is really valuable because, yeah, it's like you are showing your learning path and also being open to receiving feedback if it's not quite right and, hopefully, spreading knowledge all around. So I love that.
JOËL: This reminds me a little bit of the episode we had with Steve Polito about learning in public. And he was focused more on learning about Rails, and open source, and things like that. But there's a sense in which you can sort of learn the product or learn the codebase. And public means your team channel. So you can say, "Hey, I'm digging into this model, and here's how I understand the way things work. It's a bulleted list of three things." You might get some good comments on that. You might get other people who appreciate it. So kind of learning the internals of a product within the public confines of a team, I think, is a really good framework as well.
STEPHANIE: Absolutely.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been fighting autoloading in a Rails app recently, and it's been really unpleasant. Stephanie has been experimenting with how she interacts with Slack.
What are "the fundamentals"? People often argue for the value of Computer Science classes for the jobbing programmer because we need "the fundamentals." But what are they? And does CS really provide that for us?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I have been fighting autoloading in a Rails app recently, and it's been really unpleasant. This is an older Rails 4 application, so we don't have Zeitwerk, any of the fancy modern things. But the problem I'm encountering is that people write code that references a constant somewhere. And if that constant is not named in the conventional way so that it would load from the proper file, it will raise a NameError at runtime when you try to execute the code.
And that NameError comes with a big asterisk there because if anybody else has happened to load that constant correctly, either by a manual require or some other method, then that constant will already be in scope, and so you don't get a NameError. So it causes a situation where you have a lot of non-deterministic failures in the code that are not easy to reproduce, either locally or even in the test suite.
STEPHANIE: That sounds really frustrating because you must be getting errors left and right that you weren't expecting and then have to deal with. I'm curious, though, because you use the word non-deterministic. But in some ways, I'm thinking that you could perhaps grep or search the codebase for places where we're requiring constants like that and perhaps even audit that. Has that been something you've thought about, or do you think that's possible at all?
JOËL: I don't think just grepping is going to be good enough because it's anytime you use a constant, and that's a class name, a module name. Things like that are probably the most common cases. If you're just referencing a constant like an array or a string, it's probably defined in the same file, so you're probably good. But if you're trying to include a module, or inherit from another class, or you want to instantiate another class somewhere, then you can run into issues if the class name or the namespacing for it doesn't line up with the file name so that when Rails tries to autoload it, it doesn't find it where it expects.
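Here is a minimal sketch of the failure mode being described; the file path and class name are hypothetical.

```ruby
# app/services/payments/processor.rb
#
# The classic (pre-Zeitwerk) autoloader expects a reference to
# PaymentProcessor to be resolvable from a file named payment_processor.rb,
# so a top-level class defined here is never found via that lookup.
class PaymentProcessor
  def call
    # ...
  end
end

# Elsewhere in the app, at runtime:
PaymentProcessor.new.call
# Raises NameError (uninitialized constant PaymentProcessor)...
# ...unless something has already loaded payments/processor.rb (an explicit
# require, or eager loading in an environment that requires every file), in
# which case the constant is defined and this line quietly works. That
# "only fails when nothing else happened to load it first" behavior is what
# makes the errors feel non-deterministic.
```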
STEPHANIE: Got it. Okay, that makes more sense to me now.
JOËL: When I interviewed at thoughtbot, one of the questions that I was asked, and I don't know if that's still in our interview anymore, was, "Tell us about one of your favorite features in Ruby." And then, "If you could remove one or change one thing about the language, what would you change?" And I think the goal of that was to see if people had enough expertise in the language to both have something they really liked about it but also know the warts; what are the places that are hard to work with?
And I definitely know that I said enumerable is my favorite part of Ruby because enumerable is amazing. I feel like, at the time, I didn't have a great answer for what I would change, but I don't remember what I said. I think today, if I had to answer that question, I might say the global namespace for all constants where just because you load another file, it might change what constants are available in a way that can really lead to some surprising behavior sometimes.
STEPHANIE: It's really funny that you mentioned enumerable because I think, at this point, you've been at thoughtbot for almost a decade, and you recently gave a talk about the enumerable module. [laughs] And so it sounds like that's something that's still one of your favorite and beloved features of Ruby. So that's really fun.
I also agree that autoloading is very opaque to me, especially, like you mentioned earlier; things can be totally different depending on what Rails version you're running. And it sounds like, ideally, when it works, it works. And hopefully, someone has done the legwork of making it effective for you. But when something goes wrong, then something that you had kind of taken for granted prior becomes really hairy.
JOËL: I think for those who are interested in digging into really deep how autoloading works in Rails and how it's sort of changed over time, there was a keynote at RailsConf last year that dug into that. That is an excellent talk to listen to.
STEPHANIE: Yes, that keynote from RailsConf 2022 about how Zeitwerk was developed by Xavier Noria, Xavier with an X. I was there too. That was a really awesome keynote. And I found it really interesting because, again, it was about this whole aspect that I just took for granted and had never really thought about. And I'm glad that someone else [laughs] figured it out for me. So that was a great reference.
Speaking of conference talks, in prior episodes, we had mentioned the talks that you and I gave at RubyConf Mini back in November, and the videos for those talks are out. So if you want to check out Joël's talk about enumerable or my talk about pair programming and non-violent communication, we'll link the videos of those talks in the show notes.
JOËL: Excellent. So, what's new in your world, Stephanie?
STEPHANIE: I've been experimenting with how I interact with Slack. So I used to be very distracted by Slack as someone who needs to mark everything as read in order for the little red badge to go away or for the bold channel name to become unbold. I was constantly clicking around in Slack whenever I had it open for the sake of completing that task of reading messages, even if I wasn't necessarily in a space to fully read or even had time to be spending distracted on Slack.
But naturally, I would like, oh, click on this channel because it's bold, so I have unread messages. And then I'd get sucked in and be like, oh, I totally lost like five minutes of my time [laughs] and have forgotten what I was doing prior. So I started experimenting with using Slack as an inbox instead, so more of a pull than a push in terms of receiving notifications. And I think it's been working well for me.
I've also been leaning on Slack's native keyboard shortcuts instead of using a mouse to interact with the Slack client because that helps me avoid that distracted clicking or going into this channel just to see what's up, and that has also been just okay. I think their keyboard navigation is not my favorite. There are no customization options.
So at one point, the shortcut to close the thread window pane was conflicting with my 1Password keyboard shortcuts, so I had to change my 1Password situation. And whenever you have to learn keyboard shortcuts for something different and in ways that might clash with your regular muscle memory for other applications, it's kind of annoying. But that's my journey with using Slack mostly on the keyboard so far.
JOËL: What kind of impact have you seen on your focus since you've been using this workflow?
STEPHANIE: I think it's been helpful for me to tune out things that I just can't prioritize my time and energy to at the moment. So I'm also pretty decisive when it comes to muting and leaving channels. I'm not in a ton of fun, casual channels because, again, I find them a little bit distracting. If I do want to go see cute dog content, I will go into the pets channel. But it's easier for me to have that be an intentional decision that I'm making as opposed to, oh, look, there are more messages in the dog channel. [laughs] Let me go check them out now.
I think it has helped me focus my time and energy on the things that are most important to me. And the trade-off there is that I miss out on some content, but I think that I've become okay with that. And the channels that I am more subscribed to, like our dev channel that we've mentioned on the podcast before, and any project or team-related communications, those are top of mind for me. And when I do need a little bit of a break and do need some fun banter, I will hop into other channels for that.
JOËL: So you brought up a little bit of this idea of FOMO around Slack channels. I think there's an area where our industry at large has a lot of FOMO, and this is around the computer science degree. A lot of people that are in the industry do not have one. There are a lot of different paths that can come into becoming a developer. Some people are entirely self-taught. Some people have gone through a bootcamp. Some people have kind of transferred from other similar or not at all similar industries. So there are a lot of different journeys that people have.
But for many people, if you don't have that, there is some FOMO around, “Did I miss out on something?” And there's a word that people always kind of toss around when talking about computer science and specifically the things you might be missing out on if you don't have it, and that is the fundamentals. You might be missing out on the fundamentals, or, oh, well, what if I don't know the fundamentals and then I'm faced with a problem? I won't know what to do, or it might make me learn more slowly. Is that something that you've heard thrown around?
STEPHANIE: I agree that the word fundamentals is extremely vague. In fact, I'm just going to say it: I have no idea what most people mean when they say the fundamentals, or at least I think they could mean a lot of different things. And so when that term is used, maybe I should just be asking for clarification [laughs] because I think that we could be talking about a lot of different things. Before we get into what fundamentals of programming or computer science or whatever means to each of us, Joël, do you want to share a little bit about where you're coming from in terms of any education prior to becoming a developer?
JOËL: Sure. So I do have a CS degree. I actually learned to code before college. I read some books, did some tutorials on the internet, played around with some code on my own, had a lot of side projects, and even, at some point, was freelancing a few small projects as well. But I always struggled trying to build projects larger than a certain size. I felt like they would sort of implode under the weight of their own complexity.
I definitely felt like there's got to be some underlying fundamentals that I'm missing, some theory of writing code that would explain how to structure things in a way that scales. And this is not scaling to millions of users; this is scaling beyond 100, 200 lines of code, maintainability. And so, I had really high hopes for a computer science degree. And honestly, I think I was a little bit disappointed. I learned a lot of other interesting theoretical things, but not a lot that was actually answering that underlying problem.
And I didn't really get the answers I was looking for until I started working more in the industry, doing various internships and then later on full-time jobs as well. And just over the years, that has sort of built up a lot of answers to those questions that I had. But I didn't necessarily find them in my computer science degree, so I have mixed feelings.
STEPHANIE: I had no idea that you were doing that level of coding prior to studying in college. That's really cool. Now that I'm thinking about it, I think that's the case for a lot of people. They might get a taste of coding when they're young. It was true for me. I was a young girl on the internet in the early days writing HTML and CSS on neopets.com. And that is also how I got my first taste, and kind of wanted to explore further.
So, in college, I studied journalism actually. That was where my interests were at the time. But I did take some computer science courses on the side and ended up completing it as a minor. But I'm with you that the education I got didn't quite match up with what I was expecting. In fact, I kind of struggled, I think, because there weren't a lot of more relatable applications to what I was learning, and so I was very bored and disengaged, I think.
And so when I came out of college, I didn't think I wanted to do software development, or programming, or anything like that because I didn't love taking those classes. And, I don't know, I'm not going to get into my whole career history today. But I basically fell into the role of development. It was kind of like, oh, you have these coding skills; we need these coding skills. I was like, okay, I guess this is my job now. [laughs] And I think you and I are actually kind of similar in the sense that once you started doing the work, you started to see a lot of the things that you had learned previously that you could then apply. Does that sound right?
JOËL: Yes. And I think maybe more so over the years, there are some things where it's like, oh, with a five-year mark, it's like, oh, finally now I feel like I've got enough practical experience where I can start to appreciate some of these underlying things, or I'm getting into things beyond just the basics of writing a simple Rails app where I start to need some of those other concepts. And some of those it's five years, some of those are ten years. And so it's sometimes nice to have something to go back to. Although after ten years, there's only so much I remember, so sometimes it's just having a keyword that I can Google and dig into further.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So earlier, you mentioned that fundamentals is a bit of a weasel word; it means different things to different people. So I'm curious for you, what do you think of when you think of fundamentals? And, maybe more specifically, from the perspective of a web developer?
STEPHANIE: Yeah, I want to caveat this by saying that I think you can learn these different skills in many different ways; that could be formal education, that could be a bootcamp, that could be self-taught. But these three categories are what I think are useful education for a web developer.
So firstly, understanding how computer systems work at the abstraction you're using and maybe even one level lower. So this is likely the programming language or framework and the tools that you're using, and I think a lot of bootcamps teach this. They want to teach you the skills you need to get a job and do that job.
The second category, I would say, is more theoretical. So the theories of computer science and math that you and I alluded to earlier as not having been super practical at the time that we learned them. I guess that's probably why they're called theories. [laughs] But I'm thinking like algorithms, data structures, and other concepts like O notation or whatever.
And then thirdly, this one, I think, doesn't get talked about enough. But there's this whole world of practical skills that we do in the industry that I don't quite think are taught in either environment, so that looks like reading code and reviewing code, especially as it relates to working in an existing application as well as writing tests and documentation, and, in general, working with other people. I think a lot of programming education focuses on the act of writing code when I think there's a lot to learn from reading code and analyzing it. And that's something that I have been thinking a lot about the more I spend doing it in my job.
JOËL: I like the way you've sort of broken these down into more relatable categories rather than just this generic idea of the fundamentals. I think when people think of the fundamentals, they're probably thinking mostly of your category two, the more theoretical underpinnings. Some of those are actually quite relevant, I think, to the day-to-day work that we do. And then some of them are very kind of abstract and maybe even to the point of mostly being relevant if you're doing research but not that interesting if you're writing code on a day to day. Does that sound about right to you?
STEPHANIE: That's fair. I revisited a series of blog posts and conference talks that my friend Mercedes Bernard gave called "Fun, Friendly Computer Science," that was aimed towards people who didn't have a formal CS background to give them a vocabulary or some exposure to computer science concepts that would be helpful in day to day programming work. And one thing that stood out to me was the idea of set theory and how working with relational databases is pretty much working with set theory, even if you don't use that vocabulary or reference those underlying concepts.
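For readers who want to see that connection concretely, here is a small sketch: the plain-Ruby set operations run as-is, while the ActiveRecord lines assume a hypothetical User model with boolean admin and subscriber columns in a Rails 6.1+ app; none of it is taken from the talk itself.

```ruby
require "set"

# Set theory with plain Ruby objects...
admins      = Set[1, 2, 3]
subscribers = Set[2, 3, 4]

admins & subscribers # => #<Set: {2, 3}>       (intersection)
admins | subscribers # => #<Set: {1, 2, 3, 4}>  (union)
admins - subscribers # => #<Set: {1}>           (difference)

# ...and the same ideas expressed as relational queries, where WHERE
# clauses carve out subsets and AND/OR/NOT combine them.
# (Hypothetical User model; Relation#and needs Rails 6.1+.)
User.where(admin: true).and(User.where(subscriber: true)) # intersection
User.where(admin: true).or(User.where(subscriber: true))  # union
User.where(admin: true).where.not(subscriber: true)       # difference
```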
I think another aspect of these theoretical fundamentals is that companies might interview people on them, like algorithms or data structures, and there's also a lot of talk about how those aren't part of your day-to-day jobs. And yes, knowing about them is useful, but the benefit of working in an existing programming language is that those have all been figured out for you. And you are just using those data structures and have to worry a bit less about how they are working under the hood.
That's not to say you should know nothing about them, but I think I had mentioned earlier maybe it's helpful to understand things at a lower level than what you're working with. And that can come up when you run into problems that those data structures aren't quite able to help you solve, and you need something different, or you need to be a little more creative. But I would say that almost all of the time, you have the luxury of not having to think about that.
JOËL: It's interesting you mentioned set theory because I was thinking of a set theory metaphor here, which is we have a set of things, which is what a traditional computer science degree will teach you. We have a set of things that are the quote, unquote, "fundamentals." And there's definitely an intersection between those two sets, but they don't totally overlap over each other.
And so there are some things in a computer science degree that are absolutely going to be fundamentals that you need to use and that you're going to use in your day-to-day job. You don't have to learn them through CS. You can learn them from all sorts of other sources. And then there are also some things in CS that are maybe not part of the fundamentals.
All knowledge has value. And so these things can be mental models, or they can connect to other things. But they're not necessarily fundamental to you being able to do a good job as a working developer out in the world. And again, this will vary a lot depending on the type of development that you're doing. I'm mostly thinking of web development because that's what you and I do, both front end and back end.
I want to come back to something that you mentioned earlier because I feel like when everybody thinks of the fundamentals of computer science, the first thing that comes to mind are data structures and algorithms. Those are the things that you're working on when you're doing leet code. They are the kind of things you get asked on interviews. It's the kind of thing that's kind of fun to show off to other people. And it sounds like you're saying that data structures and algorithms are actually kind of overrated; that's a hot take.
STEPHANIE: I think it's overrated if you are working in web development. Like we were talking about earlier, at that level of abstraction, you're using the tools of the language to build software that works for your users. Like, I am really not thinking all that much about how to implement a linked list or something like that. [laughs] I have my trusty hash in Ruby. I can use an array if I need to and just put my data in there, and that is perfectly fine and acceptable. And I have not really needed to do anything too fancy beyond that.
In fact, I think you and I have talked a lot on this podcast about paradigms and design patterns. And those are the things that I find really interesting and want to learn more about at this point in my career and because they're more relevant to my day-to-day work. And I think we should be interviewed on the work that we will be doing.
JOËL: I think that lines up a lot with my experience as well. I have had to implement some trees, some linked lists, and things occasionally throughout time. Especially working in Elm, there are sometimes a few more lower-level data structures to work with or to construct. And sometimes you might need to know some basics around things like trees if you're operating over the DOM, which is a tree, things like that. But, again, a lot of those things are already pre-built for you. So having the 10-minute version might be good enough to get what you need to do.
I think one thing that's probably the most useful thing that I would pull out of an algorithms class is the concept of a binary search, not just literally how do I implement a binary search on an array or a linked list but the idea of it and then taking that and applying it to a very broad set of problems. And a classic one is when you're debugging, and there are all sorts of ways that your program might fail. And if you are looking at it just by process of elimination, just one little thing at a time, it's going to take you forever to check every possible cause.
But if you can find a way to eliminate half of the possibilities, and that might be by putting a conditional high in your decision tree, or there are a lot of different ways you can do that, all of a sudden, it makes it much easier to narrow down your search and to find a bug. And so that is a technique that I think is just hugely valuable that you learn in an algorithms class, but that can be generalized to all sorts of problems. I'm curious, in writing in Ruby, or JavaScript, or any of the web languages that we tend to write in, have you ever had to calculate the big O of a method or function you've written?
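To make that halving idea concrete, here is a minimal Ruby sketch of both versions of it: a literal binary search over a sorted array, and the same strategy applied to finding the first "bad" item in an ordered list of candidates, in the spirit of git bisect. The method names are ours, purely for illustration.

```ruby
# Classic binary search: find the index of `target` in a sorted array,
# halving the search space on every comparison.
def binary_search(sorted, target)
  low = 0
  high = sorted.length - 1

  while low <= high
    mid = (low + high) / 2
    case sorted[mid] <=> target
    when 0  then return mid
    when -1 then low = mid + 1
    else         high = mid - 1
    end
  end

  nil
end

# The same idea applied to debugging: given ordered candidates (commits,
# config changes, pipeline steps) and a predicate that says whether things
# are broken at that point, find the first bad one by eliminating half of
# the remaining candidates each time.
def first_bad(candidates, &broken)
  low = 0
  high = candidates.length - 1
  answer = nil

  while low <= high
    mid = (low + high) / 2
    if broken.call(candidates[mid])
      answer = candidates[mid]
      high = mid - 1
    else
      low = mid + 1
    end
  end

  answer
end

binary_search([1, 3, 5, 8, 13], 8)       # => 3
first_bad((1..100).to_a) { |n| n >= 42 } # => 42
```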
STEPHANIE: Only in the context of performance and Rails performance. The only times I think I've ever really pulled it out is when I am seeing a database query that is O of n or worse, and then I rewrite the function to avoid that inefficiency. Otherwise, I think most functions are perfectly fine, and there's no need to really optimize for that the first time around.
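As a hedged illustration of the kind of rewrite Stephanie is describing, here is what an "O(n) queries" pattern often looks like in a Rails app, plus two common ways to collapse it into a constant number of queries. The Post and Comment models are hypothetical, not from any real project mentioned on the show.

```ruby
# Hypothetical schema: Post has_many :comments.

# O(n) queries: one query to load the posts, then one COUNT per post.
def comment_counts_slow
  Post.all.map { |post| [post.title, post.comments.count] }
end

# Option 1: preload the association so the per-post query disappears.
def comment_counts_preloaded
  Post.includes(:comments).map { |post| [post.title, post.comments.size] }
end

# Option 2: let the database group and count in a single query.
def comment_counts_grouped
  Post.left_joins(:comments)
      .group("posts.id", "posts.title")
      .count("comments.id")
  # => { [post_id, "Title"] => count, ... }
end
```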
Though, I am going to plug another conference talk that I watched recently from Jemma Issroff and Jacob Evelyn about developing a gem called MemoWise, which involves memoization and caching. But they did a really cool job of deep diving into the source code of their gem to make things as efficient as possible. And that did involve investigating different O notations and stuff like that.
JOËL: Yeah, I found that in practice, most performance bottlenecks on the web tend to be I/O bound rather than CPU bound. I just realized I threw out some fancy technical terms that you probably would learn in a CS degree, and that might feel confusing for those who don't have that background. So I/O bound means that your program is slow because it's waiting on, usually, some sort of network, or file system, or database, or something like that rather than waiting on processing speed.
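If the vocabulary is new, a tiny, artificial Ruby benchmark can make the distinction tangible: in the CPU-bound case the time goes to computation, and in the I/O-bound case it goes to waiting (simulated here with sleep, standing in for a database or network call).

```ruby
require "benchmark"

# CPU-bound: the processor is busy the whole time.
cpu = Benchmark.realtime { 1_000_000.times { Math.sqrt(rand) } }

# I/O-bound: the process spends its time waiting, not computing.
# (sleep stands in for a slow query or HTTP request.)
io = Benchmark.realtime { 10.times { sleep 0.05 } }

puts format("CPU-bound: %.3fs, I/O-bound: %.3fs", cpu, io)
```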
As an aside, we've talked about the value of having specialized vocabulary and names to add to problems. So that is a value that you get out of a more formal education path is that you do learn some of those technical terms. And that can sometimes help you to build a mental model of the solutions you can apply to a problem.
STEPHANIE: I might have mentioned it in that episode, but I do think I learned best, having had to wrestle with something in my personal life and experience and then going out and seeking more information about it and learning about it. And at that point, it's much more interesting to me because I can relate it to something that is in front of me as opposed to reading a textbook and trying to imagine ways that this information would be useful.
A lot of these concepts, it's totally okay to go explore them once you need them. You're right that it is tough if you have no idea where to even begin or what to search for, or what to look for. But I don't know; I think maybe I'm just being efficient with my time this way. [laughs]
JOËL: I'd like to throw a metaphor at you that you kind of introduced earlier when you were talking about Slack and how you're trying to change from a push to a pull mode, and I think this can apply to learning as well. Any sort of push approach to learning, you're kind of pre-learning some things because you think they're going to be useful or they might power some more learning. And it's going to be good to have that sort of already there in the situation where you need it.
Then that might be going to do a four-year degree, or maybe that's just saying, this year, I want to learn a little bit of this theoretical idea to build some better understanding of the quote, unquote, "fundamentals." And that could just be sort of general continuing education that you do. But sometimes, like you said, it's just in time. You say until I encounter a problem, it's like, okay, this problem is slow. I don't know a whole lot about performance. Let me go read up about performance.
And then you get to be like, oh, the first question you ask is, is your problem I/O bound or CPU bound? What does that mean? Okay, now there are different strategies for how you deal with things and different analysis tools. And then you go and learn that at that moment rather than having learned it the summer before because you just were trying to fill out a sort of broader foundation for your knowledge.
STEPHANIE: Wow. Excellent callback. Again, only you can find multiple ways to reference something [laughs] I said earlier. I also really enjoy listening to someone who's an expert at something or particularly knowledgeable talk about something that they're excited about. And so I was thinking that I don't have as robust of a computer science education as some of my peers or coworkers, but I know that I have people to go to with my problems.
Like you, for example, you might pull out, oh, this reminds me of graph theory. [laughs] I know we've talked about dependency graphs a lot on this podcast. And, in some ways, I am absorbing that education through you. And maybe in the future, I will encounter something that reminds me of a conversation we had, and I have a starting place. So I think having people with diverse backgrounds in this field can be really valuable as well.
JOËL: I love that because that means that even in your day-to-day, there's kind of a sort of mix of push and pull that's happening. You might just be having a conversation with somebody, and they're really excited about dependency graphs, and they tell you why they're excited about it. And that's maybe a little bit more of a push because you don't immediately need it, but you're gathering some knowledge.
But then you might also be encountering a problem on a client, and then you ask in our dev channel, "Hey, I'm encountering this sort of problem. What should I do?" And somebody says, "Oh, you might want to look into calculating the big O of this function because that looks suspicious. Tell me about that."
STEPHANIE: Exactly. And now it's my turn to call back to my Slack anecdote earlier because I do think in this field, there's just an infinite amount of things to learn. And I do have to accept that I'm not going to learn everything. And I have found a way that works for me, you know, that combination of, oh, here's a problem I'm facing.
And I really need to find out what is going on in this C code so I can better understand this Ruby code I'm writing or something like that. Or people sharing different insights that they have, and I'm getting that information that way. And you said it earlier that however you receive this information or get this education, there's no one way to do it. There's no one correct way.
JOËL: And I think everybody does a mix of both, right? You've mentioned several times that you had attended a conference talk, or read a book, or read an article on a topic related to more theoretical underpinnings. And I'm pretty sure you weren't going to that talk because you had a problem that needed to be solved, and you're like, oh, if only I could get the answer, this is where I'll get it. It's probably a little bit more in preparation, saying, oh, I'm at a conference. The whole point of a conference is to get some information ahead of time. And this particular topic sounds like it would be helpful. Does that sound right?
STEPHANIE: Yeah, that is a really great way of putting it. I hadn't thought of it that way. But that also kind of checks off the box of listening to someone else explain things to me [laughs] that they've already done a lot of research on and feel excited enough to share with the world. And that is inherently more interesting to me than reading a textbook.
JOËL: Oh yeah. Textbooks are boring. We'd done a whole episode recently about where to focus our learning. So I think if listeners are interested in digging deeper into that and maybe the push versus the pull, there are a lot of great thoughts there as well.
STEPHANIE: So before we wrap up, are there any underrated quote, unquote, "fundamental" computer science topics or concepts that you think are particularly valuable to you and your work as a developer?
JOËL: I would like to plug discrete math as a topic. And I know we've talked about, oh, there are some theoretical ideas that are maybe very firmly in the theoretical realm and aren't that useful in day-to-day work, and math sounds like it would be in that branch. But discrete math is basically all the practical math that is useful to you as a developer. It's kind of a mishmash of 10 different subjects.
So it's a bit of an overview of here's an intro to Boolean algebra. Here's an intro to propositional logic. Here's an intro to predicate logic. Let's talk about set theory. Let's talk about combinatorics. Let's talk about recursive functions from a mathematical perspective. Let's talk about a little bit of graph theory. So it just touches on a bunch of these topics, and they're all generally quite useful.
I find things like Boolean algebra I use absolutely every day because writing Boolean expressions is a thing that we do all the time in our code. And you might think there's not that much to doing Boolean expressions. And you might even have picked up on some of the patterns by yourself just by doing the work long enough. But there are some really interesting laws and rules that can be applied and analysis techniques that, even in just the small portion of a course dedicated to that topic, you get a lot of value out of it.
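One small example of the kind of law Joël has in mind is De Morgan's laws; the snippet below checks them exhaustively and then shows, in comments, the sort of everyday conditional rewrite they justify (the user predicates are made up).

```ruby
# De Morgan's laws, checked over every combination of booleans:
#   !(a && b) == (!a || !b)
#   !(a || b) == (!a && !b)
[true, false].product([true, false]).all? do |a, b|
  !(a && b) == (!a || !b) && !(a || b) == (!a && !b)
end
# => true

# In practice, that's the rule that lets you rewrite a condition like
#   !(user.unsubscribed? || user.email.nil?)
# as the equivalent, often easier to read
#   !user.unsubscribed? && !user.email.nil?
```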
So I would recommend either digging into some of these topics a little bit on your own, so digging a little bit into Boolean algebra, digging a little bit into set theory, or digging into discrete math as a whole, sort of looking at all the different little sections together. I think that really gives you a lot of tools to improve your day-to-day work.
STEPHANIE: That was a great sell on discrete math. And who knows? Maybe you've influenced some of our listeners to go check that out.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thank you so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie shares that she's been taking an intro to basket weaving class at a local art studio, and it's an interesting connection to computer science. Joël eats honeycomb live on air and shares a video that former Bike Shed host Steph Viccari found from Ian Anderson. It's a parody to the tune of "All I Want For Christmas Is You," but it's all about the Ruby 3.2 release.
In this episode, Stephanie and Joël shift away from literature and lean into art. Writing code is technical work, but in many ways, it's also aesthetic work. It's a work of art. How do you feel about expressing yourself creatively through your code?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I'm really excited to share that I've been taking this intro to weaving class at a local art studio. I'm actually a few weeks in, and it's wrapping up soon. But one thing that I found really cool at the very first class was that the instructor mentioned that weaving was, in some ways, a predecessor or inspiration to modern computing. And he said that, and I got really excited because surely that meant that I would be good at this thing [laughs] and this craft, and then I promptly kind of forgot about it.
But I was inspired the other night to look up this history to just learn more about weaving and its connection to computer science. And I learned that, in particular, the invention of something called the Jacquard loom really led to early computing machines because, basically, weaving involves threading horizontal and vertical fibers. And depending on whether you thread the horizontal fiber, also called the weft, over or under the vertical fibers, called the warp, you get different patterns.
And so the Jacquard loom utilized punch cards as instructions, basically binary code, that would tell the loom how to raise and lower those vertical threads, which would then lead to a beautiful pattern. And after that invention, this previously very laborious process became automated. And that also had a really big impact on the textile industry. And fabric became a lot more available at a much lower cost. So that was a really cool little history lesson for me.
JOËL: That is really cool. So are you saying that punch cards, as we know them from early computing, were borrowed as a concept from the weaving industry?
STEPHANIE: Yeah, that's at least what I've read. I can see now how complex weaving tapestries and patterns set the stage for more complex computations. And I don't know if I'm going to keep going down this weaving journey. I liked the intro class because it was very chill, and I got to use my hands. And I had a little bit of fun making, I don't know, like ten by 12-inch little tapestry. But yeah, I've definitely seen other more advanced weavers make really beautiful textiles and fiber arts. And it's really cool to see the application of that detail-oriented skill in different formats.
JOËL: Are you going to try to make your own punch cards?
STEPHANIE: That's an interesting evolution of this skill [laughs] for sure. I think what I really did like was the hands-on approach. And so the punch cards did make this process automated. But I personally enjoyed the switching of the threads and pulling them through and doing it with my hands instead of something that's kind of turned into automated machine work. Does that inspire you in some way?
JOËL: I think sometimes it's interesting, right? As software people, we sort of have the two urges. We work in so much automation. When we see a process, we would love to try to automate it ourselves, even if it's been done before. So, oh, could I build a small, automatic mechanical loom using punch cards? That sounds like a fun automation challenge. At the same time, so much of my daily job is automation that sometimes it's nice to kind of remove automation entirely from the picture and, like you said, just work with your hands.
STEPHANIE: That's a really interesting way to think about it. I do believe that people have different reactions to it, like you said, where they're like, "Wow, I can use my skills to do this really cool thing." On the other hand, you might also respond with, "Wow, I've done this automation code-writing work for eight hours. So now I really want to do something completely different." And I think that's the camp that I was in, at least when I first signed up for this class, just having space, like three hours a week, to sit and not look at a computer and deal with the physical realm.
JOËL: So here's the other route that I think a lot of software people take, and that is, here's a fun mechanical process that can be automated. What if we simulated it virtually? So what if I create a program where you can sort of create your own punch card, like, decide where you want to punch the holes?
And maybe these are just radio buttons or something or checkboxes in a grid on a webpage. And then, the program will output an SVG that is the thing that would have been woven if you'd used it in that pattern. And so now you can kind of play around with, like, huh, what if I punch here? What if I unpunch here? And you get all these patterns out, and you could just get to try it around.
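As a rough sketch of the simulation Joël is imagining (entirely hypothetical, and greatly simplified compared to a real Jacquard mechanism): treat the punch card as a grid of booleans and render each cell of the resulting "weave" as a coloured square in an SVG, dark where the lifted warp shows and light where the weft shows.

```ruby
# A "punch card" as rows of booleans: true lifts the warp thread
# (weft passes under), false leaves it down (weft passes over).
CARD = [
  [true,  false, true,  false],
  [false, true,  false, true],
  [true,  false, true,  false]
].freeze

# Render each cell as a small SVG square so you can tweak the card
# and immediately see the pattern change.
def weave_svg(card, cell: 20)
  rects = card.flat_map.with_index do |row, y|
    row.map.with_index do |lifted, x|
      fill = lifted ? "#333" : "#ddd"
      %(<rect x="#{x * cell}" y="#{y * cell}" width="#{cell}" height="#{cell}" fill="#{fill}"/>)
    end
  end

  width  = card.first.length * cell
  height = card.length * cell
  %(<svg xmlns="http://www.w3.org/2000/svg" width="#{width}" height="#{height}">#{rects.join}</svg>)
end

File.write("weave.svg", weave_svg(CARD))
```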
STEPHANIE: That's fascinating. I can't believe your brain went there. [laughter] But yeah, the idea that it's not actually about the pattern itself but the holes that you make, that part being the creative process and then what comes out of it then being a bit of a surprise or just something organic that's a really interesting take too.
JOËL: Something that I find is really fun about software and things created from software is this sort of really short feedback loop in terms of trial and error. So if you were actually having a weaving machine and you made a physical punch card, and then you try something, and you realize it's not quite right, the machine weaved something you didn't quite like, now you've got to set it up again.
You probably have to start from scratch with a new punch card because you can't really unpunch holes unless maybe you can put tape over it or something. That trial-and-error feedback loop is much shorter. Whereas with a program, you just pause the simulation, punch-unpunch some holes, restart, and then you just kind of keep trying. And there's something fun about that creative exploration when you've got that really tight feedback loop.
STEPHANIE: That's fair. I think perhaps that actually might be why doing it manually, and by it, I mean weaving, gives you a little bit more room to [laughs] debug if you will, because you can see when something goes wrong. And this actually happened to me in class earlier this week where I threaded the fiber over instead of under. And I was like, oh, this doesn't look right. Like, that's not the look I'm going for.
And then I could kind of quickly see, oh, I missed a thread over here and unravel and do it again. Whereas what you just described, if the punch card is wrong and then you create this big piece of fabric, at that point, I'm not really sure what happens then. If someone out there is a weaving expert and knows the answer; I would be very curious to know.
JOËL: Now I kind of wish we'd had this conversation last month because, in early January, there was a game jam event that happened. It's a yearly or biyearly Historically Accurate Game Jam, and they select a theme, and then everybody has to submit a game, or a simulation, or something, an interactive program that fits with the theme. And this year's theme was the Industrial Revolution. And I feel like simulating an old automated loom with punch cards would be the perfect fit for something that's small enough that I could build it in a week without spending 10 hours a day working on it. It fits within the theme, and it's still kind of fun.
STEPHANIE: Wow, that would have been a really great idea. If there was an award for best fitting the theme, I think that would have won because then you're also tackling the history of computing. I was talking about earlier the loom obviously being...or the automated loom also really playing a big role in the Industrial Revolution. And, I don't know, maybe this is our future club, Joël, and we're going to get into video game development. [laughs] What's new in your world, Joël?
JOËL: There are two things. One is that today former Bike Shed host, Stephanie Viccari, shared a video with me from Ian Anderson. This was made last December to the tune of All I Want For Christmas Is You. But it's all about the, at that time, upcoming Ruby 3.2 release. It is amazing. The lyrics talk about the different features that are upcoming. It rhymes. It's set to meter. I am just blown away by this. And I'm just really hyped [laughs] about this video.
STEPHANIE: You sent it to me and I gave it a watch before we sat down to record, and I also loved this video. It was so fun. And I think Ruby has a bit of a tradition of releasing new versions around Christmas time. So if this became a tradition, that would be very fun, and maybe instead of singing Christmas carols, we'll be singing new Ruby version carols around the holidays.
JOËL: I feel like if Ian wants to do another one next Christmas, now that you have the precedent, it'd be a great space to try something to the tune of Last Christmas because now you can reference back last year's song.
STEPHANIE: Yeah. I might as well just go all in and create a whole Christmas album of Ruby anticipation carols.
[laughter]
JOËL: Yeah, really excited about that. Kudos to Ian. And for all of our listeners, we'll link the video in the show notes of the podcast. Go and check it out; it is worth the two and a half minutes of your life.
STEPHANIE: Agreed.
JOËL: The other cool thing, for the past few episodes, we've been talking a lot about hexagons and how they show up in nature, and bees, and how they build their honeycombs and whether that is sort of by design or sort of just happens by nature through sort of external forces. And so this week, I went out to the store, and I bought some real honeycomb. And I'm going to try it on air.
STEPHANIE: [laughs] Oh my gosh, I didn't realize that's what was happening. [laughter] Okay, I'm ready.
JOËL: All right, I'm going to take a slice.
STEPHANIE: Wow. For research.
JOËL: For science.
STEPHANIE: Wow, that is a big bite. [laughs]
JOËL: Hmmm, it's basically crunchy honey.
STEPHANIE: So I've enjoyed honeycomb in that raw form on ice cream. I really like it on there and oatmeal and stuff like that. I think it's a little bit waxy. Like, once you get to chewing the bits at the end, that part is a bit of a less pleasant mouth-feel [laughs] in my opinion. What are you experiencing right now?
JOËL: Yeah, so like you're saying, the honey kind of dissolves away in your mouth. You had this really fun mix of textures. But then, in the end, you do end up with a ball of [laughter] beeswax in your mouth.
STEPHANIE: Oh no.
JOËL: Which I understand is completely safe to eat, so...
STEPHANIE: Yeah, that's true.
JOËL: I'm just going to eat the whole thing.
STEPHANIE: I think it's kind of like swallowing gum. [laughs]
JOËL: Which apparently does not last for seven years in your digestive system; that's a myth.
STEPHANIE: Wow, debunking myths, trying honeycomb. You're welcome, to all The Bike Shed listeners out there. Investigating the important things.
JOËL: What is interesting is that we're talking about the structural power of hexagons. I can cut a pretty thin slice of the comb, and it doesn't fall apart. It still has a lot of strength to it, which is nice because it means that the honey doesn't just go splashing everywhere. I can cut up a fairly thin slice, pick it up, it still holds the honey, put it in my mouth, and it doesn't make a mess.
STEPHANIE: The bees know what they're doing. [laughs] Cool. Would you eat raw honeycomb again?
JOËL: Well, I got a whole block, and I had one tiny slice. So, yes, I will be eating the rest of this.
STEPHANIE: [laughs]
JOËL: I don't think this will be a regular thing in my weekly groceries. But I would bring this out again for a special occasion. Or I can see this fitting nicely, like you said, on maybe certain breakfasts, even on a charcuterie board or something.
STEPHANIE: Oh yeah, that's a really good use for it.
JOËL: In some ways, it's nice because it's a way to have honey without having to have it on something else or having to eat it with a spoon. It's honey that comes with its own carrying vessel.
STEPHANIE: That's great. Yeah, like a bread bowl for soup. [laughs]
JOËL: Exactly. Bees make their own bread bowls for honey.
STEPHANIE: [laughs]
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So, for the last couple of weeks, we've been joking that this is turning into the Stephanie and Joël book club because we've been talking about a lot of articles and books. Today, I'd like to shift a little bit away from literature and lean into art. Writing code is technical work, but in many ways, it's also aesthetic work. It's a work of art. How do you feel about expressing yourself creatively through your code?
STEPHANIE: So this is interesting to me because it's actually quite different from what we've been talking about in recent episodes around the idea of writing sustainable code, code for other people to read. Because if you are writing code purely for creative expression and just for yourself, that will look very different than what I think folks have kind of called boring technology, which is choosing the patterns, the tools, the frameworks that are tried and true, and just kind of sticking to the things that people have solved before.
And so, in some ways, I don't know if I really get to express myself creatively in the code that I write, which I think is okay for me because I don't really consider myself someone who needs a creative outlet in my work. What about you? What thoughts do you have about this?
JOËL: I think it's interesting the way you described it. I'm almost wondering if I'm making maybe a comparison to physical architecture; maybe you almost have a sort of brutalist perspective on the things you construct.
STEPHANIE: [laughs]
JOËL: So they're functional. They're minimal. They are not always the prettiest to look at, but they're solid. Does that metaphor sound about right to you?
STEPHANIE: I feel like I have to make a pun about SOLID, the design principles, and code.
JOËL: Ooh.
STEPHANIE: [laughs] But I think I like brutalist, I mean, the term itself. I don't know if I necessarily identify with it in terms of my work and output. But the idea that the code that I do is functional is, I think, particularly important to me as a developer. And I don't just mean, like, oh, the code works, so it's done, but functional for whatever need I'm solving and also for the people who are working with this code again in the future.
I mentioned boring technology. There's a talk that I'm kind of referencing by Dan McKinley, and you can check out his slides at boringtechnology.club. And he talks about this idea of decision-making and how that relates to writing boring or creative code. And he also references Maslow's hierarchy of needs. And so, ideally, if you're working in an existing codebase, all the low-level decisions have been made for you. And then you can kind of traverse the hierarchy and focus your creativity on the high-level problems that you're trying to solve.
So maybe you're not necessarily expressing your creativity in the syntax or whatever pattern you're using, again, because a lot of those things have been solved. But where the creativity comes from is the particular domain or business problem you have and the real-world constraints that you're faced with. And how do you figure out what to do given those constraints?
JOËL: I think that lines up a lot with my own experience as well. I think as a newer developer, syntax is sort of the thing that's top of mind. And so, maybe trying to get clever with syntax is something that I would focus on more. Sometimes that's trying to get code really short and terse. Sometimes it's because I want to try: can I do this thing with a particular piece of syntax, or even just, does it look pretty?
I think now, in my code, I am actually kind of boring with my syntax. When I write Ruby, I probably mostly use a kind of slimmed-down set of syntax and don't use the full expressive power of the language for most of my day-to-day needs. So basic things with objects, and methods, and blocks, sort of the basic building blocks that we get from Ruby: regular conditionals, if...else, and a few other nice things that the language gives us. But, in many ways, it almost feels like...I don't know if you've ever seen the Simple English Wikipedia.
STEPHANIE: No, I haven't. What is that?
JOËL: They're treating it, I think, like a separate language, but it is a version of Wikipedia in English with a more restricted vocabulary to try to make the content more accessible to those that might struggle with more standard English. So it's a sort of smaller subset of English. And, in many ways, I feel like a lot of the day-to-day Ruby code that I write is simplified, Ruby.
STEPHANIE: Wow, that's really interesting. I think this also goes back to the specialized vocabulary episode we talked about. And is there value in keeping things accessible, and straightforward, and boring but at the cost of being able to express yourself with everything you have available to you? This is a bit of a tangent, I guess, but I grew up speaking Chinese with my parents, but since then, I have really lost a lot of that vocabulary.
And, in some ways, I really struggle with communicating in Chinese because I feel like I'm not able to express myself exactly the way I want to in the way that I can in English. And when I'm talking to my parents, yeah, that's been a bit of a challenge for me because I do really value being able to say things the way that I mean, and I'm not able to have that with my limited vocabulary. So I can also see how people might not enjoy working within these confines of boring syntax and boring frameworks.
JOËL: Sometimes it's nice to give yourself a sort of syntactical restriction, but those are very low-level when it comes to most of what we do in programming. And I think that's sort of what I've learned as my career has evolved is that programming is so much more than just learning syntax. So kind of like with art, maybe it's nice to restrict yourself to say, oh, can I do something with only a particular brushstroke technique, or restricting myself to a particular palette or a particular medium? And that can foster a lot of creativity. So, similarly, I think you could do some things like playing Code Golf, not on production code; please don't.
STEPHANIE: [laughs]
JOËL: But as an experiment in a side project or just almost as a piece of art, that can be a really interesting problem to solve and give you a deeper understanding of the language. And I'm sure there are plenty of other syntactical limitations you could put on yourself or maybe fancy things you would like to explore and say, "Well, this is over the top. We don't need to structure it in this way or use this syntax. But I want to sort of push the boundaries of what can be done with it. Let's see where I can take it."
STEPHANIE: That's really fair. And I think it relates back to what I was saying earlier about perhaps creativity when writing software products comes from the constraints of the business of, in some ways, physical aspects of development. In the Dan McKinley talk, I mentioned about choosing boring technology. He generally recommends against bringing in a new language or framework because of the costs, the carrying cost of doing that, and the long-term maintenance to consider.
But he instead suggests turning the question on its head and being like, how can we solve this problem with the current technology that we do have? And I think that relates to what you were saying about being able to push the boundaries of a particular medium or tool and in a way that you might not have considered before.
JOËL: Exactly. And I think going back to the analogy with art; sometimes it is nice to restrict yourself to a particular brushstroke or something like that to try to foster creativity. But oftentimes, you want to explore creativity in much higher-level ways. So maybe you're not restricting things like brushstrokes and color, and, instead, you want to explore lighting. You want to explore maybe certain ways of mixing colors.
There are all sorts of, I think, higher-level ways that you can be creative in art that's not just the mechanics of how you apply pigment to canvas. And we see the same thing like you were saying, in code where there's a lot of higher level business problems. Generally, how do we want to structure large chunks of the code? How do we want to build abstractions? Although that can also be a dangerous place to get too creative in.
STEPHANIE: Yeah, absolutely. Do you consider yourself a creative person or need a creative outlet? And how does writing code or software development play a role in that for you?
JOËL: I would say, yes, I consider myself a creative person. And I would consider coding, in general, to be a creative endeavor. I sometimes describe to people that writing code is like building something out of infinite legos. You're constrained only by the power of your imagination and the amount of time you're willing to put into constructing the thing that you're building.
Of course, then you have all sorts of business constraints. And there are things you want to do on a work project that are probably not the same as what you would want to do on a client project or on a personal project. But there's still creativity, I think, at every level and sometimes even outside of the code itself. Just understanding and breaking down the business problem can require a ton of creativity before you even write a single line of code in your editor.
I was reading a Twitter thread the other day by @GeePawHill that sort of proposes that there are sort of four steps in evolution of kind of the mindset that programmers go through over their career. And I'd be curious to hear your thoughts on this evolution if you kind of agree with it or disagree with it if that maybe lines up with some of your experience.
So this Twitter thread proposes four levels of thinking that we go through. I think we can kind of jump between these levels at various points in our work. So we might do all of these in a day, but to a certain extent, they also follow a little bit of a progression in our career. So the first level is thinking in terms of syntax; that's just knowing the characters to type in the editor.
The second level is thinking in terms of code, that's, thinking a little bit more semantically. So now, instead of thinking, oh, do I need if then curly brace, then closed curly brace? Now we're thinking more in terms of, okay, I need a branch in the flow of control for my logic here. And at that level, maybe you don't even need to think about the syntax quite so much because you're so comfortable with it. It kind of just fades away.
Building beyond that, now you're thinking in terms of your paradigm. So Ruby is an object-oriented language, so you might be thinking in terms of what objects do I need to represent this problem and how do they need to talk to each other? And the sort of underlying semantics of, oh, do I need a conditional here or not? Those might start fading away because now you're thinking at a slightly higher level.
And then, finally, thinking in terms of change sets. Now you're thinking less in terms of the language itself and more in terms of the business problems and how the current behavior of the software is different and needs to change to get to where we want the behavior to be.
STEPHANIE: I think I disagree a little bit with the idea that it's a progression. And I'm thinking about how when you have a beginner's mind, anything is possible. And in some ways, if you are new to coding, before you have that understanding of what is and isn't possible, anything is possible. And so, in some ways, I've worked with people who are super new to coding, and the ideas that they come up with for how to make a change at that highest level that you were just describing, in some ways, make sense.
You can be like, oh yeah, that actually is something we can do and an idea that you might eventually get to from someone more experienced, having followed those different levels of progression and reaching a place where you're like, I know exactly what tools or the details about how to do this. But when you have that beginner's mind, and you don't have the details of the how, I think you can still think about those problems at a higher level, and that is valuable, and maybe they'll need help implementing along the way.
And I think that that could be a really interesting area of collaboration that perhaps we don't do enough in this industry because it's very mentorship-focused where it's like, okay, I have more experience, and so I'm going to teach you what I know. Whereas if you bring someone with a totally fresh perspective along, what ideas can you generate from there?
JOËL: I think we definitely exist in all of these layers every day as developers. I think, looking back at myself as a newer developer, I tended to maybe work bottom-up when I tried to solve a problem. And I think that now I probably tend to work sort of in the reverse order, start by thinking in terms of changes and then work my way down. And so syntax, at that point, is the last thing that I'm thinking about. It's really an implementation detail. Whereas I think as a new coder, syntax was super important. Was your experience similar to that, or did you have a very different journey?
STEPHANIE: It's funny that you mentioned it because I think when I was new to development, there were so many syntactic things that I didn't understand that I just kind of like blurted out of my brain when I was reading code and was then trying to latch on to the important pieces of information that I needed to know, which often meant class names or method names. Pieces that I could grab onto and be like, okay, I'm seeing that this method then calls this other method or whatever.
And, yeah, what you were saying about implementation details falling away, I kind of did that at the beginning of my career a little bit, at least at that syntactic level. So, yeah, I think I'm with you where we all exist at different parts of this framework, I suppose. And that journey could look different for everyone.
JOËL: So we're talking about ways to be creative at higher levels. And one way that I find has been really fun for me but also really useful has been bringing in dependency graphs as a tool for design. You knew I had to mention dependency graphs.
STEPHANIE: We got there in the end. [laughter] Cool, go on.
JOËL: I think it's been really good sometimes in terms of modeling change sets because dependency graphs can be a great tool for that, but also sometimes in terms of trying to understand what the underlying business problem is and how it might translate into code structures where things might be tightly coupled versus not. And so, drawing it out visually is a really powerful design tool.
And because now I can look at it in two-dimensional space, I can realize, oh, I see something that feels like it's maybe an anti-pattern or might be a problem here. There's a cycle in my graph; maybe we should find a way to break that. Maybe we need to introduce some dependency inversion and break that cycle, and now our graph is acyclic.
And so I think that's where there can be a lot of creativity that happens, even when you're not writing code at that point. You're just sort of talking about how different pieces of the project or even different subproblems...you're not even talking about if they're implemented in code, but just saying this subproblem is related to this subproblem, and maybe I would like to find a way for them to not have a connection to each other.
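For readers who haven't seen dependency inversion used to break a cycle, here is a minimal Ruby sketch with made-up class names: instead of Billing and EmailNotifier knowing about each other, Billing depends only on "something that responds to #notify", and the concrete notifier is injected from the outside.

```ruby
# Before (cyclic): Billing knows about Notifier, and Notifier reaches back
# into Billing, so neither can change or be tested alone.
#
# After (acyclic): Billing depends only on an injected collaborator that
# responds to #notify, and the arrow in the dependency graph points one way.
class Billing
  def initialize(notifier:)
    @notifier = notifier
  end

  def charge(invoice_id)
    # ...charge the card here...
    @notifier.notify("Charged invoice #{invoice_id}")
  end
end

class EmailNotifier
  def notify(message)
    puts "Emailing: #{message}"
  end
end

Billing.new(notifier: EmailNotifier.new).charge(42)
# Emailing: Charged invoice 42
```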
STEPHANIE: I'm glad we got back to this dependency graph topic because I stumbled upon something that I'm curious to hear your opinion on. I have been following Julia Evans' work for a little bit now. And she recently released a new zine about debugging. And at the end of the zine, she includes a link to these choose-your-own adventure puzzles that she has created, specifically to teach you about debugging and how to do it.
And so it's basically a little detective game, and you kind of follow along with this bug. And she gives you some different options about how would you like to find a little bit more information about this bug? And what approach would you take? And you make some different selections, and then as you go, you get more information about the bug. And that helps inform what next steps you might take.
And, one, I think this is a great example of a creative project about software development, even though it's not necessarily your day-to-day work. But then she also uses a tool called Twine, which is for creating non-linear stories, or puzzles, or games. And it got me really thinking about the multi-step wizard we've been talking about and this idea of looking at a problem in different mediums.
It also reminds me of if you have a designer on your team and they're doing prototyping, they usually have some kind of user interactivity that they have to codify. And they are making those decisions about okay, like, if you are at this step, then where do you go next? And those are all things that you've talked about doing as a developer, I think, at a later point in the future lifecycle. And I'm now just kind of thinking about how to integrate some of that into our workflow. Do you have any thoughts about that?
JOËL: I had one of the coolest experiences in my career when I was doing a front-end project where we were building a typeahead component that was pulling data from a remote server and then populating a drop-down. And the designer and I sat down and just started to look through all the different states that you could be and how you could move from one to another.
So it looked like maybe you start the typeahead is empty; it's just a text box. And then as you start typing, maybe there's a spinner that shows up. And then maybe you have some results, or maybe you don't have results. And those are two entirely different states that you could be in. And then, if you backspace, what happens? And what if something goes wrong on the server? Like, we just kept finding all these edge cases.
And we built out a diagram of all the possible journeys that someone could take, starting from that empty text box, all the way to either some sort of error state or a final state where you've selected an item. But, of course, these are not necessarily terminal because in an error state, maybe you can just start typing again, and you sort of jump back into the beginning of the flow.
So we did this whole diagram that ended up looking very much like a finite-state machine. We didn't use the term, but that's kind of what it ended up being. And I think we both learned a lot about the problem we were trying to solve and the user experience we were trying to create through that.
There was just a lot of back and forth of, like, oh, did you think about what would happen if we get no results here? Have we thought about that state? Or it's like, okay, so now we're in an error state. What do we do? Is there a way to get out of it, or are we just kind of stuck? Oh, you can backspace. Okay, what happens then?
STEPHANIE: Yeah, I mean, we've been talking about creativity as a solitary process. But I think that that goes to show that when collaborating with other people, too, that process can also be very fun and creative and fulfill that need outside of the way the code is written.
JOËL: In many ways, I think working with somebody else, and the software that gets made at the intersection of two or more people's work, is probably the most creative way to build software.
STEPHANIE: That actually reminds me of a book I read last year called "Tomorrow, and Tomorrow, and Tomorrow." And it's about these two friends and their journey creating video games together. And it kind of follows several decades of their life and their relationship, and their creative and collaborative process. And I really loved that book. It was very good, especially if you like video games. There are a lot of great references to that too.
But I think what you were saying about that fulfillment that you can get with working with other people, and that book does a really good job examining that and getting into our need as humans for that type of collaboration. So that's my little book rec. It goes back to our conversation about designing a game. Again, maybe this is [laughs] what we'll do next. Who knows? The world's our oyster.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thank you so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been pondering another tool for thought from Maggie Appleton: diagramming. What does drawing complex things reveal? Stephanie has updates on how Soup Group went, plus a clarification from last week's episode re: hexagons and tessellation.
They also share the top most impactful articles they read in 2022.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot that has basically become a two-person book club between me and Joël. [laughter]
JOËL: I love that.
STEPHANIE: I'm so sorry, I had to. I think we've been sharing so many things we've been reading in the past couple of episodes, and I've been loving it. I think it's a lot of the conversations we have off-air too, and now we're just bringing it on-air. And I am going to lean into it. [laughs]
JOËL: I like it.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So, in a recent episode, I think it was two episodes ago, you shared an article by Maggie Appleton about tools for thought. And I've kind of been going back to that article a few times in the past few weeks. And I feel like I always see something new.
And one tool for thought that Maggie explicitly mentions in the article is diagramming, something we've used as an industry for a long time to deal with conditional logic: just writing a flow diagram. And I feel like that's such a useful tool sometimes to move away from code and text into visuals and draw your problem rather than write your problem.
It's often useful either when I'm trying to figure out how to structure some of my own code or when I'm reviewing a PR for somebody else, and something just feels not quite right, but I'm not quite sure what I want to say. And so drawing the problem all of a sudden might give me some insights, might help me identify why does something feel off about this code that I can't quite put into words?
STEPHANIE: What does drawing complex things reveal for you? Is there a time where you were able to see something that you hadn't seen before?
JOËL: One thing I think it can make more obvious is the shape of the problem. When we describe a problem in words, sometimes there's a sense of like, okay, there are two main paths through this problem or something. And then when we do our code, we try to make it DRY, and we try all these things. And it's really hard to see the flow of logic. And we might actually have way more paths through our code than are actually needed by the initial problem definition.
I think we talked about this in a past episode as well, structuring a multi-step form or a wizard. And oftentimes, that is structured way more complex than it needs to be. And you can really see that difference when you draw out a flow diagram, the difference between forcing everything down a single linear flow with a bunch of little independent conditions versus branching up front three or four or five ways, however many steps you have. And then, from there, it's just executing code.
STEPHANIE: I have two thoughts here. Firstly, it's very tragic that this is an audio medium only [laughs] and not also a visual one. Because I think we've joked in the past about when we've, you know, talked about complex problems and branching conditionals and stuff like that, like, oh, like, if only we could show a visual representation to our listeners. [laughs] And secondly, now that makes a lot more sense why there are so many whiteboards just hanging out in offices everywhere. [laughs]
JOËL: We should use them more. It's interesting you mentioned the limitations of an audio format that we have. But even just describing the problem in an audio format is different than implementing it in code. So if I were to describe a problem to you that says, oh, we have a multi-step form that has three different steps to it, in that description, you might initially think, oh, that means I want to branch three ways up front, and then each step will need to do some processing. But if you look at the implementation in the code, maybe whoever coded it, and maybe that's yourself, will have done it totally differently with a lot more branching than just three up front because it's a different medium.
STEPHANIE: That's a really good point. I also remember reading something about how you can reason about how many branches a piece of code might have if you just look at the structure of the lines of code in your editor if you either step away from it and are just looking at the code not really able to see the text itself but just the shape that it makes. If you have some shorter lines and then a handful of longer lines, you might be able to see like, oh, like these are multiple conditionals happening, which I think is kind of related to what you're saying about taking a piece of code and then diagramming it out to really see the different paths.
And I know that that can also be obscured a little bit if you are stylistically using different syntax. Like, if you are using a guard clause to return early, that's a conditional, but it gets a bit hidden from the visual representation than if you had written out the full if statement, for example.
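For example, a guard clause and its long-form equivalent describe the same two-way branch, but only one of them shows up as a visible fork when you squint at the code. The method and attribute names here are hypothetical.

# Before: the early return reads as a single line, but it is still a branch.
def discount(user)
  return 0 unless user.eligible?

  user.total * 0.1
end

# After: the same logic written long-form makes both paths visible.
def discount(user)
  if user.eligible?
    user.total * 0.1
  else
    0
  end
end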
JOËL: I think that's a really interesting distinction that you bring up because a lot of languages provide syntactic sugar for common conditional tasks that we do. And sometimes, that syntactic sugar will almost obfuscate the fact that there is a conditional happening at all, which can be great in a lot of cases.
But when it comes to analyzing and particularly comparing different implementations, a second conversion that I like to do is converting all of the conditional code to some standardized form, and, for me, that's typically just your basic if...elsif...else expressions. And so any fancy Boolean operators we're doing, any safe navigation that we're doing around nil, maybe some inline conditionals, early returns, things like that, all of the implicit elses that are involved as well, putting them all into some normalized form then allows me to compare two implementations with each other.
And sometimes, two approaches that we initially thought were identical, just with different syntax, turned out to have slightly different behavior because maybe one has this sort of implicit branch that the other one doesn't. And by converting to a normalized syntax, all of a sudden, this difference becomes super obvious.
To be clear, this is not something I do necessarily in the actual code that I commit, not necessarily writing everything long-form. But definitely, when I'm trying to think about conditional code or analyzing somebody else's code, I will often convert it to long-form, some normalized shape so that I can then see some things about it that were not obvious in the final form. Or to make a comparison with something else, and then you can compare apples to apples and say, okay, both these approaches that we're considering in normalized form, here's what they look like. There's some difference here that we do care about or don't care about.
STEPHANIE: That's really interesting. I find it very curious that there is a value in having the long-form approach of writing the code out and being able to identify things. But then the end result that we commit might not look like that and be shortened and be kind of, quote, unquote, "polished," or at least condensed with syntactic sugar. And I'm kind of wondering why that might be the case.
JOËL: I think a lot of that will come down to your personal or your company's style guide. Personally, I think I do lean a little bit more towards a slightly more explicit form. But there are plenty of times that I will use syntactic sugar as well, as long as everybody knows what it does. But sometimes, it will come at the cost of other analysis techniques. You had mentioned the squint test earlier, which I believe is a term coined by Sandi Metz.
STEPHANIE: I think it might be. That rings a bell.
JOËL: And that is a benefit that you get by writing explicit conditionals all the time. But sometimes, it is much nicer to write code that is a little bit more terse. And so you have to do the trade-offs there.
STEPHANIE: Yeah, that's a really good point.
JOËL: So that's two of the sort of three formats that I was thinking about for converting conditional code to gain more insight. The other format is honestly a little bit weird. It's almost a stretch. But from my time spent working with the Elm language, I learned how to use its type system, which uses a concept called algebraic data types, or some languages will call these tagged unions, some languages will call these sum types. This concept goes by a lot of different names.
But they're used to define types into model data. But there's a really fun property, which is that you can model conditional code using this as well. And so you can convert executable code into these algebraic data types. And now, you can apply a lot of tools and heuristics that you have from the data modeling world to this conditional code.
STEPHANIE: Do you have a practical example?
JOËL: So a classic thing that data modelers will say is you should make impossible states impossible. So in practice, this means that when you define a type using these algebraic data types, you should not be able to create more distinct values than are actually valid in this particular system. So, for example, if a value is required to always be present for something and there's no way in the system for a value to become not present, then don't allow it to be nullable.
We do something similar when we design a database schema when we put a null false on a column because we know that this will never be null. And so, why allow nulls when you know they should never be there? So it's a similar thing with the types. This sort of analysis that you can do looking at...the fancy term is the type's cardinality. I'll link to an article that digs into that for people who are curious.
But that can show you whether a type can represent, let's say, ten possible values, but the domain you're trying to model only has 5. And so when there's that discrepancy, there are five valid values that can be modeled by your type and an additional extra five that are not valid that just kind of shake out from the way you implemented things. So you can take that technique and apply it to a conditional that you've converted to algebraic data type form. And that can help find things like paths through your conditional code that don't line up with the problem that you're trying to solve.
So going back to the example I talked about earlier of a multi-step form with three different steps, that's a problem that should have three paths through your conditional. But depending on your implementation, if it's a bunch of independent if clauses, you might have a bit of a combinatorial explosion. And there might be 25 different paths through that chunk of code. And that means three of them are the ones that your problem wants, and then the extra 22 are things that should quote, unquote, "never happen," but we all know that they eventually will. So that kind of analysis can help maybe give you pointers to the fact that your current structure is not well-suited to the problem that you're trying to solve.
STEPHANIE: I think another database schema example that came to mind for me was using an enum to declare acceptable values for a field. And, yeah, I know exactly what you mean when working with code where you might know, because of the way the business works, that this thing is impossible, and yet, you still have to either end up coding defensively for it or just kind of hold that complexity in your head. And that can lead to some gnarly situations, and it makes debugging down the line a lot more difficult too.
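In a Rails schema, that idea might look something like this hypothetical migration, where null: false and a check constraint keep the column from ever holding a value the business doesn't recognize.

class AddStatusToOrders < ActiveRecord::Migration[7.0]
  def change
    # The status can never be missing, and only the known statuses are allowed.
    add_column :orders, :status, :string, null: false, default: "pending"
    add_check_constraint :orders,
      "status IN ('pending', 'paid', 'shipped')",
      name: "orders_status_check"
  end
end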
JOËL: It definitely makes it really hard for somebody else to know the original intention of the code when a conditional has more paths through it than there actually are actual paths in the problem you're trying to solve. Because you have to load all of that in your head, and our programmer brains are trained to think about all the edge cases, and what if this condition fires but this other one doesn't? Could that lead to a bug? Is that just a thing that's like, well, but the inputs will never trigger that, so you can ignore it? And if there are no comments to tell you, and if there are comments, then do you trust them? Because it --
STEPHANIE: Yes. [laughter] I'll just jump in here and say, yeah, I have seen the comments then conflict with the code as well. And so you have these two sources of information that are conflicting with each other, and you have no idea what is true and what's not.
JOËL: So I'm a big fan of structuring conditional code such that the number of unique paths through a set of conditions is the same as the sort of, you might say, logical paths through the problem domain that we haven't added extra paths, just sort of accidentally due to the way we implemented things.
STEPHANIE: Yeah. And now you have three different ways to visualize that information in your head [laughs] with these mental models.
JOËL: Right. So from taking code that is conditional code and then transforming it into one of these other representations, I don't always do all three, but there are tools that I have. And I can gain all sorts of new insights into that code by looking at it through a completely different lens.
STEPHANIE: That's super cool.
JOËL: So the last episode, you had mentioned that you were going to try a soup club. How did that turn out?
STEPHANIE: It turned out great. It was awesome, the inaugural soup group. I had, I think, around eight people total. And I spent...right after work, I went straight to chopping celery [laughs] and onions and just soup prepping. And it was such a good time. I invited a different group of friends than normally come together, and that turned out really well. I think we all kind of had at least one thing in common, which was my goal was just to, you know, have my friends come together and meet new people too.
And we had soup, and we had bread. Someone brought a spiced crispy chickpea appetizer that went really well inside of our ribollita vegetable bean soup. And then I had the perfect amount of leftovers. So after making a really big batch of food and spending quite a long time cooking, I wanted to make sure that everyone had their fill. But it was also pretty nice to have two servings left over that I could toss in the freezer just for me and as a reward for my hard work.
And then it ended up working out really well because I went on vacation last week. And the night we got back home, we were like, "Oh, it's kind of late. What are we going to do for dinner?" And then I got to pull out the leftover soup from my freezer. And it was the perfect coming home from a big trip, and you have nothing in your fridge kind of deal. So it worked out well.
JOËL: I guess that's the advantage of hosting is that you get to keep the leftovers.
STEPHANIE: It's true.
JOËL: You also have to, you know, make the soup. [laughs]
STEPHANIE: Also true. [laughs] But like I said, it wasn't like I had so much soup that I was going to have to eat it every single day for the next week and a half. It was just the amount that I wanted. So I'm excited to keep doing this. I'm hoping to do the next soup group in the next week or two. And then some other folks even offered to host it for next time. So maybe we might experiment with doing a rotating thing. But yeah, it has definitely brought me joy through this winter.
JOËL: That's so lovely. What else has been new in your world?
STEPHANIE: I have a clarification to make from last week's episode. So last week, we were talking about hexagons and tessellation. And we had mentioned that hexagons and triangles were really strong shapes. And we mentioned that, oh yeah, you can see it in the natural world through honeycomb. And I've since learned that bees don't actually build the hexagon shape themselves.
That was something that scientists did think to be true for a little bit, that bees were just geometrically inclined, but it turns out that the accepted theory for how honeycomb gets its shape is that bees build cylindrical cells that later transform into hexagons, which does have a lot of surface area for holding the honey, though the process itself is actually still debated by scientists.
So there's some research that has supported the idea that it's formed through physical forces like the changing temperature of the wax that transforms it from a cylinder shape into a hexagon, though, yeah, apparently, the studies are still a bit inconclusive. And the last scientific paper I read about this, just to really get my facts straight [laughs], they were kind of exploring aspects of bee behavior that led to the hexagons eventually forming because that does require that the cylinders are perfectly the same size and are at least built in a hexagonal pattern, even though the cells themselves are not hexagons.
JOËL: Fascinating. So it sounds like it's either a social thing where the bees do it based off of some behavior. Or if it's a physical thing, it's some sort of like hexagons are a natural equilibrium point that everything kind of trends to, and so as temperature changes, the beehive will naturally trend towards that.
STEPHANIE: Yeah, exactly. I have a good friend who is a beekeeper, so I got to pick her brain a little bit about honeycomb. [laughs]
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So in the past few episodes, we've talked about books we're reading, articles that we're reading. This is kind of turning into the Stephanie and Joël book club.
STEPHANIE: I love it.
JOËL: That got me thinking about things that I've read that were impactful in the past year. So I'm curious for both of us what might be, let's say, the top two or three most impactful articles that you read in 2022. Or maybe to put it another way, what are the top two or three articles that you reference the most in conversations with other people?
STEPHANIE: So listeners might not know this, but I actually joined thoughtbot early last year in February. So I was coming into this new job, and I was so excited to be joining an organization with so many talented developers. And I was really excited to learn from everyone. So I kind of came in with really big goals around my technical growth. And the end of the year just passed, and I got to do a little bit of reflection. And I was quite proud of myself actually for all the things that I had learned and all the ways that I had grown.
And I was reminded of this blog post that I think I had in the back of my mind around "Coachability" by Cate, and she talks about how coaching is different from mentorship. And she provides some really cool mental models for different ways of providing support to your teammates. Let's say mentorship is teaching someone how to swim, and maybe helping someone out with a task might be throwing them a life raft.
Coaching is more like seeing someone in the water, but you are up on a bridge, and you are kind of seeing all of their surroundings. And you are identifying ways that they can help themselves. So maybe there's a branch, a tree branch, a few feet away from them. And can they go grab that tree branch? How can they help themselves?
So I came to this new job at thoughtbot, and I had these really big goals. But I also knew that I wanted to lean on my new co-workers and just be able to not only learn the things that I was really excited to learn but also trust that they had my best interests in mind as well and for them to be able to point out things that could help my career growth.
So the idea of coachability was really interesting to me because I had been coming from a workplace that had a really great feedback culture. But I think this article touches on what to do with feedback in a way that I hadn't seen before. So she also describes being coachable as having two axes, one of them being receptiveness to feedback and the other being actionability in response to feedback.
So receptiveness is when you hear feedback; do you listen to it? Do you work through it? How does that feedback fit into your mental model of your goals and your skills? And then actionability is like, okay, what do you do with that? How do you change your behavior? How do you change the way you approach problems? And those two things in mind were really helpful in terms of understanding how I respond to feedback and how to really make the most of it when I receive it.
Because there are times when I get feedback, and I don't know what to do with it, you know, maybe it just wasn't specific enough. And so, in that sense, I want to work on my actionability and figuring out, okay, someone said that testing would be a really great opportunity for me to learn. But what can I do to learn how to write better tests? And that might involve figuring that out on my own, like, what strategies work for me. Or that might involve asking them, being like, "What do you recommend?"
So yeah, I had this really big year of growth. And I'm excited to keep this mental model in mind when I feel like I might be stuck and I'm not getting the growth that I want and using those axes to kind of determine how to move forward.
JOËL: I think the first thing that comes to mind for me is the episode that you and I did a while back about the value of precise language. For example, you talked about the distinction between coaching and mentorship, which I think in sort of colloquial speech, we kind of use interchangeably. But having them both mean different things, and then being able to talk about those or at least analyze yourself through the lens of those two words, I think, is really valuable and may be helping to drive either insights or actions that you can take. And similarly, this idea of having two different axes for receptiveness versus...was it changeability you said was the other one?
STEPHANIE: Actionability.
JOËL: Actionability, I think, is really helpful when you're feeling stuck because now you can realize, oh, is it because I'm not accepting feedback or not getting good feedback? Or is it that I'm getting feedback, but it's hard to take action on it? So just all of a sudden, having those terms and having that mental model, that framework, I feel like equips me to engage with feedback in a way that is much more powerful than when we kind of used all those terms interchangeably.
STEPHANIE: Yeah, exactly. I think that it's very well understood that feedback is important and having a good feedback culture is really healthy. But I think we don't always talk about the next step, which is what do you do with feedback? And with the help of this article, I've kind of come to realize that all feedback is valuable, but not all of it is good. And she makes a really excellent point of saying that the way you respond to feedback also depends on the relationship you have with the person giving it.
So, ideally, you have a high trust high respect relationship with that person. And so when they give you feedback, you are like, yeah, I'm receptive to this, and I want to do something about it. But sometimes you get feedback from someone, and you might not have that trust in that relationship or that respect. And it just straight up might not be good feedback for you.
And the way you engage with it could be figuring out what part of it is helpful for me and what part of it is not? And if it's not helpful in terms of helping your growth, it might at least be informative. And that might help you learn something about the other person or about the circumstances or environment that you're in.
JOËL: Again, I love the distinction you're making between helpful and informative.
STEPHANIE: Yeah. I think I had to learn that the hard way this year. [laughs] So, yeah, I really hope that folks find this vocabulary or this idea...or consider it when they are thinking about feedback in terms of giving it or receiving it and using it in a way that works for them to grow the way they want to.
JOËL: I'm curious, in your interactions, and learning, and growth over the past year, do you feel like you've leaned a little bit more into the mentorship or the coaching side of things? What would you say is the rough percentage breakdown? Are we talking 50-50, 80-20?
STEPHANIE: That's such a good question. I think I received both this year. But I think I'm at a point in my career where coaching is more valuable to me. And I'm reminded of a time a few months into joining thoughtbot where I was working and pairing with a principal developer. And he was really turning the workaround on me and asking, like, what do I want to do? What do I see in the code? What areas do I want to explore?
And I found it really uncomfortable because I was like, oh, I just want you to tell me what to do because I don't know, or at least at the time, I was really...I found it kind of stressful. But now, looking back on it and with this vocabulary, I'm like, oh, that's what true coaching was because I gained a lot of experience towards my foundational skill set of figuring out how to solve problems or identifying areas of refactoring through that process.
And so sometimes coaching can feel really uncomfortable because you are stretching outside of your comfort zone and that your coach is hopefully supporting you but not just giving you the help but teaching you how to help yourself.
JOËL: That's a really interesting thing to notice. And I think what I'm hearing is that coaching can feel less comfortable than mentoring because you're being asked to do more of the work yourself. And you're maybe being stretched in some ways that aren't exactly the same as you would get in a more mentoring-focused scenario. Does that sound right?
STEPHANIE: Yeah, I think that sounds right because, like I said, I was also receiving mentorship, and I learned about new things. But those didn't always solidify in terms of empowering me next time to be able to do it without the help of someone else. Joël, what was an article that really spoke to you this last year?
JOËL: So I really appreciated an article by Adrianna Chang, who's a developer at Shopify, about "Refactoring Legacy Code with the Strangler Fig Pattern." And it talks about this approach to refactoring, migrating code from one implementation to another. And it's a longer-ranged process, and how to do so incrementally. And a big theme for me this year has been refactoring and incremental change.
I've had a lot of conversations with people about how to spot smaller steps. I've written an article on working incrementally. And so I think this was really nice because it gave a very particular technique on how to do so with an example. And so, because these sorts of conversations kept coming up this year, I found myself referencing this article all the time.
STEPHANIE: I really loved this article too. And this last year, I also saw a strangler fig tree for the first time in real life in Florida. And I think that was after I had read this article. And it was really cool to make the connection between something I was seeing in nature with a pattern in software development or technique.
JOËL: We have this metaphor, and now you get to see the real thing. I was excited because, at RubyConf Mini this year, I actually got to meet Adrianna. So it was really cool. It's like, "Hey, I've been referencing your article all year. It's super cool to meet you in person."
STEPHANIE: That's awesome. I love that, just being able to support members of the community. What I really liked about the approach this article advocated for is that it allowed developers to continue working. You don't have to halt everything and dedicate time to refactor and not get any new feature work done. And that's the beauty of the incremental approach that you were talking about earlier, where you can continue development. Sometimes that refactoring might be paused for some reason or another, but then you can pick back up where you left off.
And that is really intriguing to me because I think this past year, I was working on a client where refactoring seemed like something we had to dedicate special time for. And it constantly became tough to prioritize and sell to stakeholders. Whereas if you incorporate it into the work and do it in a way that doesn't stop the show [laughs] from going on, it can work really well and work towards sustainability and maintenance, which is another thing that we've talked a lot about on the show.
JOËL: Something that's really powerful, I think, with that technique is that it allows you to have all of the intermediate steps get merged into your main branch and get shipped. So you don't have to have this long-running branch with a big change that's constantly going stale, and you're having to keep in sync with the main branch. And, unfortunately, I've often seen even this sort of thing where you create a long-running branch for a big change, a big refactor, and eventually, it just gets abandoned, and you have not locked in any wins.
STEPHANIE: Yeah, that's the worst of both worlds where you've dedicated time and resources and don't get the benefits of that work. I also liked that the strangler fig pattern kind of forces you to really understand the existing code. I think working with legacy code can be really challenging. And a lot of people don't like to do it because it involves a lot of spelunking and figuring out, okay, what's really going on.
But in order to isolate the pieces to, you know, slowly start to stop making calls to the old code, it requires that you take a hard look at your legacy code and really figure it out. And I honestly think that that then informs the new code that you write to better support both the old feature and also any new features to come.
JOËL: Definitely. The really nice thing about this pattern is that it also scales up and down. You can do this really small...even as part of a feature branch; maybe it's just part of your development process, even if you don't necessarily ship all of the intermediate steps. But it helps you work more incrementally and in a tighter scope. And then you can scale it up as big as changing out entire sections of a framework or...I think Adrianna's example is like switching out a data source. And so you can do some really large refactors. But then you could do it as well on just a small feature.
I really like using this pattern anytime you're doing things like Rails upgrades, and you've got old gems that might not convert over where it's like, oh, the community abandoned this gem between Rails 4 and Rails 5. But now you need sort of a bridge to get over. And so I think that pattern is particularly powerful when doing something like a Rails upgrade.
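A minimal strangler fig sketch in Ruby, with invented class names: callers go through a facade, methods get migrated to the new implementation one at a time, and the legacy class is only deleted once nothing delegates to it anymore.

class ProductCatalog
  def initialize(legacy: LegacyCatalog.new, replacement: NewCatalog.new)
    @legacy = legacy
    @replacement = replacement
  end

  def find(id)
    # Already migrated: served by the new implementation.
    @replacement.find(id)
  end

  def search(query)
    # Not yet migrated: still delegating to the legacy code. Once the new
    # path is ready and verified, this delegation is swapped out, and
    # eventually LegacyCatalog can be deleted.
    @legacy.search(query)
  end
end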
STEPHANIE: Very cool.
JOËL: So what would be a second article that was really impactful for you in the past year?
STEPHANIE: So, speaking of refactoring, I really enjoyed a blog post called "Finding Time to Refactor" by a former thoughtboter, German Velasco. He makes a really great point that we should think of completeness in our work, not just when the code works as expected or meets the product requirements, but also when it is clear and maintainable. And so he really advocates for baking refactoring into just your normal development process.
And like I said, that goes back to this idea that it can be incremental. It doesn't have to be separate or something that we do later, which is kind of what I had learned before coming to thoughtbot. So when I was also speaking about just my technical growth, this shift in philosophy, for me, was a really big part of that. And I just started kind of thinking and seeing ways to just do it in my regular process. And I think that has really helped me to feel better about my work and also see a noticeable improvement in the quality of my code.
So he mentioned the three times that he makes sure to refactor, and that is one when he is practicing TDD and going through the red-green-refactor cycle.
JOËL: It's in the name.
STEPHANIE: [laughs] It really is. Two, when code is difficult to understand, so if he's coming in and fixing a bug and he pays the tax of trying to figure out confusing code, that's a really great opportunity to then reduce that carrying cost for others by making it clear while you're in there, so leaving things better than you found it.
And then three, when the existing design doesn't work. We, I think, have mentioned the adage, "Make the change easy, and then make the easy change." So if he's coming in to add a new feature and it's just not quite working, then that's a really good opportunity to refactor the existing design to support this new information or new concept.
JOËL: I like those three scenarios. And I think that second one, in particular, resonated with me, the making things easier to understand. And in the sort of narrower sense of the word refactoring, traditionally, this means changing the structure of the code without changing its behavior. And I once had a situation where I was dealing with a series of early return expressions in a method that were all returning Booleans. And it was really hard because there were some unlesses, some ifs, some weird negation happening. And I just couldn't figure out what this code was doing.
STEPHANIE: Did you draw a diagram? [laughs]
JOËL: I did not. But it turns out this code was untested. And so I pretty much just tried, like, it took two Booleans as inputs and gave back a Boolean. So I just tried all the combinations, put it in, saw what it gave me out, and then wrote tests for them. And then realized that the test cases were telling me that this code was always returning false unless both inputs were true.
And that's when it kind of hits me, it's like, wait a minute, this is Boolean AND. We've reimplemented Boolean AND with this convoluted set of conditional code. And so, at the end there, once I had that test coverage to feel confident, I went in and did a refactor where I changed the implementation. Instead of being...I think it was like three or four inline conditionals, just rewrote it as argument one and argument two, and that was much easier to read.
STEPHANIE: That's a great point. Because the next time someone comes in here, and let's say they have to maybe add another condition or whatever, they're not just tacking on to this really confusing thing. You've hopefully made it easier for them to work with that code. And I also really appreciated, you know, I was mentioning how this article affected my thought process and how I approach development, but it's a really great one to share to then foster a culture of just continuous refactoring, I guess, is what I'm going to call it [laughs] and hopefully, avoiding having to do a massive rewrite or a massive effort to refactor.
The phrase that comes to mind is many hands make light work. And if we all incorporated this into our process, perhaps we would just be working all around with more delightful code. Joël, do you have one more article that really stood out to you this year?
JOËL: One that I think I really connected with this year is "Parse, Don't Validate" by Alexis King. Long-time listeners of the show will have heard me talk about this a little bit with Chris Toomey when he was a guest on the show this past fall. But the gist of the article is that the process of parsing is converting a broader type into a narrower type with the potential for errors.
So traditionally, we think of this as turning a string which a string is very broad. All sorts of things are strings, and then you turn it into something else. So maybe you're parsing JSON. So you take a string of characters and try to turn it into a Ruby hash, but not all strings are valid hashes. So there's also the possibility for errors. And so, JSON.parse() could raise an error in Ruby.
This idea, though, can be then expanded because, ideally, you don't want to just check that a value is valid for your stricter rules. You don't want to just check that a string is valid JSON and then pass the string along to the next person. You actually want to transform it. And then everybody else down the line can interact with that hash and not have to do a check again is this valid JSON? You've already validated that you've already converted it into a hash. You don't need to check that it's valid JSON again because, by the nature of being a hash, it's impossible for it to be invalid.
Now, you might have some extra requirements on that hash. So maybe you require certain keys to be present and things like that. And I think that's where this idea gets even more powerful because then you can kind of layer this on top and have a second parsing step where you say, I'm going to parse this hash into, let's say, a shopping cart object. And so, not all Ruby hashes are valid shopping carts.
And so you try to take a broader value and coerce it into a narrower value or transform it into a narrower value and potentially raise an error for those hashes that are not valid shopping carts. And then, whoever down the line gets a shopping cart object, you can just call items on it. You can call price on it. You don't need to check is this key present? Because now you have that certainty.
STEPHANIE: This reminds me of when I was working with TypeScript in the summer of last year. And having come from a dynamically-typed language background, it was really challenging but also really interesting to me because we were also parsing JSON. But once we had transformed or parsed that data into this domain object, we had a lot more confidence about what we were working with. And all the functions we wrote or used down the line, we could know for sure that, okay, it has these properties about it. And that really shaped the code we wrote.
JOËL: So use the word confident here, which, for me, it's a keyword. And so you can now assume that certain properties are true because it's been checked once. That can be tricky if you don't actually do a transformation. If you're just sort of passing a raw value down, you'll often end up with code that is defensive that keeps rechecking the same conditions over and over.
And you see this lot around nil in Ruby where somebody checks for a value for nil, and then inside that conditional, three or four other conditions deep, we recheck the same value for nil again, even though, in theory, it should not be nil at that point. And so by doing transformations like that, by parsing instead of just validating, we can ensure that we don't have to repeat those conditions.
STEPHANIE: Yeah, I mean, that refers back to the analyzing conditional code that we spent a bit of time talking about at the beginning of this episode. Because I remember in that application, we render different components based on the status of this domain object. And there was a condition for when the status was something that was not expected.
And then someone had left a comment that was like, technically, this should never happen. But I think that he had to add it to appease the compiler. And I think had we been able to better enforce those boundaries, had we been more thoughtful around our domain modeling, we could have figured out how to make sure that we weren't then introducing that ambiguity down the line.
JOËL: I think it's interesting that you immediately went to talking about TypeScript here because TypeScript has a type system. And the "Parse, Don't Validate" article is written in Haskell, which is another typed language. And types are great for showing you exactly like, here's the boundary. On this side of it, it's a string, and on this side here, it's a richly-typed value that has been parsed.
In Ruby, we don't have that, everything is duck-typed, but I think the principle still applies. It's a little bit more implicit, but there are zones of high or low assumptions about the data. So when I'm interacting directly with raw input from a third-party endpoint, I'm really only expecting some kind of raw string from the body of the response. It may or may not be valid. There are all sorts of checks I need to do to make sure I can do anything with it. So that is a very low assumption zone.
Later on, in the business logic part of the code, I might expect that I can call a method on the object to get the price of a shopping cart or a list of items or something like that. Now I'm in a much higher assumption zone. And being self-aware about where we transition from low assumptions to high assumptions is, I think, a really key takeaway for how we interact with code in Ruby. Because, oftentimes, where that boundary is a little bit fuzzy or where we think it's in one place but it's actually in a different place is where bugs tend to cluster.
STEPHANIE: Do you have any thoughts about how to adhere to those rules that we're making so we're not having to assume in a dynamically-typed language?
JOËL: One way that I think is often helpful is trying to use richer objects and to not just rely on primitives all the time. So don't pass a business process a hash and be just like, trust me, I checked it; it's got the right keys because the day will come when you pass it a malformed hash and now we're going to have an error in the business process.
And now we have a dilemma because do we want to start adding defensive checks in the business process to be like, oh, are all our keys that we expect present, things like that? Do we need to elsewhere in the code make sure we process the hash correctly? It becomes a little bit messy. And so, oftentimes, it might be better to say, don't pass a raw hash around. Create a domain object that has the actual method that you want, and pass that instead.
STEPHANIE: Oh, sounds like a great opportunity to use the new data class in Ruby 3.2 that we talked about in an episode prior.
JOËL: That's a great suggestion. I would definitely reach for something like that, I think, in a situation where I'm trying to model something a little bit richer than just a hash.
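For instance, with Ruby 3.2's Data class (the fields here are invented), the value object can't even be constructed with pieces missing.

CartItem = Data.define(:sku, :quantity, :unit_price)

def line_total(item)
  item.quantity * item.unit_price
end

item = CartItem.new(sku: "A1", quantity: 2, unit_price: 500)
line_total(item)        # => 1000
CartItem.new(sku: "A1") # => ArgumentError: missing keywords :quantity, :unit_price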
STEPHANIE: I also think that there have been more trends around borrowing concepts from functional programming, and especially with the introduction of classes that represent nil or empty states, so instead of just using the default nil, having at least a bit of context around a nil what or an empty what. That then might have methods that either raise an error or just signal that something is wrong with the assumptions that we're making around the flexibility that we get from duck typing.
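A tiny null object sketch along those lines, with a made-up GuestUser: the "empty" case gets a name and real methods, so call sites don't need nil checks.

class GuestUser
  def name
    "Guest"
  end

  def premium?
    false
  end
end

signed_in_user = nil # e.g., nobody is logged in
current_user = signed_in_user || GuestUser.new
"Hello, #{current_user.name}" # => "Hello, Guest" -- no nil check at the call site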
I'm really glad that you proposed this topic idea for today's episode because it really represented a lot of themes that we have been discussing on the show in the past couple of months. And I am excited to maybe do this again in the future to just capture what's been interesting or inspiring for us throughout the year.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thank you so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Stephanie talks about hosting a Soup Group! Joël got nerd-sniped during the last episode and dove deeper into Maggie Appleton's "Tools for Thought."
Stephanie has been thinking a lot about Sustainable Web Development. What is sustainability? How does it relate to tech and what we do?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, Stephanie, what's new in your world?
STEPHANIE: I'm excited to share a winter survival idea for folks out there who are, like me, in a very cold place where all your friends don't want to hang out [laughs] and bear the cold temperatures of deep winter in January. Because tonight, I'm hosting my first soup group where I'm basically just going to make a really big batch of soup and have my friends come over with bread, and we're going to eat soup and bread and be cozy.
And I'm really excited because I was trying to figure out a way to combat the winter blues a little bit. And, yeah, I think this time of year can be really tough after the holidays to get people together again. At least for me, I was feeling like I haven't seen my friends in so long. And I was like, well, I could just be the person to take the initiative [laughs] and be like, "Come over to our place."
And the goal is to eventually do this regularly and just have this low-stakes open invitation for anyone to come and show up however they want to. It doesn't have to be, like, big pressure or anything. And if they can't make it at any one time, then there will hopefully be one in the future where they can make it, so I'm excited. After this, I am going to make soup for ten people, and it's going to be great. [laughs]
JOËL: I love this idea. Soup on a cold day is just the coziest thing.
STEPHANIE: Yeah, exactly. I definitely wanted to just make people feel warm and cozy. And that's what I want, so I'm really doing this for myself. [laughs]
JOËL: And you know the advantage of hosting is you don't have to go outside.
STEPHANIE: Yeah, that's the real thing is I'm probably going to kick everyone out at like 11:00 p.m. and then go straight to bed, and it's going to be great. [laughs]
JOËL: Have you been experimenting with a particular kind of soup recently? Are you going to bring out an old favorite?
STEPHANIE: Yeah, I'm excited to make ribollita today, so kind of like a Tuscan style of veggie hearty soup. And I've just been bookmarking soup recipes left and right. [laughs] And I've outsourced the bread situation. So I'm excited to see what kind of bread people bring. And yeah, it'll be very fun and kind of surprising in a comforting way.
JOËL: I'm not familiar with this soup. It's ribollita you said?
STEPHANIE: Yeah, that's it.
JOËL: You said it's a vegetable soup.
STEPHANIE: Yeah, mostly veggies and beans. So I have this giant cabbage, a lot of kale, multiple cans of Great Northern white beans, and they're all going to get mixed together. And we'll see how it turns out. I'll update the podcast on how the soup group goes. It is the inaugural one. So I can't think of a time that I made that much soup before. So, hopefully, it goes well. We'll find out. So, Joël, what about you? What's new in your world?
JOËL: So, in the previous episode, we talked a little bit about some of the things you had learned about note-taking. And you'd mentioned an article by, I think, Maggie Applebon --
STEPHANIE: Maggie Appleton.
JOËL: Appleton...on tools for thought. It was linked in the show notes of that episode. And I went back and read that article, and it was so good, particularly the section, I think, on historical tools for thought and how they, over time, were sort of groundbreaking in helping us to either remember things or to think about problems or ideas in a different way, or to sort of interrogate those ideas and see if we think they're true or helpful. And these were things like writing or the number system but even some more fancy things like the scientific method or the Cartesian coordinate system.
STEPHANIE: Yeah, I was really excited to share this with you because I think it was the intersection of a lot of your different interests, including note-taking, diagrams, history, and human cognition, so I'm glad that you found it interesting.
JOËL: I definitely got nerd-sniped there.
STEPHANIE: [laughs]
JOËL: I think one thing that really struck me was the power of having multiple different representations for ideas. And one that jumped out at me was the Cartesian coordinate system, which, among other things, a really powerful tool that gave people...when this was invented, it allowed you to convert algebra problems into geometry problems.
And so now, something that used to be an equation you can draw as a triangle or something. And we know how to find the area of a triangle. That's been known since the ancient Greeks and even earlier. And so now a problem that sounded hard is now easy, or at least we have a different way to think about that problem. Because if this equation is equivalent to a triangle, what does that mean?
And vice versa, you can use this to convert geometry problems into algebra problems. And so sometimes the power of a new tool for thought might be in that it allows you to sort of convert between two other existing ways of representing things. And making those connections, all of a sudden gives you a whole new way of thinking about things. That blew my mind.
STEPHANIE: Yeah, I agree. I think the other really cool thing is that a lot of these ideas that humans are discovering also already existed in the natural world. So when you are talking about math, you can see representations of math in plants and nature, and I was reminded of how honeycomb from bees is one of the strongest shapes. And yeah, it's really neat to draw inspiration from a lot of places and learn from things that, like, figured it out before we did.
JOËL: Have you seen the video on YouTube called "Hexagons are the Bestagons"?
STEPHANIE: No, I have not. Tell me more.
JOËL: It's a video on YouTube. We can link it in the show notes. Basically, the hexagon shows up everywhere in nature in part because it has a lot of really fun mathematical properties. It's one of the few shapes that you can use to completely cover a surface. So if you want to subdivide a two-dimensional surface into smaller shapes without leaving any empty spaces between them, you really don't have that many options.
I want to say it's like squares and triangles and hexagons are the only shapes that can do that. And hexagons have these really fun properties around strength. They also are one of the best balances between volume versus the amount of material that it takes to give you that volume and for strength and things like that. So it's good for honeycombs because you can store a lot of honey for very little amount of wax. But it's also good for all sorts of structural engineering because you can build things that are very strong yet light because they require very little metal or other material to create them.
STEPHANIE: When you're saying hexagons filling a lot of space, I also thought about how they've become kind of popular in tiles or interior design in kitchens, and bathrooms, and stuff. [laughs] I've definitely seen that trend a bit. [laughs] So that's really cool just to see, like, yeah, this thing in the natural world that we have adopted for other uses. It's really fun.
JOËL: I want to say this idea of taking a 2D space and being able to completely cover it without spaces with a shape is called tessellating a plane. It's a fancy term for it. And if you want to do it with just a single shape, I think there are only like three or four shapes that can do it.
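(For the curious, the claim about regular polygons can be checked with a little arithmetic, assuming you tile with a single regular shape: a regular n-sided polygon has an interior angle of (n − 2) × 180° / n, and for k copies to meet around a vertex with no gaps you need k × (n − 2) × 180° / n = 360°. The only whole-number solutions are n = 3 with k = 6, n = 4 with k = 4, and n = 6 with k = 3, so triangles, squares, and hexagons really are the only regular polygons that tessellate a plane on their own.)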
STEPHANIE: That's really interesting because it reminds me of those tessellation puzzles that I used to play with as a kid. Do you know what I'm talking about?
JOËL: You're thinking like a tangram or something different.
STEPHANIE: Yeah, yeah, tangram, that was...oh my gosh, those were fun. Wow, I was learning math as a young child, [laughs] just didn't even know it.
JOËL: Another random fun fact: the logo for the Elm programming language is a tangram.
STEPHANIE: [Gasps]
JOËL: And the community is sort of encouraged to then remix it because the tangram is just a square tessellated out of a bunch of these shapes. But then, if you're building a library or you've got an event or something, the community will take those shapes and remix them into some other shapes that might fit your event.
STEPHANIE: That's really cool. Is it a metaphor for how Elm can be used in different ways? [laughs]
JOËL: I'm not sure about the story behind the logo. We'd have to look that up.
STEPHANIE: That'll be a good adventure for later. [laughs]
JOËL: In...I want to say Moroccan art, but I think it might be broader than just Moroccan. It might be more broadly North African or Moorish or whatever you want to call that. There's a long history of building these tessellations, I think, out of tiles, but maybe other things as well where you're doing it with a variety of shapes.
So you might start...a classic one, I think is an eight-pointed...is it eight, or? I think it's an eight-pointed star, and then you sort of add other shapes around it. And those can create patterns that take a long time to repeat. And there are these beautiful geometric patterns that just keep on going and expanding without necessarily repeating over a lot of space.
STEPHANIE: Whoa. That kind of blows my mind a little bit. It seems so counterintuitive, but then I feel like there are a lot of things in math that are like that as well.
JOËL: So, yeah, I think a classic pattern you might start with something like an eight-pointed star. And then maybe to fill in the spaces around that central star, you might put some squares, and then maybe you put some triangles around that, and you sort of keep trying to fill in. And maybe eventually you get to another eight-pointed star, but it's not always perfectly symmetric.
STEPHANIE: Someone should make a board game or something out of this idea. [laughs]
JOËL: Oooh.
STEPHANIE: I bet there's one that exists. But I'm just thinking about people who like jigsaw puzzles and that being the next level challenge of, like, can you figure out how things fit together without the confines of a little jigsaw shape? [laughs]
JOËL: Right, right. You have a rectangle shape that you have to perfectly fill in with all of these other smaller shapes, and there is a single solution that will work. You have to figure it out.
STEPHANIE: I personally would be very overwhelmed, [laughs] but it sounds fun at the same time.
JOËL: So those are a lot of the thoughts that I've been having, inspired by reading that article that you shared on a previous episode. Have you been reading anything interesting recently?
STEPHANIE: I have. I'm really excited to talk about this topic because during my investment time this past week, I've been thinking a lot about it, taking a lot of notes in Obsidian, which is a callback to the last episode, and yeah, I'm excited to kind of get into it. So what I've been reading is Sustainable Web Development with Ruby on Rails by David Bryant Copeland.
And I think a lot of fellow thoughtboters have referenced this book or talked a little bit about ideas from this book; at least, I've seen discussion about it in Slack, so that's kind of why I wanted to pick it up. But what really blew my mind was honestly the first chapter where he talks about why he wrote this book and basically what sustainable web development is because it is a little bit, maybe, like a buzzy word. It's like, what is sustainability? How does it relate to tech and what we do?
And he basically gets down to it by saying that the software that we write is sustainable if it continues to meet our needs years into the future or has longevity and continues to be something we can iterate and work on and not feel that pain or friction, and we feel like we want to, and we feel joyful working on this codebase. So that was kind of my interpretation of his definition about sustainability.
JOËL: I love that definition of sustainability about code that can grow and live for a long time. And I feel like that's not a universal value in the tech industry. And on the extreme end of that, you'll have teams that promote the idea that maybe every few years, you should throw out your old codebase and rewrite. I want to say some teams at Google may have done that as a practice for a while, and, of course, then people quote that as a best practice.
To a certain extent, I want to say that's kind of what happens with Basecamp in that there are multiple versions of Basecamp. And I want to say each of those is a fresh Rails app. So there's a sense in which those or that style of development is not sustainable in the definition that you were just giving there. How do you feel about that?
STEPHANIE: I definitely think the industry has a bias towards newness and change. And a lot of people want to pick up the hot, new technology and, like you said, rewrite code, especially when it's become hard to work with. And honestly, I think that could be its whole own episode, rewrites because I think you and I have pretty strong opinions about it.
But I genuinely think that most of our work is, at least, you and I on the Boost team, in particular here at thoughtbot, where we embed on existing client teams, and usually, that means legacy code as well, but I think that the work of development is mostly extending existing code and trying to sustain applications that have users and are working for users.
And I think that that's certainly a value that I wish were highlighted more or were invested in more because sometimes that change or wanting to hop on to do something different or do something new has a lot of consequences that I'm not sure we talk about enough as an industry.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: It's interesting you mentioned the types of projects that we tend to be on. I feel like there are a lot of projects that I've been brought on where my goal, specifically coming onto this project, was to make the software more sustainable for the team. It's very easy to sort of start moving very fast in the beginning with a greenfield app, and then eventually, a lot of your choices catch up to you.
And then, as your team grows and your product grows, it becomes less and less sustainable. And that's often the point in the lifecycle of the product where I might join the team and try to help make things better for them. I love the keyword sustainable. I don't think that's one that I've used a lot, but it's a great label to put on that kind of work.
STEPHANIE: Yeah, I agree. I think what you mentioned earlier, too, about values really stuck out to me in this book because it basically says, "This book is for you if you value these three things: sustainability, consistency, and quality." And all of the recommendations and techniques that he then presents in the rest of the book, using Rails, those decisions are recommended with those three values in mind.
And I think, one, those values are personally important to me as a developer. But it also helped me develop some guiding principles around decision-making and provided a lot of clarity around times that I've been on teams where we were doing things that didn't quite align with my values, and I didn't enjoy it. And I couldn't really figure out why.
But now I'm able to see that, oh, perhaps this team or organization was valuing something like speed, or profit, or change, or something like that that I just fundamentally value differently. And that was kind of where my internal friction or contentment or discontentment was coming from when working on these teams. So, yeah, that was really clarifying for me.
JOËL: Would you say, for you, when you talk about these values, that these are fundamental or ultimate values for you when you write code? Or are they values that are a good way to sort of be a means to some other end? You know, for example, sustainability, do you care about sustainability just for its own sake? Or do you care about it because you want a product to be able to live for a long time? You're building for ten years or 20 years or however long you want this project to last.
STEPHANIE: I think the thing with values is that they are really fundamental to a person's identity or belief system. In fact, the definition that I'm kind of working off of here is that values are those fundamental beliefs that drive our actions. And so when you say, like, are values driving how you write code? I think they drive everything. [laughs]
But the point that he makes in this book is like, here's how they drive code and technical decisions. So the book is actually quite specific about technical recommendations that he has in the context of Rails. And it's funny because we're talking pretty abstractly and big picture about values and things like that. But then I think it's because he sets the stage to be like, everything I recommend here is what I believe to be sustainable, and good quality, and consistent.
And just for an example, one of the recommendations he makes, when you're kind of setting up a greenfield application, is to use a SQL schema instead of the default ActiveRecord DSL, so using a structure.sql file. Because, in his eyes, having the flexibility to write SQL and use the most you can with those tools when it comes to database work is more sustainable in the long term than using the DSL that might not have all the tools available to you that SQL does.
And so he kind of gives his reasoning about, like, this is what I recommend, and here's why it contributes to sustainability, in my opinion. And so I have found myself, while I'm reading along, either agreeing, like, oh yeah, I can see his reasoning here, or maybe even disagreeing because I might think about things differently or have other considerations in mind that are more important to me and what sustainability means to me. But what I hopefully want to take away from the framework or understanding of values is evaluating technical decisions that I make based on my values as an individual but, more importantly, the values of the team or organization.
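For readers who want to see what that recommendation looks like in practice, it comes down to a one-line Rails setting. This is a minimal sketch rather than a quote from the book, and MyApp is just a placeholder application name:

    # config/application.rb
    module MyApp
      class Application < Rails::Application
        # Dump the schema as db/structure.sql instead of db/schema.rb, so
        # database-specific features (triggers, check constraints, custom
        # functions, and so on) survive a schema dump and reload.
        config.active_record.schema_format = :sql
      end
    end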
JOËL: I love mental frameworks like that that give you clarity into your own thought processes or how you make decisions moving forward. Sometimes you can look at something that's very concrete. Somebody gives you some advice on maybe structuring your database schema, and that might be helpful in and of itself. But if you came away with a larger thought process, I think that's doubly valuable.
As an aside here, I love this approach to writing where he sort of lays down almost like preconditions for this book. If you don't agree on these values, this book is not going to be very helpful for you. And then also, here are situations where this advice is not going to apply. Now that I've put down all these edge cases for the rest of this book, I'm going to be speaking very decisively; these are the things I recommend and not have to caveat myself all the time.
It's like, yes, I know there are some edge cases where you might not want to do this if it's a one-off script or whatever it is. We've already dealt with all of those upfront. And now, I can be very confident and very direct for the whole rest of the book. And I feel like that's something I struggle with in some of my work sometimes: I care a lot about nuance, and my audience probably cares about edge cases even more than I do. They probably care too much.
Because I say something that's generally true most of the time, and I know somebody's already thinking about the one edge case where that's not true. And that doesn't matter for the main point I'm trying to make. So it's always a struggle to know when to caveat a statement that I'm making. But if you caveat too much, then you undermine your whole point. And so I like this idea of putting some caveats up front and then just saying, like, now we're in the 80% case. Within the 80% case, these are things I think are true.
STEPHANIE: Yeah, that's a really good point. I agree he is very clear about the intended audience. And so when you read this book, you are either on board because you value the same things he does, or you're not because you are focused and your goals are things that are different from him. So I think it was really helpful to get on the same page, even in a piece of content or in a piece of writing. Because I want to use my time well as a reader, so I want to make sure that what I am consuming makes sense for me, and I will find it worthwhile.
David takes a really strong stance on what quality means. And even though that is a pretty subjective value, he describes it as doing things right the first time and acknowledging the reality that we likely won't have the time to go back and clean things up after they've been shipped. So, on this client project, I found myself wanting to refactor things as part of my process, suggesting different implementations to do things the quote, unquote, "right way," or the best way we could, and not everyone shared that sentiment.
I sometimes got pushback, and that was challenging for me to figure out how I wanted to navigate that situation and what I was willing to let go and what I wasn't. And so I'm curious if you've ever been in a consulting position like that where maybe the team and organization's values were a little bit different from your understanding, or if they just weren't clear at all, and you were driving towards something that seemed very nebulous.
JOËL: I think I've been on both sides of that, both sometimes saying, "Look, we need to maybe slow down," or "Here's a thing that we need to do otherwise that's going to cost us on the longer term. Here's an area where we need to invest in quality today." And sort of on the other side where I'll feel like someone is really pushing an overengineered solution claiming it's going to make life a whole lot better, "If we invest three months upfront today, and maybe in three or four years, it'll pay off if certain things happen," that don't really necessarily line up with the immediate goals.
A lot of this, I think, comes down to understanding the client, and their business, and their goals. Sometimes there is a really important deadline for something that has to happen based on an event in the real world. If you were building software for something that had to do with, let's say, the World Cup, you don't want it shipping in January 2023. That's just pointless. And so you've got to prioritize shipping things.
And sometimes you say, "Okay, well, do we ship a few broken things? Or do we prefer to ship something that's a little bit smaller, more tightly scoped, but that holds well together?" That again, you have to really understand the client, their business, their needs. So I think for me those values of sustainability, quality...I forget what the third one was that you'd mentioned.
STEPHANIE: Consistency.
JOËL: Consistency, yes. They all sort of inform how it's going to mesh with the product I'm working on, the goals of that product. Where's it going in the next three months, six months, 12 months? Where's it coming from? Who's the team that I'm working with? Am I with a team of 300 people that are just committing to the main branch all the time with no tests, and we're constantly fighting regressions? Then sustainability looks very different there than on a team of just me and one other person where we're trying to ship something for the World Cup.
STEPHANIE: Oh yeah, I have a lot of thoughts there too. Because I do agree that it can look different and sometimes shift a little bit depending on the situation. What you were just describing about team makeup that is really interesting to me because, yeah, sustainability can look different for different teams.
If you have, let's say, a lot of earlier career developers on your team, maybe you really want to focus on readability and making sure that they're able to navigate the codebase and figure things out over something like more advanced patterns and skills that will just cause them friction. But maybe you have a team where you all agree that that's what sustainability means to you is choosing those more advanced technical patterns and committing to them and figuring out how to maintain that because it's important to you.
And the other thing that you brought up that is also mentioned in this book is that the more information developers have about the future and direction of the business, the better code we can write. For some reason, I've found myself in situations where I don't know all too much about what we are working towards or what the goals of the business are both in the short term and the long term. And I try to make the best guess I can.
But I think in those scenarios, at least moving forward, I would really like to be better about pushing product folks or leadership to explain to me why we're doing what we're doing, kind of share the information that they have so that we can build the best product that we can.
I think sometimes that information doesn't get shared for some reason. They kind of think that engineers are going to go do their engineer thing, and we'll focus on long-term strategy over here. But yeah, I truly believe that the more information we have, the better quality work we can produce.
JOËL: I 100% agree. And I think that's what we see in a lot of classic agile literature talking about things like cross-functional teams or even the client or the product team should be integrated with the development team. You're all one team working together rather than someone has an idea, and then the technical team executes on it.
We see that also in some of the domain-driven design literature as well, where oftentimes projects start, and you sit down with a subject matter expert, and they just walk you through all of the business aspects. And particularly for the purpose of domain-driven design, you talk about a lot of the terms that make sense for the business. You build up a glossary of terms. I think they call it a ubiquitous language of things that are specific to your business and how does that work on a day-to-day basis.
STEPHANIE: Do you have any strategies for getting more clarity around the work and why you're building it if it's not yet available to you?
JOËL: I think there are sort of two scenarios where you have to do that; one of them that comes up maybe more often for us as consultants is onboarding onto a new client. There's a whole new business that we may know nothing about, and we have to learn a lot of that. And so, as part of the onboarding process, I think it's really valuable to have conversations with people who are not part of the dev team to learn about the business side of things.
On a per-feature basis, if you've already been onboarded on a project, you've been there for a while, it's often good to go back to the person who maybe created a ticket, a product person who's asking for a feature, and ask, "Why? Why do you want this?" Ideally, maybe that's even part of the ticket-creating process because the two teams are more integrated, and product team is like, here's a problem we're trying to solve. Here's what we think would be a solution. Or maybe even just "Here's a business problem. We need a technical solution. Can you do that for us?"
But I've often followed up with people outside of the engineering team to ask follow-up questions. And why are we doing this? And sometimes it's even you have to do like five Whys where it's like, "Oh, we're doing this because we need to do this thing for this customer. They asked for it." And it's like, "Okay, well, why are they asking for that?" "Oh, it's because they have this problem." "And why are they having this problem?" And eventually, like, "Oh, I see. Okay."
The real solution has nothing to do with what was asked, and you come up with something that's maybe much tighter scoped or will better solve, and everybody's a winner in that case. But it does require following up. So I guess the short and boring answer is talk to people outside the engineering team.
STEPHANIE: That's a great point. I think the questions that we as engineers ask can drive more clarity to product people as well if we continue to ask those five levels of why in ways that they maybe didn't think about either. We have the opportunity to do that if we want to do our work well, too. That's kind of exciting to me that it isn't just okay, we're handed some work to do, and they've done all of that strategic thinking separately. And having to implement those details, we can kind of start to chip away at what are we really doing here?
And you mentioned talking to people outside of the engineering team. I just was thinking that pairing with non-developers would also be a really great task to do, especially when you get a ticket that's a bit ambiguous and you have questions. And you can always comment on the ticket or whatever and ask your questions. But perhaps there's also a good opportunity to work things through synchronously. In some ways, I think that is a more natural opportunity for that conversation to evolve rather than it being like, okay, I answered these questions, and now I'm going to move on to whatever else I have to do.
JOËL: So you mentioned pairing. It's often good to have someone maybe outside the development team pair with you on a technical thing, but sometimes it's good to flip the script. If you're building especially software for an internal team, it can be really valuable to just shadow one of them for a couple of hours or a day.
I did a project where we were building a tool for an internal sales team. And I had the privilege to shadow a couple of the sales members for a few hours as they're just doing their job. And I'm just asking all the questions like, "Oh, why do you do it that way? And what is the purpose behind this?" And I learned so much about the business by doing that.
STEPHANIE: I love that we took this idea of sustainable development and went beyond just technical design decisions or aspects of how we do our jobs. Because there is so much more that we can do to foster the value of sustainability or whatever other values that you might have, and yeah, I feel really excited to try both these technical strategies from the book and also the collaborative aspects as well.
JOËL: I'm really excited about some of these ideas that are coming up from the book. I think today we basically just talked about the introduction, the idea of sustainability. But I think as maybe you read more in the book, maybe we can do another episode later on talking about some of the more specific technical recommendations, how they relate to sustainability and maybe share some of our thoughts on that.
STEPHANIE: Yeah, I definitely am excited to keep y'all updated on this journey. [laughs]
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up.
JOËL: Show notes for this episode can be found at bikeshed.fm. This show has been produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter. Or at [email protected] via email.
Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël's been traveling. Stephanie's working on professional development. She's also keeping up a little bit more with Ruby news and community news in general and saw that Ruby 3.2 introduced a new class called data to its core library for the use case of creating simple value objects.
This episode is brought to you by Airbrake (airbrake.io/try/bikeshed): frictionless error monitoring and performance insight for your app stack.
Transcript:
AD:
thoughtbot is thrilled to announce our own incubator launching this year. If you are a non-technical founding team with a business idea that involves a web or mobile app, we encourage you to apply for our eight-week program.
We'll help you move forward with confidence in your team, your product vision, and a roadmap for getting you there. Learn more and apply at tbot.io/incubator.
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been traveling for the past few weeks in Europe. I just recently got back to the U.S. and have just gotten used to drinking American-style drip coffee again after having espresso every day for a few weeks. And it's been an adjustment.
STEPHANIE: I bet. I think that it's such a downgrade compared to European espresso. I remember when I was in Italy, I also would really enjoy espresso every day at a local cafe and just be like sitting outside drinking it. And it was very delightful.
JOËL: They're very different experiences. I have to say I do enjoy just holding a hot mug and sort of sipping on it for a long time. It's also a lot weaker. You wouldn't want to do a full hot mug of espresso. That would just be way too intense. But yeah, I think both experiences are enjoyable. They're just different.
STEPHANIE: Yeah. So, that first day with your measly drip coffee and your jet lag, how are you doing on your first day back at work?
JOËL: I did pretty good. I think part of the fun of coming back to the U.S. from Europe is that the jet lag makes me a very productive morning person for a week. Normally, I'm a little bit more of an evening person. So I get to get a bit of an alter ego for a week, and that helps me to transition back into work.
STEPHANIE: Nice.
JOËL: So you've also been on break and have started work again. How are you feeling productivity-wise, kicking off the New Year?
STEPHANIE: I'm actually unbooked this week and the last week too. So I'm not working on client projects, but I am having a lot of time to work on just professional development. And usually, during this downtime, I also like to reassess just how I'm working, and lately, what that has meant for me is changing my note-taking process. And I'm really excited to share this with you because I know that you have talked about this on the show before, I think in a previous episode with a guest, Amanda Beiner.
And I listened to that episode, and I was really inspired because I was feeling like I didn't have a note-taking system that worked super well for me. But you all talked about some tools you used and some, I guess, philosophies around note-taking that like I said, I was really inspired by. And so I hopped on board the Obsidian train. And I'm really excited to share with you my experience with it.
So I really like it because I previously was taking notes in my editor under the impression that, oh, like, everything is in one place. It'll be like a seamless transition from code to note-taking. And I was already writing in Markdown. But I actually didn't like it that much because I found it kind of distracting to have code things kind of around. And if I was navigating files or something, something work or code-related might come up, and that ended up being a bit distracting for me. But I know that that works really well for some people; a coworker of ours, Aji, I know that he takes his notes in Vim and has a really fancy setup for that.
And so I thought maybe that's what I wanted, but it turns out that what I wanted was actually more of a boundary between code and notes. And so, I was assessing different note-taking and knowledge management software. And I have been really enjoying Obsidian because it also has quite a bit of community support. So I've installed a few plugins for just quality-of-life features like snippets which I had in my editor, and now I get to have in Obsidian.
I also installed things like Natural Language Dates. So for my running to-do list, I can just do a shortcut for today, and it'll autofill today's date, which, I don't know, because for me, [laughs] that is just a little bit less mental work that I have to do to remember the date. And yeah, I've been really liking it. I haven't even fully explored backlinking, and that connectivity aspect, which I know is a core feature, but it's been working well for me so far.
JOËL: That's really exciting. I love notes and note-taking and the ways that we can use those to make our lives better as developers and as human beings. Do you have a particular system or way you've approached that? Because I know for me, I probably looked at Obsidian for six months before I kind of had the courage to download it because I didn't want to go into it and not have a way to organize things.
I was like; I don't want to just throw random notes in here. I want to have a system. That might just be me. But did you just kind of jump into it and see, like, oh, a system will emerge? Did you have a particular philosophy going in? How are you approaching taking notes there?
STEPHANIE: That's definitely a you thing because I've definitely had the opposite experience [laughs] where I'm just like, oh, I've downloaded this thing. I'm going to start typing notes and see what happens. I have never really had a good organizational system, which I think is fine for me. I was really leaning on pen and paper notes for a while, and I still have a certain use case for them.
Because I find that when I'm in meetings or one-on-ones and taking notes, I don't actually like to have my hands on the keyboard because of distractions. Like I mentioned earlier, it's really easy for me to, like, oh, accidentally Command-Tab and open Slack and be like, oh, someone posted something new in Slack; let me go read this. And I'm not giving the meeting or the person I'm talking to my full attention, and I really didn't like that.
So I still do pen and paper for things where I want to make sure that I'm not getting distracted. And then, I will transfer any gems from those notes to Obsidian if I find that they are worth putting in a place where I do have a little bit more discoverability and eventually maybe kind of adding on to my process of using those backlinks and connecting thoughts like that. So, so far, it's truly just a list of separate little pages of notes, and yeah, we'll see how it goes. I'm curious what your system for organizing is or if you have kind of figured out something that works well for you.
JOËL: So my approach focuses very heavily on the backlinks. It's loosely inspired by two similar systems of organization called Zettelkasten and evergreen notes. The idea is that you create notes that are ideas. Typically, the title is like a thesis statement, and you keep them very short, focused on a single thing. And if you have a more complex idea, it probably breaks down into two or three, and then you link them to each other as makes sense.
So you create a web of these atomic ideas that are highly interconnected with each other. And then later on, because I use this a lot for either creating content in the future or to help refine my thinking on various software topics, so later on, I can go through and maybe connect three or four things I didn't realize connected together. Or if I'm writing an article or a talk, maybe find three or four of these ideas that I generated at very different moments, but now they're connected. And I can make an article or a talk out of them. So that's sort of the purpose that I use them for and how I've organized things for myself.
STEPHANIE: I think that's a really interesting topic because while I was assessing different software for note-taking and, like I said, knowledge management, I discovered this blog post by Maggie Appleton that was super interesting because she is talking about the term tools of thought, which a lot of these different software kind of leveraged in their marketing copy as like, oh, this software will be like the key to evolving your thinking and help you expand making connections, like you mentioned, in ways that you weren't able to before. And it was very obviously trying to upsell you on this product, and she --
JOËL: It's over the top.
STEPHANIE: A little bit, a little bit. So in this article, I liked that she took a critical lens to that idea and rooted her article in history and gave examples of a bunch of different things in human history that also evolved the ways humans were able to express their thoughts and solve problems. And so some of the ones that she listed were like storytelling and oral tradition. Literally, the written language obviously [laughs] empowered humans to be able to communicate and think in ways that we never could before, but also drawings, and maps, and spreadsheets.
So I thought that was really cool because she was basically saying that tools of thought don't need to be digital, and people claiming that these software, you know, are the new way to think or whatever, it's like, the way we're thinking now, but we also have this long history of using and developing different things that helped us communicate with each other and think about stuff.
JOËL: I think that's something that appealed to me when I was looking at some of these note-taking systems. Zettelkasten, in particular, predates digital technology. The original system was built on note cards, and the digital stuff just made it a little bit easier. But I think also when I was reading about these ideas of keeping ideas small and linking them together, I realized that's already kind of how I tend to organize information when I just hold it in my brain or even when I try to do something like a tweet thread on Twitter where I'll try to break it up.
It might be a larger, more complex idea, but each tweet, I try to get it to kind of stand on its own to make it easier to retweet and all that. And so it becomes a chain of related ideas that maybe build up to something, but each idea stands on its own. And that's kind of how in these systems notes end up working. And they're in a way that you can kind of remix them with each other. So it's not just a linear chain like you would have on Twitter.
STEPHANIE: Yeah, I remember you all in that episode about note-taking with Amanda talked about the value of having an atomic piece of information in every note that you write. And since then, I've been trying to do that more because, especially when I was doing pen and paper, I would just write very loose, messy thoughts down. And I would just think that maybe I would come back to them one day and try to figure out, like, oh, what did I say here, and can I apply it to something?
But it's kind of like doing any kind of refactoring or whatever. It's like, in that moment, you have the most context about what you just wrote down or created. And so I've been a little more intentional about trying to take that thought to its logical end, and then hopefully, it will provide value later.
What you were saying about the connectivity I also wanted to kind of touch on a little bit further because I've realized that for me, a lot of the connection-making happens during times where I'm not very actively trying to think, or reflect, or do a lot of deep work, if you will. Because lately, I've been having a lot of revelations in the shower, or while I'm trying to fall asleep, or just other kinds of meditative activity. And I'm just coming to terms with that's just how my brain works. And doing those kinds of activities has value for me because it's like something is clearly going on in my brain. And I definitely want to just honor that's how it works for me.
JOËL: I had a great conversation recently with another colleague about the gift of boredom and how that can impact our work and what we think about, and our creativity. That was really great. Sometimes it's important to give ourselves a little bit more blank space in our lives. And counter-intuitively, it can make us more productive, even though we're not scheduling ourselves to be productive.
STEPHANIE: Yes, I wholeheartedly agree with that. I think a lot about the feeling of boredom, and for me, that is like the middle of summer break when you're still in school and you just had no obligations whatsoever. And you could just do whatever you wanted and could just laze around and be bored. But letting your mind wander during those times is something I really miss.
And sometimes, when I do experience that feeling, I get a little bit anxious. I'm like, oh, I could be doing something else. There's whatever endless list of chores or things that are, quote, unquote, "productive." But yeah, I really like how you mentioned that there is value in that experience, and it can feel really indulgent, but that can be good too.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So you mentioned recently that you've had a lot of revelations or new ideas that have come upon you or that you've been able to dig into a little bit more. Is there one you'd like to share with the audience?
STEPHANIE: Yeah. So during this downtime that I've had not working on client work, I have been able to keep up a little bit more with Ruby news or just community news in general. And in, I think, an edition of Ruby Weekly, I saw that Ruby 3.2 introduced this new class called data to its core library for the use case of creating simple value objects.
And I was really excited about this new feature because I remembered that you had written a thoughtbot blog post about value objects back in the summer that I had reviewed. That was an opportunity to make a connection between something happening in recent news and some thoughts that I had about this topic a few months ago. But basically, this new class can be used over something like a struct to create objects that are immutable in their values, which is a big improvement if you are trying to follow value object semantics.
JOËL: So, I have not played around with the new data class. How is it different from the existing struct that we have in Ruby?
STEPHANIE: So I think I might actually answer that first by saying how they're similar, which is that they are both vehicles for holding pieces of data. So we've, in the past, been able to use a struct to very cheaply and easily create a new class that has attributes. But one pitfall of using a struct when you're trying to implement something like a value object is that structs also come with writer methods for all of their members.
And so you could change the value of a member, and that kind of inherently goes against the semantics of a value object because, ideally, they're immutable. And so, with the data class, it doesn't offer writer methods essentially. And I think that it freezes the instance as well in the constructor. And so even if you tried to add writer methods, you would eventually get an error.
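As a rough illustration of that difference, here's a small sketch in Ruby 3.2; the class names are invented for the example:

    Coordinate = Data.define(:latitude, :longitude)

    point = Coordinate.new(latitude: 41.88, longitude: -87.63)
    point.latitude          # => 41.88
    point.frozen?           # => true, the instance is frozen on construction
    # point.latitude = 0    # => NoMethodError, no writer methods are generated

    # A struct, by contrast, happily lets you mutate its members:
    LegacyCoordinate = Struct.new(:latitude, :longitude)
    old_point = LegacyCoordinate.new(41.88, -87.63)
    old_point.latitude = 0  # works, which is exactly what a value object shouldn't allow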
JOËL: That's really convenient. I think that may be an area where I've been a little bit frustrated with structs in the past, which is that they can be modified. They basically get treated as if they're hashes with a slightly nicer syntax to interact with them. And I want slightly harder boundaries around the data.
Particularly when I'm using them as value objects, I generally don't want people to modify them because that might lead to some weird bugs in the code where you've got a, I don't know, something represents a time value or a date value or something, and you're trying to do math on it. And instead of giving you a new time or date value, it just modifies the first one. And so now your start date is in the past or something because you happen to subtract a time from it to do a calculation. And you can't assign it to a variable anywhere.
STEPHANIE: Yeah, for sure. Another kind of pitfall I remember noticing about structs was that the struct class includes the Enumerable module, which makes a struct kind of like a collection. Whereas if you are using it for a value object, that's maybe not what you want. So there was a bit of discourse about whether or not the data class should inherit from struct. And I think they landed on it not inheriting because then you can draw a line in the sand and have that stricter enforcement of saying like, this is what a data as value object should be, and this is what it should not be. So I found that pretty valuable too.
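A quick way to see that difference in a console, again just as a sketch with made-up class names:

    Pair  = Struct.new(:x, :y)
    Point = Data.define(:x, :y)

    Pair.new(1, 2).map { |value| value * 2 }   # => [2, 4], Struct includes Enumerable
    Point.new(x: 1, y: 2).respond_to?(:map)    # => false, Data is not a collection
    Point.ancestors.include?(Enumerable)       # => false
    Point < Struct                             # => nil, no ancestry relationship at all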
JOËL: I think I've heard people talk about sort of two classes of problems that are typically solved with a struct; one is something like a value object where you probably don't want it to be writable. You probably don't want it to be enumerable. And it sounds like data now takes on that role very nicely.
The other category of problem is that you have just a hash, and you're trying to incrementally migrate it over to some nicer objects in some kind of domain. And struct actually gives you this really nice intermediate phase where it still mostly behaves like a hash if you needed to, but it also behaves like an object. And it can help you incrementally transition away from just a giant hash into something that's a little bit more programmatic.
STEPHANIE: Yeah, that's a really good point. I think struct will still be a very viable option for that second category that you described. But having this new data class could be a good middle ground before you extract something into its own class because it better encapsulates the idea of a value object.
And one thing that I remember was really interesting about the article that you wrote was that sometimes people forget to implement certain methods when they're writing their own custom value objects. And these come a bit more out of the box with data and just provide a bit more like...what's the word I'm looking for? I'm looking for...you know when you're bowling, and you have those bumpers, I guess? [laughs]
JOËL: Uh-huh.
STEPHANIE: They provide just like safeguards, I guess, for following semantics around value objects that I thought was really important because it's creating an artifact for this concept that didn't exist.
JOËL: And to recap for the audience here, the difference is in how objects are compared for equality. So value objects, if they have the same internal value, even if they're separate objects in memory, should be considered equal. That's how numbers work. That's how hashes work. Generally, primitives in Ruby behave this way. And structs behave that way, and the new data class, it sounds, also behaves that way. Whereas regular objects that you would make they compare based off of the identity of the object, not its value.
So if you create two user instances, not ActiveRecord, but you could create a user class, you create two instances in memory. They both have the same attributes. They will be considered not equal to each other because they're not the same instance in memory, and that's fine for something more complex. But when you're dealing with value objects, it's important that two objects that represent the same thing, like a particular time for a unit of measure or something like that, if they have the same internal value, they must be the same.
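To make that concrete, a small sketch; the classes here are invented for the example:

    # A plain Ruby object compares by identity:
    class PlainUser
      def initialize(name)
        @name = name
      end
    end

    PlainUser.new("Ada") == PlainUser.new("Ada")   # => false, two different objects in memory

    # A Data-based value object compares by its contents:
    UserRecord = Data.define(:name)
    UserRecord.new(name: "Ada") == UserRecord.new(name: "Ada")   # => true, same value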
STEPHANIE: Right. So prior to the introduction of this class, that wasn't really enforced or codified anywhere. It was something that if you knew what a value object was, you could apply that concept to your code and make sure that the code you wrote was semantically aligned with this concept. And what was kind of exciting to me about the addition of this to the core class library in Ruby is that someone could discover this without having to know what a value object is like more formally.
They might be able to see the use of a data class and be like, oh, let me look this up in the official Ruby docs. And then they could learn like, okay, here's what that means, and here's some rules for this concept in a way that, like I mentioned earlier, felt very implicit to me prior. So that, I don't know, was a really exciting new development in my eyes.
JOËL: One of the first episodes that you and I recorded together was about the value of specific vocabulary. And I think part of what the Ruby team has done here is they've taken an implicit concept and given it a name. It's extracted, and it has a name now. And if you use it now, it's because you're doing this data thing, this value object thing. And now there's a documentation page. You can Google it. You can find it rather than just be wondering like, oh, why did someone use a struct in this way and not realize there are some implicit semantics that are different? Or wondering why did they override double equals on this custom class?
STEPHANIE: Yeah, exactly. I think that the introduction of this class also provides a solution for something that you mentioned in that blog post, which was the idea of testing value objects. Because previously, when you did have to make sure that you implemented methods, those comparison methods to align with the concept of a value object, it was very easy to forget or just not know. And so you provided a potential solution of testing value objects via an RSpec shared example.
And I remember thinking like, ooh, that was a really hot topic because we had also been debating about shared examples in general. But yeah, I was just thinking that now that it's part of the core library, I think, in some ways, that eliminates the need to test something that is using a data class anyway because we can rely a little bit more on that dependency.
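For listeners who haven't read that post, a shared example for value-object semantics might look something like this; the names are hypothetical and this isn't the article's exact code:

    RSpec.shared_examples "a value object" do
      it "treats two instances with the same attributes as equal" do
        expect(value).to eq(same_value)
      end

      it "gives equal instances the same hash, so they collapse as Hash keys and Set members" do
        expect(value.hash).to eq(same_value.hash)
      end
    end

    RSpec.describe Coordinate do
      it_behaves_like "a value object" do
        let(:value)      { described_class.new(latitude: 1.0, longitude: 2.0) }
        let(:same_value) { described_class.new(latitude: 1.0, longitude: 2.0) }
      end
    end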
JOËL: Right? It's the built-in behavior now. Do you have any fun uses for value objects recently?
STEPHANIE: I have not necessarily had to implement my own recently. But I do think that the next time I work with one or the next time I think that I might want to have something like a value object it will be a lot easier. And I'm just excited to play around with this and see how it will help solve any problem that might come up. So, Joël, do you have any ideas about when you might reach for a data object?
JOËL: A lot of situations, I think, when you see the primitive obsession smell are a great use case for value objects, or maybe we should call them data objects now, now that this is part of Ruby's vocabulary. I think I often tend to...preemptively sounds bad, but a lot of times, I will try to be careful. Anytime I'm doing anything with raw numbers, magic strings, things like that, I'll try to encapsulate them into some sort of struct. Or even if it's like a pair of numbers, it always goes together, maybe a latitude and longitude.
Now, those are a pair. Do I want to just be passing around a two-element array all the time or a hash that would probably make a very nice data object? If I have a unit of measure, some number that represents not just the abstract concept of three but specifically three miles or three minutes, then I might reach for something like a data class.
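A couple of tiny sketches of what wrapping those primitives could look like; the names are invented, not from any real codebase:

    # Instead of passing a bare [41.88, -87.63] array or hash around:
    Coordinate = Data.define(:latitude, :longitude)

    # Instead of a bare 3 that might mean miles, minutes, or anything else:
    Distance = Data.define(:amount, :unit)

    home   = Coordinate.new(latitude: 41.88, longitude: -87.63)
    radius = Distance.new(amount: 3, unit: :miles)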
STEPHANIE: Yeah, I think that's also true if you're doing any kind of arithmetic or, in general, trying to compare anything about two of the same things. That might be a good indicator as well that you could use something richer, like a value object, to make some of that code more readable, and you get some of those convenient methods for doing those comparisons.
JOËL: Have you ever written code where you just have like some number in the code, and there's a comment afterwards that's like minutes or miles or something like that, just giving you the unit as a comment afterwards?
STEPHANIE: Oh yeah. I've definitely seen some of that code. And yeah, I mean, now that you mentioned it, that's a great use case for what we're talking about, and it's definitely a code smell.
JOËL: It can often be nice as you make these more domain concepts; maybe they start as a data object, but then they might grow with their own custom methods. And maybe you extend data the same way you could extend a struct, or maybe you create a custom class to the point where the user...whoever calls that object, doesn't really need to know or care about the particular unit, just like when you have duration value.
If you have a duration object, you can do the math you want. You can do all the operations and don't have to know whether it is in milliseconds, or seconds, or minutes because it knows that internally and keeps all of the math straight, as opposed to just holding on to a raw number, which is what I've done before: you have some really big number somewhere. You have start, or length, equal to some big number and then a comment: milliseconds. And then, hopefully, whoever does math on that number later remembers to do the division by 1,000 or whatever they need.
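Here's a hedged sketch of that idea, using the block form of Data.define to grow a duration value object with its own math; the API shown is invented for illustration:

    Duration = Data.define(:milliseconds) do
      def +(other)
        self.class.new(milliseconds: milliseconds + other.milliseconds)
      end

      def in_seconds
        milliseconds / 1000.0
      end
    end

    wait     = Duration.new(milliseconds: 1_500)
    retry_in = wait + Duration.new(milliseconds: 500)
    retry_in.in_seconds   # => 2.0, and nobody downstream has to remember the divide-by-1,000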
STEPHANIE: I've certainly worked on code where we've tolerated those magic numbers for probably longer than we should have because maybe we did have the shared understanding that that value represents minutes or milliseconds or whatever, and that was just part of the domain knowledge. But you're right, like when you see them, and without a very clear label, all of that stuff is implied and is really not very friendly for someone coming along in the future.
As well as, like you mentioned earlier, if you have to do math on it later to convert it to something else, that is also a red flag that you could use some kind of abstraction or something to represent this concept at a higher level but also be extensible to different forms, so a duration to represent different amounts of time or money to represent different values and different currencies, stuff like that.
JOËL: Do you have a guideline that you follow as to when something starts being worth extracting into some kind of data object?
STEPHANIE: I don't know if I have particularly clear guidelines, but I do remember feeling frustrated when I've had to test really complicated hashes or just primitives that are holding a lot of different pieces of information in a way that just is very unwieldy when you do have to write a test for it. And if those things were encapsulated in methods, that would have been a lot easier. And so I think that is a bit of a signal for me. Do you have any other guidelines or gut instincts around that?
JOËL: We mentioned the comment that is the unit. That's probably a...I wasn't sure if I would have to call it a code smell, but I'm going to call it a code smell that tells you maybe you should...that value wants to be something a little bit more than just a number. I've gotten suspicious of just raw integers in general, not enough to say that I'm going to make all integers data objects now, but enough to make me pause and think a lot of times. What does this number represent? Should it be a data object?
I think I also tend to default to try to do something like a data object when I'm dealing with API responses. You were talking about hashes and how they can be annoying to test. But also, when you're dealing with data coming back from a third-party API, a giant nested hash is not the most convenient thing to work with, both for the implementation but then also just for the readability of your code. I often try to have almost like a translation layer where very quickly I take the payload from a third-party service and turn it into some kind of object.
STEPHANIE: Yeah, I think the Data class docs themselves have an example of using it for HTTP responses because I think the particular implementation doesn't even require it to have attributes. And so you can use it to just label something rather than requiring a value for it.
JOËL: And that is one thing that is nice about something like a data object versus a hash: a hash could have literally anything in it. And to a certain extent, a data object is self-documenting. So say I've gotten a shopping cart object back from a third-party API, and I want to know what I can get out of that shopping cart.
I can look at the data object. I can open the class and see here are the methods I can call. If it's just a hash, well, I guess I can try to either find the documentation for the API or try to make a real request and then inspect the hash at runtime. But there's not really any way to find out without actually executing the code.
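A minimal sketch of that kind of translation layer using Ruby 3.2's Data class; the ShoppingCart name and the payload fields are made up for the example:

```ruby
# Hypothetical sketch: turn a raw API hash into a self-documenting value object
# as soon as it enters the system, using Ruby 3.2's Data class.
ShoppingCart = Data.define(:id, :item_count, :total_cents)

def parse_cart(payload)
  ShoppingCart.new(
    id: payload.fetch("id"),
    item_count: payload.fetch("items").length,
    total_cents: payload.fetch("total_cents")
  )
end

cart = parse_cart({ "id" => "abc123", "items" => [{}, {}], "total_cents" => 4_200 })
cart.item_count # => 2
```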
STEPHANIE: Yeah, that's totally fair. And what you said about self-documenting makes a lot of sense. And it's always preferable to that stray comment in the code. [laughs]
JOËL: I'm really excited to use the Data class in future Ruby 3.2 projects, so I'm really glad that you brought it up. I've not tried it myself yet.
STEPHANIE: On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeeeeee!!!!!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Happy New Year! It's 2023 🎉 Joël and Stephanie chat about developer resolutions or things they'd like to do this year and then discuss componentization and branching strategies.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So, as of the time that this episode goes live, it's the new year; it's 2023. Happy New Year.
STEPHANIE: Happy New Year, Joël, and Happy New Year to our listeners.
JOËL: So, new year is oftentimes where people like to take maybe resolutions or plan a little bit out of their year. I'm curious, Stephanie, do you have any developer resolutions or things that you'd like to do this year?
STEPHANIE: So, I think last episode, we were talking a little bit about reflection, especially in terms of career progression. And I may have mentioned that I don't really believe in New Year's resolutions. [laughs] But I do think that one intention that I have for myself is to chill a little bit. I think the fall of 2022, for me, was really hectic, exciting but a little busy in terms of speaking and content creation; you know, we started doing this podcast together. And so I do think that winter is my time of hibernation.
Development-wise, one goal that was really inspiring for me that you shared is writing ten blog posts a year. And I think I might keep that in the back of my head as we get into the new year and maybe start to at least just have it in my mind. So if I see anything that comes up, I can be like, oh yes, I had this intention to write more blog posts, which I think might be helpful to just marinate on even during my chill time.
Because when I do finally feel productive and energized again, I can at least have been thinking about it. And I think doing that reflection and marinating is also work, even if it doesn't end up turning into an artifact or anything like that. And so yeah, that's my plan for the beginning of the year, at least.
And then, on a personal note, I'm on a journey to be warm this winter and not be miserable and cold. So as a recovering Californian in Chicago these days, I have just always wanted to stay in and be a little bit of a hermit in the winter because I just have a hard time with the cold. But then I learned that you can just buy things that will help keep you warm.
And so, a recent purchase of mine was a heated, portable blanket. So I will definitely be repping that in my Zoom calls and on the couch. And I've just gotten really stocked up on the warm weather accessories, so fingerless gloves and the like. So I'm excited to just not be miserable this winter. That's my plan for the new year.
JOËL: You'll have to report back on how well that goes, how you feel about that blanket.
STEPHANIE: Oh yeah, I definitely will. Maybe I'll even wear it when we're recording.
JOËL: Maybe next year we can make a gift guide.
STEPHANIE: That would be so fun. I would like that: a gift guide for developers or the developer in your life.
JOËL: Right. Fingerless gloves just because it looks cool. [laughs]
STEPHANIE: I'm wearing them right now. Yeah, I can type, and my hands stay warm, highly recommend.
JOËL: So it's not just a hacker thing to look cool. It actually works.
STEPHANIE: I would say so. As someone who has very cold extremities, it's been a good solution for me.
JOËL: You'd mentioned in a previous episode, talking about conference talk ideas, that you like to let ideas marinate, sort of building up a file of them over the course of a year. And it sounds like you're taking that approach but then also applying it to smaller-form content like blog posts over the course of this winter.
STEPHANIE: I think so. I realized something about myself is that I am a bit of a slow cooker in terms of coming up with ideas, especially if they're rooted in experience. I think it takes me the process of going through something like going through a client project and then honestly, three months later being like, oh, now I can have some distance from it and be like, oh, this is what I was thinking, or this is what I noticed or observed, which I think that's just how I am, and that's okay. I recognize that I don't necessarily churn out content all the time, and, for me, it kind of does feel more like seasons or cycles.
JOËL: So it's funny you mentioned the ten blog posts goal, and that's what I was going to share for my developer goal for the year. I've had that for the past several years. It's a really fun goal to have, I think, because it's aggressive enough that it requires me to put out a fair amount of content, but it's also achievable.
That being said, as of the time of this recording, which is at some undisclosed time prior to the New Year, I only have eight articles that are live on the thoughtbot blog. So for all you listeners in the future in 2023, if you're curious if I achieved my 2022 goals, go check the thoughtbot blog and see did I successfully publish the last two before the end of the month.
STEPHANIE: One thing you had mentioned to me off-air was that you do also tend to backload blog posts towards the end of the year once you realize that that deadline is coming up. Do you think you will do something differently in 2023?
JOËL: I would like to write a little bit more early in the year. I think I'll have to figure out exactly how I want some of my goals to play out. I think I mentioned this in the previous episode; two large themes that I've wanted to focus on are ways to invest in our team and our teammates and then creating content, so things like blog posts, like this podcast, like conference talks. Those were two broad themes that I had given my year in 2022. And I really enjoy those. I think I would want to repeat those themes for 2023 as well, so figuring out exactly how I'm going to interweave them, something I'm going to iterate on.
STEPHANIE: Have you considered breaking down that yearly goal into smaller time intervals, so maybe two or three blog posts a quarter?
JOËL: That might be a better way to do it to make sure that I'm on track. I give myself this as a goal. I'm not super hard on myself if I don't hit it, although I have hit it for multiple years. If I only have eight blog posts this year, that's still an accomplishment I'm proud of. And I think there's some good content that I put out. So I will not be distraught if I don't hit the 10, but it's good to be aiming for something.
STEPHANIE: Yeah, that's a good mindset to have. I also have a personal goal of reading 52 books a year. And this will be my third year attempting to do so. Or, I guess I have been successful in 2021 and 2022 now. But I remember when I first wanted to do it; I didn't tell anyone because I was terrified of not meeting that goal and just feeling a bit disappointed in myself.
And so, I just kept it to myself and didn't mention it to anyone until I got to around 40 or 45 books, and then I could confidently tell people about my goal because, at that point, I was on track and feeling pretty confident that I was going to finish it. So that's my strategy [chuckles] is to not tell anyone until I am pretty much there and then share, and people can be impressed.
JOËL: I feel like there's always a bit of a tension there where when you've got a goal, sometimes you don't want to tell people about it because you don't want to say a thing and then disappoint other people and not get it done. But in some ways, for me, when I can get a goal out of my head and out into almost the real world by telling someone, it makes that goal more real and maybe inspires me to work harder towards it but also maybe helps me believe in myself a little bit more because I've said it out loud.
And, I don't know, maybe saying it in front of a mirror would have the same effect. But getting it out of just my thinking pattern and saying it, "This is what I think I'm going to achieve. This is what I'm trying to do," somehow makes the goal more real for me, makes it more achievable.
STEPHANIE: That makes sense. It's like the difference between saying, "I think I'm going to do this," and "Okay, I'm going to do this."
JOËL: Right. And maybe there's a little bit of social pressure too, if I tell someone, now I don't want to disappoint them. That can be bad because it causes me to doubt myself, but in small amounts, it can maybe help me to push through moments of doubt or moments of feeling like I want to give up.
STEPHANIE: Yeah, I mean, either way, even if you only ended up with eight blog posts in a year, people are just really excited that you're putting content out there. And they're not counting how many posts you put out.
JOËL: In a sense, it's purely a vanity metric for me to know my progress.
STEPHANIE: So I have one thing that happened this week that I would be curious to get your input on. On my client project, I was tasked with making a small UI change to a navigation menu that existed as a separate React project within our Rails repo. And so the task was for...we had this little caret icon that was used in the mobile nav, and the designer wanted it to be reused somewhere else on the desktop nav menu.
And I was digging through the codebase looking to see where the caret was already. And I realized that it was done with CSS on the menu label. So it was really coupled to this menu label and wasn't reusable in that current state. So it took me a little while to figure out how to pull apart the seams and extract it into its own component so I could reuse it where I needed to.
And so I was trying to figure out how we got here because we are using the styled-components library, which should encourage componentization. And I was just thinking about different approaches to building UI features from scratch. And I did a little bit of digging and learned about component-driven development, which suggests the idea of building each component in isolation and thinking through all the relevant states that it might exist in for at least your first couple of use cases and then combining them to create larger components, and then ultimately pages.
And that was interesting to me because it's a little bit different from a strategy that I'm used to, especially if you're implementing a new page or template where you just kind of scaffold out all of the HTML elements that you need. But then when you add on styling to those primitive components, you might end up with a lot of duplication if you are creating very generic things like buttons that end up being coupled to the page that you're working on, especially if you are putting them on the page in a very specific way.
And you might add CSS rules like margins or padding that, again, is coupled to that particular UI that you're building. So I'm curious if you've really thought about building UI from a component level and starting small and then building out or if you also take it from a top-down approach.
JOËL: It's interesting that you mentioned that the component approach really deals with figuring out state. And I think that's probably an area where it shines a lot when you have situations where components can have multiple states. And it's very easy when you're looking at much more of a scenario-driven approach where you just want to say, oh, I want this form input to look like this, like this one mock. But that was only showing the happy path. And you didn't think about all those other states. And so, for situations where you might have a lot of states, the component approach, I think, is really interesting.
I had a really fantastic experience a few years ago pairing with a thoughtbot designer on fairly...well, what we thought was going to be a simple form input. And it turned out that it had a lot of edge cases and funny state things that could happen. And we ended up drawing what essentially was a...you might call that a finite-state machine in a more formal sense. But it's basically a diagram where we show the state of the input. And then we would say, what are the different things that can happen that might transition it into another state?
So maybe it starts empty, but then you start typing, and then something happens, or you have an invalid value, and then something happens. But then, from the invalid state, can you come back to the empty state? Can you come back into the typing state? At what point do we show a red error? Do we clear it out while you're typing? And beyond errors, this particular input was also backed by some remote data. So it was like a typeahead kind of thing pulling data from a server.
And there were a lot of extra edge cases for things like, oh, we're waiting for results, or no results matched, or we got exactly one result. And so that was really interesting. We ended up building up this whole diagram where we showed all the transitions that could happen, the ways you can loop back to a previous state, and it forced us to think about a lot of edge cases that we wouldn't have thought of otherwise.
STEPHANIE: That's really interesting. I think the transitions between different states definitely can get really complicated. While you were saying that, I was reminded of Storybook, the tool for building out components in isolation. And one thing that I really like is that they encourage you to think about different states and edge cases as almost like user stories. I think they're called stories. And you can use their DSL to extract those pieces of information and basically think through kind of what you were saying, but it's built into the tool. And so it really encourages that thought process.
Because I definitely have run into just trying to build out a basic button or something but then having all of these questions that I have to ask designers while I'm implementing it to be like, what should happen here? Or, like, what should it look like when it's disabled, or what happens when it, like you mentioned, gets back data that it wasn't expecting or something like that?
JOËL: Sometimes you have situations where your page doesn't have a lot of state, or your components don't really have a lot of state. And it really is just a static page. In those situations, now you're looking at questions more of reusability rather than state management. And you may or may not want some kind of componentized approach for that. That might look a little bit different depending on the situation; you may not even be using React if it's a completely static page.
So maybe this is server-rendered, and you're trying to componentize using Rails helpers. Or you're using something like BEM, the CSS...I don't know if you'd call it a framework or a structural approach...of defining classes in a more componentized way so that you can reuse styles. So there are a lot of ways to reuse and componentize, even though, oftentimes, when we talk about visual components, we're thinking about React.
STEPHANIE: Yeah, I also did a little digging into ViewComponents because I was, again, just kind of trying to think of a mental model for how to approach building out UIs. And in their docs, they have a really good example about their process for using ViewComponents at GitHub. And basically, the progression is that they implement a single use case component that might live as it is for a while until there is some other use case for that component, and then maybe it's adapted for general use in multiple locations.
And then, if it turns out to be like a really good generic building block, they actually extract it into their open-source component library called Primer, I think is what it is. So that was an interesting process for me as someone who just kind of like did that first step of pulling out this little piece into its own component. And then, right now, it isn't necessarily quite ready for being reused in a bunch of different ways. But I think that was a good first step in setting it up to be able to.
JOËL: Definitely. I think it's easy to over-DRY or, in this case, it's almost like over-abstract in preparation for reuse that might never happen. But oftentimes, it's an incremental thing where you do as much as it makes sense for your current scenario while also leaving yourself the option to easily keep going down that path for future scenarios where there is more duplication. And then, if those scenarios never come, then great, you've saved yourself some work. And if the scenarios do come, then hopefully, it's easy to take the next step.
I gave a talk several years ago at an Elm meetup. Elm is a front-end language that compiles down to JavaScript. And in Elm they don't have components in the same way that React might have. Everything is just functions because of its very functional DNA. And I was talking about how to structure the view layer in terms of functions and how to do so cleanly in a way that is reusable. And one guideline that I had for myself for structuring this kind of code is that a function can either do something or it can branch, but it can't do both.
So if a function (but you can think component here) is splitting into two different situations, then it doesn't get to have any logic inside it. It just calls out to some other component. And the only thing that it does is say, "I'm a branching component. If this happens, pull in this other subcomponent; otherwise, bring in this subcomponent and maybe set up some arguments or something like that." And then the other child components that are rendering various pieces of UI don't get to branch. They are just given this data: render it in this way.
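A loose Ruby translation of that guideline, where the top-level method only branches and the methods it delegates to only do; the Order example and method names are invented for illustration:

```ruby
# Hypothetical sketch: the top-level method only branches; the methods it
# calls only "do" (render some text) and never branch again.
Order = Struct.new(:shipped_at) do
  def shipped?
    !shipped_at.nil?
  end
end

def render_order_status(order)
  if order.shipped?
    render_shipped(order)
  else
    render_pending(order)
  end
end

def render_shipped(order)
  "Shipped on #{order.shipped_at}"
end

def render_pending(order)
  "Your order is being prepared"
end

render_order_status(Order.new("2023-01-05")) # => "Shipped on 2023-01-05"
render_order_status(Order.new(nil))          # => "Your order is being prepared"
```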
STEPHANIE: That makes a lot of sense. I think that also reminds me of the philosophy of separating your components to be container components or presentational components, where there are some that are just focused on what is being rendered and others that have more of that logic in determining what should or should not be displayed.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: It's interesting that we're talking about branching, how to structure it, how to model components that deal with branching or that have to be called from a branch because branching is something that I've been thinking about a lot for the past couple of years. One thing I noticed that we tend to do as developers is that we really, really want to force everything down a single path as much as possible.
Like, there is a main path, and then if you have to branch off of it, you go off on a side path that's as short as possible and then find a way to merge back onto the main path. But there is one single main path in your program, and everything tries to merge back into it. Have you encountered that in your own coding journey?
STEPHANIE: Do you mean having a lot of conditionals along this single path that might take you elsewhere but then, in the end, are just little tangents, and you're trying to get back to that main execution of code?
JOËL: Yes. An example of that might be dealing with a value that is potentially nil in Ruby or null in JavaScript. And so you check if that value is present, and then you do some things. And then you want to do something else, so you have to, again, check if the value is present and then potentially do the next step. And then, again, check if the value is present. And you end up repeating this conditional multiple times.
We have some constructs that make this a little bit nicer. Ruby has the lonely operator that will just sort of keep passing that nil down in a safe way. But it is still doing a nil check at every step of your long chain of actions you're trying to take because that value is potentially null at every step.
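As an illustration of what that chain of nil tolerance can look like (the User/Account/Plan chain here is made up, not from the episode):

```ruby
# Hypothetical sketch of the smell: the uncertainty from one possibly-nil
# value bleeds through the whole chain, so every step has to tolerate nil.
User = Struct.new(:account)
Account = Struct.new(:plan)
Plan = Struct.new(:name)

user = nil # e.g. a lookup that found nothing

plan_name = user&.account&.plan&.name # a nil check at every single link
puts "Renewing #{plan_name}" if plan_name
```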
STEPHANIE: The lonely operator being the safe navigation operator. Is that right?
JOËL: Yes, that's right. That is another name for it.
STEPHANIE: Okay, cool. Glad we're on the same page.
JOËL: The &. We have some quirky names for operators in Ruby.
STEPHANIE: Yeah, we sure do. I hadn't heard of lonely before, so that was a really cool tidbit for me.
JOËL: There's also the spaceship operator, which is less than, equals, greater than (<=>), for comparing objects with Comparable.
STEPHANIE: Ooh, I like that. I've also heard squid operator for interpolating Ruby in ERB.
JOËL: Nice.
STEPHANIE: I think, in my experience, I have seen that chaining of the safe navigation operator and the chaining of checking for nil in code that doesn't quite utilize the Tell, Don't Ask principle where you have to check for nil and all the objects down the line rather than having that functionality extracted in a method that is then, in my opinion, more correctly co-located with the relevant domain model. So I'm curious if you think that the conditionals themselves are an issue or if it's just the way that they were implemented or where they exist.
JOËL: In this particular case, I would say the conditionals are a code smell, and they're probably extraneous. You're checking the same value for null again, or maybe you're checking a derived value for null that you don't need to because you've already checked it earlier. It's code that is not confident because the uncertainty from that initial nil has sort of bled through all of your code.
A classic solution to this problem is to try to push the uncertainty to the edges of your system. And a great resource for that is the book Confident Ruby by Avdi Grimm, which talks about all sorts of techniques for dealing with a lot of those uncertainties that you deal with in code and how to push those to the edges of your system.
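One way that pushing the uncertainty to the edge might look, in the spirit of Confident Ruby; again, the names here are hypothetical:

```ruby
# Hypothetical sketch: deal with the missing value once, at the boundary,
# so the core logic never has to ask "is this nil?" again.
User = Struct.new(:name, :email)

def deliver_welcome(user)
  # Boundary: reject the uncertainty here, once.
  raise ArgumentError, "user is required" if user.nil?

  # Confident code below: no more nil checks.
  puts "Welcome, #{user.name}! Sending mail to #{user.email}."
end

deliver_welcome(User.new("Stephanie", "stephanie@example.com"))
```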
STEPHANIE: Thanks. That does actually remind me of what you were saying about componentization and having that outer component make the decision, and then everything else inside of it doesn't need to worry about it anymore.
JOËL: I think it's all kind of connected. I've come up with sort of three, let's call them principles, that I try to use when structuring code that I think are kind of the same idea viewed from three different points of view, or maybe all kind of converge towards the same ideas. The first one being what I showed earlier, the idea of separating branching code and doing code into two separate places. The second one being to try to branch early, push conditionals higher up your decision tree. And the final one being to keep the code within a single method or component or whatever your structuring element is at the same level of abstraction.
So if you're writing at a higher level calling a lot of lower-level methods, that's great. But then don't mix in some lower-level concerns there. Extract those out to a private method or another object or a component and bring those in, and keep everything at a high level in your high-level components, and then everything at a low level in your low-level components.
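A small sketch of that third principle, keeping one method at a single level of abstraction; the publish_article example is invented for illustration:

```ruby
# Hypothetical sketch: the high-level method reads at one level of
# abstraction, and the lower-level details live in their own methods.
Article = Struct.new(:title)

def publish_article(article)
  validate(article)
  persist(article)
  notify_subscribers(article)
end

def validate(article)
  raise ArgumentError, "title required" if article.title.to_s.empty?
end

def persist(article)
  # low-level persistence details go here...
end

def notify_subscribers(article)
  # low-level notification details go here...
end

publish_article(Article.new("Designing value objects"))
```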
STEPHANIE: Yeah, that makes sense. I think that is perhaps closer to what I was trying to say earlier, where I think conditionals can be okay if they are in the right place. So if you have a controller and you see a bunch of conditionals, I think if that conditional-checking is related to something like rendering, that feels a bit more okay to me as opposed to seeing conditionals that then execute a procedure or a bunch of different things that might be better extracted somewhere else.
JOËL: I think there are two classes of conditional that you have to think about. Some conditionals are just unnecessary. You're doing extra work that is not required by the code because the code is poorly factored, and so you're having to do this extra work. This commonly happens, I think, in large code bases where they're modified over time, and you get this big, scary large method, maybe with deeply nested code, and you want to just make one modification in it. But you're afraid to break other things, so you wrap it in a conditional. But then everybody else is doing the same.
And then you've got this giant tangle of conditionals, some of which are duplicating each other, some of which will never be called just because it's poorly factored, and it just grew that way over time. And so those, if you're sitting down and looking at cleaning up that code, many of them can be entirely eliminated just by structuring things in a cleaner fashion.
STEPHANIE: Yeah, I've definitely experienced what you're talking about. And I think it does provide a lot of value once someone figures out what the heck is going on with all of these conditionals and wraps their head around it if they're able to refactor it to eliminate some of that complexity that has just downstream effects for everyone working in that code. Like, they don't have to do the work of trying to figure out what is going on, especially for unnecessary logic in the first place.
JOËL: I think a classic case I've seen of this is dealing with wizards where you have a bunch of different steps, and they might be all handled in one place. And a classic way that I've often seen people attempt to do this is say, well, there are a lot of things that might be shared between different steps. Or, again, we want to do this one single linear path. And so you might have, say, one giant Rails controller that accepts inputs from all the possible steps in the wizard. And then it will just say, if this parameter is present, do this action; else, if these other parameters are present, do this action.
It's not even like do the step one action or do the step two action. It might be: if the user's name and email are present, then save some data to this table; elsif a phone number is present, trigger this background job; elsif all these other things. But what gets tricky is then you don't know which combinations can happen together.
And then later on, when this gets really big, and you're trying to modify it, and it's like, oh, the customer wants another field on the screen that shows the phone number. But maybe you don't want a background job to be triggered in that case, or maybe it shows up on a different page that you also want to show the phone number on, but now you want the behavior to be slightly different for both of them. And so it gets into this really big tangled mess.
It's also impossible to read that code and know what is going to be executed for each step. So my general preferred approach for that kind of situation...and actually, we have an older episode of The Bike Shed where Steph and Chris discuss this in detail, and their recommendation was similar. So the trick is to branch early, instead of having a single logical path that's just: check condition, do a thing, keep going; check condition, do a thing, keep going.
You have branching at the top level that says, if step one, do the step one things; if step two, do the step two things; if step three, do the step three things. And you can have shared logic between them. You might have some private methods that call each other. And all that is fine. You can have levels of abstraction, all the goodies that you're used to.
But now you have a much simpler branching structure because you branch once at the top level. And that might be a four, or five, six, seven-way branch, which is complex. But there is no more branching down below it. After that, it's five or six linear paths going down instead of one giant path with a bunch of branches on it that merge back onto the main path.
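In Ruby, that top-level branch might look roughly like this; the step names and helper methods are made up for the sketch:

```ruby
# Hypothetical sketch: one branch at the top on the wizard step,
# then each path below is linear with no further branching.
def handle_wizard_step(step, params)
  case step
  when :contact_info then save_contact_info(params)
  when :phone_number then save_phone_and_notify(params)
  when :confirmation then finalize_signup(params)
  else
    raise ArgumentError, "unknown wizard step: #{step}"
  end
end

def save_contact_info(params)
  # persist name and email...
end

def save_phone_and_notify(params)
  # persist the phone number, enqueue the background job...
end

def finalize_signup(params)
  # mark the signup complete...
end
```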
STEPHANIE: Speaking of conditionals with multiple branches, I think we also talked a little bit about this in a previous episode you and I did on case expressions, where you talked about how you handled that wizard with a flat case statement. So if folks want to hear more about our opinions on case expressions, I definitely recommend you check out that episode.
JOËL: One thing that I think is really interesting is that when you have extra if...else expressions that you don't need, and maybe they're nested in a certain way, or they're just really long, you create more paths through your decision tree (if you were to model this as a decision tree) than you actually want. So, going back to the case of the wizard, the way that you structure it with a case expression is there's one, let's say, five-way branch, and then after that, it's just linear paths. So there are five unique ways to traverse that decision tree, which is exactly the number of ways that you want.
In the original implementation that I talked about where everything is an independent condition that says if this param is present, do a thing, keep going. If this other param is present, do a thing, keep going. And any combination of those might stack up together. Well, now we've got a combinatorial explosion because what if the phone number is present but also the first name and email? Do we do all of those things together? And so it's hard for the reader to understand because there literally are a lot of paths that can happen. And many of them are invalid paths. They shouldn't happen.
STEPHANIE: Yeah, I don't want to be anywhere near a combinatorial explosion based on that term. But, yeah, I think it's also very descriptive of what it feels like to have to parse through a bunch of nested conditionals like that to figure out where you are or what is going to happen next.
JOËL: I mentioned earlier on this podcast that I've done a lot of work with the Elm language. And when they're designing types in their community, they often use the expression make impossible states impossible. And so they'll look at the data structures that they're using and ask, "Are there ways that this data structure can be used to represent values that don't make sense in my domain? And can I change that representation, the definition of these data structures such that it now becomes impossible?" There are some heuristics that you can use to try to make that happen.
There's also a bit of a more mathematical way to think about it, which is thinking in terms of cardinalities, which is how many different types of values can be expressed by a given type. So you think a Boolean can only be one of two values, true or false. That is a type with a cardinality of two. You can do this exercise with different primitive types. But also, once you start combining types together, for example, you've got a pair of Booleans. You've got two values, each of which could be in two different states, and so now those two cardinalities multiply. You've got four possibilities for a type that is a pair of Booleans.
This becomes a really interesting analysis when you start thinking about using this to model a state of your application. So let's say you're trying to model something that has three possible states, and you say, oh, I'm going to use two Booleans to model this. It's problematic because two Booleans have four states, but the thing you're trying to model has only three. And so now you're absolutely going to get in some weird invalid state for that one extra combination that you didn't account for. Maybe that's false and false.
I see that happen a lot, even in database design, where you have two Boolean flag columns that interact with each other. And it's like, oh, but they should never both be false because that's some error state that should never happen, and, of course, inevitably, it does. What was really exciting to me was thinking about this mantra of making impossible states impossible. Can we apply that to branching?
In the way that I've structured my code, there should be the same number of possible branches through my decision tree as there are actual paths through the domain that I'm trying to model. So if it is a wizard with five steps, I want my decision tree to have five paths. If my decision tree has more than five paths, then maybe that's a sign that I need to refactor the implementation because I now have some extra invalid paths that I need to trim.
STEPHANIE: I think the phrase making impossible states impossible is really interesting because that mindset would be really helpful to avoid that defensive coding. I think that shows up as all of those unnecessary conditionals and checking for nil values because you just don't know, even though logically, you might know that it's not possible based on the domain or the business logic.
But we all have seen that NoMethodError on nil come up in our error monitoring service. And you're like, oh shoot, I have to fix that. And you reach for that safe navigation operator. And so yeah, the idea of writing confident code, not defensive code (they're opposites to me), is definitely something that I want to keep in mind.
JOËL: I think something that I'm getting out of this episode is also the value of interacting with other language communities and pulling in ideas from there and how that can enrich the way you think about code in a different language. This episode has talked about components in React in JavaScript. We've talked about architectures and CSS. We've pulled in some typing techniques from Elm and how that might maybe help us think about conditionals in Ruby. So it's a very polyglot episode. And I think that enriches our vocabulary and enriches our toolset, even when we're not coding in those languages.
STEPHANIE: Yeah, absolutely. I think it also shows that a lot of these things are universal. Even though there might be different paradigms, a lot of them kind of, like you said, are enriched by knowledge from other philosophies or frameworks, or it all kind of converges.
JOËL: There's a famous quote, and I've seen it attributed to many people, so I'm not even going to try. And it goes something like this: "History may not repeat itself, but it certainly does rhyme." And I feel like maybe we've got a little bit of that going on here in that the problems and solutions might not exactly replicate across languages and paradigms, but they certainly do rhyme.
STEPHANIE: That's a very Joël thing to bring up, I think.
JOËL: [laughs]
STEPHANIE: Classic pulling from history to explain the present.
JOËL: On that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast is brought to you by thoughtbot, your expert strategy, design, development, and product management partner. We bring digital products from idea to success and teach you how because we care. Learn more at thoughtbot.com.
Joël has been thinking a lot recently about array indexing. Stephanie started volunteering at the Chicago Tool Library, a non-profit community lending library for Chicagoans to borrow tools and equipment for DIY home projects!
It's the end of the year and often a time of reflection: looking back on the year and thinking about the next. Stephanie and Joël ponder if open source is a critical way to advance careers as software developers.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we are here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been thinking a lot recently about array indexing. I feel like this is one of the areas where you commonly get confused as a new programmer because most languages start array indexing at zero. And what we really have here are two counting systems, either an offset so how many spaces from the beginning of the array, or a counting system where you count 1,2,3,4. At first, it feels like why would computers ever go with the offset approach? It's so illogical. Counting 1,2,3,4 would feel natural.
But then, the more I think about it, the more I've started seeing the zero-based pattern show up in everyday life. One example, because I enjoy reading history, is how we talk about centuries. You might talk about the 19th century is the Victorian age, roughly. But you might also refer to the 19th century as the 1800s. So we've kind of got these two names that are a little bit off by one. And that's because when you're counting the centuries, you count first century, second century, third century, fourth century, and so on.
But when we actually go by the first two digits, you start with the zeros, then the 100s, then the 200s, 300s, and so on. And so we have a zero-based counting system and a one-based counting system, and we sort of have learned to navigate both simultaneously. So that was really interesting to me to make a connection between history and programming and the fact that sometimes we count from zero, and sometimes we count from one.
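The same off-by-one shows up directly in code; a quick made-up illustration:

```ruby
# Zero-based: an index is an offset from the start of the array.
letters = %w[a b c d]
letters[0] # => "a" (offset 0, but the "first" element when counting from one)

# The centuries example: the year 1837 is in the 1800s, but counting
# centuries from one makes it the 19th century.
# (This ignores the boundary edge case of years like 1800 itself.)
year = 1837
hundreds = (year / 100) * 100 # => 1800 (zero-based label: "the 1800s")
century = (year / 100) + 1    # => 19   (one-based label: "19th century")
```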
STEPHANIE: Yeah, I will have to admit that I always get confused when we're talking about centuries and making the mental connection that 19th century is the 1800s. It always takes me a bit of an extra second to make sure I know what I'm hearing, and I'm attributing it to the right year.
I think another example where I get a bit tripped up is the numbering of floors because, in the U.S., we are counting floors using the one-based counting system, whereas I think in Europe and places outside of North America, to my knowledge, the first floor will be considered the ground floor, and then the second floor will be the first floor and onward. So that is a zero-based counting system that I can recall.
JOËL: I never noticed there was a pattern. I just thought every building was arbitrary in where it counted from.
STEPHANIE: Yeah, I do think it's a cultural thing. I would be really curious to know more about the history of how those counting systems get adopted.
JOËL: So that's a fun thing that I've been exploring recently. What's new in your world, Stephanie?
STEPHANIE: I am really excited to talk about a new real-life update. I started volunteering at the Chicago Tool Library, which is a non-profit community lending library in my city for Chicagoans to borrow tools and equipment for DIY home projects. What I really like about it is they use a pay-what-you-can model so everyone can have access to these resources. It reduces the need for people to buy new things all the time, especially for little one-off projects. And they also provide education to empower folks to learn how to do things themselves, which I thought was really cool.
And another thing that I think might be a little relevant to this audience is that I actually first encountered the Tool Library through its open-source software, which is a Ruby for Good project called Circulate. So the Tool Library had previously been using this software that was built by community members to do all of their lending. And I got to see it in action when I saw a librarian use it to rent out tools to community members. And then I also interfaced with it myself as a member of the Tool Library.
I've borrowed things like saws, cooking appliances like air fryers that they also had. And when I was first a guest on this show, I borrowed a microphone from them to do this podcast because I was just a guest at the time and didn't want to commit to buying a whole new microphone, so that was a really awesome way that I got to benefit from it.
JOËL: It's a fantastic resource for the community.
STEPHANIE: Yeah, I love it so much. If anyone is in Chicago and wants to check it out, I highly recommend it. And even if you're not in Chicago, if the idea of a lending library interests you, you can check out the software on Ruby for Good. And it's no longer being used by the Chicago Tool Library, but it would be really cool to see it be picked up by other people who might want to start something similar in their own hometowns.
JOËL: So you mentioned you're volunteering here. So this means you're going in person and helping people check out items from the library.
STEPHANIE: Yeah, I did my first volunteer librarian shift about a month ago, and right now, they're in the middle of moving from one location to another, so they've had a lot of in-person workdays to get some of that done. But even before that, I had contributed a little bit to the open-source repo, which is just a pretty standard Rails project, so I felt super comfortable with getting my feet wet in it. And it was, I think, my first open-source contribution.
I find that some of the other open-source software, especially developer tooling, is a little scary to get into. So this was a really accessible way for me to contribute to that community, just leveraging the skills that I have for my day-to-day work.
JOËL: Would you recommend this project for our listeners who are looking to maybe get their own first contribution in open source?
STEPHANIE: The Circulate project is actually on a bit of a hiatus right now. But I would definitely suggest people fork it and play around with it if they want to. I also know that Ruby for Good has a bunch of other projects that are Rails apps and have real users and are having an impact that way. So if anyone wants to get into open source in a way that feels accessible and they're building a product that people are using, I definitely recommend checking that out.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So, as we're recording this, it's the end of the year. It's often a time of reflection and looking back on the year and maybe even thinking about the next year and progression. I'm curious since you said this was your introduction to the world of open source, do you think that working on open source is a critical way to advance our careers as software developers?
STEPHANIE: That's a good question. Honestly, I think my answer would be, no, it's not critical. I think it's one avenue for people to explore and increase their impact on the community and increase their technical knowledge, especially if it's in an area that they are not quite working in in their day-to-day, but they're really interested in diving deeper in.
But I do think there's sometimes a lot of pressure to feel like open source is this shining beacon of opportunity for you to dive into and that it'll bring a lot of meaning to the work that you do. And people, obviously, and for a good reason, talk about how special it is that open source is part of the industry that we work in, but I don't necessarily think it's critical.
I do certainly feel inspired by people who create open-source tools or contribute to Ruby or Rails. But I don't necessarily think that it's something that should be a rule and that everyone needs to get into it or contribute to it. Because there are many ways that people can have an impact having influence on the community, and that way is one. But there's also a lot of value, even just focusing on the team that you're on and your company internally.
JOËL: I appreciate the nuance there because I think, like you said, we often view open source as the main thing that everyone should be doing to get ahead. And there are a lot of different ways to improve your skill and then to get ahead in your career, which are not always correlated. One really basic thing helped me much more than I expected: I was learning a new language, Elm. I joined their online Slack community and just hung out in the chat room and answered the most beginner questions because I barely knew the language at the time.
And most of these could be found just by looking up the documentation or by opening up a REPL and experimenting with a thing and giving an answer, which are skills that, as a programmer who's got some experience, I take for granted but that not everyone has that as a reflex. Because Googling, searching documentation, crafting experiments in the REPL those are all skills that you have to learn to build over time. But answering those very basic questions over and over over the course of a few months actually taught me so much about the language, and I'm not doing anything fancy.
STEPHANIE: That's awesome. I have a friend who, during a time when I think she was struggling with her confidence in her technical skill and was feeling a bit stuck at work, spent an afternoon answering Stack Overflow questions on basic Ruby and Rails, and that gave her a lot of joy. Because she recognized that she was the person Googling those questions and needing to find answers many years ago, and that was one way that she could pay it forward. And I think she had a lot of empathy, like I said, for those people who are needing a little help, and it felt really good for her to be able to provide it.
JOËL: It's a way to have an impact on other people while also solidifying your own knowledge.
STEPHANIE: Yeah, exactly.
JOËL: So we've mentioned a couple of different ways where you can level up your skills, that might be through helping out other people online, that could be through open source. But I'd like to zoom out a little bit and look at not just improving your technical skills but thinking about career in general when you're looking out over the next 10, 20, 30 years. Do you have an approach that you like to take when you're thinking that broadly?
STEPHANIE: For me, I have had trouble thinking about a five or 10-year plan because things often don't turn out the way that I envisioned them. And so I think I've come to realize that leaning into how I feel about things in any given moment is more valuable and oftentimes more accurate to what I really want. Because I can have an idea of what I want my career to look like, but the things that ring most true are what I'm feeling in the moment.
And so you mentioned we're releasing this episode at the end of the year. I do tend to do a little bit of recap about how my year went if I spent it doing things that fulfilled me and made me feel good, if I grew in the ways that I wanted, even separate from any performance review. I know that this is a time of reflection for a lot of people. And I don't personally ascribe to New Year's resolutions, but I do like to think about themes or intentions. And those are things that ground me rather than setting particular goals that I may or may not achieve; I may realize I want to change.
So yeah, I really recommend just sitting with yourself and spending time thinking about what you want, and that could mean a promotion, but that could also mean a more interesting project using new technology. It could mean more responsibility and decision-making power. It could mean a move into management. I think it's different for everyone. And so when people have asked me about advice or what they should do in terms of coming to a crossroads between jobs or between projects, I think that you really can't tell anyone else what is the right move for them; only they can decide.
JOËL: And tech, it's such a broad field. There are so many different roles and paths you can take through it. It's not as if there's just junior engineer, engineer one, engineer two, engineer three, the same everywhere, with only one way forward, up or stagnation, and that's it. You really get to choose your own adventure in this industry, and that's exciting and maybe a little bit terrifying.
STEPHANIE: Oh yeah, for sure. I like that you brought up the different levels and roles that you could have because I have found companies that provide a career ladder or engineering ladder that has been useful for me in the past in figuring out if the next step at the company that I'm at is what I want. And it's helpful. It's very clear to me, okay, these are the skills that I need to get promoted into this next level. But other times, that description describes something that I'm not interested in, and that is also really helpful information.
JOËL: Was there ever a moment in your own career where you had to navigate some of these decisions to decide what path you wanted to take as opposed to just following a ladder up?
STEPHANIE: Oh yeah. I was presented opportunities to start getting a feel for management or overseeing a team as a lead. And people had really great feedback for me that that was something that I had shown leadership in, and they thought I would do a great job in that role. But I actually decided to kind of hit the brakes a little bit on that particular route because what I realized I wanted at the time was to focus more on being an IC and deepening my technical knowledge. And that was really tough.
I do also think that a lot of women are pushed into management because they end up doing a lot of the glue work that comes with unblocking people, supporting people, and project management and those are all skills that, like, quote, unquote, "lend themselves towards management." But just because we do that work doesn't necessarily mean that that's the direction that we want our careers to go in. And so that was a really tough thing that I had to do was to make it really clear that I wasn't quite ready for that yet. And I might be in the future, but in that moment, just standing my ground and being like, actually, I want to focus elsewhere instead.
JOËL: That's really valuable, knowing yourself and knowing where you want to go, what the next step is. Are there any exercises you like to do to try to figure that out for yourself? Because I know something that I've struggled with sometimes is not being quite sure what I want.
STEPHANIE: I journal a lot in my personal life and also about work. I think I tend to revisit that in my notes, especially about things I've learned or things that I felt excited about in terms of projects and what I've been unlearning, and just going through all of the things that I've collected over the year and synthesizing that information.
I also really like to lean on my friends and peers. So I really enjoy a good one-on-one when we just talk about those types of things, you know, dreams, hopes, goals. I like to lean on my manager a lot, too, because oftentimes, they're able to see things about my work over the past year that maybe I was just too in the weeds to be able to have that higher level perspective about.
As a third-party observer, they see a lot of things that you might not be able to, either on your current project or even opportunities for you to step into at a higher level in the company. So yeah, I think that, in some ways, it's a solitary activity, but it doesn't always have to be.
JOËL: I remember having a really good conversation with my manager as well, at some point, talking about that decision of am I interested in maybe moving into the management track? Do I want to stay on the IC side of things? And that was a really good conversation to have.
STEPHANIE: So after having those conversations and kind of figuring out what direction you wanted to go, were there times when you had to actively make that choice or advocate for yourself?
JOËL: Yes. One of the things that I realized that I care about is investing in other people, and sort of the mentoring, supporting side of things which you might think is kind of a management activity. But management is a little bit different than that. I prefer the coaching aspect than the management aspect. And so what I wanted to do at some point once, I realized that that's what I wanted and that a management position would not fulfill that desire, I started looking to see is there a way to craft that role within the company?
A common thing that happens, I think, in workplaces is that you are given roles or titles for things that you already do. Clearly, if there's something that I care about, I needed to be doing it already in my day-to-day work, and I needed to be doing it at a fairly high level. And so I focused efforts there, trying to say I want to get better at this. I want to do this in the opportunities that I do have in my current role.
And then eventually, I did go to my manager and said, "Look, this is what I am looking for in the next step." We had a discussion about whether or not management could be a fit or if we could customize a management role for this, and eventually decided that an IC role would be a better fit. And among other things, we introduced at thoughtbot the role of principal developer, which is kind of the next step on our career ladder. It can have a slightly different emphasis for different people on the team who hold that role, but, for me, a big part of it was its focus on having more impact on the broader team.
STEPHANIE: That's really cool. I really appreciate that you were able to come to the table with what you wanted and able to have a discussion about, okay, so management might not be the right fit. But how can we create this new role that not only benefited you but also benefited the rest of the company because that hadn't been an area that they had quite figured out yet. But by doing that, you essentially did exactly the kind of coaching and making an impact [chuckles] that you had also shared you had been wanting because you just opened this new door for others to also eventually work towards. And I think that's really awesome.
That reminds me a lot of the idea of being directly responsible for yourself and your career. There's a really good blog post by a woman named Cate, who is an engineering director at DuckDuckGo. I'll link to it in the show notes. But she writes a lot about how you have to own your own career and find opportunities to have that agency. And you can always ask. Like, you might not get everything that you want, but by asking and by bringing it up, you at least can start the conversation rather than expecting or just hoping that things will turn out the way that you want without having said anything.
A couple of things that she says in the article that I also really like is the idea of expecting less from your job and more from your career.
JOËL: Hmmm.
STEPHANIE: At any given point, your job might not check all of the boxes, but maybe it checks some, and that is worthwhile. And once you get to a point where maybe the job is not really doing anything towards the direction you want your overall career to go, that might be the time to reevaluate. And then she also mentions learning from feedback and asking for feedback, and making sure that, beyond the things you're able to identify yourself, you learn from others about areas you can work on to have a better impact on your team; that's also really important in progressing your career quickly.
JOËL: So how is this mindset of owning your career path maybe different than the default that a lot of people might assume in our industry? It sounds like it's a much more proactive approach. We talked already about doing the work to figure out what you want out of a career, what you care about, as opposed to just being told what you should care about by others. Are there other aspects that you have to sort of own as part of owning that career?
STEPHANIE: I mean, I think it's just vital to having a work experience that is fulfilling and brings you joy and doesn't bog you down. I know we all have to work, but we also all have the capacity to exercise our agency there.
I know we did talk a little about management earlier, and I wanted to also plug a book, "The Manager's Path" by Camille Fournier, which is about management. But she has a really excellent first chapter about how to be managed and what you can expect from having to be an employee with a manager but also what power you have in that dynamic. She says that while you can be given opportunities and have areas of growth pointed out to you, your manager can't read your mind, and they can't tell you what will make you happy.
And so I have seen a lot of people spend time worrying about if they're doing the right things to get to the next level. But oftentimes, we just haven't really talked enough about how that next level is really totally different. And there are so many routes that that could take, whether that is becoming an open-source maintainer, or producing content like blog posts or podcasts even, or speaking at conferences, or management.
Once I realized that there were so many different opportunities available to me, I did feel a bit liberated because it does seem like, oh, you're just supposed to level up your technical skills until you've become this superstar coder. But that's not what everyone wants, and I think that's okay.
JOËL: And, like you said, there are so many different areas where you might choose to focus or invest time into, and you don't have to do them all. You don't have to be the super prolific open-source person, and also keynoting at conferences, and also publishing the book, and also, you know, whatever you want to add in there.
So once you know your goals, how do you make those goals a reality? We've been talking a lot about know yourself and have some goals. But at some point, you have to translate those goals into actions that will take you one step at a time towards those goals, and sometimes that translation step is hard.
STEPHANIE: It is hard. I think this is another thing I would work on with my manager, especially if I'm on a project where I'm not quite seeing those opportunities. Like I said, usually having another perspective or another set of eyes on what you're working on can make clear the specific and concrete aspects that you can spend your energy on.
So if it's wanting to get better at testing, it's like, okay, what does the current test suite look like, and what are some opportunities that you can provide new value to the test suite to make an impact on the team? Or what are some refactoring opportunities you can make if you are wanting to have more of that experience outside of the regular ticketed feature work that you have to do?
JOËL: I think it's interesting that you mentioned impact on the team because not only do you want to level up some skills, but if nobody knows about it, your odds of getting that promotion or getting recognized for it are very low. So not only do you have to get good at technical systems, you have to get good at social systems as well.
I was recently reading an article about the role of kingship in medieval Europe and how it's very much a role that needs to play out in public in order to build legitimacy so that people will do what you say. You need to be seen to do the things that everybody has in their mental kind of checklist are things that a good king does.
And some of those are somewhat divorced from the reality of what actually is effective governance. It could be various public rituals that you do that people see and are like, oh yes, you're doing this parade every year. You're looking the part of a good king; therefore, I think of you as a good king. It could be military campaigns because there are a lot of those in the Middle Ages.
And there's this interesting cycle where kings that have long and effective reigns then get to influence what the next generation of kings are going to have to do in order to look legitimate because people will point back at you and be like, well, Stephanie was an effective ruler, and she did X, Y, Z. And so, in order to look the part of an effective ruler, you should be doing those same things.
STEPHANIE: That's fascinating. In some ways, I struggle with the idea that you have to prove that you're, you know, doing the kingly things and worthy of that title. But I do think that there is some degree of truth to that in your career as well, where you want to make sure that the work you're doing is visible.
And you also just, in general, bring up a really good idea about the importance of leadership in career progression. I think that in my experience, and from what I've observed, a vital way to progress your career is to just start demonstrating leadership qualities, and that could look like reaching out to new team members and helping them with onboarding. That could mean updating the documentation, just taking the initiative and doing it.
That could also mean starting to voice more of your opinions about risks or red flags about a certain technical implementation or a project because you have amassed the experience to be able to make those decisions and put in your two cents and then making sure that the choices that are made are the right ones.
JOËL: Additionally, I think even when you're doing things that are a little bit more inward-focused, like learning something new, you can generally find some kind of artifact that you can take and share more broadly with a team. So maybe you experimented with something, and you wrote up a small code example to showcase the thing that you're trying out; make a Gist on GitHub and share it with your team. If you learn something new, maybe write a blog post about it. Maybe even just start a thread in Slack and start a conversation on something that you learned recently.
These can be really low effort, but I always look for opportunities to take things that I have learned, things where I'm sort of working a little bit more inwardly on myself and see how can I share that with the rest of the team? Both because it benefits the team, they get to benefit from the impact of some of what you've done but also, it helps a little bit with making sure that your work is visible.
STEPHANIE: Yeah, absolutely.
JOËL: So we've been talking a lot about improving ourselves technically, but there's one question that we've danced around that we haven't actually addressed, and I'm curious about your thoughts here. For someone who's early career, do you think it's more valuable to be a specialist, someone who goes all in deep on one technology and becomes great at it? Or is it better to go more broad, become a generalist, and know a little bit about a lot of things? From the point of view of what will help move my career forward.
STEPHANIE: I personally do think there is an aspect of being a generalist for a little while, a few years maybe, to get a taste of what is available to you. I think that is valuable before really committing to decide, okay, like, this is what I want to specialize in. Honestly, as a generalist myself, I still do feel a bit like I don't know what I want to dive deep into and commit myself a little bit to being like, okay, I'm going to have to sacrifice learning all of these other things to really focus on this one aspect.
So I have found that being a generalist also kind of gives me the flexibility to work on different projects that might require learning a new language, or at least one that I am less familiar with. And I know that that's a skill in and of itself, being able to move on to different things and gather information and the skills you need to start contributing and working effectively quickly. So, honestly, I think I can really only speak to that experience, but it has served me well and is, for the most part, enjoyable to me at this present moment. What about you? Do you have any thoughts about generalist versus specialist?
JOËL: I think, in a certain sense, there's no right answer. Like we said earlier, there are multiple paths to a career in tech, and you can go through both. I think something that I've seen be less effective, especially very early career folks, is trying to go too broad, jumping on every new language or framework every couple of weeks, every month, and just dipping your toe in it and then moving on to something else and never really learning deeply, or synthesizing, or building a mental model of things. And so you're kind of stuck in the shallow end forever, and it's hard to break through into that initial level of expertise.
So I think, especially very early career people, I tend to recommend pick one language or technology and focus on getting good at that and then branch out. And, of course, you're never doing everything in a vacuum because there are a bajillion dev skills you need to learn beyond a language or framework.
So there are three areas of focus that I often like to recommend to people: one is pick a primary language or framework and get good at it. Two, learn some evergreen skills; these are things like version control, so Git, SQL, using the command line. And these are not things that you need to master on day one because you're going to use them your entire career. So learn a few things, move on, come back to them next month, learn a few more things, and just keep coming back every now and then over the course of your entire career to deepen those skills, and that will serve you very well.
And then, finally, some random thing you're interested in. I find that I learn so much faster and so much more deeply on topics that I'm interested in or passionate about. And that interest can be very random sometimes, and it can also be fleeting. It can be, oh, I was interested in a thing for a little bit, and I dug into it, and then I moved on to something else.
If I have a career or learning plan, I like to leave that room for spontaneity to say there will be things that are maybe not strategically important as my next step, but I can learn them because I'm interested in them because they bring me joy. And then later on, maybe that will actually be the foundation of something important two years down the line where I can draw on that knowledge.
STEPHANIE: You bring up a really interesting point. I do think my interpretation of generalist did line up more with the idea of those evergreen skills. So I think also about debugging and testing, and those are just part of the things that you're doing every day. And that might look different from project to project depending on what language or framework you're using and what testing philosophy people on your team abide by.
But yeah, those are areas that I do think investing in will serve you well across projects and help put you in a position where you can jump into anything and be like, okay, I have these core foundational beliefs and skills about this work and now, okay, let me figure out how to apply them to the task at hand.
JOËL: Are you familiar with the metaphor of the T-shaped developer?
STEPHANIE: I don't think so.
JOËL: So the idea is that you want to balance a broad set of skills that you're a generalist at, that you know a little bit about, with a few things that you are a deep expert in. So you have that horizontal bar, but you also have a deep area of expertise, which creates a kind of T shape. In a sense, maybe that's just trying to say, like, do both.
But I was recently reading an article that was advocating for not only a T-shaped developer as a sort of starting point but then also beyond that, over the course of a long career, you have plenty of opportunities to develop more than one specialization. And so now you start having a very broad base of general knowledge as well as multiple areas that you have spent significant time becoming an expert in. And this article referred to this idea as a comb-shaped developer, and that's something you work up to over the course of years or decades in tech.
STEPHANIE: That's very cool. I love the idea that you might start out T-shaped, but then what you're doing is kind of adding to your set of skills, and it's an additive process. You'd have more teeth in your comb [laughs] rather than it replacing something or a set of skills.
On that note, shall we wrap up?
JOËL: Let's wrap up.
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Stephanie and Joël attended RubyConf Mini, and both spoke there. They discuss takeaways and highlights from the conference.
The core idea for this episode is explained in this article: Constructive vs. Predicative Data. This came up recently in a conversation at thoughtbot about designing a database schema and what constraints could be encoded in the schema directly versus needing some kind of trigger or Rails validation to cover it.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
RubyConf Mini
Episode on CFP - The Bike Shed 352: Case Expressions
Podcast panel: The Ruby on Rails Podcast Episode 446: I'm Giving A Talk on Thursday
Slides for FP talk: Functional Programming for Fun and Profit!!
Episode on language: The Bike Shed - 356: The Value of Specialized Vocabulary
Constructive vs. Predicative data
Avoid the Three-state Boolean Problem
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville.
STEPHANIE: And I'm Stephanie Minn. And together, we're here to share a bit of what we've learned along the way.
JOËL: So something that's very recent in both of our worlds has been that both you and I, Stephanie, attended RubyConf Mini, and we both spoke there. What are some of your takeaways or highlights from the conference?
STEPHANIE: Seeing you in person was definitely a highlight. I really enjoyed that. Because we're working remotely, I don't, you know, get to be in an office with you day to day. And it was really awesome to hang out with you, I think, for the first time as co-hosts of the podcast. And we both, I think, met some people at the conference too that were listeners. And it was really awesome to share that experience with you.
JOËL: I had the interesting experience of several people who told me they recognized me by my voice, which I think is a common thing for podcasters, but as a new host, I was surprised by that.
STEPHANIE: Yeah, that's weird. As a podcast listener, too, I definitely know exactly what you're talking about where it's like, oh yeah, I can identify someone by their voice. But to then be that person that people can recognize is pretty weird.
I also really enjoyed being an audience member of the podcast panel that you are on at the conference with other podcast folks. It was moderated by Brittany Martin. And yeah, I just thought you represented The Bike Shed really well and spoke for both of us about podcasting in a way that I really appreciated.
JOËL: And for any of our listeners who were not able to be there in person, Brittany has published that episode as a podcast, and we will link to it in the show notes.
STEPHANIE: Another thing I really liked about RubyConf Mini was the smaller scale. I think it was about 150 or so attendees, which felt very different from traditional Ruby Central conferences with several hundreds of people. I heard a lot from other folks there that they really liked the regional aspect of it, the intimacy of the smaller conference.
I think I got more of an opportunity to run into people that I'd met at the conference over the next few days. And there was, yeah, definitely a sense of tighter knit community there, you know, when you meet someone, and then you bump into them on the way into a talk, and then you can ask how their day was going and any highlights that they had. And yeah, I guess I haven't really attended a conference that size before, and so that felt like a very special experience for me.
JOËL: I 100% agree. I think the smaller format definitely makes it a little bit more intimate, makes it much easier, I think, to build some of those social connections, to meet with people, and to have some good conversations. I think the format of the conference as well favored that. There were, I think, larger breaks between talks that encouraged people to hang out and talk. And, as you said, because it's smaller, you also get to see the same people over the course of a few different breaks instead of being like, oh, I met a stranger on the morning of day one, and then in the afternoon, I met another stranger. And it's just constantly introducing yourself.
One thing that was really interesting to me is that the experience of being a speaker is very different than just attending. As a speaker, you get to go to the speaker dinner and connect with a lot of the other speakers there. Some of them might be quote, unquote "famous people" that you're not quite comfortable just walking up to and introducing yourself to. But at the smaller dinner, you just find yourself sitting next to them, enjoying some food or a drink, and getting into conversations.
It's also much easier to have people come up to you during the conference. Because you're a speaker, people will come and talk to you. So if you tend to be a little bit more introverted, as long as you can get over your fear of being on stage and public speaking, being a speaker actually makes social interaction much easier. I would recommend to any of our listeners who are wondering, how can I get more out of a conference? How can I get better connections, better conversations? Consider being a speaker.
STEPHANIE: Yeah, absolutely. We've talked about this before; I think when we chatted about writing our CFPs for this conference that speaking doesn't have to be a really big, scary thing, but everyone has something to say. I think we had mentioned in previous episodes that your talk topic came out of just a discussion that you had internally, and you were like, wow, enumerables are so cool, like, let me dig deeper into them and just share what I learned. So I totally recommend it.
And this conference was my first in-real-life speaking opportunity as well, and that felt super different from my experience last time doing it virtually. You know, I talk about how much I love that sense of community all the time, but it really felt true for me this time around, where I could see the audience react to the things I was saying and maybe go off the cuff a little bit.
And then yeah, at the end, having people come up to me was really awesome to just talk about pairing, which is what I spoke about, and just share our experiences. And they asked what I thought about some things, and it was really cool to just be able to spread that knowledge around. And one thing I noticed you did a lot was come up to speakers after they wrapped up their talks. You were almost always the first person to get up and congratulate them and just get the ball rolling on following up on the things they talked about. Is that something that you really enjoy doing or find particularly valuable as an audience member or speaker?
JOËL: Yes, both. I think, as a speaker, it's really validating to have people come up to you after the talk and either just tell you they liked the talk or ask a question. I generally don't like to do just open questions after a talk from the audience because then you get the classic; this is more of a comment than a question or people who will tell you that you had a typo on one of your code slides. Like, none of that is useful to anyone.
So, if you're really interested, come talk to me afterwards. And then that actually makes me feel like my talk connected with people, and people were paying attention, people enjoyed it, people were learning. So I try to pay that forward as well for talks that I listened to, go up to the speaker, and tell them one thing that I appreciated about the talk or a thing that I learned, or something that got me excited in their content.
STEPHANIE: Yeah, I'm sure that it's very appreciated. And it also breaks the awkward silence at the end when the speaker finishes and people aren't sure if it's okay for them to get up and start moving around. Yeah, I thought that was a really good way to kind of just encourage people to start chatting with each other and moving into those break times that we mentioned earlier, those opportunities to socialize.
JOËL: Another thing that I think is really fun that you can do at in-person conferences, and I know you were doing it a lot, is going to see the talks of friends and colleagues and sitting in the front row and just being there to cheer them on and encourage them. Again, I think that makes a big difference when you are on stage, and you see these people who are your friends and colleagues there to support you. It gives you that boost of confidence. And when you're there in the audience, it's fun to cheer on somebody else.
STEPHANIE: Oh yeah. You gave me a lot of thumbs-ups during my talk, and I really appreciated that. [laughs] So I'm curious if there were any talks that stood out to you that you got to see.
JOËL: I was really inspired by your talk on pair programming. I think there are a lot of things I can take from that to improve the way I pair. I was also inspired by Aji Slater's talk on automating manual tasks that you have to do in an iterative way. That one really hit home because, on my current project, I have been doing a lot of manual things. And I just have random snippets of code, like some shell script lines or Ruby console lines, that I copy-paste out of Slack conversations because I've shared them with other people who are doing similar work.
And I realized that a lot of his advice would apply to the work that I'm doing and how that could really make things better. So that was one of those talks I was listening to, and I was like, oh, you know what? Monday morning, when I go back to my project, this is something that I'm going to start doing. This is something I'm going to change in the way I do my day-to-day work.
STEPHANIE: Yeah, absolutely. I have so many tasks that I would like to get automated, and I keep thinking that one day I will magically have more time in my schedule to get to them. But I liked that his talk gave pretty concrete strategies for baking automation into your regular, like you said, day-to-day workflow, and that lowers the activation energy to getting it done. And then those things can be iterated on and could eventually become, in an ideal world, a fully-fledged feature that you put together from doing those repetitive tasks. And yeah, they provide a lot of value not just to you but can eventually provide value to your co-workers and then even your users in the future.
JOËL: Were there any talks that stood out for you?
STEPHANIE: One talk that I really enjoyed was Jenny Shih's talk about Functional Programming for Fun and Profit. I have attended a lot of functional programming talks within the Ruby realm, at least to try to get a better sense of how it can apply to my work and the languages and paradigms that I use. And honestly, what I liked about it was that it didn't get too in the weeds about functional programming. What she did was provide mental models for understanding the paradigm, which I think was a good vehicle for understanding things very generally.
And, for me, in a talk, it's really hard to pay attention to lines of code and to read code on the fly while people are presenting. For me, that is just not how I like to consume that information. And so she provided themes and, like I said, those mental models, which I know you really like to use a lot too in teaching people new concepts. For me, I didn't fully learn what a monad was, once again, but at least having that repeated exposure to those foundational aspects, I think, will eventually lead me to be able to grok those things a little more comprehensively the next time I see it or whenever I decide to dig deeper.
JOËL: What was a mental model that was shared that connected with you particularly?
STEPHANIE: So one of the main mental models that she shared was thinking about a program in terms of these three dimensions: value, behavior, and time. She had a nice slide that showed the difference between the object-oriented paradigm, where value and behavior are contained by objects, where time is kind of inherently wrapped up in those objects that hold information about the state through values and behavior. Whereas in her functional programming example, those three dimensions were a bit separate. And I found that distinction to be really helpful in separating things that felt very implicit before, but it was nice to see them broken out into very clear concepts in terms of building blocks of a program.
JOËL: So it's helpful then when thinking...when you look at code, if you can think about it in those three different dimensions to help think about, am I taking a functional or other approach in this particular dimension when working with this code?
STEPHANIE: Yeah, exactly. I think it also gave me more of a vocabulary to describe the pros and cons of each and a lens of thinking about which I might want to choose for the particular problem at hand.
JOËL: So you mentioned there's a visual for these three dimensions from the slides. Are those slides publicly available?
STEPHANIE: They are. I will link to them in the show notes.
JOËL: So all of these talks were recorded. They're not yet available to the public, but I think the plan is to publish them on YouTube sometime in the new year, so that means probably January 2023. And a big shout out to the AV team and everyone who is involved in recording these.
STEPHANIE: Yeah, I am definitely looking out for a link to my talk so I can send it to my mom. I also wanted to give a little shout-out to the organizers of RubyConf Mini: Jemma Issroff, Emily Samp, and Andy Croll.
JOËL: Woo!
STEPHANIE: They put on just a really awesome conference, and I feel very grateful that I got a chance to attend with you, Joël.
JOËL: It was definitely a delightful experience.
STEPHANIE: Delightful. That's a reference to Joël's talk for those of you who are listening.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: Coming back from the conference, I recently had a really interesting conversation with some other colleagues at thoughtbot. We were looking at a database schema for a new application and talking about some of the trade-offs involved in how that schema is structured, so what tables we want to have. Do we want to have indexes? Things like that. And particularly around some of the assumptions or business rules that would come into play.
So we're looking at...we'd drawn out this Entity Relationship Diagram (ERD). In it, we're looking at all the tables, and something that comes up immediately is like, oh, it's possible to have some bad data that could show up in these columns. Or it's possible that this relationship could exist where this table has a foreign key on this table, but really, that should never happen in this particular way of working.
And so then the question became, how do we try to prevent these things that currently the schema allows but that are not valid in this particular business domain? Do we want to change the schema somehow and make that stricter or find some way to prevent it? Do we want to add some kind of validation that will check some business rules first before inserting or updating a record? I'm curious, have you ever been in a situation like that where you had to balance those two approaches to enforcing business rules on your database?
A classic small example of this is a situation where let's say, you have a users' table and you have a name column on there. And you want to ensure that that name must always be present; all users must have names. Do you try to enforce that via the schema with a NOT NULL constraint? Or maybe you try to enforce that with a validation, maybe a presence validation at the Rails level.
Or if you're really into SQL, maybe some fancy trigger, but do it in a validation style rather than trying to force this using the schema. And our particular scenario was a little bit more complex than just one column; it was more to do with associations. But I think this sort of problem shows up even in constraints as small as a required field.
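As a rough sketch of the two approaches on a hypothetical users table and User model (the names here are illustrative, not from the client project discussed):

# Schema-level: a NOT NULL constraint means the database itself rejects a missing name
class AddNameToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :name, :string, null: false
  end
end

# Application-level: a Rails presence validation checks the rule before saving
class User < ApplicationRecord
  validates :name, presence: true
end

The constraint protects every write path, including raw SQL; the validation only runs when records are saved through ActiveRecord.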
STEPHANIE: That's really interesting. I think that, in my experience, when we are spinning up new tables, at that point, we do try to put some intentional thought into what the schema should look like and what requirements we might need to encode at the database level. But things that are more complex and might need a little more code, like Ruby code, I have then pushed into an ActiveRecord validation.
One thing that I think is important to know is that when you do set those things on the schema, it's harder to change. And so you usually have to feel pretty confident that that's what you want. Otherwise, you'll run into issues later if that does have to change and making changes to whatever existing data you might have. But it's also pretty common to just do your best when you are deciding on a database schema and then having to make adjustments down the line as you know more about your domain.
JOËL: This conversation reminds me a little bit of the idea of database normalization. I think that might almost fit as a subset of general tactics of using the schema to ensure your data is more correct. When you are generating new tables, let's say you're creating a greenfield app and you need to create four or five tables; how much emphasis do you put on database normalization when you're initially designing those?
STEPHANIE: I think for a greenfield project when you are setting everything up and creating tables for your main domain models, there is an aspect of it that should be considered because you're in this unique position where nothing really is in existence yet. And you do want to try to set yourself up to be successful and hopefully have information about your main use case for this app and can kind of make decisions about the schema then.
At least in my experience, that has been part of the conversation, though, to be fair, because it's so early, you do have the opportunity to change things without as much effort or pain. But I think it's worth considering when you're just sitting down and working through what those models are going to look like.
JOËL: And for our listeners who may not have heard the term normalization before, it's a series of...you can think of them as rules that you apply to your database design to try to avoid data redundancies in your tables. There are different levels of this; they're typically referred to as normal forms. So you'll see things like first normal form, second normal form, third normal form; those are kind of the fancy terms for them.
But they generally involve breaking out other tables so that you don't have data redundancies. And in many ways, this is similar to principles such as the single-responsibility principle that we apply to objects when we're designing our objects in an OO system. But this is more at the table level for databases.
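For example, a rough sketch of what breaking out a table can look like in a Rails migration (the users/companies example is hypothetical, and a real migration would also move the existing data before dropping the columns):

class ExtractCompaniesFromUsers < ActiveRecord::Migration[7.0]
  def change
    # Before: every user row repeated the company's name and address,
    # so the same company's details could drift apart across rows.
    # After: companies live in their own table and users reference them.
    create_table :companies do |t|
      t.string :name, null: false
      t.string :address
      t.timestamps
    end

    add_reference :users, :company, foreign_key: true
    remove_column :users, :company_name, :string
    remove_column :users, :company_address, :string
  end
end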
STEPHANIE: I do think that it is so hard, maybe even impossible, to plan something out, to not have any of those redundancies, to begin with. And I do think sometimes they are a bit inevitable. But I also have had the experience of having to figure out what the heck I'm looking at when I am querying data and see all these things that are duplicated or maybe slightly different.
And yeah, I think when you are in that position of starting a greenfield application, it is really interesting to see how you make those decisions about what needs to be enforced and where. Where did you end up landing, or what did you discuss in this conversation with the co-worker?
JOËL: I think we went with a bit of a hybrid approach. Some things, we can use the schema to prevent bad data, and then some things either cannot be represented with a schema, or it's possible, but it's really cumbersome and painful. And so, we chose to try to enforce it with a validation. To me, this feels very similar to a problem in typed languages.
So some communities that use a lot of types try to use those types to only allow data to come through that's in a valid shape. And so you'll hear things like make impossible states impossible or make illegal states unrepresentable. And that works for many things, but it's not always possible to enforce all of your business constraints through a schema. Or sometimes it's possible but just not practical. And so, I think there is a balance of finding when you can use the schema or when it's better to use the validation.
STEPHANIE: Yeah, I think my general rule of thumb is, like I mentioned earlier, for things I feel really confident about, that we want to make sure we have in our data for sure, I do lean towards requiring those in the schema. It also communicates that confidence, or communicates the intent that it's something that at one point was decided to be important. And so, if a future developer comes in, it would take a lot of work for them to write a migration to remove some database constraint. Whereas I think validations at the Rails level are potentially a little more open to change, and even more so if you get to validating on the client side.
JOËL: That can be a really useful tool, but one that you can really hurt yourself with. If you modify your validations at the Rails level or at the front-end level, but you don't backfill those changes on the data in the database, then you might have records in your database that, if you were to load them into memory and hit save on them again, would refuse to save because they no longer match the validations. And on longer-lived applications, I've seen that happen sometimes, where not all rows in the database pass the Rails validations.
STEPHANIE: Yeah, I think I've seen that be a problem either for developers who then have to backfill that data or write some migration to change some of the data to meet the new requirements, or just unexpected bugs on the users who discover something new but like you said, have been there long enough before those things were implemented.
JOËL: The more I think of this, I think maybe constraints that are enforced at a validation level might still require changing the data in your database. So if you had a constraint enforced via a schema, you don't have a choice. You have to write some way to migrate that data so that it fits the new schema. You can kind of lie to yourself with validation and not change the historic data, and sometimes that is the case; you want to keep the old data and only prevent new data from being written in the old format. But if you need consistency, then you probably need a data migration regardless of which approach you take.
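A small sketch of what that kind of check and backfill might look like (the User model and placeholder value are illustrative):

# Find existing rows that no longer satisfy the current Rails validations
invalid_users = User.find_each.reject(&:valid?)
invalid_users.each do |user|
  puts "#{user.id}: #{user.errors.full_messages.join(', ')}"
end

# Backfill before tightening the rule, e.g. in a data migration or rake task
User.where(name: nil).update_all(name: "Unknown")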
STEPHANIE: Yeah, that definitely sounds like the more robust way to go about it for sure.
JOËL: I have an article that I like to reference a lot by Hillel Wayne on Constructive Versus Predicative Data, which is basically looking at these two general approaches to enforcing data correctness and formalizing them a little bit. So do you try to enforce them based on the construction or the shape of the entity that you're creating, be that a database table, an object, a type, something like that? Or do you enforce it via some kind of predicate? So that could be a validation or other similar logic that runs kind of at runtime to enforce your constraints.
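A tiny Ruby sketch of that distinction, using a non-empty list as the example (the names here are illustrative, not taken from the article):

# Constructive: the shape of the data guarantees the rule.
# A NonEmptyList cannot be built without at least one element.
class NonEmptyList
  attr_reader :head, :tail

  def initialize(head, *tail)
    @head = head
    @tail = tail
  end
end

# Predicative: any array is accepted, and a runtime check enforces the rule.
def assert_non_empty!(list)
  raise ArgumentError, "list must not be empty" if list.empty?
  list
end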
STEPHANIE: That's interesting. I hadn't heard of those terms before, but I think they provide a lens through which you can look at the problem. Did the article end up suggesting different strategies for solving that problem, or was it more theoretical in different ways to look at it?
JOËL: I think the article does two things. First, like you said, it gives us the words to talk about those approaches. And having those labels now, I start seeing them everywhere. I see them in databases, I see them in objects, I see them when doing types across a variety of languages. So that's already a huge win for me. I think you and I had done an episode a couple of months back where we talked about the value of having labels to put to ideas. And I think for me reading that article gave me those two labels. And all of a sudden, it really helped to make connections that I wasn't seeing before.
The second thing that the article does is, I think, explore some of the limitations that each approach has and when you might want to use one versus another. The constructive approach, so using a schema, is more consistent because you know it is impossible for the program to create data that's in the wrong shape. That being said, not all constraints can be represented in a constructive manner, or it might be possible but really cumbersome.
Also, sometimes it's not really invalid data; it's just sort of undesirable data. So you might want a looser schema. And let's say that you're storing some kind of intermediate state or some kind of raw input from another system that you might want to layer validations on top of, but you don't want to reject that data out of your database. You want that sort of incomplete or imperfect data in your system.
Something that I find myself doing more and more these days when I create new tables is to really lock down the schema as much as possible. I think that might be contrary to maybe the way a lot of people in the community like to work. Some people might prefer to start with a very loose schema with no constraints and then work towards making things stricter as they explore the domain, and that's kind of the default that Rails has. If you're creating a new table, all columns, for example, are nullable by default.
Personally, I will put a null false on every column in every migration that I make unless somebody can make a convincing case otherwise, and even then, I might try to think of whether there is any possible way that we could avoid that scenario and still put that null false. Part of the reason for that is that it is much easier to loosen constraints on existing data than to tighten them afterwards. So if I have a column where no value is allowed to be null, and then later on we decide, you know what? It is okay for some of them to be null, I can change the requirement on that column, and I don't need to make any changes to the existing data. It just works.
If the reverse happens, if I have a column that allows a bunch of nulls and then I want to make that column required, now I have to go and find a way to backfill all the empty spots in that column. And that could be a very challenging process. It might even be impossible. There might be some values there that it's just like, the user did not supply them at the time because we didn't ask for them. And now there's nothing we can put in there. So do you put in, like, unknown or not available? Then you have to ask yourself some really difficult questions about your data.
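Concretely, in a Rails migration that asymmetry looks something like this (the column name and placeholder are illustrative):

# Start strict: new columns disallow NULL from day one
add_column :users, :nickname, :string, null: false, default: ""

# Loosening later is a one-line change and never conflicts with existing data
change_column_null :users, :nickname, true

# Tightening later needs existing NULLs dealt with first; the fourth argument
# here backfills them with a placeholder before the constraint is added
change_column_null :users, :nickname, false, "Unknown"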
STEPHANIE: Yeah, absolutely. I think I agree with you there. Another thing I like to do is provide default values for columns, especially ones where they can't be null, because, like you were saying, that helps me have a better understanding of just what is going on in the database.
An issue I have seen come up involves a Boolean column where, if a default value of false, for example, is not encoded in the schema, you end up with potentially three values for a Boolean, which would be true, false, and null, and that I think has been --
JOËL: The infamous three-state Boolean.
STEPHANIE: Yeah, exactly, the three-state problem, which is just inherently contradictory to what a Boolean is, to begin with. And I've definitely run into issues with that where you have to decide, or figure out, or write code to determine is null false? Is that what we mean here? It's not clear. But if you, like you said, locked it down at the beginning, provided those default values, that puts in those guardrails to prevent things from getting out of hand.
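The migration-level guardrail Stephanie describes might look something like this (the column name is illustrative):

# A flag can only ever be true or false: no NULL third state,
# and rows that never set it explicitly get false
add_column :users, :admin, :boolean, null: false, default: false

With the default in place, existing rows are backfilled to false when the column is added, and application code never has to decide what nil is supposed to mean.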
JOËL: It also makes it easier for users of your database, application, whatever to interact with your code. I've run into this a lot when working with GraphQL APIs. And the default in many GraphQL server implementations is to make all fields nullable by default. When you build your schema, you have to add some extra things there to say, "This field is non-nullable," which means that a client that's now consuming it, anytime they deal with the data they need to check, is it present or not? You can't have the confidence that that data is there. And so it can force a lot of extra checks on the client. Or I guess you could just take it on faith and hope nothing breaks.
STEPHANIE: Yeah, it's funny you mention that because I definitely think there's like spheres of impact. So as a developer, you maybe start having to write code that checks those kinds of things, like if it's null or not in your code. Then that can even extend to, like you said, your users or consumers of the API, who then have to contend with data that they have no control over. And I've been there too, and that can be frustrating as well.
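For instance, assuming a graphql-ruby schema (the type and fields here are hypothetical), marking fields non-nullable looks like this:

module Types
  class UserType < Types::BaseObject
    field :id, ID, null: false          # clients can rely on this always being present
    field :name, String, null: false    # non-nullable: no presence checks needed client-side
    field :nickname, String, null: true # nullable: every consumer must handle the missing case
  end
end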
JOËL: We've talked a lot about data correctness and different ways to achieve it, different strategies. Why is this something that we care so much about?
STEPHANIE: I think data correctness is really important from a developer experience perspective. And it's way easier to fix a bug in your code than it is to wrangle a lot of accumulated bad data.
JOËL: Yeah, sometimes bad data is not fixable at all, and those are situations where you have a really bad day as a developer.
STEPHANIE: Agreed.
JOËL: Well, on that note, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Joël discovered Bardcore. Stephanie planned and executed an IRL meetup for folks in the WNB.rb virtual community group in Chicago and had a consulting win.
Together, they discuss what deployment processes look like for clients in their current workloads.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Hildegard von Blingin' YouTube Channel
Hildegard von Bingen - Historical Character
WNB.rb
git flow
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I've been getting into something that's kind of fun and quirky. It's a new musical genre called Bardcore.
STEPHANIE: Bardcore.
JOËL: Yes, it's basically re-mixing pop songs to make them vaguely more medieval, oftentimes using acoustic instruments, something that sounds a little bit more like maybe a lute and some flutes, and oftentimes also changing the lyrics a little bit to use more old-timey language. When the lyrics use words from modern life, sometimes they change them to something that would fit better in a medieval setting. It's a lot of fun.
STEPHANIE: That sounds so fun. When are you normally in the mood to listen to Bardcore?
JOËL: It can be fun while coding because it's fairly chill as a genre. Honestly, I feel like it can also be good when I'm just sort of feeling a little bit nostalgic or daydreamy. I think it's good for that mood as well.
STEPHANIE: I love that. I can't wait to go and listen to some Bardcore after this.
JOËL: Let me recommend the YouTube channel Hildegard von Blingin' as a great entry point into the genre.
STEPHANIE: Incredible. I can't wait. In fact, I'm going to end up sharing it with all of my D&D friends too.
[laughter]
JOËL: The channel is a play on words of an actual historical character, Hildegard von Bingen, who is, I want to say, a 12th-century nun but also a polymath. So she wrote on all sorts of topics, from biology and the natural world to theology. She was a musician, just one of these like really talented people that made a mark on the medieval world. So it's kind of fun that they used her name as the inspiration for the channel.
STEPHANIE: Yeah, that sounds right up your alley. I knew that you were going to come with a historical tidbit about Bardcore. [laughs]
JOËL: Another really cool thing that I appreciate about the channel is it's not just audio. It's also beautifully illustrated. So the creator has created visuals inspired by medieval manuscripts illustrating the contents of the song. So it's kind of funny to see something that...modern pop songs aren't always the most deep lyrics, and to see them given this medieval manuscript treatment is amusing, for me at least.
STEPHANIE: That sounds really funny and also kind of calming. I was just thinking about what you said earlier about how they sometimes rewrite the lyrics to be more about medieval life. And I love the idea of taking the things that pop songs are about these days and applying them to historic life back in the day.
JOËL: A lot of pop songs also are about love, and romance, and breakups. I think that kind of fits with some of that 12th-century troubadour-style romantic songs because that was definitely the kind of thing that they were singing about back then as well.
STEPHANIE: Absolutely.
JOËL: So I've been jamming out to Bardcore this week. What is new in your world, Stephanie?
STEPHANIE: I'm really excited to share that I had an awesome weekend. One of the things that I had been doing the past couple of weeks was planning an IRL meetup for some folks in Chicago, people in the WNB.rb virtual community group. I've mentioned it on the podcast before when I was a guest. But WNB is a Ruby community group for women and non-binary folks. And we just started creating regional Slack channels.
And so I started a little Chicago channel and planned a brunch. So on Sunday, a few of us, I think it was six, some old friends and some new met up for brunch in Logan Square in Chicago. And it was really awesome to do a local meetup. I haven't done something like that since pre-COVID times, and so it felt really special.
JOËL: That's exciting. Were you big into the meetup scene pre-COVID?
STEPHANIE: So I was working remotely for a previous company when I moved back to Chicago, and so was still trying to meet people here, find a community, find some friends. And I did go to a few community groups, but that was not too soon before COVID started, and so I didn't get to really invest in them the way that I had hoped. So it's really exciting to me to potentially be able to start doing that again.
JOËL: This new meetup that you were at, was it focused more on the social aspect of things, or was it a more technical meetup?
STEPHANIE: It was definitely more of a social aspect. I would be really curious to know if that group would want to focus on some more technical things. But we had a nice diversity in experience levels and in the types of work we were doing. So there were a few of us who were consultants, a few of us at product companies. And I think we shared a lot about our different experiences. We talked a bit about the pros and cons of product versus consulting.
And so it was really nice to just learn more about what other people are up to, what tech and framework people are using, and chat casually in that sense. But I also definitely see some more opportunity to focus on technical stuff if that moves us.
JOËL: I think that was probably my favorite part of Ruby meetups back when I was attending those a lot here in Boston, where I'm based, getting to chat with other developers in the city, hearing about their experiences on different topics. And oftentimes, it will sort of revolve around tech to a certain extent, but it's not always like a formal have a presentation. Sometimes just socializing is almost more fun or brings more value to me.
STEPHANIE: Yeah, I totally agree. I also wanted to share another thing that happened to me this week. It was a bit of a consulting win. So on my client project, we have been having retros every two weeks at the end of the sprint. But I was noticing with a fellow thoughtboter that we weren't really getting a lot of engagement in retros. It was kind of the same folks speaking and bringing up issues because we were doing it in a style that was like a retro board, and then folks could write in cards or raise their hand in the meeting to add something to one of the columns.
And so, we ended up proposing to do a round-robin style format for retro. And we just had our first one yesterday using that new format, and it was received really well. Everyone went around and shared things that went well. And then, we went around again and shared things to improve or risks or concerns that we had about the sprint. And it was really nice to have everyone participate, to hear folks piggybacking off of what other people said. And I think we were able to get a better sense of what the group was feeling.
And yeah, there was a new hire who was just observing our retro, and she is going to be facilitating these kinds of meetings for other teams. And she seemed really into it and wanted to bring it over to other teams as well and try it there. And so that felt really good to know that we were able to make a change that was an improvement for our team but might even have an impact on other teams at the company as well.
JOËL: I love that. I think a lot of what we often bring to the table, because we've seen things at a lot of different companies, is not just code improvements but also process improvements. Every company is different, so you can't always just copy-paste things from one place to another. But being willing to try new things, experiment, and then follow this iterative, continuous improvement approach, not just with the code but with the process as well, I think, is something that is really valuable in the work that we do for our clients.
STEPHANIE: Yeah, absolutely.
JOËL: And it sounds like here you iterated on their retro process. And everybody seems to really like this new iteration, so that sounds like a big win. Congratulations.
STEPHANIE: Thanks. I really appreciated that they were open to trying. That made me feel really good and makes me feel empowered as a consultant to be able to like you're saying, leverage that experience and suggest things that can just improve the quality of life for our clients.
JOËL: Another area that I think we've seen a lot of different ways of doing things, and we've actually been able to iterate a lot as far as process goes, is deployments. How do we get our code from, let's say, passes code review, and then, at some point, we want it to go live in production? So what does deployment look like on your current client, Stephanie?
STEPHANIE: I'm glad you asked because I'm experiencing a deployment process on this client that's actually a bit different than what I have seen before. So this client is not a super big team, but maybe, I don't know, between 30 or 50 engineers would be my guess. I am working on a smaller team with just four developers. And so I'm seeing a lot of code get merged into our big Rails app pretty frequently by other teams. And we are also merging to the same app.
So my client has release managers who rotate each day and go through all of the different teams' pull requests that are ready to be merged. They will merge those pull requests on the developers' behalf. And then once everything is merged into an integration branch, they will then merge all of that stuff into their production branch and kick off a deploy.
JOËL: Wow. So does that mean that developers on your team don't merge their code? You just when you get an approval, you ping the release manager and ask them to merge it for you?
STEPHANIE: Yeah, so developers don't merge their own code. We might move the card into ready for deployment, and that's how release managers know that that PR is ready to be merged.
JOËL: And are you then following something that's roughly like Git flow where you've got this sort of development branch, and then at some point, commits get maybe cherry-picked over to the main branch, which then gets released? Or maybe it's even a special dedicated release branch. What does that look like in terms of the Git workflow?
STEPHANIE: Yeah, we have that release branch that you mentioned that eventually gets merged, either through the GitHub GUI or a CLI by the release manager, into the main branch, essentially. And that's what then gets deployed.
JOËL: How do you handle situations where a feature goes out to production, and then you realize that there's a bug or there's something that you don't like about it, and you would like to revert that feature?
STEPHANIE: Yeah, that's a great question. This has happened to me once now, where I merged some code that ended up introducing a regression. And unfortunately, I wasn't tagged or pinged, so I didn't really know about this until the next business day and caught up with Slack and saw that someone else had to resolve my issue, which was kind of a bummer, I think, because with this process, once that code is, quote, unquote, "done," since I'm not the one merging it, and I'm not the one deploying it, I don't get a chance to follow up on the changes in production and then check to see if they look good.
When things go wrong, it seems like it kind of takes a bit of time to figure out how to get it resolved, like, who would have the context? And then, if they're not available, someone else might have to jump in and fix it. So it's been interesting because, on one hand, I totally understand that they want to be releasing just once a day. Like, it's nice to have a dedicated person do all of this stuff that is work and would take away time from normal development.
But I do sometimes feel like I don't have as much ownership over my feature with this process because, like I said, it just kind of is out of my hands. And oftentimes, I might be done with my work, but that doesn't get deployed for a few days depending on other things going on with the team.
JOËL: That's interesting that you mentioned that it might not be deployed for a few days even though it's done and maybe merged. I think, generally, we assume that merging a commit into the main branch and deploying it are going to be more or less the same thing.
But oftentimes, you might end up in a situation where there's a feature that's done in development, but we don't want it to actually go live for our customers for a while yet. And that might be for technical reasons because we're waiting for other pieces to be in place, or it might be for business reasons because we did the work, but this feature has to come out on a particular date, and so that's when it's going to go live.
So then you end up in that awkward situation, maybe where you want to deploy something else. But you've got a commit already on the master branch that can't go out with the others. And you've got to do an awkward cherry-pick. Have you ever been in that situation?
STEPHANIE: I have. I remember being on a project where we had features in our main branch, but that hadn't been deployed to users yet. We actually didn't want that to be live yet but then had an issue with an existing feature that was already live that we had to make a hotfix for. And that was definitely one of those cherry-picking situations that did become a bit hairy and wasn't too fun. It sounds like you have had experience with that type of deployment process as well.
JOËL: Yes, I think of a project where that was a very common problem because there were a lot of features on that project that were gated to a particular time. So a lot of the features going live for customers were decoupled from the actual development lifecycle. And on that particular project, we used a lot of feature flags on the commits. So we'd control whether or not a feature was live for the customer. It wasn't, is this commit in the main branch, but it was, is this feature flag on or off?
STEPHANIE: Yeah, we're using feature flags on this client project as well. And so, in some ways, I think that if we did have a more continuous deployment process, it would be okay because this big feature that I'm working on on my team we're not trying to go live until a month from now, but we have been slowly, incrementally pushing features underneath the flag. But even then, we do still have a bit of an async process because of this daily release flow.
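For anyone who hasn't used feature flags before, here is a minimal sketch of the pattern being described: code that is merged and deployed but only turned on for users when the flag is enabled. It assumes the Flipper gem purely for illustration; neither project's actual flagging setup is named in the episode, and the controller and flag names are hypothetical.

    # Sketch only: assumes the Flipper gem (gem "flipper"); the flag name,
    # controller, and views are hypothetical stand-ins.
    class CheckoutsController < ApplicationController
      def new
        if Flipper.enabled?(:redesigned_checkout, current_user)
          render :new_redesigned # merged and deployed, but dark for most users
        else
          render :new            # existing behavior stays live
        end
      end
    end

    # Turning the feature on later is an operational step, not a deploy:
    #   Flipper.enable(:redesigned_checkout)              # everyone
    #   Flipper.enable_actor(:redesigned_checkout, user)  # one specific user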
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: How do you feel about continuous deployment in general?
STEPHANIE: So the reason why I found the way I've been describing so surprising is because I am more used to a continuous deployment flow. When I used to work for a product company, and we were a team of, I don't know, like 30 engineers, we'd merge our own work. And then our main branch would automatically be deployed. And so we could make sure our changes looked good in production and then feel a sense of we finished this feature. But we did also run into problems there because our CI build, which had to run every single time code was merged into main, took maybe 20-25 minutes.
So whenever we merged to main, we would have to wait for CI to build, wait for the deploy to go through, and that might be a few extra minutes depending, and then confirm our changes. And that became more of an issue when there was a backup in the queue of a lot of people trying to merge code. And it's funny because we kind of want to be constantly merging. That's kind of a sign that things are moving along.
And it ended up being that deployment was the bottleneck in some instances, especially if there was a CI build that broke, and then it was kind of like a car crash a little bit where there was this huge backup. That wasn't great either because when you have to babysit your deploy like that, I didn't find that I had a ton of focus time to go and pivot to something else. I was just keeping an eye on things the whole time.
JOËL: Would you have preferred a workflow that maybe didn't run on every commit but maybe ran once every 30 minutes and just bundled any commits that happened within that? So maybe it's one commit, maybe it's four. Or would you have maybe preferred one where you didn't run the tests before deploying? You're just like, you know what? We trust that we ran them on the branch. It's good. We can just go straight to production.
STEPHANIE: Well, that idea, the second one of not running tests before deploying, I've never even thought about that. I think that it does provide some value because when you integrate changes into main, sometimes that might cause unexpected issues. So I think, in my experience, the times when CI failed, it usually was for a valid reason, and it wasn't just a blocker that we then had to retry the build for.
But what you were saying about bundling commits or a set of changes and then deploying on a scheduler maybe a few times a day automatically, that sounds really interesting to me. I have never worked on a team that has done it that way. But that sounds like it could be a good, happy medium between the two processes.
JOËL: I think that's effectively what the release manager is probably doing manually. But if there was a way to just do that automatically where you just say you merge to the main branch anytime you want, but on a timer every 30 minutes, the latest main will be run on CI, and if it's green, it will get promoted to production.
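As a rough sketch of that idea, the automated version of the release manager could be a small script run on a 30-minute schedule. This is only an illustration under a couple of assumptions: commit statuses come from GitHub via the octokit gem, and deploying is a git push to a production remote. The repository name and deploy command are placeholders, not details from the episode.

    # Hypothetical scheduled job: promote main to production if CI is green.
    require "octokit"

    REPO = "example-org/example-app" # placeholder repository name
    client = Octokit::Client.new(access_token: ENV.fetch("GITHUB_TOKEN"))

    # Combined status rolls up the CI results reported for the latest main commit.
    status = client.combined_status(REPO, "main")

    if status.state == "success"
      system("git fetch origin main") or abort("fetch failed")
      # Swap in whatever the real deploy step is (Heroku, Capistrano, etc.).
      system("git push production origin/main:main") or abort("deploy failed")
    else
      puts "main is #{status.state}; skipping this cycle"
    end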
STEPHANIE: I think there's still a sense of whose job is it to follow up if something goes wrong in that sense.
JOËL: That's a good point. I think part of that is also it's a coping mechanism because of the slow test suite. We have a big process smell here, and we're trying to find some ways to get around it. And one way where I don't think any amount of process is going to help is when you have to do a hotfix.
So let's say there's a really bad bug in production. We need to get that fix out now, and so I make that fix. I live ping you to review it. We get it done in like 10 minutes, and we merge. And now we've got to wait 20 minutes for CI to pass before we can make that patch go live. And we're really hoping this test suite is not flaky because if not, we might be waiting another 20 minutes.
And so, in a sense, a slow test suite becomes a huge bottleneck to fixing emergency things. And now we're going to be tempted to say, "This is an emergency. We're going to bypass CI and just ship directly to production because this is on fire. We are corrupting customer data or something. This needs to be fixed now." And hopefully, we did not make the problem worse in our hotfix because we were rushing, which I have definitely done.
Luckily, in these situations, it has gotten caught by CI. But there have been situations where I've tried to do a quick hotfix that I thought was going to fix things, and then CI caught it, and I was like, I'm glad I didn't just put that directly in production.
STEPHANIE: Yeah, I think what I've come to realize is that the current process that I am experiencing on my client project, you know, I'm sure there's some history there about how it came to be and why they decided to do it that way. And that might be an artifact of something going wrong and them trying to put guardrails to prevent problems from showing up in production. So I do have some understanding there.
So if anyone out there has a deployment process that they love, I would love to hear about it. You can tweet us @_bikeshed or send us an email to let us know if you have a deployment process that works well for your team.
JOËL: Maybe we'll even feature it on a future episode.
STEPHANIE: Yeah, definitely.
JOËL: I'd like to get into some of the trade-offs that come with different processes, and one that jumps out at me from what you were talking about earlier is the impact of team size. With a smaller team, when you're, you know, 2, 3, 4, 5 developers, you can have a really simple Git-based approach where merging a PR goes directly onto your main branch and maybe even have it set up to automatically deploy, and that's kind of it. If a commit is on main, it is live in production. And if you want to undo something, you just Git revert, and that goes live. And that's a really simple, effective workflow.
But then, as the team starts growing, you start needing something a little bit fancier because there are a lot of commits coming out at once. They might have dependencies on each other. Reverting becomes a little bit more complicated. As the product gets more complicated, too, you start to have work that's done, but you don't want a PR just sitting around waiting until go-live day. So I think that's definitely an axis to think of when you're thinking of trade-offs: some workflows work very well for smaller teams, and others are a better fit for larger teams.
STEPHANIE: Absolutely. I think when you were talking about smaller teams, almost everyone has knowledge about what is currently being worked on. And so when problems do happen, that work of reverting or figuring out what went wrong isn't as hairy because most folks on a small team would know what changes are being merged and can pitch in to help there.
But yeah, I am really interested in the transition between a small team where you feel comfortable just merging the code and having the automatic deployment and when you do need to have a heavier-handed solution, I suppose. Do you think that there's an inflection point that pushes that decision to be made?
JOËL: I'm not sure exactly where that inflection point is. I might say as low as maybe 5 or 10 developers on your team, but there are probably some other variables that go along with that. Part of it might even be how good your team is at keeping commits small and focused, and independently deployable. If your team is committing really large commits that potentially break the build or that are tightly coupled to other commits, that might make it really difficult to say that your branch is always deployable. And so, you might want to bring in a heavier process earlier.
Whereas if your team is doing a lot of small, atomic commits, which I think we discussed this on last week's episode, I think that could probably allow you to get a lot more mileage out of a very simple workflow where even with a slightly larger team, you're still able to just merge and deploy and also potentially revert very easily because these are atomic commits.
STEPHANIE: Yeah. I like what you said about how you can get away with a lighter solution if you are really investing in things like making sure that each commit is green on CI. Because, you know, kind of what we were saying earlier, sometimes adding additional process without really figuring out what we're trying to solve here can lead to some of those trade-offs that we're talking about.
JOËL: Agreed. I'm a big fan of using the simplest process that your team can get away with. Maybe we could even extend that more generally to just use the simplest thing that your team can get away with. I think that goes for code complexity, that goes for maybe code optimization. Don't make it more complex just because you're hoping to have this massive scale one day because you don't need it today. So use a process that works for your team at your current team size, and then you can iterate on that and start adding more complex elements as the team starts growing.
So, Stephanie, I'm curious; we've talked about a lot of different types of deploy processes. What would be your ultimate favorite way to handle deploys if you had the choice?
STEPHANIE: I think I do prefer a more automated process. When I was on a medium-sized team, that was working pretty well for us. We were having deploys be kicked off when we merged to main, but then we had a Slack integration that would tell us, "Hey, your thing is being deployed." It would tell us the results of the CI build, and it would tag us if something went wrong. And so I think that was nice in solving that issue of ownership that I had mentioned where I knew that, oh, there was an issue. I have the most context, and I can solve it the most quickly on this team.
And then it was also good to just see what was going out, see what other people were working on. I liked that it made that very transparent. And that sense of feeling like you saw your feature from start to finish and seeing it live on production felt really good and gave me meaning in my development work.
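As a small illustration of the kind of Slack integration being described, here is a sketch that posts a deploy message to a Slack incoming webhook. The webhook URL, message text, and the place it gets called from are all assumptions for the example.

    # Sketch only: assumes a Slack incoming webhook configured for the channel.
    require "net/http"
    require "json"
    require "uri"

    def notify_slack(message, webhook_url: ENV.fetch("SLACK_WEBHOOK_URL"))
      Net::HTTP.post(
        URI(webhook_url),
        { text: message }.to_json,
        "Content-Type" => "application/json"
      )
    end

    # Called from the deploy pipeline after CI reports a result, for example:
    # notify_slack("Deploying abc1234 (Stephanie): CI passed, rolling out to production")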
JOËL: Yeah, that sounds like it hits a lot of really positive values, like you said, that ownership that you have from beginning to end, even with maybe the revert if something has to happen, the transparency where you get to see if any issues came through. And then the automation and the simplicity because it's just merge your PR and the work goes out.
Earlier in the episode, we were talking about trade-offs that come with a workflow. So a workflow like what you're describing, what size team do you think would be best suited for a workflow like that?
STEPHANIE: Yeah, I don't know if I have an exact number. I did mention that medium-sized team seemed to feel pretty good where we did have some investment in the infrastructure in place, so, like you were saying, we had guardrails when things went wrong. But it wasn't so much for a really large team where it would have been too noisy in the Slack channel.
And also, merge conflicts would come up if we were merging a lot of work during the day. And that did interrupt that queue and that flow and became something that we had to manually work through sometimes with other developers if that was their code that we had conflicts with. And so I can see it also start to not work past...I think I mentioned that the team was 20 to 30. I would be really curious to know how far that can take a team as it grows.
JOËL: So, 20 to 30 people, this workflow works pretty well. What about sort of maybe experience level? Do you think this is a workflow that requires a certain level of seniority? You're talking about merge conflicts a lot, so maybe a team that is very disciplined with keeping their commits small. Do you think that's required to make this workflow work well for the team?
STEPHANIE: That's a great question. When I was on this team, we did have people with all experience levels. And what I really liked was that it was okay if there were merge conflicts. It was okay if CI was red. People were super helpful in jumping on to work with you to figure it out, also, because they probably had things in the queue that they were waiting to try to go out.
But it felt like a team culture where we were all committed to releasing our code smoothly. And so sometimes merge conflicts would happen, but, like I said, you usually could see it and could jump in to help out if someone was maybe stressed out about it or needed an extra hand.
JOËL: I love the process you described. And the culture that your team had didn't require everyone to get it right all the time. There's room for mistakes or not even mistakes, but just less experience where you don't always know to scope everything super tightly, or your Git process isn't quite perfect every time. And that's great for a team because there's room to grow, room to bring in people of different levels of experience.
STEPHANIE: Yeah, I also think it's more realistic.
JOËL: Oh, 100%. I'd like to look at one more axis of trade-offs, and that is product type. What kind of product do you think that this workflow you described would fit well as opposed to maybe a different type of product that wouldn't be as good of a fit? I think what comes to mind for me immediately is maybe situations where you do a lot of work upfront, but then you only want it to go live for clients later, but you do want it merged. And so you decouple the Git history from actually releasing to customers.
So that's a product lifecycle that might be a little bit different. It could be a product where you even just do big releases at set intervals. So people don't want continuous change, but you're like, once every season, we release the new version or something like that.
STEPHANIE: Yeah, I was thinking that the continuous deployment process worked well for that team who was building a product that was very customer-facing in the sense that people were visiting the site every day. And they were running a lot of A/B tests on those customers as well. And so that was helpful because we could be releasing those tests iteratively and getting continuous feedback that way.
JOËL: So, as we discussed in this episode, no process is perfect. There are always trade-offs. So I think it was really fun to look at a concrete example of a process that you liked, Stephanie, and then look at maybe some of the trade-offs for when does it work and when does it not work so well? And with that, shall we wrap up?
STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
JOËL: This show has been produced and edited by Mandy Moore.
STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.
STEPHANIE: Or reach both of us at [email protected] via email.
JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
This week, Steph and Joël discuss investment time and keeping track of things they want to learn.
How do you, dear listener, keep track of things you want to learn? When investment time rolls around, what do you reach for, or how do you prioritize that list? Are there things you actively decide not to focus on when choosing where to develop deep expertise? Are there things you wish you could spend time on if you could?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.
JOËL: And I'm Joël Quenneville. And together, we're here to share a little bit of what we've learned along the way.
STEPHANIE: So, Joël, what's new in your world?
JOËL: I was recently having a conversation with another colleague at thoughtbot, and they brought up Bloom's Taxonomy, which is a taxonomy of different phases of learning. It's often visualized as a pyramid with a broad base that starts with remembering facts and then expands up to understanding and then up to applying, and then analyzing, evaluating, and then finally creating. So it's a way to kind of quantify progression of someone who is trying to master a topic.
And what really struck me when I saw this diagram was I immediately thought about how the tech industry interviews, and a lot of our interviews are focused on the base of that pyramid. It's all about, did you memorize certain facts, or APIs, or things like that? But to create value as developers and be good at our jobs, we have to actually be active much higher up in that pyramid, in the analyze, evaluate, and create layers.
But unfortunately, I feel like interviews often don't go that far; they're really just focused on the base. So that was a really interesting realization. We were not talking about interviewing, but this colleague shared the diagram. I looked at it, and the first thing I thought was like, oh, this is the problem with a lot of tech interviews these days.
STEPHANIE: Yeah, I think a lot about how in interviews, we want to be showing off our best selves in a sense. Like, we want our interviewers to see the version of ourselves that we bring to work, which is usually like you were saying, at that top layer and isn't recalling particular facts about how our framework works or things we might have learned in computer science class in college.
And one thing I actually really like about thoughtbot's interview...even in the job application, I think it says, "We want to see your strengths and see you at your best self." And it asks what can we, as thoughtbot, interview you on in a way that gives you the opportunity to display those skills? And so I really like that.
I think I remember when I submitted that application, I might have said something along the lines of debugging a problem because I think that's where I personally shine. I don't know if it ended up being a conscious thing. But I do remember when I was doing the pairing interview, there was an aspect of debugging, and I was like, yes, this is where I can show off what I would normally do in a real-life work situation. So that really resonates with me.
JOËL: Debugging is such a core developer skill, and yet I feel it's not often something that we dig into in a process like an interview. Sometimes you have almost like a code review style where you've got, oh, there is one bug hidden in here, find it, and it's almost like a gotcha sort of thing. I don't like those. But a real situation where you could show off your problem-solving and debugging skills sounds like a really good way to play to your strengths.
STEPHANIE: Yeah. Where else do you think that higher level of critical analysis and creative output shows up in your day-to-day work?
JOËL: I think it has to pervade the day-to-day work. The majority of my job is not remembering what method from enumerable is used to sort an array; it's trying to find a way to translate a problem that the business has into code or a code solution that will satisfy quite a lot of different constraints. This might be something that is doable in one or two days because that's all we have to allocate to this problem.
So a lot of that work could be scoping down a problem. There might be some performance-related constraints where it needs to be faster than X. There are certainly some correctness constraints as well that you're trying to work within. So all of that, I think, is much more at that analysis, evaluation, and creation layers of the pyramid.
STEPHANIE: Yeah, that's a really good point. I think sometimes I've seen interviews try to replicate that or recreate it in an interview question, even though they may be genuinely based off of real-life experiences that companies might have had. But most often, it's really hard to be evaluated on that situation until you're really just doing that work.
JOËL: It is really hard to translate that into an interview format. I think one aspect that I do appreciate, and maybe that's just the consultant in me but having a conversation about trade-offs in a situation where there isn't a single correct answer. And so, maybe the interviewer and the candidate have different conclusions. But as long as they can show their reasoning down that path of why they came to the conclusion that they did, I think that's the important part of that.
The hard thing is if the interviewer has their preferred solution, and they're just like, "No, you didn't come to my conclusion," then that's not a good interview. But a situation where a candidate gets to demonstrate their critical thinking skills, their analysis skills, their ability to make difficult decisions to balance trade-offs, I think that's a great way to show off some of those high-level skills that honestly we use on a daily basis.
STEPHANIE: Yeah, I agree 100%.
JOËL: So that's what I've been kind of excited about recently, just seeing this diagram and having that moment of clarity about interviewing. What's something new in your world, Stephanie?
STEPHANIE: That's really interesting that you brought that up because it's kind of related to what I was going to say about what I've been working on on my client project, which is the ambiguity of the rewrite. So I mentioned last week that I've been rewriting some Rails views. And we're working on a pretty old legacy application, so there are a lot of things that, as we're rewriting, we need to figure out whether or not we want to include it in the new version.
So it's been a little more challenging than just copying over the functionality that you want because there are a lot of things in this legacy app that were written 10-12 years ago that we don't have any context on, especially as consultants and even the people we're working with on this team, the code might even predate them.
So we do our best to ask them questions about, hey, is this still necessary? Do you think we want it in this rewrite? And they don't always know the answers. And so we have to make our best judgment and make a lot of micro-level decisions about what we think is important to bring into this rewrite without a ton of that historical context. So when you were talking about those analytical, critical thinking skills, that seemed like a very relevant experience that I would say has been utilizing those aspects of learning.
JOËL: Definitely, especially for a codebase that is that old. I feel like ten years is almost like a generation in software developer terms. Ten years ago would be what? 2012. That's Rails 3 still. I forget when Rails 4 came out. But yeah, that's a long time ago when you talk about technology. And at a company, even the odds of someone sticking around for that long are very low.
STEPHANIE: Absolutely. And so sometimes we just choose to leave the code as it is, and we will just copy and paste it. But other times, we might try to rewrite it in a more modern way. One thing that we did recently was migrate a hand-rolled form builder to use Simple Form. And we did our best to retain most of the original functionality. But there were aspects of it, things like browser validation and stuff like that, that had to change because we made the conscious decision to use a more modern form builder.
But then there were always going to be some differences, and so we had to reconcile those with the product team, have a lot of communication around what was important to keep and what wasn't. And yeah, really, just try to get the code in a better spot if we can while also acknowledging that some things have been working for ten years, and that's okay too.
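As a rough before-and-after of the kind of migration described here, the snippet below contrasts a hand-rolled Rails form with its Simple Form equivalent; the model and field are hypothetical, and the real forms in the client app would have more going on (custom wrappers, browser validation options, and so on).

    <%# Before: hand-rolled builder, with labels and attributes spelled out %>
    <%= form_for @order do |f| %>
      <%= f.label :status %>
      <%= f.text_field :status, required: true %>
      <%= f.submit %>
    <% end %>

    <%# After: Simple Form infers labels, input types, and required-ness %>
    <%= simple_form_for @order do |f| %>
      <%= f.input :status %>
      <%= f.submit %>
    <% end %>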
JOËL: So you're talking about a lot of old code that you're working with and seeing how much things have changed over ten years. And I feel like, as software developers, we're constantly having to learn and hone our skills, but it can really be overwhelming because there's so much to learn. How do you prioritize what you want to learn next?
STEPHANIE: At thoughtbot, we're lucky enough to have investment time. So typically, on Fridays, most of us will not be working on client work, but we'll be working on things to improve thoughtbot internally or improve ourselves professionally. So I'm really grateful that I have dedicated learning time, and figuring out how to spend it has been both fun and also fraught in a way because, like you were saying, there are so many things I want to learn about, and we internally have so much lively discussion about really cool technical things.
But I've kind of accepted that I'm not going to be able to learn it all. And so when Friday does roll around, I do have to figure out, okay, how do I want to spend my precious investment time today? For me, it honestly feels really dependent on how I'm doing that Friday. So I do have a bit of a backlog of talks and articles that I've collected along the way or bookmarked that I might come back to if that is the mode I'm in. I also have bigger themes, I think, around frameworks and technologies that I want to dig a little more deeply into.
I've been trying to work through a TypeScript tutorial for a while now, especially because it's not something that I've gotten a chance to spend a ton of time on in client work. And so in some ways, it's like, well, if I want to work on a client project using TypeScript, then I feel like I should brush up on TypeScript first. So that's kind of in the back of my head as a more nebulous goal. But I also think that it really changes depending on how I'm feeling throughout the year. It could be very well that the TypeScript thing never comes to fruition and maybe something else will grab my attention.
JOËL: I'm sure there are lessons, though, that you would learn from TypeScript that you could then use to improve your day-to-day work on a Rails project, for example.
STEPHANIE: Yeah, absolutely. I think that's the really cool thing is that everything I learn in some way can connect to other things that I do know, or experience, or come across during my everyday work. So none of it ever feels like a waste of time. I think the best feeling is when you can make that connection as you are experiencing something in the codebase that reminds you of something you read about in a blog post or something like that.
JOËL: Connections are one of the most crucial parts of, I think, knowledge creation. And in a past episode on note-taking, we had a whole deep conversation about how sometimes making connections between some of your notes is almost more valuable than taking a note by itself.
STEPHANIE: Joël, how do you prioritize your learning?
JOËL: I have three broad categories of technical learning that I like to do. The first is anything related to my core language and framework, and as of right now, that is Ruby, Rails. And maybe a little bit more broadly, anything related to the paradigms related to that, so object-oriented design, patterns related to that, all things that will help me to write better Ruby and Rails code.
Then there are evergreen skills that are always great to invest in, things like getting better at Git, learning a little bit of SQL, getting better at doing things on the command line. Those are all things that I look to level up every now and then. And then, finally, just whatever interests me right now. I find that the return on investment for the amount of time you put in versus the amount of knowledge you get out is much higher when I'm personally interested.
So it might be something completely unrelated to maybe more strategic elements of tech that I'm trying to get, but if I'm interested, it's worth putting a little bit of time into that. And so, for me, several years ago, that was functional programming types. Elm, I went really deep into that. And I think that really unlocked a whole other way of thinking about software for me and helped me...like we were saying earlier, I was able to bring that back to the way I think about Rails applications, the way I think about test-driven development. And that really rounded out my thinking, I think.
STEPHANIE: Yeah, I think focusing your energy into where you're interested in makes it easier, for sure. It makes it more fun. I think like you're saying, your learning gets accelerated. And I think it's also really cool that people have different interests that they do like to go deep on. So maybe you might be thinking that you should focus your energy on this other aspect of development that you think would be really cool or useful in your work but doesn't necessarily interest you that much. Chances are that there's someone else who loves learning and talking about it, and you can use them as a resource when you want to know more.
JOËL: That is a really important aspect because learning is not necessarily a solo activity. So sometimes, maybe I'm not even just prioritizing things that I think are strategically good for me or even things I'm just interested in. It might be things that my colleagues are interested in. So we have a book club that we run at thoughtbot. We've been going through the book Ruby Science, and there have been some great discussions around that. Recently, we've also been doing watch parties for episodes of RubyTapas by Avdi Grimm, though I think it rebranded recently, and I forget the new name of it...Graceful.Dev, I think.
STEPHANIE: Graceful Devs, I think, yeah.
JOËL: So we've been watching some of these together as a team and then having a conversation afterwards, so that's also been great.
STEPHANIE: That's really cool. Yeah, I think getting other people involved makes it a lot more fun. And you have an accountability buddy. And you can have those deep, thoughtful conversations about the things you've learned.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: I'm curious, have you ever made a conscious effort to not focus on something super deeply?
JOËL: I don't know that I've made a decision to be like, I will not spend time here. But I've definitely made a decision to I will invest here and maybe not care quite as much there. So I've done quite a bit of different front-end technologies, starting with jQuery and Backbone.js and moving through a lot of the frameworks. Somehow I have not yet done much React. It's sort of a big hole in that list of frameworks that I have worked with. It's just not something that I've prioritized. I've done other things. I've learned concepts that I think mirror a lot of what React does, but that's not been something that I've dug into.
STEPHANIE: That's really interesting because I think a lot of people think that they need to learn React because it's the popular front-end framework of the time. And so they think that it's something that they should know, or if they do ever have to work on a project with React, that kind of contributes to that feeling. But I like what you were saying earlier about how you have experience with other front-end frameworks. And that can help inform you if you ever do have to work in it. And also, there are so many great expert React devs out there. Like, we don't have to all be that dev.
JOËL: Yeah. I think there can definitely be a pressure to feel like you have to know it all. And a lot of these tech stacks are changing so quickly that it becomes overwhelming to try to just keep up with everything.
STEPHANIE: For sure. I remember having to write some tests for a React app, and the things that I had learned several years ago using Enzyme or something were no longer as relevant today, and having to pick up on the new best practices for writing Jest tests and React Testing Library. It was a lot, even though I was able to identify aspects of it that lined up with what I knew. It can be overwhelming, for sure. And people spend a lot of time digging deep into this framework and like I said, becoming those experts and accepting that I probably won't be that person [laughs] was also a little bit liberating, I think.
JOËL: It's also important, I think, to accept that these sorts of labels of I'm that person, or I'm not that person are not permanent. It's I'm not that person now because that's not where I want to prioritize my time. Maybe in two or three years, it will make sense for me to become that person. And I can become that person if I put in the time, but today is not the day for me to be that person.
STEPHANIE: That's a really good way of putting that. I like that a lot.
JOËL: One struggle that I have, and I've seen a lot of people too is that it's easy to get very scattered in your learning that you'll have a lot of different things you're trying to learn at the same time or you feel like you want to do a little bit of this and a little bit of that. And then maybe you don't go very deep in any of them and feel like you're not being very effective with your time. Do you ever feel that, and do you have any strategies you like to use to make the most out of your learning time?
STEPHANIE: I really relate to that. And I think one resource that helped me reframe that conundrum, if you will, was this book called Four Thousand Weeks: Time Management for Mortals by Oliver Burkeman. It was really interesting because it kind of turned productivity culture on its head a bit because his whole thesis is that you won't achieve it all and that by trying to hack your own productivity, what you're really preventing yourself from doing is accepting the fact that time is finite. And that you have to make hard decisions about where to focus your time in a way that will enrich your life the most.
And sacrifice the idea that you will get to do everything on your to-do list, that you will learn every framework that you want to learn. And it's still hard for me to totally accept that. But I think I'm inching towards the idea that if I do drop a ball on something that I have had bookmarked for at this point, you know, a year, I'm probably never going to get around to reading that. And that's okay because I'm still getting by with the things that I am learning and applying them in the aspects of my work that are relevant to me today.
JOËL: That sounds like a really refreshing take on productivity culture, maybe with some hard truths in there as well. Is 4,000 weeks the human lifespan?
STEPHANIE: [laughs] Yeah, it is. It's really funny because I think he even starts off in the book quizzing one of his friends, like, how many weeks do you think we have to live? And his friend very naively answered, "Oh, must be, you know, 500,000 or so," or something like that. But he used that as an illustration of how we inflate how much time we think that we might have in a day, a week, our lifespan. [laughs]
JOËL: I'm a big history nerd in my personal time. You see this theme that comes up a lot in medieval European art in the 1400s, after a lot of these big plagues had happened, where they feature a lot of death or skeletons or those sorts of motifs that are much more prevalent than in earlier art, and this idea that comes with a Latin phrase, Memento Mori (remember death). And I think there's maybe an element of that that comes back into this book, at least the way you were describing it, the idea that you only have 4,000 weeks, roughly, in your life, so make the best use of it.
STEPHANIE: Yeah, absolutely. It's nothing new, for sure. I think it's just one of those things that we've been grappling with as a species for as long as we've existed. [laughs] So I don't know if anyone out there feels slightly relieved that it's okay for them not to get through their list of bookmarked articles about technical things. I hope that feels slightly better for you.
JOËL: We give you permission for you, the audience, to go to your bookmarks and those articles that you've been meaning to read for two years and you haven't got to; it's okay to remove them. You will be okay.
STEPHANIE: Agreed. So we've talked about how we spend our investment time. But I'm curious, do you have any strategies for people who do most of their learning in their everyday work?
JOËL: You know, I think that applies to me as well. We've been heavily emphasizing investment time, but that's only one day a week. And four days a week, I am doing regular application development for clients. And so the majority of my hours in a week are going to be dedicated to that. I find that being very self-aware for the things that you do and trying to notice when I learn something new or when I interact with something new has really helped me get more out of my day-to-day work.
And a way to level that up, I think, is to be on the lookout for opportunities to share with others. And that can be as small as just put a today I learned message in a group chat, maybe in thoughtbot's Slack developer channel, and just say, "Hey, today I learned this interesting thing about a particular method." Or "Today I learned this weird thing about time zones." Or "Today I learned this interesting fact about testing." And then that might start a discussion, or it might not. But the fact that I took the time to take it out of my head and write it out, I think, makes that more concrete, and it helps me hold on to it.
STEPHANIE: I've noticed you are really good about doing that, about sharing things that you encounter in your everyday work in a very low-stakes kind of way. I am not so good at doing that. I tend to be so steeped in client work, and I have to really intentionally, after a project is over, think about what I learned along the way. And oftentimes, they're not as small, incremental atomic bits of information but bigger picture things about, oh, I learned how to navigate this aspect of ambiguity.
And maybe the next time, I can point to a past experience or lean on a little bit more on my gut instinct to guide me towards making the right decision. And I think that's an important aspect of learning too, even if it wasn't necessarily a technical tidbit. It is part of becoming a better developer, just as equally as gaining that more concrete technical knowledge.
JOËL: Intuition, I think, is really important as developers, and honing that intuition is something that is really valuable. One way that I found helpful is dialogue, just a conversation with one other person, maybe it's asynchronous over Slack, maybe it's a call in person, and just talking through an idea that I have.
A recent one and I think I mentioned this on the previous episode of The Bike Shed, was talking about RSpec matchers. And does your choice of matcher impact the sorts of design that will come out of the code that you write? Does EQ tend to push you in a direction maybe where you're less strongly encapsulating data? And so that's just a thought, and then you have a conversation about it. And then that can help sharpen your intuition so that the next time you're writing a test you're not just thoughtlessly bringing in a matcher because whatever; it's the thing to do.
And initially, maybe it's not intuition; it's much more explicit. You're thinking, ooh, do I want EQ, or do I want not? But I imagine that after six months of me being hyperaware of that, I will have built up some intuitions to be like, oh, this is the place where we want a custom matcher, or here's the place where I want EQ.
And my hope is that that will eventually come to the point where it's so natural. Someone would almost have to stop me and say, hey, wait, why are you choosing that? And then I have to think a little bit and be like, oh, it's because of these things. But I'll have started with a conversation, which then turned into just hyperawareness thinking about it every time I do that action which then turns into intuition.
STEPHANIE: Yeah. I think you can also call that experience. I remember having a conversation with someone, and I told them that I could inject their brain with all of the knowledge and information that I had. But that isn't quite the same as having really experienced the process of gaining that knowledge through more conventional learning methods but also that day-to-day client work that you're doing. So I totally agree with you there.
JOËL: You took this whole long thing I had to say and were able to condense it down to one word: experience.
STEPHANIE: [laughs]
JOËL: Which I think, yeah, exactly describes what I'm trying to say. And with that, shall we wrap up?
STEPHANIE: Let's wrap up.
JOËL: The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email.
Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
thoughtboter Stephanie Minn joins The Bike Shed as co-host! 🎉
Joël and Stephanie talk about continuing on a rewrite and redesign of a legacy Rails app and working incrementally.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by someone very special, Stephanie Minn, who will be joining the podcast as a co-host.
STEPHANIE: Hi, Joël. It's me, Stephanie Minn. [laughs]
JOËL: Welcome to The Bike Shed.
STEPHANIE: Thanks. I am really excited to be here for the third time, I guess, but now in a more official capacity.
JOËL: And together, I think both of us are now excited to share a little bit of what we've learned along the way. So for the first time as a co-host, I'm happy to ask you the question, what's new in your world?
STEPHANIE: Well, I think I would have to say co-hosting this podcast. It's pretty big news for me, personally. I'm really pumped and also really nervous. I never thought I would co-host a podcast. But I have been a long-time listener of the show, and it feels very surreal to be here. I have big shoes to fill, even if I do have the same name as former co-host Steph Viccari.
It's funny; I listened to a previous episode of The Bike Shed this morning, where I had submitted a listener question about project estimation. It came out earlier this year. It was really funny because Steph and Chris had a whole bit about speaking to this other Stephanie out in the world. That was me. And now it's kind of come full circle that I am in this position now. So I thought that was kind of fun. Now I could just say hello to all the Stephs out there.
JOËL: Regular listeners will have recognized you because you've been a recent guest on a couple of different episodes, one where you talked about case expressions and how they fit into our style of programming. And then another one where we talked about the use of domain-specific or industry-specific vocabulary and jargon as a form of communication and how that plays out in the workplace. And for those who have not listened to those, we'll link them in the show notes if you want to catch up on the backlog.
STEPHANIE: I really enjoyed being a guest on the last few episodes. It was cool to see things that we talk about internally come up in a way that we want to share with a broader audience because I think some of the things that we get into in book club or in the dev channel on Slack end up being really interesting and sometimes not things that we always see out in the content world, and we can dig really deep into that with this format. So that is really exciting to me.
I think, in general, being on this podcast, I'm hoping in some ways will also be a growth opportunity for me because I am wanting to get more comfortable sharing my thoughts and ideas in a public space more frequently and informally. I'm very comfortable hoarding my thoughts until they're perfectly refined. And I'm like, okay, now I finally feel ready to share them in the form of a blog post or a talk.
But I kind of want to open myself up to hearing from others, different perspectives, being more comfortable being wrong in public, and changing my mind, and evolving how I work and what I think in this way because I think it's important to normalize that. Yeah, I don't know how I can be any more vulnerable than being on a podcast on the internet.
JOËL: Podcasting is really interesting because it's much more raw and unfiltered as opposed to something like a blog post or a conference talk where you've gone through a whole editing phase; you've rehearsed it. And you've spoken at conferences before, and you're speaking again. You'll be at RubyConf Mini in a few weeks.
STEPHANIE: Yeah, I was thinking about my background because I actually come from a journalism background; that's what my degree is. And so I'm very comfortable in the role of editor, and I am a very obsessive and tedious editor when it comes to my own personal work in words and in code, I would say. The idea of putting stuff out there in a more unfiltered and less polished way is uncomfortable for me, but I want to get better at it, which is why I'm here because, like you were saying, in some ways, it's more realistic in just how we talk about some of these things at work.
I kind of want to remove the Instagram filter of talking about software and technical topics. And sometimes, we are just learning about things along the way. And I think that's one of the things that I've really appreciated about The Bike Shed in the past.
JOËL: Have you ever tried improv?
STEPHANIE: Oh my God. Once I took an improv class as a team bonding activity at an old job, and that was really something. I think we all collectively were not into the idea, but that's what we were doing. So we went to, I think, a comedy club or something in New York with my team at my first job. And it was actually a lot more fun than I thought it was going to be. I think the instructors knew that most people would not be super comfortable with improv, and so they did provide a lot of structure in terms of the types of exercises and games we played.
And we weren't doing improv comedy; we were just doing the exercises that would make us feel a little more comfortable and just having fun with it. Given some prompts, we would maybe walk across the room in a silly walk. And then the other person that we would walk to would imitate it, but it would be slightly different, and it would kind of evolve that way. So that's funny that you mentioned that. I had to really dig that memory from the archives [laughs] because I probably repressed it at some point.
JOËL: I've also done improv a couple of times in a similar setting to you, like a team-building activity. I really enjoyed it. And in many ways, I feel like podcasting can feel like the improv version of the content world.
STEPHANIE: Yes, and I agree.
[laughter]
JOËL: Yes, and very true.
STEPHANIE: So, Joël, what's new in your world?
JOËL: So you'd mentioned earlier how our developer channel on Slack is just a fantastic resource. There are some great conversations that happen there, and some of them I like so much that I eventually want to pull them into The Bike Shed and make an episode inspired by them. And I had a conversation today about the impact of what matchers you choose when you write tests and how that might impact the code that you write. So, for example, an equality matcher is almost like the primitive obsession version of testing matchers in RSpec, and which one you pick might impact the implementation of your code.
An example of that might be if you have... let's say you're testing behavior on an order, and you might say you do some things, and you expect the order status to equal the string pending. You've now exposed this internal detail of the order status string, where instead, using some of RSpec's automatic matchers for predicate methods, you might expect the order to be_pending because you now have a predicate method pending? defined on your order.
So by choosing to use a richer matcher, you may have actually improved the encapsulation of your object. I had never thought about matchers in that way before. It kind of blew my mind, and so I'm still kind of chewing on that, and maybe some of the implications of it.
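For illustration, here is a minimal sketch of the two styles being contrasted; the Order class, its status value, and the spec itself are made up:

    class Order
      def initialize(status)
        @status = status
      end

      # Public predicate method; callers never see how status is stored.
      def pending?
        @status == "pending"
      end
    end

    RSpec.describe Order do
      it "is pending right after checkout" do
        order = Order.new("pending")

        # Equality matcher: couples the test to the internal status string.
        # expect(order.status).to eq("pending")

        # Predicate matcher: relies only on the public pending? method, so
        # the internal representation is free to change.
        expect(order).to be_pending
      end
    end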
STEPHANIE: That's really interesting because, in my experience, I think I would reach for the more general matchers, the ones that seem to be more top of mind for me. And it takes an extra intentional thought to be like, oh, actually, I want this particular specific matcher to better reflect the behavior that I'm desiring.
JOËL: This just came out of a conversation because fellow thoughtboter, Mike Burns, has a blog post that just lists a bunch of almost code smells or things that make him raise an eyebrow during code review that might lead to a follow-up comment and asking for clarification about why that choice was used. And on that list is the RSpec eq matcher that checks for just regular equality. And so we went a little bit deep into why that might be the case, and that brought up a lot of really interesting ideas that I had not thought about before.
STEPHANIE: I'm curious, if you are doing test-driven development, whether using a more specific matcher changes things. This is kind of the opposite of what I said earlier, but I guess I'm wondering if there's a possibility that it pigeonholes you into a particular implementation.
JOËL: It might pigeonhole you into a particular interface, and I'm still exploring the idea. I think that the richer matchers move you away from implementation. So, in this case, or in the case of the example I talked about earlier, all you know is that there's a predicate method pending on an order. You don't know whether it's implemented as a Boolean internally or if it's a string that is being checked or some other thing. Maybe it's a status code.
STEPHANIE: That's cool. Do you think you might explore this in your own work moving forward?
JOËL: In a recent episode with Amanda Beiner, we had a whole conversation about note-taking systems, and I talked about having a sort of personal knowledge base where I keep deeper thoughts about code. And I definitely added a couple of entries to that today based off of that conversation.
STEPHANIE: That's awesome. I totally know what you're talking about when you learn things or pick up little bits and pieces of information that you want to hold on to. But you might not have a particularly applicable project or codebase you're working in at the moment to apply that, but you want to hold on to it when you encounter that situation in the future. So I really like what you're saying about just adding it to your knowledge base and coming back to it.
JOËL: I think the next time I'm writing a test and I feel the need to reach for eq I will immediately think of this and ask myself, is there something else that might be better? And if other matches feel awkward, why? So it's going to definitely cause me to be a lot more thoughtful about the way I write assertions. I'm curious to see if that will have an impact on the types of designs that my tests drive.
STEPHANIE: Yeah, so maybe not a code smell but a code whiff.
JOËL: Definitely.
MID-ROLL AD:
Debugging errors can be a developer's worst nightmare...but it doesn't have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake's debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake's lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: So one thing I've been thinking a lot about in my client project currently is working on features more incrementally. Right now, we are working on a rewrite of the front end of a legacy Rails app. So they did a big, modern refresh on the look of the app. And we are rewriting a bunch of the views to those specs. And all of this is happening behind a feature flag. So we are able to ship work incrementally and not have as much of an issue where all of this work is happening on one big branch.
But because a lot of this work builds on top of each other, I have been experiencing folks cutting branches off of feature branches, off of feature branches. And I've been thinking a lot about the intentional steps that we made to be able to deploy this in pieces and more safely but the friction that people still might have to work incrementally.
One thing that I've noticed is that there are different levels to how this shows up in our work. You can work incrementally on an individual level where you are in your own work writing small commits that capture individual pieces, and you can feel good about that. But then you also have it at a team level where we have to collectively decide how we want to ship features. And I am trying to figure out how to encourage other people to agree on an approach and encourage the benefits of shipping features in small chunks. Have you ever noticed a discrepancy between individual work and how a team works in that way?
JOËL: It sounds like what you're describing is that by encouraging the team to work in smaller chunks, it has made the Git and branching and merging process more complicated for the team as a whole. Maybe there's a bit of that tension whereby increasing the individual granularity; you're making the merging for the team more complex.
STEPHANIE: Yeah, we've had a lot of rebasing issues where we do spend a lot of time in that mode because one branch went through code review and had some changes and then went through UAT and had some changes. And then we had to reconcile those changes with another feature that had been started and cut off of that feature branch.
JOËL: That's really interesting because I feel like I've almost had the opposite experience. When I was a new developer, I would constantly get Git conflicts, and you have to figure out the merge, and it's a mess. And then eventually, I got decently good at resolving conflicts. Nowadays, I very rarely encounter conflicts, and the ones I get tend to be really minimal. And I think that's because I've started to really work incrementally and to keep my change sets small.
And so Git conflicts are not really something I run into very much anymore. And I tend to think of them now as more of a symptom of large patches and code changes that might be bigger than they need to be. Does that line up maybe with your experience as well?
STEPHANIE: Yeah, I think it does. But one area of friction that we are experiencing right now is that we may be working incrementally, but those changes don't make it all the way to our integration or production branches, and so they are still just hanging around. And we have had to fix merge conflicts, not necessarily with other people's work but with our own work where changes happened upstream, and then they kind of cascade down. So we are working in pieces. But because there is a little bit of process challenges that we're facing, we haven't quite closed the loop on that feedback cycle in a way that allows us to move as quickly as I think what you're describing.
JOËL: It sounds like maybe your team is bundling multiple incremental changes and then trying to merge a bundle of them together. So while it's composed of small, incremental changes, the effect is similar to merging a branch that's got a large change set together.
STEPHANIE: Yeah, that's exactly it.
JOËL: In my own work, I tend to really view commits as the atomic chunks of the work that I'm doing, so each of them is going to be very tightly scoped and do a single thing. Ideally, also pass all the tests and be independently mergeable. I tend to view PRs not so much as a unit of work but just a unit of review, a way to get feedback on one or more commits. I don't want to put too many commits in a PR because then it's painful for the reviewer. But also, something you got me thinking about is that keeping fewer commits in a PR will also make it easier to do that final merge.
STEPHANIE: Yeah, I know that I really appreciate when PRs are just one, maybe a few commits, but the overall diff is small. And in my opinion, I think we are able to move faster that way because reviews are quicker, conflicts are fewer. And it better captures the idea of working incrementally, even if it does involve more than a single, small atomic commit. And I would love to figure out how to move in that direction.
Right now, in my client project, one of the barriers is the processes we've built into the agile methodology we're following, where our PRs have to get a couple of approvals and then be tested by folks from product. And so it's sometimes easier to just add a little bit more to a PR that's already open. But what we were talking about, incorporating more of that intentionality, really pays the cost up front rather than pushing it until later, when we do run into problems with conflicts or have to go back and debug something that went into a big bundled PR and then spend time and energy at that point in the lifecycle of our work.
JOËL: It's really interesting that you highlight the organizational impact and the process impacts. Definitely, when you increase the cost of merging, when just to merge a single PR you're going to need to wait at least 24 hours because of all the other checks that need to go through, then people will tend to make larger PRs. And so sometimes it's not even about programmer discipline or good habits; it's about the process pressures that really incentivize making larger PRs because of how expensive it is to open a new one.
I'm curious, though, in a perfect world, when you are reviewing a PR for some code written in Ruby on Rails, what is the max amount of lines that you'd want to see in a diff before you start thinking this is too big; I wish we could split it up?
STEPHANIE: I don't know if I have a number in my head. I want to say somewhere in the couple hundreds, maybe. I love a PR where the diff is less than 100 lines. That feels great to review and just feels right. I don't open it and think, ah, I have to now read through tens of files, some of which I have no context about. I like a tidy PR where everything that's changed is related to what the PR title is. What about you? Do you have a heuristic?
JOËL: I think I'm probably similar to you, 100 lines is probably about the cut-off for where it starts to become more of a chore to review a PR. I've definitely had moments where somebody sends me a link and says, "Hey, can you review?" And I click the link and I open it up, and I'm like, oh okay, well, I should set aside a half hour here to really get into this because this is not going to be quick.
STEPHANIE: For sure. And it also makes addressing that review feedback more difficult because you have to likely make more changes because there's just more to review and more to improve if it wasn't quite right the first time. The only time I do a big diff is if it's all in red. [laughter]
JOËL: Yes.
STEPHANIE: We have a Slack channel at thoughtbot called Dead Code Society where people post screenshots of their negative diffs, and it's so fun.
JOËL: I'm all for that. When I look at code that has been broken down into nice, small commits, it just looks so clean. It looks so natural. But when I try to write code like that, it's anything but; it doesn't feel natural. Have you had a similar experience?
STEPHANIE: I completely agree. I think it's really hard, and it's something that I am still practicing. Because when you are first learning to code, no one teaches you to write it incrementally; at least, that was my experience. It requires a lot of discipline to think about code in little, tiny chunks when you are just so excited to get your feature working and seeing it in a browser and playing around with it.
When I first started doing it, I thought it was impossible. I thought it was wild to have a single commit be passing CI all the time because when I was writing code, there were so many work in progresses. And then I would run the test suite and be like, ah, 20 test failures. Now I have to go through and fix all of them. I guess what I learned from that was the pain of not working incrementally, and that is what motivates me to be disciplined. And it doesn't always happen.
Sometimes I'm lazy and just decide that it's fine for now, and then will usually have to come back through when I am in that kind of headspace where I'm like, okay, let me really get down to business. And I'm able to see the seams of the code that I wrote to be able to extract them out into encapsulated pieces because that doesn't always come super naturally. So yeah, I would say that it's an upward journey for sure. Do you find that to be true for your work?
JOËL: It has definitely been a journey. I think as I have gained experience, as I have discovered new techniques, even picked up different perspectives and mindsets, all of these have helped hone that skill. And I feel like it's one of those skills that feels very mundane, but it's actually one of the more valuable skills I have as a developer is able to take something complex and decompose it into atomic pieces.
STEPHANIE: Do you ever find yourself in a position where there are obstacles that keep you from doing that, or would you say it's just an internal state of mind?
JOËL: Maybe a little bit of both. You'd mentioned earlier that there can be process or organizational obstacles that make it much harder to try to scope down the work that you have. It can be internal in that you don't know the tools that you need for this particular scenario. But it's something where you're constantly on the lookout for ways to learn to be better.
Over the course of your career, it is a skill that I think you're going to keep improving probably forever. I don't think you'll ever get to the point which is like, yep, I've mastered this. I'm as good as I will ever be, and that's the end. I'm curious, are there some tools or techniques that you like to use when trying to keep your work focused?
STEPHANIE: From a commit level, I really love the git add --patch command. I think that it is really helpful because I like to litter my code with random debuggers and little changes that I don't end up wanting to bring into my commit later on. And so that is a great place for me to discard things that I know are distractions or were part of little rabbit holes I went down along the way. And so I highly recommend folks to check that out if that's not something that's already familiar to them.
At a higher level, TDD has helped with staying focused because you have that built-in feedback loop. And once you are green, you can commit essentially and know that you did the least amount of work possible to get the behavior that you wanted without starting to sprawl into other territory. Joël, do you have any other tips or tools you want to share with our listeners?
JOËL: A sort of mental tool that I've gotten into recently is drawing things out as a directed graph to understand what are the changes that I need to accomplish my goal and then what changes rely on other changes being there first? And a directed graph, you just draw a bunch of circles with arrows pointing towards things that they depend on. Anytime there is a cycle, that chunk of changes all has to go out together. They're all sort of interdependent on each other.
And if you can find a way to restructure the graph or introduce a new step in there that will break the cycle, now individual pieces of that can be shipped independently. I've started using that visual approach sometimes to look at changes that feel like they're one sort of big blob of changes that can't be broken down and to say, oh, well, in order to ship this change, I need to introduce this new gem. I'm going to need to change behavior in this part of the system, but that will also need a change in another part of the system.
And now that I can see how things are connected to each other, it gives me a clue to where I can rearrange the changes. Maybe there's a refactor that I can ship out first completely separate that makes the follow-up work much easier. Or maybe there's a way that I can introduce some of the changes without needing to do all of them at once.
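To make the idea concrete, here is a minimal sketch of that kind of dependency graph using Ruby's built-in TSort; the change names are made up for illustration:

    require "tsort"

    # Each key is a proposed change; its value is the list of changes it
    # depends on having shipped first.
    class ChangeGraph
      include TSort

      def initialize(dependencies)
        @dependencies = dependencies
      end

      def tsort_each_node(&block)
        @dependencies.each_key(&block)
      end

      def tsort_each_child(node, &block)
        @dependencies.fetch(node, []).each(&block)
      end
    end

    graph = ChangeGraph.new(
      "introduce new gem" => [],
      "refactor billing code" => [],
      "add invoice model" => ["introduce new gem", "refactor billing code"],
      "update controller" => ["add invoice model"]
    )

    # Lists the changes in an order where every dependency ships first.
    # A TSort::Cyclic error here would point at a cluster of changes that
    # has to go out together until something breaks the cycle.
    p graph.tsort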
STEPHANIE: That's a really cool way to identify those pieces because it can be nearly impossible to figure it out from files in a repo or to keep in your head. And I think what's even cooler is that you can share those graphs too, so someone else can come along and pick up that work and have the same level of understanding of what things depend on each other just from looking at the graph.
JOËL: I've really fallen in love with directed graphs and dependency graphs, in particular over the past year, and I feel like now I see them everywhere. And regular listeners of the show will have noticed that I have mentioned them multiple times. And I almost feel now like I'm that Parks and Recs meme where he's got [laughter] all his conspiracy board with all the threads connecting to each other and just like, let me tell you about graphs and how everything is all connected. They're behind everything.
STEPHANIE: I'm with you. I think everything is connected, even just in our informal conversations on the pod and off the pod. We're constantly being like, oh, that's a great idea. Like, we can do a completely different episode if we go down this rabbit hole, but it also still maps back to things we talked about in another episode. And yeah, that feels very true to me in terms of software and also in terms of life, not to make that sound too deep. [laughs] But it's cool that you have found a way to manage that complexity for yourself at work and a way to share it with others.
JOËL: I think one of these days, we're going to have to do a dedicated episode all about dependency graphs.
STEPHANIE: Yeah, I would love to hear more. Too bad the podcast is just an auditory platform and not a visual one. [laughs]
JOËL: That's the challenge, right? Because it's such a visual topic.
STEPHANIE: That's cool. I'm really looking forward to hearing more about them. On that note, shall we wrap up?
JOËL: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Fellow thoughtboter Sarah Lima joins Joël to discuss an issue Sarah had when she was doing a code review recently: making HTTP requests in an ActiveRecord model. Her concern with that approach was that a class was having too many responsibilities that would break the single-responsibility principle, and that it would make the class hard to maintain. Because the ActiveRecord layer is a layer that's meant to encapsulate business roles and data, her issue was that adding another responsibility on top of it would be too much. Her solution was to extract a class that would handle the whole HTTP request process.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter Sarah Lima.
SARAH: Happy to be here.
JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Sarah, what's new in your world?
SARAH: Well, after a year and a half working on the same thoughtbot client, I have rolled off, and I have joined a new team. And I am learning a lot about not only a new codebase but learning to work with a new team. So that's always challenging, and this time it's not different.
JOËL: What is something that you like to do when joining a new team to help smooth the onboarding process?
SARAH: Well, I think especially getting to know people with one on ones. This time, I didn't do that right away because I had a bunch of time off scheduled right at the beginning of the project. But I did it right after I came back. And I'm learning a lot about my new colleagues, how they like to work, how they learn best. So, for instance, there are some people that like to learn and grow by reading blog posts, reading books, and there are other people that don't like that as much.
JOËL: So when you joined the new project, you just reached out to all of these people and set up a few meetings just to get to know them.
SARAH: Yeah, exactly.
JOËL: That's really good. I've never done that on a project. And now that you've said it, it kind of seems obvious. Maybe I should do that moving forward to get to know new teammates.
SARAH: Yeah. And I think it's easier on my project because it's a very small team. There are four of us thoughtboters, and there are just two client developers. So it was easier.
JOËL: What about on the code side of things? Are there any tricks you like to do when you're first getting started in a new codebase?
SARAH: Well, I think I really enjoy diving in right away, working on something small, and asking questions. I have also found it helpful in the past, especially on larger codebases, that someone that's experienced on a project gives me an overview showing me the quirks. And, of course, a good README is always a good thing to have, and during the process, always be updating the README. In this recent project, it was not different. I opened a lot of PRs to update the README. So that was good to have a PR right on your first day.
JOËL: I love that. I think that's usually my goal when I start on a new project is to have a PR the first day that fixes anything in the setup script that has been broken since the last person onboarded or documentation that was wrong.
SARAH: Yeah, absolutely.
JOËL: It's always a strong first contribution.
SARAH: Yeah. What about you, Joël? What's going on? What's new in your world?
JOËL: I've been investigating flaky tests, and I ran across a wild bug this week. I had a test that would fail every now and then. And it was pulling some data from Postgres and then doing some transformations on it. And I couldn't figure out why it was failing. It was a complex query. So it was just pulling out not ActiveRecord objects but a raw array of values. At some point, I was putting a puts statement in the code with the array of values I expected to get and the array I would actually get.
And I was surprised to see that there is a field in there that is a float that was rounded to a different number of decimal places. I was like, that doesn't seem right. And so I was digging into it more, and I found out that this decimal value is from a timestamp that is in the file name of an mp4 video file. And what is happening is that when we're querying the database, we're trying to extract the timestamp out of the file name by dropping the .mp4 file extension. And we're using the SQL TRIM function.
Unfortunately, TRIM does not do what the original authors thought it did. It doesn't just remove that substring from the end; instead, it will remove any of those characters, so in my case, any of dot, m, p, or 4 in any combination from the end of the string. So anytime my timestamp ended in a four, any fours were just getting chopped off. So if it ended in 44.mp4, the 44 would also get removed, not just the .mp4, which meant that randomly, whenever a timestamp happened to end in 4, my test would flake.
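Here is a small illustration of that TRIM behavior against Postgres, with a made-up timestamp value; the regexp_replace version is one way the suffix could be stripped safely (this assumes an ActiveRecord connection to a Postgres database):

    # TRIM strips any of the characters '.', 'm', 'p', '4' from the end,
    # not the literal ".mp4" suffix, so trailing 4s disappear too.
    ActiveRecord::Base.connection.select_value(
      "SELECT trim(trailing '.mp4' from '1667421344.mp4')"
    )
    # => "16674213"

    # A regular-expression replace only removes the actual extension.
    ActiveRecord::Base.connection.select_value(
      "SELECT regexp_replace('1667421344.mp4', '\\.mp4$', '')"
    )
    # => "1667421344"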
SARAH: Wow. Do you have any idea how much time you spent debugging that?
JOËL: Oh, probably took, I'd say, a day or two, spread over a couple of debugging sessions. But eventually, finding that particular location for the bug took us a couple of days. In the end, the bug fix is just a couple of lines for a couple of days' work; the diff is only a few lines. But I'm sure that the discussion on the PR is going to be really interesting. There's probably going to be a description that is a lot longer than the actual diff.
SARAH: Yeah, 100%. [laughs]
JOËL: Have you run across any interesting PRs on your new project?
SARAH: Yeah, I did. In fact, I recently reviewed a PR that had three interesting main issues that I wanted to address. And I wanted to lead the person that was working on it to a slightly better solution. So the three issues I saw were that the tests that were added were very DRY, so that was making everything a bit difficult to understand. The second one was that I saw one of the ActiveRecord classes was making HTTP requests, and that didn't sound like a good idea to me.
JOËL: That is unusual.
SARAH: Yes. The third one was that there were a lot of collections being built iteratively where another Enumerable method would be a better fit, such as map instead of an each call.
JOËL: Oh, this is a classic situation where you're just using each to go through and transform something, and you've got some sort of external array that you're mutating as part of the each.
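A tiny illustration of the pattern being described, with made-up data:

    users = [{ name: "Sarah" }, { name: "Joël" }]

    # Building a collection by mutating an outer array inside #each...
    names = []
    users.each { |user| names << user[:name] }

    # ...reads more directly as a single #map.
    names = users.map { |user| user[:name] }
    # => ["Sarah", "Joël"]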
SARAH: Yes.
JOËL: There's a great thoughtbot article, I believe, by Joe Ferris on Iteration as an Anti-pattern.
SARAH: I think it's by Mike Burns. And I have referred to that article. In fact, I had very good articles for two of these three problems. I referred to a bunch of articles about WET tests as opposed to DRY tests, like how striving for tests that are DRY is not a good idea as opposed to telling a whole story in your tests. And I referred to that other article how iteratively building a collection can be an anti-pattern by Mike Burns. But the second issue about HTTP requests I didn't have anything to refer to. Maybe we should write one.
JOËL: This reminds me that in the thoughtbot Slack, we have a custom emoji for you should write a blog post about that. And this would probably be a good time to use it.
SARAH: Yes. So, Joël, how do you typically handle a PR that is maybe too long, and you have a lot of concerns about it? And how do you handle delivering that feedback?
JOËL: Oh, that is a challenge. I've definitely done it poorly in the past. And I think the wrong way to go about that situation is to go thoroughly through the PR and leave 50, 60 comments. That is overwhelming for the other person. And they're going to have a really bad day when they see 50 comments come through. And there's so much that they can't really address the main things you were talking about anyway.
So what I generally try to do, and it's kind of nice now that GitHub doesn't immediately publish your comments, is, if I start putting in some more detailed comments and then realize, oh, there's going to be a lot, to zoom out a little bit and try to find: are there some higher-level trends that I can talk about? And maybe even just summarize in a larger comment at the bottom and say, "Hey, I see some larger structural issues," or "This PR is leaning very heavily on a technique that I think is maybe not the best fit here. Maybe we should discuss that," instead of digging into the actual implementation details of the code.
SARAH: Yeah, funny you should mention that. I have recently also started doing that, using the summary version of GitHub reviews. I used to just go file by file and leave comments right away, and I'm thinking that this is not a good idea, especially when the PR is long. So I think another thing I would do is also call the person to pair, ask questions, understand where the person is coming from, and also explain what your concerns are and how you both can get to a better place with that PR.
JOËL: That's really important. You have to remember there's another person on the other end of this. I love the idea of reaching out to them directly. Especially if there's a larger conversation to be had around approach or implementation, it's often easier to resolve those directly rather than back and forth in GitHub comments. So you mentioned situations where the PR is really long. Have you ever had to push back on that in some way?
SARAH: Yes. Especially when I saw, whoa, that's going to be difficult to understand, that's going to be difficult to review. And I have reached out to the person to say, "Hey, what about splitting that PR in two?" Of course, thinking about splitting the PR in a way that makes sense, in a way that still delivers value to our users as soon as possible.
JOËL: I've been in situations like that where it's a really long PR, and the person has already invested a lot of work into it. And maybe it's even gone through a round of reviews. It feels almost too late to ask them to split up the work. But then I've actually regretted not doing that because there's so much complexity going on that then it doesn't work, or there are some bugs in it. We struggle to ship this, or it might just have to go through so many rounds of review and re-review and re-review. And because the PR is so long, it's a huge commitment for me to re-review it every time.
So there are situations I've been in where I wish that before even looking at the code at all, I was like, this is too long. We need to either slim down the story of what's being done. Because sometimes that's what happens is that the ticket is not well-defined, and someone goes in and just sort of keeps adding more code. And it becomes a bit of a big ball of mud. So, either helping to refine the ticket first or splitting the PR rather than actually looking at the code.
SARAH: Yeah, and pairing often can also help with that. So especially as consultants, our clients may ask us to work on different projects, and you work alone. And you may have tight deadlines, but I think it's always helpful to find time anyway to help your colleagues as well.
JOËL: I like that. I think there's a lot of value in the work that we do, where we collaborate with others in addition to whatever we do solo. So, oftentimes, it's great to pair with people at a client where possible to become involved in the code review process to even get involved in maybe some of the more broader system design conversations, sprint planning. All of those things are really good to jump into more than just getting siloed into working on just a solo feature.
SARAH: Yes, 100%.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So one of the things you mentioned that stood out for you when you were doing some code review recently was making HTTP requests in an ActiveRecord model. Why is that something that sort of caught your eyes, maybe an area to push back on in a particular design?
SARAH: That's a good question. My concern with that approach was that our class would have too many responsibilities, which would break SRP, the single-responsibility principle, and that would make our class hard to maintain. The ActiveRecord layer is a layer that's meant to encapsulate business rules and data. So I was worried that adding another responsibility on top of it would be too much. My idea was that we would extract a class that would handle the whole HTTP request process.
JOËL: Yeah, I feel like my instincts typically when I've done third-party integrations is that the ActiveRecord class should not know about the external internet world. It knows about the database. It knows about some of its core model functionality but that knowing about the internet world is somebody else's responsibility and that, ideally, the direction of dependency should flow the other way. So maybe the class that makes an external request knows about the ActiveRecord object if it needs to let's say, instantiate an instance of that model using data from an external request.
Or maybe it's even some third-party thing; maybe it's the controller that knows how to make, or will ask another object to make, a request to some API and might also make a request to the model, ask it for some database data, and then combine those two together. But the ActiveRecord object only knows about that database area of responsibility and doesn't know that other things are also happening in the system.
SARAH: Absolutely. And I was also thinking that that class would be difficult to test. So a good idea is to separate the code that is side-effectful into its own classes, and that makes our tests so much easier.
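As a rough sketch of the kind of extraction being suggested (the class names, endpoint, and fields here are all made up for illustration):

    require "net/http"
    require "json"

    # A small client object that owns the HTTP request and nothing else.
    class ShipmentStatusClient
      def fetch(tracking_number)
        uri = URI("https://api.example.com/shipments/#{tracking_number}")
        JSON.parse(Net::HTTP.get(uri))
      end
    end

    class Order < ApplicationRecord
      # The model keeps its database responsibility; the client is passed
      # in, which also makes it easy to stub in tests.
      def sync_shipment_status!(client: ShipmentStatusClient.new)
        update!(shipment_status: client.fetch(tracking_number).fetch("status"))
      end
    end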
JOËL: I actually wrote an article on the topic where one of my realizations at some point was that a lot of the pain points in code are what functional programmers would call side effects, so things like HTTP requests. And these are often things where we need to stub or do other things. And so isolating them as much as possible often simplifies our tests.
SARAH: Yeah, certainly. And I refer to that article every time I have the chance.
JOËL: Have you encountered the general concept of layered architectures, or hexagonal architectures, or things like that in the world of Rails or maybe elsewhere?
SARAH: Not hexagonal architecture. I have heard about it, but I haven't dived into it yet. Can you give us an overview?
JOËL: So I've also not worked with an actual hexagonal architecture. But the general idea, I guess, of layered architectures is that you build your code in a variety of layers, and different layers don't have access to or don't know about the ones...and I forget in this model if it's above or below, let's say it's below. So the inner layers don't know about the outer layers, but the outer layers can know about anything below them.
And so if the core of your app is the database, your database is most definitely not knowing about anything outside of just its data. And your ActiveRecord models that sit on top of that know about the database, but they don't know if they're being fronted by a web application, or a command line, or anything else. And then, above that, you might have more of a business process layer that knows about the database. It might know about how to make some external requests, but it doesn't know about anything above that.
And then, maybe at the final layer, you've got an application layer that handles things like controllers and interactions with users of the site. The core idea is that you split it into layers, and the higher-up layers know about everything below them, but no layer knows about what's above it. I feel like we're loosely applying that to the situation here with ActiveRecord in that it feels like the ActiveRecord layer if you will, shouldn't really know about third-party API requests.
SARAH: So, one exception to that is the ActiveResource approach that connects our business objects to REST services. So if you have an external website and you want to connect it via HTTP, you can do it using Rails ActiveResource.
JOËL: That is interesting because it functions like an ActiveRecord object, but instead of being backed by the database, it's backed by some kind of API. I almost wonder if...let's refactor our mental model here. And instead of saying that HTTP belongs in a separate layer that's higher up, maybe, in this case, it's almost like a sibling layer.
So your ActiveRecord models know about the database, and they make database requests in ActiveResource, or I think there are some gems that provide similar behavior. It might be backed by a particular API, but neither of them should know about the other. So maybe an ActiveResource model should not be making database requests.
SARAH: Yes, I like that line of thought.
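For reference, a minimal sketch of the ActiveResource pattern mentioned above; the site URL and resource name are made up, and note that ActiveResource ships as a separate gem in modern Rails:

    require "active_resource"

    # Quacks like an ActiveRecord model but is backed by a REST API
    # rather than a database table.
    class Subscription < ActiveResource::Base
      self.site = "https://billing.example.com"
    end

    # Issues GET https://billing.example.com/subscriptions/1.json
    subscription = Subscription.find(1)
    subscription.status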
JOËL: I guess the question then becomes, what about interactions between the two where you want to, I don't know, have some kind of association? You know, I don't think I've ever used ActiveResource on a project.
SARAH: I did once when trying to work with something close to microservice architecture. So we had a monolith, and we built a small service that was also in Rails, and we needed to consume the data that was stored in the monolith.
JOËL: And did you like that approach?
SARAH: Yeah. I think in that specific scenario, it was very productive. And I enjoyed a lot the API that Rails provided me via ActiveResources.
JOËL: Did you ever have to mix ActiveResource models and ActiveRecord models?
SARAH: No, I didn't; thankfully, not. I have never thought about that.
JOËL: So maybe in most applications, those two will just sort of naturally fall into maybe separate parts of the app, and they don't need to interact that much.
SARAH: Yeah, I think that will be the case. So, mixing two of the subjects we're talking about here, testing and HTTP requests: we've been having a discussion in our project about the usage of VCR. That's a gem that records your HTTP request interactions and replays them during tests. We've been discussing whether using it is a good idea or not because we've been having issues with cassettes (that's one of VCR's concepts) when those cassettes are not valid anymore. So do you have any thoughts on the subject? Maybe that will make a whole episode.
JOËL: We could definitely do a whole episode, I think, on testing third-party APIs. VCR is one of multiple different strategies that can be used to not make actual real network requests in your tests which brings some stability. There are also some downsides to it. I have found, in general, that over time, cassettes become brittle. So the idea of VCR is really cool. In practice, I think I've found that a few hand-rolled Webmock stubs usually do the job better for my needs.
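As a rough sketch of the hand-rolled WebMock approach, reusing the hypothetical client from the earlier example (the URL and payload are invented for illustration):

    require "webmock/rspec"

    RSpec.describe ShipmentStatusClient do
      it "parses the shipment status from the API response" do
        # Stub only the request this test cares about; no cassette to go stale.
        stub_request(:get, "https://api.example.com/shipments/ABC123")
          .to_return(
            status: 200,
            body: { status: "in_transit" }.to_json,
            headers: { "Content-Type" => "application/json" }
          )

        expect(described_class.new.fetch("ABC123"))
          .to eq("status" => "in_transit")
      end
    end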
SARAH: Yeah, I'll be interested in hearing that episode because, at least in my project, we have a lot of HTTP requests to external services, and they return a lot of information. I'm wondering if just dealing with that with Webmock would be too much work.
JOËL: One of the really useful things about VCR is that you can just make your request from anywhere, and it will just completely handle it. In some ways, though, I think it maybe hides some of that test pain that we were talking about earlier and allows you to sort of put HTTP in a lot of places that maybe you don't want it to. And by allowing yourself to feel a little bit of that test pain, you can more easily notice the places where maybe an object should not be making a request.
Or the actual HTTP logic can be moved to a concentrated place where all the HTTP is done together. And then only that object will need unit tests that actually need to mock the network, and most of your objects are fine. Where it gets interesting is more for things like integration tests, where now you're doing a lot of interactions, and you might have quite a few background requests that need to be made.
SARAH: I'm looking forward to the whole episode on this subject because I feel there's so much to talk about.
JOËL: There really is. I have a blog post that sort of summarizes a few different common categories of approaches to testing third-party requests, which might be different depending on whether you're doing a unit test or an integration test. But I grouped common solutions into four different categories. We'll make sure to link that in the show notes. So we've been talking a lot about testing. I'm curious when you review PR, do you start with the tests, maybe read through the tests first, and then the implementation?
SARAH: That's a good question. I have never thought about starting with tests. I think I'm going to give that a try anytime. But I just start reviewing them like by the first file that comes up. [laughs]
JOËL: I'm the same. I normally just do them in order. I have occasionally tried to do a test first, and that is sometimes interesting. Sometimes you read the test and, especially when you don't know what the implementation is going to be, you're like, why is this in the test? And then you jump to the implementation like, oh, that's what's going on.
Well, thank you so much, Sarah, for joining us on this whirlwind tour of code review, design of objects, and interacting with HTTP and testing.
SARAH: My pleasure.
JOËL: Where can people find you online if they would like to follow your work?
SARAH: I'm on Twitter @sarahlima_rb.
JOËL: We'll make sure to link that in the show notes. And with that, let's wrap up.
The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris Toomey is back! (For an episode.) He talks about what he's been up to since handing off the reins to Joël. He's been playing around with something at Sagewell that he enjoys. At the core of it? Serializers.
Primalize gem
Derek's talk on code review
Inertia.js
Phantom types
io-ts
dry-rb
parse don't validate
value objects
broader perspective on parsing
Enumerable#tally
RubyConf mini
where.missing
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by a very special guest, former host Chris Toomey.
CHRIS: Hi, Joël. Thanks for having me.
JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Chris, what's new in your world?
CHRIS: Being on this podcast is new in my world, or everything old is new again, or something along those lines. But, yeah, thank you so much for having me back. It's a pleasure. Although it's very odd, it feels somehow so different and yet very familiar.
But yeah, more generally, what's new in my world? I think this was probably in development as I was winding down my time as a host here on The Bike Shed, but I don't know that I ever got a chance to talk about it. There has been a fun sort of deep-in-the-weeds technical thing that we've been playing around with at Sagewell that I've really enjoyed.
So at the core of it, we have serializers. So we take some data structures in our Ruby on Rails code base, and we need to serialize them to JSON to send them to the front end. In our case, we're using Inertia, so it's not quite a JSON API, but it's fine to think about it in that way for the context of this discussion.
And what we were finding is our front end has TypeScript. So we're writing Svelte, which is using TypeScript. And so we're asserting the types: like, hey, we're going to get this data in from the back end, and it's going to have this shape to it. And we found that it was really hard to keep those in sync, to keep track of, like, what does the user look like on the front end? What's the data that we're going to get? It's going to have a full name, which is a string, except sometimes that might be null. So how do we make sure that those are kept up to date?
And then we had a growing number of serializers on the back end and determining which serializer we were actually using, and it was just...it was a mess, to put it lightly. And so we had explored a couple of different options around it, and eventually, we found a library called Primalize. So Primalize is a Ruby library. It is for writing JSON serializers. But what's really interesting about it is it has a typing layer. It's like a type system sort of thing at play.
So when you define a serializer in Primalize, instead of just saying, here are the fields; there is an ID, a name, et cetera, you say, there is an ID, and it is a string. There is a name, and it is a string, or an optional string, which is the even more interesting bit. You can say array. You can say object. You can say an enum of a couple of different values. And so we looked at that, and we said, ooh, this is very interesting. Astute listeners will know that this is probably useless in a Ruby system, which doesn't have types or a compilation step or anything like that.
But what's really cool about this is when you use a Primalize serializer, as you're serializing an object, if there is ever a type mismatch, so the observed type at runtime and the authored type if those ever mismatch, then you can have some sort of notification happen. So in our case, we configured it to send a warning to Sentry to say, "Hey, you said the types were this, but we're actually seeing this other thing." Most often, it will be like an Optional, a null sneaking through, a nil sneaking through on the Ruby side.
But what was really interesting is as we were squinting at this, we're like, huh, so now we're going to write all this type information. What if we could somehow get that type information down to the front end? So I had a long weekend, one weekend, and I went away, and I wrote a bunch of code that took all of those serializers, ran through them, and generated the associated TypeScript interfaces. And so now we have a build step that will essentially run that and assert that we're getting the same thing in CI as we have committed to the codebase.
But now we have the generated serializer types on the front end that match to the used serializer on the back end, as well as the observed run-time types. So it's a combination of a true compilation step type system on the front end and a run-time type system on the back end, which has been very, very interesting.
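A rough sketch of the kind of serializer being described, loosely following Primalize's documented style; the model, fields, and exact DSL details here are best-effort reconstructions from the conversation, so check the gem's README for the real API:

    class UserSerializer < Primalize::Single
      attributes(
        id: integer,
        full_name: optional(string),   # null is allowed and declared as such
        role: enum("member", "admin"),
      )
    end

    # Serializes the object; if a value doesn't match its declared type at
    # runtime, the configured mismatch handling kicks in (raising locally,
    # reporting to Sentry in production, as described above).
    UserSerializer.new(user).to_json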
JOËL: I have a lot of thoughts here.
CHRIS: I figured you would. [laughs]
JOËL: But the first thing that came to mind is, as a consultant, there's a scenario with especially smaller startups that generally concerns me, and that is the CTO goes away for a weekend and writes a lot of code...
CHRIS: [laughs]
JOËL: And brings in a new system on Monday, which is exactly what you're describing here. How do you feel about the fact that you've done that?
CHRIS: I wasn't ready to go this deep this early on in this episode.
JOËL: [laughs]
CHRIS: But honestly, that is a fantastic question. It's a thing that I have been truly not struggling with but really thinking about. We're going to go on a slight aside here, but I am finding it really difficult to engage with the actual day-to-day coding work that we're doing and to still stay close to the codebase and not be in the way.
There's a pattern that I've seen happen a number of times now where I pick up a piece of work that is, you know, one of the tickets at the top of the backlog. I start to work on it. I get pulled into a meeting, then another meeting, then three more meetings. And suddenly, it's three days later. I haven't completed this piece of work that was defined to be the next most important piece of work. And suddenly, I'm blocking the team.
JOËL: Hmmm.
CHRIS: So I actually made a rule that I'm not allowed to own critical path work, which feels weird because it's like, I want to be engaged with that work. So the counterpoint to that is I'm now trying to schedule pairing sessions with each of the developers on the team once a week. And in that time, I can work on that sort of stuff with them, and they'll then own it and run with it. So it makes sure that I'm not blocking on those sorts of things, but I'm still connected to the core work that we're doing.
But the other thing that you're describing of the CTO goes away for the weekend and then comes back with a new harebrained scheme; I'm very sensitive to that, having worked on; frankly, I think the same project. I can think of a project that you and I worked on where we experienced this.
JOËL: I think we're thinking of the same project.
CHRIS: So yes. Like, I'm scarred by that and, frankly, a handful of experiences of that nature. So we actually, I think, have a really healthy system in place at Sagewell for capturing, documenting, and prioritizing this sort of other work, this developer-centric work. There's the feature and bug work that gets prioritized in one list over here that is owned by our product manager. Separately, the dev team gets to say, here are the pain points. Here's the stuff that keeps breaking. Here are the things that I wish were better. Here are the observability, hard-to-understand bits.
And so we have a couple of different systems at play and recurring meetings and sort of unique ceremonies around that, and so this work was very much a fallout of that. It was actually a recurring topic that we kept trying a couple of different stabs at, and we never quite landed it. And then I showed up this one Monday morning, and I was like, "I found a thing; what do we think?" And then, critically, from there, I made sure I paired with other folks on the team as we pushed on the implementation.
And then, actually, I mentioned Primalize, the library that we're using. We have now since deprecated Primalize within the app because we kept just adding to it so much that eventually, we're like, at this point, should we own this stuff? So we ended up rewriting the core bits of Primalize to better fit our use cases. And now we've actually removed Primalize, wonderful library. I highly recommend it to anyone who has that particular use case but then the additional type generation for the front end.
Plus, we have some custom types within our app, Money being the most interesting one. We decided to model Money as our first-class consideration rather than just letting JavaScript have the sole idea of a number. But yes, in a very long-winded way, yes, I'm very sensitive to the thing you described. And I hope, in this case, I did not fall prey to the CTO goes away for the weekend and made a thing.
JOËL: I think what I'm hearing is the key difference here is that you got buy-in from the team around this idea before you went out and implemented it. So you're not off doing your own things disconnected from the team and then imposing it from on high. The team already agreed this is the thing we want to do, and then you just did it for them.
CHRIS: Largely, yes. Although I will say there are times that each developer on the team, myself included, have sort of gone away, come back with something, and said, "Hey, here's a WIP PR exploring an area." And there was actually...I'm forgetting what the context was, but there was one that happened recently that I introduced. I was like; I had to do this. And the team talked me out of it, and I ended up closing that PR. Someone else actually made a different PR that was an alternative implementation. I was like, no, that's better; we should absolutely do that.
And I think that's really healthy. That's a hard thing to maintain but making sure that everyone feels like they've got a strong voice and that we're considering all of the different ways in which we might consider the work. Most critically, you know, how does this impact users at the end of the day? That's always the primary consideration. How do we make sure we build a robust, maintainable, observable system, all those sorts of things?
And primarily, this work should go in that other direction, but I also don't want to stifle that creative spark of I got this thing in my head, and I had to explore it. Like, we shouldn't then need to never mind, throw away the work, put it into a ticket. Like, for as long as we can, that more organic, intuitive process if we can retain that, I like that. Critically, with the ability for everyone to tell me, "No, this is a bad idea. Stop it. What are you doing?" And that has happened recently. I mean, they were kinder about it, but they did talk me out of a bad idea. So here we are.
JOËL: So you showed up on Monday morning, not with telling everyone, "Hey, I merged this thing over the weekend." You're showing up with a work-in-progress PR.
CHRIS: Yes, definitely. I mean, everything goes through a PR, and everything has discussion and conversation around it. That's very much in the spirit of Derek Prior's wonderful talk, Building a Culture of Code Review; I forget the exact name of it. But it's one of my favorite talks about the utility of code review as a way to share ideas and all of those wonderful things. So everything goes through code review, particularly anything that is in that more exploratory, architectural space.
Often we'll say any one review from anyone on the team is sufficient to merge most things but something like that, I would want to say, "Hey, can everybody take a look at this? And if anyone has any reservations, then let's talk about it more." But if I or anyone else on the team for this sort of work gets everybody approving it, then cool, we're good to go. But yeah, code review critical, critical part of the process.
JOËL: I'm curious about Primalize, the gem that you mentioned. It sounds like it's some kind of validation layer between some Ruby data structure and your serializers.
CHRIS: It is the serializer, but in the process of serializing, it does run-time type validation, essentially. So as it's accessing, you know, you say first name. You have a user object. You pass it in, and you say, "Serializer, there's a first name, and it's a string." It will call the first name method on that user object. And then, it will check that it has the expected type, and if it doesn't, then, in our case, it sends to Sentry.
We have configured it...it's actually interesting. In development and test mode, it will raise for a type mismatch, and in production mode, it will alert Sentry so you can configure that differently. But that ends up being really nice because these type mismatches end up being very loud early on. And it's surprisingly easy to maintain and ends up telling us a lot of truths about our system because, really, what we're doing is connecting data from many different systems and flowing it in and out. And all of the inputs and outputs from our system feel very meaningful to lock down in this way. But yeah, it's been an adventure.
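As an illustration of the kind of run-time type-checking serializer Chris describes, here's a minimal sketch. It assumes Rails and the sentry-ruby gem; the class names and attribute DSL are hypothetical, not Primalize's or Sagewell's actual API.

```ruby
# Hypothetical sketch of a serializer that checks types as it serializes:
# raise in development/test, report to Sentry in production.
class TypedSerializer
  def self.attribute(name, type)
    attributes[name] = type
  end

  def self.attributes
    @attributes ||= {}
  end

  def initialize(object)
    @object = object
  end

  def to_h
    self.class.attributes.each_with_object({}) do |(name, type), result|
      value = @object.public_send(name)
      report_mismatch(name, type, value) unless value.is_a?(type)
      result[name] = value
    end
  end

  private

  def report_mismatch(name, type, value)
    message = "#{name}: expected #{type}, got #{value.class}"
    if Rails.env.production?
      Sentry.capture_message(message)
    else
      raise TypeError, message
    end
  end
end

# Usage: declare the expected type for each attribute.
class UserSerializer < TypedSerializer
  attribute :first_name, String
end
# UserSerializer.new(user).to_h => { first_name: "Ada" }, or raises/reports on a mismatch
```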
JOËL: It seems to me there could almost be two sets of types here, the inputs coming into Primalize from your Ruby data structures and then the outputs that are the actual serialized values. And so you might expect, let's say, an integer on the Ruby side, but maybe at the serialization level, you're serializing it to a string. Do you have that sort of conversion step as part of your serializers sometimes, or is the idea that everything's already the right type on the Ruby side, and then we just, like, to JSON it at the end?
CHRIS: Yep. Primalize, I think, probably works a little closer to what you're describing. They have the idea of coercions. So within Primalize, there is the concept of a timestamp; that is one of the types that is available. But a timestamp is sort of the union of a date and a time, and I think they might let a string through as well; I'm not sure. But frankly, for us, that was more ambiguity than we wanted or more blurring across the lines. And in the implementation that we've now built, date and time are distinct. And critically, a string is not a valid date or time; it is a string, that's another thing.
And so there's a bunch of plumbing within the way you define the serializers. There are override methods so that you can locally within the serializer say, like, oh, we need to coerce from this shape of data into this other shape of data, even little in-line procs, so we can do it quickly. But the idea is that the data, once it has been passed to the serializer, should be of the right shape. And so when we get to the type assertion part of the library, we expect that things are in the asserted type and will warn if not. We get surprisingly few warnings, which is interesting now.
This whole process has made us pay a little more attention, and it's been less arduous simultaneously than I would have expected because, like, this is kind of a lot of work that I'm describing. And yet it ends up being very natural when you're the developer in context, like, oh, I've been reading these docs for days. I know the shape of this JSON that I'm working with inside and out, and now I'll just write it down in the serializer. It's very easy to do in that moment, and then it captures it and enforces it in such a useful way.
As an aside, as I've been looking at this, I'm like, this is just GraphQL, but inside out, I'm pretty sure. But that is a choice that we have made. We didn't want to adopt the whole GraphQL thing. But just for anyone out there who is listening and is thinking, isn't this just GraphQL but inside out? Kind of. Yes.
JOËL: I think my favorite part of GraphQL is the schema, which is not really the selling point for GraphQL, you know, like the idea that you can traverse the graph and get any subset of data that you want and all that. I think I would be more than happy with a REST API that has some kind of schema built around it. And someone told me that maybe what I really just want is SOAP, and I don't know how to feel about that comment.
CHRIS: You just got to have some XML, and some WSDLs, and other fun things. I've heard people say good things about SOAP. SOAP seems like a fine idea. If anything, I think a critical part of this is we don't have a JSON API. We have a very tightly coupled front end and back end, and a singular front end, frankly. And so that I think naturally...that makes the thing that I'm describing here a much more comfortable fit.
If we had multiple different downstream clients that we're trying to consume from the same back end, then I think a GraphQL API, or some other structured-JSON-schema type of API, whatever it is, and the associated documentation and typing layer would probably be a better fit. But as I've said many a time on this here Bike Shed, Inertia is one of my favorite libraries or frameworks (it's probably more of a framework), one of my favorite technological approaches that I have ever found.
And particularly in building Sagewell, it has allowed us to move so rapidly, the idea that a change is, you know, one fell swoop that changes everything within the codebase. We don't have to think about syncing deploys for the back end and the front end and how to coordinate across them. Our app is so much easier to understand by virtue of that architecture that Inertia implies.
JOËL: So, if I understand correctly, you don't serialize to JSON as part of the serializers. You're serializing directly to JavaScript.
CHRIS: We do serialize to JSON. At the end of the day, Inertia takes care of this on both the Rails side and the client side. There is a JSON API. Like, if you look at the network inspector, you will see XHR requests happening. But critically, we're not doing that. We're not the ones in charge of it. We're not hitting a specific endpoint.
It feels as an application coder much closer to a traditional Rails app. It just happens to be that we're writing our view layer, instead of in ERB, in Svelte files. But otherwise, it feels almost identical to a normal traditional Rails app with controllers and the normal routing and all that kind of stuff.
JOËL: One thing that's really interesting about JSON as an interchange format is that it is very restrictive. The primitives it has are even narrower than, say, the primitives that Ruby has. So you'd mentioned sending a date through. There is no JSON date. You have to serialize it to some other type, potentially an integer, potentially a string that has a format that the other side knows how it's going to interpret. And I feel like it's those sorts of richer types when we need to pass them through JSON that serialization and deserialization or parsing on the other end become really interesting.
CHRIS: Yeah, I definitely agree with that. It was a struggling point for a while until we found this new approach that we're doing with the serializers in the type system. But so far, the only thing that we've done this with is Money. But on the front end, a while ago, we introduced a specific TypeScript type. So it's a phantom type, and I believe I'm getting this correct. It's a phantom type called Cents, C-E-N-T-S. So it represents...I'm going to say an integer. I know that JavaScript doesn't have integers, but logically, it represents an integer amount of cents. And critically, it is not a number, like, the lowercase number in the type system. We cannot add them together. We can't --
JOËL: I thought you were going to say, NaN.
CHRIS: [laughs] It is not a number. I saw an n/a for not applicable somewhere in the application the other day. I was like, oh my God, we have a NaN? It happened? But it wasn't, it was just n/a, and I was fine. But yeah, so we have this idea of Cents within the application. We have a money input, which is a special input designed exactly for this. So to a user, it is formatted to look like you're entering dollars and cents. But under the hood, we are bidirectionally converting that to the integer amount of cents that we need. And strictly, within the type system, those are Cents.
And you can't do math on Cents unless you use a special set of helper functions. You cannot generate Cents on the fly unless you use a special set of helper functions, the constructor functions. So we've been really restrictive about that, which was kind of annoying because a lot of the data coming from the server is just, you know, numbers.
But now, with this type system that we've introduced on the Ruby side, we can assert and enforce that these are money.new on the Ruby side, so using the Money gem. And they come down to the front end as capital C Cents in the type system on the TypeScript side. So we're able to actually bind that together and then enforce proper usage sort of on both sides.
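For a rough picture of the Ruby half of that Money-to-Cents handshake, here's a sketch assuming the RubyMoney gem; the payload keys are illustrative, not Sagewell's actual serializer output.

```ruby
require "money"

# Money.new takes an integer amount of fractional units (cents for USD).
price = Money.new(1234, "USD") # $12.34

# Serialize as integer cents so the TypeScript side can treat the value
# as its Cents type rather than a bare number.
payload = {
  amountCents: price.cents,          # => 1234
  currency: price.currency.iso_code  # => "USD"
}
```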
The next step that we plan to do after that is dates and times. And those are actually almost weirder because they end up...we just have to sort of say what they are, and they will be ISO 8601 date and time strings, respectively. But we'll have functions that know this is a date string; that's a thing. It is, again, a phantom type implemented within our TypeScript type system.
But we will have custom functions that deal with that and really constrain...lock ourselves down to only working with them correctly. And critically, saying that is the only date and time format that we work with; there is no other. We don't have arbitrary dates. Is this a JSON date or something else? I don't know; there are too many date syntaxes.
JOËL: I like the idea of what you're doing in that it sounds like you're very much narrowing that sort of window of where in the stack the data exists in the sort of unstructured, free-floating primitives that could be misinterpreted. And so, at this point, it's almost narrowed to the point where it can't be touched by any user or developer-written code because you've pushed the boundaries on the Rails side down and then on the JavaScript side up to the point where the translation here you define translations on one side or, I guess, a parser on one side and a serializer on the other. And they guarantee that everything is good up until that point.
CHRIS: Yep, with the added fun of the runtime reflection on the Ruby side. So it's an interesting thing. Like, TypeScript actually has similar things. You can say what the type is all day long, and your code will consistently conform to that asserted type. But at the end of the day, if your JSON API gets in some different data...unless you're using a library like io-ts, which is one that I've looked at, which actually does parsing and returns a result object of did we parse to the thing that you wanted, or did we get an error in that data structure?
So we could get to that level on the client side as well. We haven't done that yet largely because we've essentially pushed that concern up to the Ruby layer. So where we're authoring the data, because we own that, we're going to do it at that level. There are a bunch of benefits of defining it there and then sort of reflecting it down.
But yeah, TypeScript, you can absolutely lie to yourself, whereas Elm, a language that I know you love dearly, you cannot lie to yourself in Elm. You've got to tell the truth. It's the only option. You've got to prove it. Whereas in TypeScript, you can just kind of suggest, and TypeScript will be like, all right, cool, I'll make sure you stay honest on that, but I'm not going to make you prove it, which is an interesting sort of set of related trade-offs there.
But I think we found a very comfortable resting spot for right now. Although now, we're starting to look at the edges of the Ruby system where data is coming in. So we have lots of webhooks and other external partners that we're integrating with, and they're sending us data. And that data is of varying shapes. Some will send us a payload with the word amount, and it refers to an integer amount of cents because, of course, it does. Some will send us the word amount in their payload, and it will be a floating-point amount of dollars. And I get a little sad on those days.
But critically, our job is to make sure all of those are the same and that we never pass dollars as cents or cents as dollars because that's where things go sad. Job number one for the engineering team at Sagewell is to never get the decimal place wrong in money.
JOËL: That would be a pretty terrible mistake to make.
CHRIS: It would. I mean, it happens. In fintech, that problem comes up a lot. And again, the fact that...I'm honestly surprised to see situations out there where we're getting in floating point dollars. That is a surprise to me because I thought we had all agreed sort of as a community that it was integer cents but especially in a language that has integers. JavaScript, it's kind of making it up the whole time. But Ruby has integers. JSON, I guess, doesn't have integers, so I'm sort of mixing concerns here, but you get the idea.
JOËL: Despite Ruby not having a static type system, I've found that generally, when I'm integrating with a third-party API, I get to the point where I want something that approximates like Elm's JSON decoders or io-ts or something like that. Because JSON is just a big blob of data that could be of any shape, and I don't really trust it because it's third-party data, and you should not trust third parties. And I find that I end up maybe cobbling something together commonly with like a bunch of usage of hash.fetch, things like that. But I feel like Ruby doesn't have a great approach to parsing and composing these validators for external data.
CHRIS: Ruby as a language certainly doesn't, and the ecosystem, I would say, is rather limited in terms of the options here. We have looked a bit at the dry-rb stack of gems, so dry-validation and dry-schema, in particular, both offer potentially useful aspects. We've actually done a little bit of spiking internally around that sort of thing of, like, let's parse this incoming data instead of just coercing to hash and saying that it's got probably the shape that we want. And then similarly, I will fetch all day instead of digging because I want to be quite loud when we get it wrong.
But we're already using dry-monads. So we have the idea of result types within the system. We can either succeed or fail at certain operations. And I think it's just a little further down the stack. But probably something that we will implement soon is at those external boundaries where data is coming in doing some form of parsing and validation to make sure that it conforms to unknown data structure. And then, within the app, we can do things more cleanly.
That also would allow us to, like, let's push the idea that this is floating point dollars all the way out to the edge. And the minute it hits our system, we convert it into a money.new, which means that cents are properly handled. It's the same type of money or dollar, same type of currency handling as everywhere else in the app. And so pushing that to the very edges of our application is a very interesting idea. And so that could happen in the library or sort of a parsing client, I guess, is probably the best way to think about it. So I'm excited to do that at some point.
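A hedged sketch of that edge conversion, again assuming the RubyMoney gem; the partner payload shape and method name are made up for illustration.

```ruby
require "money"
require "bigdecimal"

# Normalize a partner's floating-point "amount" (dollars) into Money the
# moment it enters the system, so everything downstream works in cents.
def parse_partner_amount(payload)
  dollars = BigDecimal(payload.fetch("amount").to_s)
  Money.from_amount(dollars, "USD")
end

parse_partner_amount({ "amount" => 19.99 }).cents # => 1999
```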
JOËL: Have you read the article, Parse, Don't Validate?
CHRIS: I actually posted that in some code review the other day to one of the developers on the team, and they replied, "You're just going to quietly drop one of my favorite articles of all time in code review?" [laughs] So yes, I've read it; I love it. It's a wonderful idea, definitely something that I'm intrigued by. And sort of bringing dry-monads into Ruby, on the one hand, feels like a forced fit and yet has also been one of the other, I think strongest sort of architectural decisions that we've made within the application.
There's so much imperative work that we ended up having to do. Send this off to this external API, then tell this other one, then tell this other one. Put the whole thing in a transaction so that our local data properly handles it. And having dry-monads do notation, in particular, to allow us to make that manageable but fail in all the ways it needs to fail, very expressive in its failure modes, that's been great. And then parse, don't validate we don't quite do it yet. But that's one of the dreams of, like, our codebase really should do that thing. We believe in that. So let's get there soon.
JOËL: And the core idea behind parse, don't validate is that instead of just having some data that you don't trust, running a check on it and passing that blob of now checked but still untrusted data down to the next person who might also want to check it. Generally, you want to pass it through some sort of filter that will, one, validate that it's correct but then actually typically convert it into some other trusted shape.
In Ruby, that might be something like taking an amorphous blob of JSON and turning it into some kind of value object or something like that. And then anybody downstream that receives, let's say, money object can trust that they're dealing with a well-formed money value as opposed to an arbitrary blob of JSON, which hopefully somebody else has validated, but who knows? So I'm going to validate it again.
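As a small plain-Ruby sketch of that idea (the value object and field names are invented for illustration): parse the untrusted hash into a trusted object once, at the boundary, and pass the object around instead.

```ruby
# A tiny value object that can only be constructed from valid input.
EmailAddress = Struct.new(:value) do
  def self.parse(raw)
    raise ArgumentError, "not an email: #{raw.inspect}" unless raw.to_s.match?(/\A\S+@\S+\z/)
    new(raw)
  end
end

def parse_user(json)
  {
    name: json.fetch("name"),                       # fail loudly on a missing key
    email: EmailAddress.parse(json.fetch("email"))  # downstream code can trust this
  }
end
```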
CHRIS: You can tell that I've been out of the podcasting game for a while because I just started responding to yes; I love that blog post without describing the core premise of it. So kudos to you, Joël; you are a fantastic podcast host over there.
I will say one of the things you just described is an interesting...it's been a bit of a struggle for us. We keep sort of talking through what's the architecture. How do we want to build this application? What do we care about? What are the things that really matter within this codebase, and then what is all the other stuff? And we've been good at determining the things that really matter, thinking collectively as a group, and I think coming up with some novel, useful, elegant...I'm saying too many positive adjectives for what we're doing.
But I've been very happy with sort of the thing that we decide. And then there's the long-tail work of actually propagating that change throughout the rest of the application. We're, like, okay, here's how it works. Every incoming webhook, we now parse and yield a value object. That sentence that you just said a minute ago is exactly what I want. That's like a bunch of work. It's particularly a bunch of work to convert an existing codebase. It's easy to say, okay, from here forward, any new webhooks, payloads that are coming in, we're going to do in this way. But we have a lot of things in our app now that exist in this half-converted way.
There was a brief period where we had three different serializer technologies at play. Just this week, I did the work of killing off the middle ground one, the Primalize-based thing, and we now have only our new hotness and then the very old. We were using Blueprinter as the serializer as the initial sort of stub. And so that still exists within the codebase in some places. But trying to figure out how to prioritize that work, finishing out those maintenance-type conversions, is a tricky one. It's never the priority. But it is really nice to have consistency in a codebase. So it's...yeah, do you have any thoughts on that?
JOËL: I think going back to the article and what the meaning of parsing is, I used to always think of parsing as taking strings and turning them into something else, and I think this really broadened my perspective on the idea of parsing. And now, I think of it more as converting from a broader type to a narrower type with failures.
So, for example, you could go from a string to an integer, and not all strings are valid integers. So you're narrowing the type. And if you have the string hello world, it will fail, and it will give you an error of some type. But you can have multiple layers of that. So maybe you have a string that you parse into an integer, but then, later on, you might want to parse that integer into something else that requires an integer in a range. Let's say it's a percentage. So you have a value object that is a percentage, but it's encoded in the JSON as a string.
So that first pass, you parse it from a string into an integer, and then you parse that integer into a percentage object. But if it's outside the range of valid percentage numbers, then maybe you get an error there as well. So it's a thing that can happen at multiple layers. And I've now really connected it with the primitive obsession smell in code. So oftentimes, when you decide, wait, I don't want a primitive here; I want a richer type, commonly, there's going to be a parsing step that should exist to go from that primitive into the richer type.
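A quick sketch of that layered narrowing in Ruby; Percentage here is a hypothetical value object, not from any library.

```ruby
# Each step narrows the type and can fail with an error.
class Percentage
  def self.parse(int)
    raise ArgumentError, "#{int} is outside 0..100" unless (0..100).cover?(int)
    new(int)
  end

  attr_reader :value

  def initialize(value)
    @value = value
  end
end

raw = "42"
int = Integer(raw)          # String -> Integer; raises on "hello world"
pct = Percentage.parse(int) # Integer -> Percentage; raises outside the range
```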
CHRIS: I like that. That was a classic Joël wildly concise summary of a deeply complex technical topic right there.
JOËL: It's like I'm going to connect some ideas from functional programming and a classic object-oriented code smell and, yeah, just kind of mash it all together with a popular article.
CHRIS: If only you had a diagram. Podcast is not the best medium for diagrams, but I think you could do it. You could speak one out loud, and everyone would be able to see it in their mind's eye.
JOËL: So I will tell you what my diagram is for this because I've actually created it already. I imagine this as a sort of pyramid with different layers that keep getting smaller and smaller. So the size of a type is sort of the width of a layer. And so your strings are a very wide layer. Then on top of that, you have a narrower layer that might be, you know, an integer, or even, if you're parsing JSON, you first start with a string, then you parse that into a Ruby hash; not all strings are valid hashes. So that's going to be narrower.
Then you might extract some values out of that hash. But if the keys aren't right, that might also fail. You're trying to pull the user out of it. And so at each layer, it gets a richer type, but that richer type, by virtue of being richer, is narrower. And as you're trying to move up that pyramid, at every step, there is a possibility for a failure.
CHRIS: Have you written a blog post about this with said diagram in it? And is that why you have that so readily at hand? [laughs]
JOËL: Yes, that is the case.
CHRIS: Okay. Yeah, that made sense to me. [laughs]
JOËL: We'll make sure to link to it in the show notes.
CHRIS: Now you have to link to Joël blog posts, whereas I used to have to link to them [chuckles] in almost every episode of The Bike Shed that I recorded.
JOËL: Another thing I've been thinking about in terms of this parsing is that parsing and serializing are, in a sense, almost opposites of each other. Typically, when you're parsing, you're going from a broad type to a narrow one. And when you're serializing, you're going from a narrow type to a broader one. So you might go from a user into a hash into a string. So you're sort of going down that pyramid rather than going up.
CHRIS: It is an interesting observation and one that immediately my brain is like, okay, cool. So can we reuse our serializers but just run them in reverse or? And then I try and talk myself out of that because that's a classic don't repeat yourself sort of failure mode of, like, actually, it's fine. You can repeat a little bit. So long as you can repeat and constrain, that's a fine version. But yeah, feels true, though, at the core.
JOËL: I think, in some ways, if you want a single source of truth, what you want is a schema, and then you can derive serializers and parsers from that schema.
CHRIS: It's interesting because you used the word derive. That has been an interesting evolution at Sagewell. The engineering team seems to be very collected around the idea of explicitness, almost the Zen of Python; explicit is better than implicit. And we are willing to write a lot of words down a lot of times and be happy with that. I think we actually made the explicit choice at one point that we will not implement an automatic camel case conversion in our serializer, even though we could; this is a knowable piece of code.
But what we want is the grepability from the front end to the back end to say, like, where's this data coming from? And being able to say, like, it is this data, which is from this serializer, which comes from this object method, and being able to trace that very literally and very explicitly in the code, even though that is definitely the sort of thing that we could derive or automatically infer or have Ruby do that translation for us.
And our codebase is more verbose and a little noisier. But I think overall, I've been very happy with it, and I think the team has been very happy. But it is an interesting one because I've seen plenty of teams where it is the exact opposite. Any repeated characters must be destroyed. We must write code to write the code for us. And so it's fun to be working with a team where we seem to be aligned around an approach on that front.
JOËL: That example that you gave is really interesting because I feel like a common thing that happens in a serialization layer is also a form of normalization. And so, for example, you might downcase all strings as part of the serialization, definitely, like dates always get written in ISO 8601 format whenever that happens. And so, regardless of how you might have it stored on the Ruby side, by the time it gets to the JSON, it's always in a standard format. And it sounds like you're not necessarily doing that with capitalization.
CHRIS: I think the distinction would be the keys and the values, so we are definitely doing normalization on the values side. So ISO 8601 date and time strings, respectively; that is the direction that we plan to go for the values. But then for the key that's associated with that, what is the name for this data, those we're choosing to be explicit and somewhat repetitive, or not even necessarily repetitive, but the idea of, like, it's first_name on the Ruby side, and it's first, capital N, name on the JavaScript side...I forget the name. It's not quite camel case; it's a different one, lower camel, maybe.
But whatever JavaScript uses, we try to bias towards that when we're going to the front end. It does get a little tricky coming back into the Ruby side. So our controllers have a bunch of places where they need to know about what I think is called lower camel case, and so we're not perfect there. But that critical distinction between sort of the names for things, and the values for things, transformations, and normalizations on the values, I'm good with that. But we've chosen to go with a much more explicit version for the names of things or the keys in JSON objects specifically.
JOËL: One thing that can be interesting if you have a normalization phase in your serializer is that that can mean that your serializer and parsers are not necessarily symmetric. So you might accept malformed data into your parser and parse it correctly. But then you can't guarantee that the data that gets serialized out is going to identically match the data that got parsed in.
CHRIS: Yeah, that is interesting. I'm not quite sure of the ramifications, although I feel like there are some. It almost feels like formatting Prettier and things like that where they need to hold on to whitespace in some cases and throw out in others. I'm thinking about how ASTs work. And, I don't know, there's interesting stuff, but, again, not sure of the ramifications.
But actually, to flip the tables just a little bit, and that's an aggressive terminology, but we're going to roll with it. To flip the script, let's go with that, Joël; what's been up in your world? You've been hosting this wonderful show. I've listened in to a number of episodes. You're doing a fantastic job. I want to hear a little bit more of what's new in your world, Joël.
JOËL: So I've been working on a project that has a lot of flaky tests, and we're trying to figure out the source of that flakiness. It's easy to just dive into, oh, I saw a flaky test. Let me try to fix it. But we have so much flakiness that I want to go about it a little bit more systematically. And so my first step has actually been gathering data. So I've actually been able to make API requests to our CI server. And the way we figure out flakiness is looking at the commit hash that a particular test suite run has executed on.
And if there's more than one CI build for a given commit hash, we know that's probably some kind of flakiness. It could be a legitimate failure that somebody assumed was flakiness, and so they just re-run CI. But the symptom that we are trying to address is the fact that we have a very high level of people re-verifying their code. And so to do that or to figure out some stats, I made a request to the API grouped by commit hash and then was able to get the stats of how many re-verifications there are and even the distribution.
The classic way that you would do that in Ruby is you would use the group_by method from Enumerable. And then, you would transform values so that instead of having, like, say, each commit hash point to all the builds, an array of builds that match that commit hash, you would then count those. So now you have commit hashes that point to counts of how many builds there were for that commit hash. Newer versions of Ruby introduced the tally method, which I love, which allows you to basically do all of that in one step.
One thing that I found really interesting, though, is that that will then give me a hash of commit hashes that point to the number of builds that are there. If I want to get the distribution for the whole project over the course of, say, the last week, and I want to say, "How many times do people run only one CI run versus running twice in the same commit versus running three times, or four times, or five or six times?" I want to see that distribution of how many times people are rerunning their build. You're effectively doing that tally process twice.
So once you have a list of all the builds, you group by hash. You count, and so you end up with that. You have the Ruby hash of commit SHAs pointing to number of times the build was run on that. And then, you again group by the number of builds for each commit SHA. And so now what you have is you'll have something like one, and then that points to an array of SHA one, SHA two, SHA three, SHA four like all the builds.
And then you tally that again, or you transform values, or however you end up doing it. And what you end up with is saying for running only once, I now have 200 builds that ran only once. For running twice in the same commit SHA, there are 15. For running three times, there are two. For running four times, there is one. And now I've got my distribution broken down by how many times it was run. It took me a while to work through all of that. But now the shortcut in my head is going to be you double tally to get distribution.
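Roughly, the double-tally trick looks like this; the build data here is made up, standing in for whatever the CI API returns.

```ruby
# One entry per CI build, identified by the commit SHA it ran against.
builds = %w[sha1 sha1 sha2 sha3 sha3 sha3 sha4]

runs_per_commit = builds.tally
# => {"sha1"=>2, "sha2"=>1, "sha3"=>3, "sha4"=>1}

distribution = runs_per_commit.values.tally
# => {2=>1, 1=>2, 3=>1}
# i.e. two commits ran once, one commit ran twice, one commit ran three times
```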
CHRIS: As an aside, the whole everything you're talking about is interesting and getting to that distribution. I feel like I've tried to solve that problem on data recently and struggled with it. But particularly tally, I just want to spend a minute because tally is such a fantastic addition to the Ruby standard library. I used to have in sort of like loose muscle memory group_by ampersand itself, transform values with count, sort, reverse, to_h. That whole string of nonsense gets replaced by tally, and, oof, what a beautiful example of Ruby, and Enumerable, and all of the wonder that you can encapsulate there.
JOËL: Enumerable is one of the best parts of Ruby. I love it so much. It was one of the first things that just blew my mind about Ruby when I started. I came from a PHP, C++ background and was used to writing for loops for everything and not the nice for each loops that a lot of languages have these days. You're writing like a legit for or while loop, and you're managing the indexes yourself. And there's so much room for things to go wrong.
And being introduced to each blew my mind. And I was like, this is so beautiful. I'm not dealing with indexes. I'm not dealing with the raw implementation of the array. I can just say do a thing for each element. This is amazing. And that is when I truly fell in love with Ruby.
CHRIS: I want to say I came from Python, most recently before Ruby. And Python has pretty nice list comprehensions and, in fact, in some ways, features that enumerable doesn't have. But, still, coming to Ruby, I was like, oh, this enumerable; this is cool. This is something. And it's only gotten better. It still keeps growing, and the idea of custom enumerables. And yeah, there's some real neat stuff in there.
JOËL: I'm going to be speaking at RubyConf Mini this fall in November, and my talk is all about Enumerators and ranges in enumerable and ways you can use those to make the APIs of the objects that you create delightful for other people to use.
CHRIS: That sounds like a classic Joël talk right there that I will be happy to listen to when it comes out. A very quick related, a semi-related aside, so, tally, beautiful addition to the Ruby language. On the Rails side, there was one that I used recently, which is where.missing. Have you seen where.missing?
JOËL: I have not heard of this.
CHRIS: So where.missing is fantastic. Let's assume you've got two related objects, so you've got like a has many blah, so like a user has many posts. I think you can...if I'm remembering it correctly, it's User.where.missing(:posts). So it's where dot missing and then parentheses the symbol posts. And under the hood, Rails will do the whole LEFT OUTER JOIN where the associated ID is null, et cetera. It turns into this wildly complex SQL query or understandably complex, but there's a lot going on there. And yet it compresses down so elegantly into this nice, little ActiveRecord bit.
So where.missing is my new favorite addition into the Rails landscape to complement tally on the Ruby side, which I think tally is Ruby 2.7, I want to say. So it's been around for a while. And where.missing might be a Rails 7 feature. It might be a six-something, but still, wonderful features, ever-evolving these tool sets that we use.
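For reference, where.missing with hypothetical models looks something like this (it landed in the Rails 6.x series):

```ruby
class User < ApplicationRecord
  has_many :posts
end

# Users that have no posts at all.
User.where.missing(:posts)

# Roughly what it expands to under the hood.
User.left_outer_joins(:posts).where(posts: { id: nil })
```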
JOËL: One of the really nice things about enumerable and family is the fact that they build on a very small amount of primitives, and so as long as you basically understand blocks, you can use enumerable and anything in there. It's not special syntax that you have to memorize. It's just regular functions and blocks.
Well, Chris, thank you so much for coming back for a visit. It's been a pleasure. And it's always good to have you share the cool things that you're doing at Sagewell.
CHRIS: Well, thank you so much, Joël. It's been an absolute pleasure getting to come back to this whole Bike Shed. And, again, just to add a note here, you're doing a really fantastic job with the show. It's been interesting transitioning back into listener mode for the show. Weirdly, I wasn't listening when I was a host. But now I've regained the ability to listen to The Bike Shed and really enjoy the episodes that you've been doing and the wonderful spectrum of guests that you've had on and variety of topics. So, yeah, thank you for hosting this whole Bike Shed. It's been great.
JOËL: And with that, let's wrap up.
The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Inspired by a Slack thread, Joël invites fellow thoughtbotter Aji Slater on the show to talk about when you should use class methods and when you should avoid them. Are there particular anti-patterns to look out for? How does this fit in with good object-oriented programming? What about Rails? What is an "alternate constructor"? What about service objects? So many questions, and friends: Aji and Joël deliver answers!
Backbone.js collections
Query object
Rails is a dialect
Meditations on a Class Method
Why Ruby Class Methods Resist Refactoring
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter Aji Slater.
AJI: Howdy.
JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Aji, what's new in your world?
AJI: Yeah, well, I just joined a new project, so that's kind of the newest thing in my day-to-day work world. I say just joined, but I guess it was about a month ago now. I'm on the Liftoff team at thoughtbot, which is different than the team that you're on. We do work that's closer to greenfield ideas and things like that. So there's actually not much to speak about there in that project just yet. Rails new is still just over the horizon for us.
So I've been putting a lot of unused brain cycles toward a side project that is sort of a personal knowledge base concept, and that's a whole thing that I could probably host an entire podcast about. So we don't have to go too deep into my theories about that. But suffice it to say I've talked to some other ADHDers like myself who find that that space is not really conducive to the way that we think and have to organize ourselves and our personal knowledge stores. So sort of writing an app that can lend itself to our fast brains a little bit better.
JOËL: Nice. I just recently recorded an episode of this podcast talking a little bit about note-taking approaches and knowledge-base systems. So, yeah, it's a topic that's very much top of mind for me right now.
AJI: Yeah, what else is going on in your world?
JOËL: I'm based in New England in the U.S. East Coast, and it is fall here. I feel like it happened kind of all of a sudden. And the traditional fall thing to do here is to go to an orchard and pick apples. It's a fun activity to do, and so I'm in the middle of planning that. Yeah, it's fun to go out into nature, very artificial space.
AJI: [laughs]
JOËL: But it's a fun thing to do every fall.
AJI: Yeah, we do that here too. There's an orchard up north of us where my wife and I live in Chicago that we try to visit. And Apple Fest in Lincoln Square is this weekend, and we've been really looking forward to that. Try another time at making homemade hard cider this season, I think, and see how that goes.
JOËL: Fun. When you say another time, does that mean there was a previous unsuccessful attempt?
AJI: Yes. We did the sort of naive approach to it, and there is apparently a lot more subtlety to cidermaking than there is to home-brewing beer. And we got some real strong funk in that cider that did not make it necessarily an enjoyable experience. Like, it worked but wasn't the tastiest.
JOËL: So it got alcoholic. It was just terrible to drink.
AJI: Yeah, I would back that up.
JOËL: So recently, at thoughtbot, we had a conversation among different team members about the use of Ruby class methods, when they make sense, when they are to be avoided. What is their use case? And different people had different opinions. So I'm curious what your take on class methods are. When do you like to use them?
AJI: Yeah, I remember those conversations coming up. I think I might have even started one of those threads because this is something that comes up to me a lot. I'm a long-time listener, first-time caller to The Bike Shed. [laughs] I can remember awaiting new episodes from Sage and Derek to listen to on my way to and from my first dev job. And at one point, Sage had said, "Never put your business logic in something that you can't call .new on."
And being a young, impressionable developer at the time, I took that to heart, and that seems something that just has been baked in and stayed very truthful to me. And I think one of the times that I asked that and got some conversation started was I was trying to figure out why did I feel that, and like, why did they say that? And I think, yeah, I try to avoid them. I like making instances of things. What is your stance on the Class Method, capital C, capital M?
JOËL: I also generally avoid them. I have sort of two main scenarios that I like to use class methods, first is as an alternate constructor. So new is effectively a class method that's built into Ruby's object model. But sometimes, you want variations on your constructor that maybe sets values by default or that construct things with some slightly different inputs, things like that. And so those almost always make sense as class methods. The other thing that I sometimes use a class method for is as an alias for newing up an instance and then immediately calling an instance method on it. So it's just a slightly shorthand way to call some code.
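A small sketch of both uses Joël mentions; the class and method names are invented for illustration.

```ruby
require "date"

class Report
  # Alternate constructor: fills in a default the plain constructor doesn't.
  def self.for_today(user)
    new(user: user, date: Date.today)
  end

  # Shorthand: new up an instance and immediately call an instance method.
  def self.generate_for_today(user)
    for_today(user).generate
  end

  def initialize(user:, date:)
    @user = user
    @date = date
  end

  def generate
    # ...instance-level work using @user and @date...
  end
end
```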
AJI: That's usually been my first line defense of when there's someone who might feel more comfortable doing class methods that sees me making an instance and says, "Well, you don't need an instance, just make a class method here because it'll get too long if you have to .new and then dot this other thing." And so I'll throw in that magic little trick and be like, here you go. You can call it a class method, and you still get all the benefits of your instance. I love that one.
JOËL: Do you feel like that maybe defeats the purpose? In terms of the interface that people are using, if you're calling it a class method, do you lose the benefits of trying to do things at the instance level instead? Or is it more in the implementation that the benefits are not at the caller level?
AJI: I think that's more true that the benefits are at the instance level, and you're getting all of that that goes along with it. And you're not carrying along a lot of what I see as baggage of the class method version, but you're picking up a little bit of that syntactic sugar. And sometimes it's even easier just to conceptualize, especially in the Rails space because we have all of these different class methods like, you know, Find is one I'm sure that we use all the time to call it on a class, and we get back an instance. And so that feels very natural in the Rails world.
JOËL: I think you could make an argument that that is a form of alternate constructor. It's a class method you call to get an instance back.
AJI: Yeah, absolutely.
JOËL: The fact that it makes a background request to the database is an implementation detail.
AJI: For sure. I agree with that. I had a similar need in a recent project where the data was kept on a third-party API. So I treated it the same way as, instead of going out to the database like ActiveRecord does, made a class method that went off to the API and then came back and made the object that was the representation of that idea in our application. So, yeah, I wholeheartedly agree with that.
JOËL: So in Rails, we have the scope keyword, which will run some query to get a collection of records. But another way that they're often implemented is as class methods, and they're more or less interchangeable. How do you feel about that kind of use of class methods on an ActiveRecord object? Does that violate some of the ideas that we've been talking about? Does it sort of fit in?
AJI: I think when reaching for that sort of need, I sort of fall into the camp of making a class method rather than using a scope. It feels a little less like extending some basic Rails functionality or implying that it's part of the inherent framework and makes it a little more like behavior that's been added that's specific to this domain. And I think that distinction comes into my thinking there. I'm sure there are other reasons. What are your thoughts there? Maybe it'll spark an idea for me.
JOËL: For me, I think I also generally prefer to write them as class methods rather than using the scope keyword, even though they're more or less the same thing. What is interesting is that, in a way, they kind of feel like alternate constructors in that they don't give you an instance; they give you back a collection of instances back. So if we bend the rules a little bit...these are not hard and fast rules but the guidelines. If we bend the guidelines a little bit, they kind of fit under the general categories for best uses of class method that we discussed earlier.
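The two spellings being compared look roughly like this; Post is a hypothetical model.

```ruby
class Post < ApplicationRecord
  # One spelling: the scope keyword.
  scope :recently_published, -> { where.not(published_at: nil) }

  # The other spelling: a plain class method. More or less interchangeable;
  # in a real model you'd pick one or the other.
  def self.published
    where.not(published_at: nil)
  end
end

Post.published # returns a collection of Post instances, like an alternate constructor
```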
AJI: Yeah, I can definitely see that. I tend to think, or at least I think when you had first brought up the term of alternate constructors, my first thought was of one instance; you ask for a thing, and it gives you this thing back. But it's the same sort of idea with that collection because you're not getting just one instance; you're getting many instances. But it's the same kind of idea. You've asked the larger concept of the thing, the class, to give you back individuals of that class. So that totally falls in line with how I think about acceptable uses of these class methods the way that we've been talking about them.
JOËL: Rails is something really interesting where a lot of the logic that pertains to a single item will live at the instance level. And then logic that pertains to a group of items will live at the class level. So you almost have like two categories of operations that you can run that semantically live either at the class or the instance level. Have you ever noticed that separation before?
AJI: I think that separation feels natural to me because I came into programming through Rails. And I might have been colored in my thinking about this by the framework. The way that I conceptualize what a class is being sort of this blueprint or platonic ideal of what an individual might be and sort of describing the potential behaviors of such an individual. Having that kind of larger concept be able to work across multiple instances feels, yeah, it feels sort of natural.
Like, if you were to think about this idea of a chair, then if you went in and modified what a chair is to mean, then any chair that you asked for later on would kind of come with that behavior along with it. Or if you ask for several chairs, they would all sort of have that idea.
JOËL: I think similar to you; I had that outlook on that's almost like a natural structuring of things. And then, years ago, I got into the hot, new JavaScript framework that was Backbone.js. And it actually separates...it has like a model for individual instances, and then a separate kind of model thing for collections. And that kind of blew my mind.
But what was interesting, then, is that you effectively have instance methods that can deal with all things collection-related, any sort of filtering, any sort of transformations. All of those are done with an instance of a collection, basically, that you act on. And I guess if we were trying to translate that into Rails, that's almost like the concept of a query object.
AJI: Hmm, it's sort of an interesting way to think about that. And Backbone, I feel like I did a day of that in bootcamp. But it has been some time, so I'm not sure that I've worked with that pattern specifically. But it does sort of bring up the idea of how much do you want to be in one model class? And do you want it to contain both of these concepts?
If you have a lot of complex logic that is going to be dealing with a collection, rather than putting that in your model, I think I would probably reach for something like a service object that is going to be specifically doing that and sort of more along that Backboney approach maybe like a query object or something like that.
JOËL: Interesting. When you use the term service object, do you mean something that's not a Rails model, just in general? Or are you talking specifically about one of these objects that can respond to call and is... I've heard them sometimes called Command objects or method objects.
AJI: Yeah, that's an overloaded term certainly in the Rails space, isn't it? Service object, and what does that mean? I think generally, when I say it, I'm meaning just a plain, old Ruby object, like something that is doing its one thing. You're going to use it to do that thing; its implementation details are all kind of hidden behind private methods, and it returns you something useful that you can then plug into what you were doing or what you need going on in some other place in your app.
So it, to me, doesn't imply any specific implementation of, like, do you have call? Do you use it this way? Do you use it that way? But it's something that's outside of being a model, a view, or a controller, and it encapsulates some kind of behavior. So whether that, like we're saying, is a filtering or something else, it's going to wrap that up.
JOËL: I see. So, for you, a query object would be a service object.
AJI: Yeah, I think so. You know, maybe this is one of the reasons why I generally don't like the overuse of the term service object in our space. I don't know if that's a hot take, and I'm going to get emails for this. But --
JOËL: Everybody send your angry tweets @Aji.
AJI: Yeah, do it to @Aji on Twitter because I've been trying to get that three-letter handle for years. No, but if you want to talk to me, I'm @DoodlingDev. But, yeah, certainly, it does feel sometimes like an overloaded term, and I just want to go back to talking about plain, old Ruby objects.
JOËL: So, service object is definitely an overloaded term. It's used for a lot of things. One thing that I've often seen it referring to are objects that respond to call. And just to keep away the confusion, maybe let's call them Command objects for the purposes of this conversation.
AJI: Sounds good.
JOËL: I commonly see them done where the implementation is done with a class method named call. Sometimes it delegates to an instance that also has call. Sometimes it's all implemented as a class method. How do you feel about that pattern?
AJI: I don't mind the idea of a thing that responds to call. It, in a way, sort of implies that the class is named as an action, which I don't like. It has an -er name, and that kind of class naming is a pattern that always sort of bugs me a little bit. But what I hope for when I open up one of those sorts of classes or objects is that it's going to delegate to an instance because then you're, again, picking up all of those wonderful benefits of the instance-level programming.
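The shape Aji is hoping to see, sketched with an invented class name: a class-level call that just delegates to an instance.

```ruby
class SyncSubscription
  def self.call(subscription)
    new(subscription).call
  end

  def initialize(subscription)
    @subscription = subscription
  end

  def call
    # The private helpers share @subscription instead of threading it
    # through every method as an argument.
    notify_partner
    update_local_record
  end

  private

  def notify_partner
    # ...call out to the external API...
  end

  def update_local_record
    # ...persist the result locally...
  end
end
```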
JOËL: You keep mentioning the wonderful benefits of instance-level programming. What are some of those benefits?
AJI: One of the ones that sort of strikes me most visibly or kind of viscerally when I see it is that they're very easy to understand. You can extract methods pretty easily that don't turn into kind of clumsy code of a bunch of different class methods that all have four arguments passed in because they're all operating on the same context. And when you're all operating on the same context, you have really a shared state.
And if you're just passing that shared state around, it just gets super confusing. And you get into the order of your arguments making a big impact on how you are interacting with these different things. And so I think the first thing that comes to mind is that it's just visually noisy, which for me is super hard to get my head around, like, well, how am I supposed to use this thing? Can I extend it?
JOËL: Yeah, I would definitely say that if you have a group of class methods that all take, commonly, it's the first argument, the same piece of data and tries to operate on it, that's probably a code smell that points to the fact that these things want to be an instance that lives around it. This could be a form of primitive obsession if you're passing around, let's say, a hash, all of these, and maybe what you really want is to sort of reify that hash into an object. And then all these class methods that used to operate on the hash can now become instance methods on your richer domain object.
AJI: Yeah. What do you say to the folks that come from maybe a more functional mindset or are kind of picking up on the wave of functional programming that's out there in the ethos that say that you've got a bunch of side effects when you don't have everything that your method is operating on, being passed on or passed in?
JOËL: I think side effect is a broad term. You could refer to it as modifying the internal state of an object. Technically, mutation is a side effect. And then you have things like doing effects out in the outside world, like making an HTTP query, printing to the screen, things like that. I think those are probably two separate concepts. Functional programming is great. I love writing functional code.
When you're writing Ruby, Ruby is primarily an object-oriented language with some functional aspects brought in. In my opinion, it's very, you know, a great combination of the two. I think they've gotten the balance well so that the two paradigms play nicely together rather than competing. But I think it's an object-oriented language first with some functional added in. And so you're not going to be, I mean, I guess you could; there is a way to write Ruby where everything is a lambda or where everything is a class method that is pure and takes in inputs. But that's not the idiomatic way to write Ruby. Generally, you're creating objects that have some state.
That being said, if an object is mutating a lot of global state, that's going to become problematic. With regards to its internal state, though, because it is very much localized and it's private, nobody else gets to see it; in many ways, an object can mutate itself, and that chain stays pretty local.
AJI: Yeah, absolutely. You've tripped onto another one of my favorite rabbit holes of idiomatic code, and, like, what does that mean, and why should we strive for that? But I absolutely agree that when Ruby is written to conform to other paradigms that aren't mostly object-oriented is when it starts to get hard to use. It starts to feel a little off. Maybe it has code smells around it. It's going to give me the heebie-jeebies, whatever that might mean for you or for different developers.
I think we all have our things that are sort of this doesn't feel right. And you kind of dig into it, and you can sort of back that up. And whenever Ruby starts to look like something that isn't lots of little objects sending messages, is when I start to get a little on edge, maybe.
JOËL: It is worth, I think, calling out the fact that Ruby is a very expressive language. And there are effectively many...you could call them dialects of it. You have sort of your pure sort of OO approach. You have what's typically written in Rails, which has some OO things. But Rails is also, in many ways, it's very DSL-heavy and, in some ways, very class method-heavy. So writing Rails is sort of its own twist on Ruby.
And then, some people will try to completely retrofit a functional approach onto Ruby, and that's also a way that some people like to write their code. And some of these, you can't necessarily say they're not valid, but they're not what you'll mostly see in the wild. And they're not necessarily the approach that I would recommend.
AJI: Yeah, that's the blessing, and the curse of both programming in general and such an expressive language like Ruby is that there are many different valid ways to do it. And what are your trade-offs going to be when you make those choices? I think that falls kind of smack dab into that idiomatic conversation.
And it comes up for me, too, as a consultant because I try to tend towards that idiomatic, those common patterns and practices because I'm not going to live with this code forever. I need to hand this off. And the closer it is to what you might see out there in the wild more commonly, the easier it will be for the next Ruby developer to come pick it up and extend it.
JOËL: So you'd mentioned earlier some of the benefits of instance programming. One of the things that I find is maybe a little bit weird when you go heavily into the class method approach is that there is only one instance of the class, and it is globally available.
AJI: Are you talking about a singleton there?
JOËL: Yes. And, in fact, your class is effectively a singleton, potentially with globally mutable state. I hope not, but potentially with all of the gotchas and warnings that that entails. And so, if you think of your user instance, you need a reference to it, and there can be multiple of them, and you can call methods on them. If everything is happening at the class level, there is a single user class in memory shared by anyone who wants to use it. It's globally accessible. You can all call methods on it. Yeah, in many ways, it does act like a singleton.
AJI: And let's not even get into the Ruby chestnut that everything's an object, so the class is itself an instance of a class.
JOËL: Yes.
AJI: But, absolutely, it can start to act that way. But the singleton is enshrined in the Gang of Four book of patterns. So, like, what's wrong with a singleton? I hope you can understand over the airwaves the devil's advocate that I'm playing here. [laughs]
JOËL: Yes. There are little horns that have sprouted on your head right now. I think part of the problem with singletons is that, generally, they are globally accessible. There's the problem of global mutable state again. There was a time, I think, when the OO community went pretty wild with singletons, and people realized that this was not great. And so, over time, a consensus evolved that singletons are a pattern that, while useful, should be used rarely and in moderation.
And a lot of warnings have been shared in the community, like, be careful not to overuse the singleton pattern or don't build your system out of singletons. And maybe that's what feels so weird about a system that's built primarily in terms of class methods for me is that it feels like it's built out of singletons.
AJI: Yeah. When I think of object-oriented programming, I kind of fall back to maybe one of the ideals of it is that it represents the world more accurately or maybe more understandably. And that sort of idea doesn't fit that paradigm, does it? If you're a factory that is making widgets, there's not the one canonical widget that all of your customers are going to be talking to and using. They are going to each have their own individual widgets. And those customers can be thought of like the consumers of your methods, your objects.
JOËL: The idea being the real-world thing you're simulating normally, there are multiple actors of every type rather than a single sort of generic one that stands in for everybody.
AJI: If this singleton is going to be your interface or the way that you interact with each of these things that are conceptually different, like a user or something like that, then differentiating between which user becomes a lot harder to do. It takes a lot more setup and a more involved process to refer to this user versus that user. Whereas when you're creating the little instances, you've got a more direct reference to a single concept, a single individual.
JOËL: So what you've described is a very sort of classic OO mindset. You find the data and the behaviors that go together. You try to oftentimes simulate the world, model it in terms of actors that give and receive messages. In many ways, though, I think when you're building a system out of class methods, you're thinking about the world in an almost different paradigm. In many ways, it feels almost procedural. What are the behaviors that need to happen in my app? What are the things that need to be done?
You'd mentioned earlier that oftentimes these classes or the methods on them will end up with names ending in -er; they're all named after verbs. You have a thing-doer, a thing-executor, a thing-manager. They all do things rather than having domain concepts extracted and pulled out. Would you say that that feels somewhat procedural to you as well?
AJI: Yeah. I think a great way to divide it is the way that you have right there; it's these sorts of mindsets. Do you have collections of things that have behaviors, or do you have collections of behaviors that might refer to things? And where you're approaching the design of a system, either from that behavior side or from that object side, is going to be a different mindset. Procedural being more focused on that kind of behavior and telling it what to do rather than putting... I think this is probably a butchered Sandi Metz example, but putting your roommate who hates cats and a cat that doesn't want its tail stepped on in one room, and eventually, things will happen accordingly.
And those two mindsets are going to end up with very different architectures, very different designs, very different ways of building these applications that we make. And, again, does that come back to the fact that Ruby, potentially to a lesser extent but still in the same camp, is an object-oriented language, and it sort of functions best when considered and then constructed in that mindset?
And I often wonder sometimes if language developers and language designers make anti-patterns sort of purposefully awkward to use. Like, if you want to hide a lot of class methods, you can do the class << self version of things or have private_class_method littered all the way through your file. And it seems to me like that might be a little bit of a flag that, like, hey, you're working against the system here. You're trying to make it do a thing that it doesn't naturally want to do.
JOËL: Yeah, because you'd mentioned this private_class_method thing because, by default, it's hard to get class methods to be private. You have to use a special keyword. You can't just write private in the class and then assume that the methods below it are going to be private because that does not apply to class methods.
AJI: Exactly. And that friction to making an object that has a smaller interface, that kind of hides its implementation, seems as though it's a purposeful way that Ruby itself was designed to maybe nudge us, developers, into a certain way of working or suggesting a certain mindset.
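A small sketch of that friction; the Report class and its methods are invented purely for illustration:

    class Report
      def self.generate
        render(fetch_data)
      end

      private # does NOT affect the class methods defined below

      def self.fetch_data
        # still publicly callable: Report.fetch_data works from anywhere
        [1, 2, 3]
      end

      def self.render(data)
        data.join(", ")
      end
      private_class_method :render # has to be hidden explicitly, one by one

      # The other route is reopening the singleton class:
      class << self
        private

        def another_helper
          "only callable from inside Report's own class methods"
        end
      end
    end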
JOËL: There's a classic Code Climate article titled Class Methods Resist Refactoring. And it mentions different ways that, when you're relying heavily on class methods, it's harder to do some of the traditional refactors, things like extract method, because it's clunkier: you can't have private methods as easily, and you can't share state, so you have to thread variables through. I guess, technically, you can share state with things like class variables and class instance variables, but if you do that, you will probably be very sad.
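Roughly the kind of contrast that article points at, sketched here with invented invoice classes rather than quoted from it:

    Invoice = Struct.new(:number, :total)

    # With instances, an extracted private helper can lean on shared state (@invoice).
    class InvoiceSummary
      def initialize(invoice)
        @invoice = invoice
      end

      def to_s
        "#{subject}: #{body}"
      end

      private

      def subject
        "Invoice ##{@invoice.number}"
      end

      def body
        "total due #{@invoice.total}"
      end
    end

    # With class methods, the invoice has to be threaded through every extracted
    # helper, and hiding those helpers takes the extra ceremony shown above.
    class InvoiceSummaryBuilder
      def self.build(invoice)
        "#{subject(invoice)}: #{body(invoice)}"
      end

      def self.subject(invoice)
        "Invoice ##{invoice.number}"
      end

      def self.body(invoice)
        "total due #{invoice.total}"
      end
    end

    InvoiceSummary.new(Invoice.new(42, 100)).to_s     # => "Invoice #42: total due 100"
    InvoiceSummaryBuilder.build(Invoice.new(42, 100)) # same output, more threading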
AJI: [laughs] Yeah, you're opening yourself up to a whole world of hurt there with that, aren't you? Sort of sharing data so dangerously around your app.
JOËL: So I'm a big fan of test-driven development. And one of the things that TDD believes in is that test pain should help guide the design of your system and that, generally, things that are easier to test are better designed.
AJI: Yeah.
JOËL: It's often easier to test class methods because they are globally available singletons. I can easily stub a class. Whereas if I need to stub an instance, I need to do some uglier things like stub any instance of or stub the constructor to return a double, or do some other kind of dirty tricks like that. Does that mean that TDD would prefer a class method-based approach to writing code?
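In RSpec terms, the comparison being gestured at might look something like the sketch below; PaymentGateway is a made-up class, and this is only one way the trade-off could play out:

    require "rspec"

    PaymentGateway = Class.new do
      def self.charge(amount); end
      def charge(amount); end
    end

    RSpec.describe "stubbing a class versus an instance" do
      it "stubs the class method directly, since the class is globally reachable" do
        allow(PaymentGateway).to receive(:charge).and_return(true)
        expect(PaymentGateway.charge(100)).to be(true)
      end

      it "reaches for any_instance_of or a constructor stub when instances are built internally" do
        allow_any_instance_of(PaymentGateway).to receive(:charge).and_return(true)
        expect(PaymentGateway.new.charge(100)).to be(true)

        gateway = instance_double(PaymentGateway, charge: true)
        allow(PaymentGateway).to receive(:new).and_return(gateway)
        expect(PaymentGateway.new.charge(100)).to be(true)
      end
    end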
AJI: I think that a surface-level reading of that might say that it does. And I think that maybe the first pass on things, if you're thinking about I want to get this thing done that's right in front of me right now and just move forward, might kind of imply that. But if you start to think about it, or come back to something that was implemented in that way, anytime that sort of behavior is going to grow or change, the number of backflips that you have to do gets a lot higher when you've got class methods.
Because I find that, yes, you might have to stub out or pass in a created object or something like that. But if you've got a class method, especially if it is calling other class methods inside it, then all of a sudden, you have in your test this setup that looks completely unrelated to anything that you're running and testing, that you have to have all of this insight or knowledge of what those classes are doing just to set up your test framework before you can even run that.
Another thing that gets treated as an axiom when writing tests, and that can imply this class approach, is that you shouldn't change your code just for the tests. The thinking goes that if you're doing dependency injection or something like that, passing around little objects, then you're making your code more complicated just to make your tests look a certain way.
JOËL: That's interesting. So maybe I'm reacting to some test pain by trying to change my tests first. So I'm trying to deal with some collaborators, and it is tricky to do. And so I decide, well, the thing I want to do is reach for stubbing. But then that's hard to do because they're instances. So in order to make that already-compromised approach in my test work better, now I change the code to use mostly class methods, because those are global and nicer for the test to stub.
Whereas maybe the correct path to take initially is to say, oh, the test pain here is because I'm trying to isolate an object from its collaborators. Maybe we need to pass an object in as an argument rather than hard-coding it inside the class.
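A hedged sketch of that "pass the collaborator in" move; SlackClient and the notifier classes are invented names, not from the episode:

    class SlackClient
      def post(message)
        # would hit a chat API in real life
      end
    end

    # Hard-coded collaborator: tests get pushed toward stubbing a global constant.
    class HardCodedNotifier
      def notify(user)
        SlackClient.new.post("#{user.name} signed up")
      end
    end

    # Injected collaborator: the caller (or the test) decides what gets passed in.
    class SignupNotifier
      def initialize(chat_client: SlackClient.new)
        @chat_client = chat_client
      end

      def notify(user)
        @chat_client.post("#{user.name} signed up")
      end
    end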
AJI: Yeah, absolutely.
JOËL: So I guess you follow the test pain, but maybe the problem is that you've already kind of gone down a path that might not be the best before you got to the point where you decided that you needed a class method.
AJI: And I think that idea of following the test pain can be, again, there are only shades of gray; there is no black and white. It can be sort of taken in a lot of different ways. And the way that I think about it is that test pain is also sort of an early warning sign that there's going to be pain if you want to reuse this class or these behaviors somewhere else. And if it was useful somewhere, it's likely it's going to be useful in another place.
And there are many different kinds of test pain. The testing is a little easier with a class method because you're not stubbing out any instance of; you're just stubbing the class. Really, what's the difference between stubbing out any instance of versus stubbing out the class? Is that just a semantic difference? Is that --
JOËL: Because someone on the internet said that stubbing any instance of is bad.
AJI: Ooh, right, the internet. I should have read that one. The thing that you can do with passing around instances or sending messages to instances as you do when you're calling a method is that you can easily swap in a different object if you need to stub it. It's similar to how you can change the implementation under the hood of an object and pass in an object that responds to the same messages and kind of keep moving forward with your duck typing.
If you can go into your tests and pass it sort of an object that's always going to return a thing...because we're not testing what that does; we just need a certain response so that we can move forward with the pathway that is under test. You can do that in so many different ways. You could have FactoryBot, for instance, give you a certain shape of a thing. You can create a tiny, little class right there in your tests that does something specific, that can be easily understood what's going on under the hood here.
And instead of having to potentially stub out or create all of these pathways that need to be followed, overriding logic that's happening in different class methods or elsewhere in the application, you can just pass in this one simplified thing to keep your tests smaller and easier to wrap your head around all in just one go.
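For instance, a tiny hand-rolled fake in a test might look like this; Checkout and FakePrinter are invented for the sketch:

    # The object under test just needs something that responds to #print.
    class Checkout
      def initialize(printer:)
        @printer = printer
      end

      def complete(total)
        @printer.print("Total: #{total}")
      end
    end

    # A tiny fake defined right in the test file: easy to read, no stubbing library needed.
    class FakePrinter
      attr_reader :printed

      def initialize
        @printed = []
      end

      def print(line)
        @printed << line
      end
    end

    printer = FakePrinter.new
    Checkout.new(printer: printer).complete(42)
    printer.printed # => ["Total: 42"]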
JOËL: I think what I'm getting here is that when you design your code around instances, you're more likely to build it in a modular way where you pass objects to other objects. And when you build your code using class methods, you're more likely to write it in a hard-coded way. Because you have that globally available class, you just hard-code it and then call it directly rather than passing things in. And so things end up more coupled and, therefore, high coupling leads to more test pain.
AJI: Yeah, I think you've really hit on something here: the approach of using class methods is locking that class into a single context or use case. Usually, it is this global thing that works this one way, and that's even kind of backed up by the fact that class methods are load-time logic instead of run-time logic. And it really not only couples things but makes them more brittle and less amenable to reuse.
JOËL: That's a really interesting distinction. I often tend to think of runtime versus load time in terms of composition versus inheritance. Composition, you can combine objects together at runtime and get behaviors built on the fly as the code is executing, whereas inheritance sort of inherently freezes you into a particular combination of behaviors at the time of loading the code. It's something that the programmers set up, and so it is much less flexible.
And that is one of the reasons the Gang of Four patterns book recommends composition over inheritance in many situations: that runtime versus load-time dichotomy. And I hadn't made that connection for class methods versus instance methods, but I think there's a parallel there.
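A rough Ruby illustration of that load-time versus runtime point; the logger classes here are invented for the example:

    # Inheritance: the combination of behaviors is fixed when the code is loaded.
    class PlainLogger
      def log(message)
        puts message
      end
    end

    class TimestampedLogger < PlainLogger
      def log(message)
        super("[#{Time.now}] #{message}")
      end
    end

    # Composition: behaviors are assembled at runtime, so callers can mix and match.
    class TimestampDecorator
      def initialize(logger)
        @logger = logger
      end

      def log(message)
        @logger.log("[#{Time.now}] #{message}")
      end
    end

    logger = TimestampDecorator.new(PlainLogger.new) # chosen while the program runs
    logger.log("hello")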
AJI: Yeah, absolutely. The composition versus inheritance thing, I think, goes very hand in hand with the conversation that we're having about putting your behavior on a class versus an instance. And I don't know if this is, again, yielding my thoughts to 'the internet said so,' but composition is preferable to inheritance. Without unpacking that right there, that is certainly something that I strive for as well. And while it might have, much like TDD, some kind of superficial, short-term complexity, it has long-term payoff in that flexibility, and that reuse, and that extensibility, and all of those other buzzwords that we developers like to throw around.
JOËL: So you've shared a lot of thoughts on the use of class methods. I think this could branch into so many other aspects of object-oriented design that we haven't looked at or that we could go deeper, things like TDD. We could look into how it works with the solid principles, all sorts of things. But I think the big takeaway for me is that class methods are very useful, but it's easy to use them as our single hammer to every problem being a nail.
And it's good to diversify your toolset. And some tools are specialized; they're good to be used in very specific situations that don't come across very often, and others are used every day. And maybe class methods are the former.
AJI: Absolutely. That hammer-and-nail metaphor was right where I was headed for too. Love it.
JOËL: Well, thank you so much, Aji, for joining the conversation today. Where can people find you online?
AJI: Yeah, anywhere you want to look for me: Instagram, GitHub, Twitter. I'm @DoodlingDev, so just send all your angry emails that way.
JOËL: And with that, let's wrap up.
The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Joël is joined by Amanda Beiner, a Senior Software Engineer at GitHub, who is known for her legendary well-organized notes. They talk about various types of notes: debugging, todos, mental stack, Zetelkasten/evergreen notes, notetaking apps and systems, and visual note-taking and diagramming too!
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Amanda Beiner, a Senior Software Engineer at GitHub.
AMANDA: Hey, Joël. Great to see you.
JOËL: And together, we're here to share a little bit of what we've learned along the way. So, Amanda, what is new in your world?
AMANDA: Well, one thing I'm really excited about is that my team at GitHub is experimenting with how we're going to incorporate learning and sharing what we've learned with each other in new ways, and I'm really excited to see where people take that.
So, one of the things that we're thinking of is that we all get really busy, and we all have exciting projects that we're working on in the day-to-day, and sometimes it can be really hard to pull yourself away from them to do some learning that would be something that will probably help you in the long run. But every time we do do projects like that, people are really excited about it, and people like to collaborate. So we're just trying to figure out how we can make that a more regular thing because it's great for our whole team.
JOËL: I love that. Do you have a project or something that you've been getting into recently to learn?
AMANDA: Yeah. One of the things that I have been working on is that this is the first backend-focused role that I've had in my entire career. So I feel like I just kind of keep pulling back layers on how different forms of magic work. And I'm just trying to get closer to the metal of what is powering our databases. And that's something that I've been really excited to learn some more about.
JOËL: So it's digging into a lot of, like, Postgres and just general database theory.
AMANDA: Yeah. So for me, I've spent a lot of time at the Active Record layer as I have been settling into my role and figuring out what our domain models are that we care about. And I'm trying to get a little bit more into the questions of why did these tables end up looking the way that they do? Why are they normalized or denormalized where they are? And trying to get a better idea of the theory behind those decisions.
JOËL: And this is a new team that you've joined.
AMANDA: This is an existing team that I've joined a year ago now.
JOËL: So it sounds like you're dealing with a somewhat unfamiliar codebase. You're looking at a bunch of existing models and database tables. That can be a lot to process and understand when you first join a team. Do you have an approach that you like to use when you're looking at unknown code for the first time?
AMANDA: Yeah. I usually like to dive right in as much as I can, even if it's with a very small bug fix or something like that, something that allows me to just get my hands dirty from the beginning and poke around what models I'm dealing with, and maybe some of the adjacent ones that I don't need to know about now but might want to come back to later.
JOËL: One thing that I find is really helpful for me are diagramming and note-taking. So if it's something like a database table or ActiveRecord models that I'm not familiar with, if it's more than maybe two or three, which is probably the most I can keep in my head, I have to start drawing some kind of like an entity-relationship diagram or maybe even just a bulleted list somewhere where it's like here are the things and how they connect to each other. Otherwise, I’m like, I don't know, I don't have enough RAM in my brain for that.
AMANDA: That sounds like a really helpful approach. How do you approach creating these diagrams?
JOËL: Occasionally, I will just draw it out by hand with pen and paper. But more recently, I've been using tools like Mermaid.js and specifically the website mermaid.live that allows you to just put in some names and arrows, and it will build out a diagram for you. And that's been really helpful to explore and understand what is going on with different entities that relate to each other.
AMANDA: I've used Mermaid.js recently, and I really enjoyed it as well. I found that writing something that lets me write words or something somewhat like words and takes care of the drawing for me is probably best for everyone involved.
JOËL: Yeah, that's a good point. It's kind of like Markdown, the ability to just write a little bit of text and move on and not worry about the size of boxes or the shape of the arrows or whatever. It helps you to really stay in that flow and keep moving.
AMANDA: I definitely agree. I feel like I can't have been the only person that somehow ended up very deep into the Figma documentation because I didn't quite know how to do what I was supposed to do, and I forgot what I was trying to draw in the first place.
JOËL: Right. It's really easy to put your designer hat on and want to make something like a beautiful diagram when this is really more of a capturing your state of mind. It's a rough note, not something you're necessarily going to publish. So, in addition to visuals, do you find yourself taking a lot of notes when you're exploring code or debugging code?
AMANDA: Yeah. I feel like I tend to jot a lot of things down, maybe class names, maybe some links to PRs or issues, or anywhere that might have context about what I'm looking at and how it got that way. At this point in the process, my notes usually feel like a bit of a bullet-point list that doesn't quite make sense to me yet but maybe will get some shaping later.
JOËL: What kind of things do you tend to record in those notes?
AMANDA: I think one of the things that I'm usually trying to get out of those notes is just a snapshot of what I'm trying to accomplish at the time that I'm creating them. What's the bug that I'm trying to solve, and how did I get into this rabbit hole? So that if it ends up being the wrong one, I can follow my breadcrumbs back out and start a different way.
JOËL: Oh, that is really powerful. I love the imagery you used there of following breadcrumbs. And I feel like that's sometimes something I wish I had when I'm either exploring a particular code path or trying to find a bug. And at some point, I've gone a pretty long path, and I need to back up. And I don't remember exactly where I was or how I got to this point, especially if I've gone down a path, backtracked a little bit, gone down a different path, backtracked, gone further down a third path. And so having breadcrumbs, I think, is a really valuable thing that I wish I did more when I was debugging.
AMANDA: Yeah. And one of the most helpful breadcrumbs that I found is just a list of questions. What was the question that I was trying to answer when I opened this file or looked at this method, and did it help me solve that question or answer that question? And if the answer is no, then I can refer back to what the question was and try to think about what else might help me solve that question.
JOËL: I also love that. It's really easy to get sidetracked by other questions or other ideas when exploring or debugging. And sometimes I find that half hour later, I haven't answered the original question I came here to answer, and I kind of haven't even tried. And so, maybe writing down my questions before I go down a path would help me stay more focused during a debugging session rather than just trying to keep it all in my head.
AMANDA: I very much relate to getting nerd sniped by something that looks interesting but ultimately doesn't solve the original problem that you were trying to solve.
JOËL: This even happens to me when I'm pair programming. And so we'll say out loud the question we're trying to answer is this; let's open this file. And then you go into it, and you're like, oh, now that is an unusual line of code right there.
AMANDA: [laughs]
JOËL: I wonder why they're doing that. Let me check the git blame on this line. Oh, it's from 2015?
AMANDA: [laughs]
JOËL: I wonder what was happening there. Was that part of a Rails upgrade? And then, at some point, the other person has to interject and be like, "That's all fascinating, but I think the question we're actually trying to answer is..." and we get back on track.
AMANDA: I feel like that's a really good opportunity, maybe for a different kind of note of just interesting curiosities in a given codebase. I find that one of the skills that I'm trying to get better at is, rather than building a repository of information or answers to questions, just building a mental map of where the information I'm trying to find lives so that when someone asks me a question or when I have to solve something I don't necessarily know the answer, but I just know the resource to find that will point me in the direction of that answer.
And I feel like those kinds of explorations are really helpful for building out that mental model, even if it may be at the time seems like an unrelated rabbit hole.
JOËL: So this kind of note is a bit more permanent than a bread crumb style note would be.
AMANDA: Yeah, maybe. And I guess maybe it's less of a note, and it feels kind of like an index.
JOËL: Hmmm.
AMANDA: Like something that's connecting other pieces of information.
JOËL: That's really interesting. It's got me thinking about the fact that note-taking can be very different in different situations and for different purposes. So we've talked a little bit about debugging. I think we've mixed debugging and exploration. Maybe those two are not the same, and you treat notes differently. Actually, do you treat those two as different, or do you have different approaches to note-taking when you're exploring a new codebase versus debugging a particular problem?
AMANDA: I think that those kinds of notes could probably be a little bit different because I think when I'm onboarding onto a new codebase, I'm trying to cast a pretty wide net and give some overall information about what these things do that by the time I'm very deep in debugging, it might be information that I already know very well. So I feel like maybe debugging notes are a little bit more procedural. They are a little bit more I did X, and I did Y, and I did Z, and these were the questions.
And the introductory notes to a new codebase might be more along the lines of this is what this model does, and stuff that will eventually become second nature and might be useful to pass off to someone else who's onboarding but which I might myself not refer back to after a certain amount of time.
JOËL: I see. That's an interesting point because not only might the type of notes you take be different in different scenarios, but even their lifespan could be different. The value of a debugging note, that sort of breadcrumbs, might really only be that useful for a few hours or a couple of days. I can imagine notes you're taking while you're exploring a codebase those might be helpful for a much longer period and, as you said, maybe in passing them on to someone else when they're joining a team.
AMANDA: So that makes me think of whether the debugging notes should be as short-lived as I'm making them sound because I feel like there are times where you know you've debugged something previously, but you didn't keep the notes. Maybe they were just on a scrap of paper, and now they're gone. And I feel like I'd like to do a better job of digesting those notes a little bit better and eventually turning them into something that can be a little bit longer-lived.
JOËL: That's fair. I find that, especially for debugging, I like to capture a lot of what was in my notes in the eventual commit message for the fix. Of course, my random breadcrumbs probably don't make sense in the commit message, but a lot of what I have learned along the way often is helpful.
AMANDA: That's a really good point. I hadn't thought of commit messages as notes, but you're right; they totally are.
JOËL: One thing I've done is I've sort of taken this idea to the extreme. I was debugging some weird database table ActiveRecord model interactions, and the modeling was just a little bit unusual. There were multiple sources of truth in the relationships. And there were enough models that I struggled to really understand what was going on.
And so I drew an entity-relationship diagram. And I felt that that was important to understand for people reviewing the code but also anybody looking back at the commit later on. So I used a tool called Monodraw, which allows you to draw simple diagrams as ASCII art. And so, I have a little ASCII art ERD in my commit message.
AMANDA: That's incredible. I feel like if I were a developer git logging and I saw that commit message, I would be both thrilled and terrified of what exactly I was diving into in the git blame. [laughs]
JOËL: Definitely both, definitely both. But I have referred back maybe a few months later. Like you said, I had to refer back to that commit because a similar bug had cropped up somewhere else. And I knew that that commit had information that I had gathered that would make the debugging experience easier.
AMANDA: I guess the commit message is a really good example of having a note that's very closely tied to its context. Like, it's in the context of like a commit, which is a set of changes at a point in time, and it's really well situated in there. What do you think about the trade-offs of having that as part of a commit message versus something like some other sort of documentation where something like that could live?
JOËL: I guess it depends on how you think you're going to use it in the future. Again, for debugging things, it feels like you don't often need to refer back to them, so I don't think you would want to just dump that on a wiki somewhere. It probably makes sense to have that either in just a collection of debugging notes that you have or that you could then dig into if you needed or in a commit message, something like that. But maybe some of the things that you learned along the way could be pulled out and turned into something that lives somewhere else that's maybe less of a note at that point and more of a publication.
AMANDA: That sounds like a fine line between note and publication.
JOËL: Perhaps it's an artificial line that I'm making.
AMANDA: [laughs]
JOËL: But yeah, I guess the idea is that sometimes I will look at my own debugging notes and try to turn them into something like either a wiki page for a particular codebase or potentially even a blog post on the thoughtbot blog, something that I've been able to synthesize out of the notes there. But now you've kind of gone a few steps beyond the underlying raw notes.
AMANDA: I'm interested in your thoughts on that synthesis of notes into how does something go from a commit message into a blog? What does that process look like for you?
JOËL: I have a personal note-taking system that's loosely inspired by a system called Zettelkasten and also another similar system called evergreen notes. The idea is that when you learn things, you capture small atomic notes, so they are an idea in your note-taking application, and then you connect them. You create links between notes. And the idea is that there's a lot of value in making connections between notes that's almost as much part of the knowledge-creating experience as capturing single notes on their own.
And as you capture a bunch of these little, tiny notes over time and they become very interconnected, then you can start seeing, oh, this note from this one experience, this note from this conference talk, and this note from this book all connect together. And they maybe even make connections I hadn't seen, or I hadn't thought of individually in those moments. But now I see that they all kind of come together with a theme. And I might then combine those together to make a blog post or to use as the foundation for a conference talk.
AMANDA: That's really interesting. I like the concept of being able to capture bits of information at the time that they felt relevant without having to have an entire thesis in this note. Or that idea doesn't have to be fully fleshed out; it can become fleshed out later when you connect the dots.
JOËL: That's a really powerful concept. One of the big ideas that I picked up through this was that there are always byproducts of knowledge creation. So if I'm writing a blog post, there are always some things that I cut that didn't make it into the blog post because I'm trying to keep it focused. But those are still things that I learned, things that are valuable, that could be used for something else.
And so anytime I'm writing a blog post, preparing for a conference talk, learning some things in a debugging session by reading a book, there are always some things that I don't use necessarily immediately. But I can capture those little chunks, and eventually, I have enough of them that I can combine them together to make some kind of other work.
AMANDA: I'm really curious about your process of creating those notes. If you're reading a blog post, say, to learn a new topic and you're taking notes on that, how do you go from this concept that you're learning in the blog post to these really focused notes that can be combined in other ways?
JOËL: So the Zettelkasten approach suggests that you have two forms of notes. One it calls literature notes, which are just sort of ideas you jot down as you are reading some work. You're reading a book or a blog post or watching a talk, and then, later on, you go and turn those into those atomic, separate, linked-together notes, what Zettelkasten calls permanent notes. And so what I'll often do is just focus on the work itself and jot down some notes and then convert those later on into these smaller atomic chunks.
AMANDA: That concept of taking a larger theme and then actually spending the energy to distill that into a different kind of artifact that might be helpful later on is really interesting. And I don't do Zettelkasten note-taking, but I've also found that to be useful in other contexts as well.
JOËL: One thing that I sort of hold myself to when I am writing those atomic notes is that I don't write them as bullet points. They're always written in prose and complete sentences. The title is usually a sort of thesis statement, a thing that I think is true or at least posit could be true, and then there's a short paragraph expanding on that idea, which I think helps cement a lot of information in my mind but also helps give me little chunks of things that I can more or less copy-paste into an article and already have almost a rough draft of something I want to say.
Do you find that when you synthesize ideas into notes that you do something similar, or do you stick mostly to bullet points?
AMANDA: I think I might do a mixture of the two. I think procedurally, I use bullet points a lot, but I think those bullet points tend to be full sentences or several sentences together. I've definitely run up against some of the drawbacks of terseness, where they're less helpful when you refer back to it later. But I do like the visual cues that come with things like bullet points, or numbered lists, or even emoji and note-taking to be a visual cue of what I was thinking of or where I can find this later on.
JOËL: I love emoji; emoji is great.
AMANDA: I guess actually I've started using emoji as bullet points. That's something that I've found even to be helpful just with remembering or with grouping things thematically in my mind. And when I'm going back through my notes, I find it easier to find the information that I was looking for because it had a list, or an emoji, or an image, or something like that.
JOËL: Yeah, that makes it really easy to scan and pick out the things that you're looking for. It's almost like adding metadata to your notes.
AMANDA: Totally.
JOËL: That's a great tip. I should do that.
AMANDA: You can definitely run into the Figma problem of you then spend so much time finding the right emoji to be the bullet point that you forgot what you were doing, [laughs] but that's a problem for a different day.
JOËL: So this sort of Zettelkasten evergreen notes approach is a system that I use specifically to help me capture long-term thoughts about software that could eventually turn into content. So this is very much not a debugging note. It's not an exploratory note. It has a very particular purpose, which is why I write it in this particular way.
I'm curious; I know you have a lot of different systems that you use for your notes, Amanda. Is there one that you'd like to share with the audience? Maybe tell us a little bit about what the system is and why it's a good fit for the type of scenario that you'd like to use it in.
AMANDA: Sure. One situation that I found myself in recently is I have started taking classes on things that I'm interested in, development-related and non-development-related. And that's a formal structure that requires some note-taking that I haven't really done since I was in school. And the tools were very different back then as to what we had available to us for note-taking. It was basically Microsoft Word or bust.
So I have found myself having to develop a new note system for that kind of content delivery method, basically of watching a video and taking notes and having something that then makes sense outside of the context of sitting down and watching a video. So that has been a little bit of a process journey that I've been on the last couple of months.
JOËL: And what does your note-taking system look like?
AMANDA: So it's been a mix of things, actually. I started out just kind of brain-dumping as I went along with the instructor talking trying to type and keep up. And I found that very not scannable to look back on. I was looking for some more visual cues, and I didn't really have time to insert those visual cues as I was trying to keep up with a lecture essentially.
I then transitioned back to old school pen and paper, like, got myself a notebook and started writing in it. And obviously, that has some benefits of the free-formness, like, I'm not constrained by the offerings of any specific tool. But the trade-off for that is always that you have different notebooks for everything. And it's like, where's my X class notebook?
And so I've been trying to bring those two methods together into something that makes a little bit more sense for me and also bring in some of that synthesis process that you were talking about with your note-taking method of doing the full literature notes and then synthesizing them down into something a little bit more well-scoped for the particular piece of information.
JOËL: So you have like a two-step process then.
AMANDA: It did end up being a two-step process because one of the things that I found was the grouping of ideas that make sense when you're first learning a concept and the grouping of ideas that make sense when I'm revisiting that concept, later on, aren't necessarily the same. And so, keeping it in the original context doesn't necessarily help me recall the information when I'm referring back to my notes.
JOËL: That's really interesting. When you're writing it, it's going to be different than when you're reading it.
So we've been talking a lot about the purpose of different notes along the way, and you mentioned the word recall here. Do you use these notes mostly as a way to recall things that you would look back at them and try to remember, or are they more of a way to digest the material as you're going through it?
AMANDA: I think at the time that I'm writing them, they definitely served the purpose of helping me digest the information. But at some point, I probably want to be able to look back at them and remember the things that I learned and see if maybe they have new salience now that I have sat on them for a little bit.
JOËL: Hmmm, that's good. So it's valuable for both in different contexts.
AMANDA: Yeah, definitely. And one of the more surprising things that I've learned through that process has been that when I'm learning something, I really like a chronological kind of step-by-step through that process and building blocks of complexity that basically go one on top of the other. But then, once I've kind of made it to the end when I look back on it, I look back on those notes, and they're usually pretty thorough. They probably have a lot of details that aren't going to be top-level priority at the end.
But I didn't really have that concept of priority when I was first learning it. I was kind of grasping onto each bit of information, saying, "I'm going to squirrel this away in case I need it later." And then when you get a better understanding of the full picture, you realize, okay, I'm glad that I know that, but it's not necessarily something that I'd want to look back on. So I really like having systems that then allow me to regroup that information once I have built out a fuller picture of what it is I'm trying to learn.
JOËL: Interesting. So the sort of digesting step that happens afterwards or the synthesis step, a lot of the value that you're adding there is by putting structure on a lot of the information you captured.
AMANDA: Yeah, I think putting structure and changing the structure, and not being afraid to change that structure to fit my new understanding in how I see this concept now instead of just how this concept was explained to me.
JOËL: So you mentioned that you'd initially used notebooks and paper and that that felt a little bit constraining in terms of organization. Is there any kind of software or apps that you like to use to organize your notes, and how do they fit in with your approach to note-taking?
AMANDA: I've been using Notion for the last few years. I found that that application works well with my visual preferences for note-taking. I think there's a lot of opportunity for visual cues that help me recall things, such as emoji and bullet points. And I like that I can do all of that by writing Markdown without then also having to read Markdown.
JOËL: Yeah, I definitely agree that a little visual change there where you can actually see the rendered Markdown is a nice quality-of-life improvement.
AMANDA: Absolutely. And I also think that the way that it turns Markdown into blocks that then you can rearrange has served me really well for that synthesis process of maybe this bullet point makes sense, and I want to keep it as is. But I want to rearrange it into these new themes that I'm seeing as I'm reviewing these things that I've learned.
JOËL: That's fascinating. So it has some really good tools for evolving your notes and reorganizing them, it sounds like.
AMANDA: I like that I can group my notes by concept, and notes can be subsets or sub-notes of other notes. And you can kind of move the individual notes in between those blocks pretty easily, which helps me rearrange things when I see different themes evolving.
JOËL: I've heard really good things about Notion, but I've not tried it myself. My app of choice so far has been Obsidian, which I really appreciate its focus on linking between notes. It doesn't have this concept of blocks where you can embed parts of notes as notes into other notes and things like that. But that has been okay for me because I keep my notes very small and atomic. But the focus on hyperlinking between notes has been really useful for me because, in my approach, it's all about the connections.
AMANDA: So, what does that process look like when you are referring back to all of these notes that are hyperlinked together?
JOËL: That's actually really important because the recall aspect is a big part of how you would use a note-taking system. For me, it's sort of like walking the graph. So I'll use search, or maybe I know a note that's in the general theme of what I care about, and then I'll just follow the links to other related articles or notes that talk about things that are related to it. And I might walk that graph three, four steps out in a few different directions. It's kind of like surfing Wikipedia. You find some entry point, and then you just follow the links until you have the material that you're interested in.
AMANDA: It sounds like creating a Wikipedia wormhole of your own.
JOËL: It kind of is. I guess, in a way, it's kind of like a little mini personal wiki where the articles are very, very condensed because I have that limitation that everything must be atomic.
Wow. So this has been a really fascinating conversation. I feel like one of the big takeaways that I have is that types of notes matter. Note-taking can take very different forms in different contexts. And the way you organize them would be vastly different; how long you care about them is also going to be different.
So going into a particular situation, knowing what sort of situation is this that I'm using notes and what is their purpose is going to be really helpful to think in terms of how I want to do my note-keeping. Whereas I think previously, I probably was just like, yeah, notes. You open a document, and you put in some bullet points.
AMANDA: I am definitely guilty of doing that as well. And I like the idea of having a purpose for your notes. You mentioned your purpose was ultimately to build a map that would produce content. And I really like how you have found a system that works really well for that purpose. And I'm going to keep thinking about how to be more intentional in what is the purpose of the notes that I'm taking in the future.
JOËL: Well, thank you so much for joining the conversation today. Where can people find you on the web?
AMANDA: Thanks so much for having me, Joël. You can find me @amandabeiner on Twitter.
JOËL: And we'll link to that in the show notes. And with that, let's wrap up.
The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Guest and fellow thoughtbotter Stephanie Minn and Joël chat about how the idea of specialized vocabulary came up during a discussion of the Ruby Science book. We have all these names for code smells and refactors. Before knowing these names, we often have a vague sense of the ideas but having a name makes them more real. They also give us ways to talk precisely about what we mean. However, there is a downside since not everyone is familiar with the jargon.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined with fellow thoughtboter Stephanie Minn.
STEPHANIE: Hey, Joël.
JOËL: And together, we're here to share a little bit of what we've learned along the way. Stephanie, what is new in your world?
STEPHANIE: Thanks for asking. I am on a new project I just started a few weeks ago, and I'm feeling the new project vibes, I think, kind of scoping out what's going on with the client with the work that they're doing. Trying to make a good impression. I think lately I've been in that mode of where can I find some work to do even when I'm just getting on boarded and learning the domain, trying to make those README updates in the areas that are a bit outdated, and yeah, just kind of along for the ride.
One thing that has been surprising already is that in my second week, the project pivoted into a different direction than what I was expecting. So that has been kind of exciting and also pretty interesting to see sometimes this stuff happens. I was brought on thinking that we were working on rebuilding the front end in React and TypeScript, pulling out pieces of their 15-year-old Rails monolith. So that was kind of one area that they decided to start with.
But recently, they actually decided to pivot to just revamping the look of the existing pages in the Rails app using the same templates and forms. So it's kind of shifted from more greenfield-esque work to figuring out what the heck's going on in this legacy codebase and making it a little bit more modern-looking and cleaning out the cobwebs, I suppose as we find them.
JOËL: As a consultant, how do you deal with that kind of dramatic shift in expectations?
STEPHANIE: I think it's tough because I necessarily wasn't in those conversations, and so I have to come at it with the understanding that they have a deep knowledge of the business and things that are going on behind the scenes that I don't, and I am coming in kind of with a fresh set of eyes. And it definitely raises some questions for me, right? Like, why now? What were the trade-offs that were made in the decisions?
And I hope that as a consultant, I can poke and prod a little bit to help them with the transition and also figure out its impact on the rest of the team, from a perspective that maybe someone who is more familiar with the situation and more tied to the politics of the org might not have.
JOËL: I have a lot of questions here. But actually, I'm thinking that onboarding as a topic would probably make a good standalone episode. So maybe we'll have to bring you back for a future episode to talk about how to onboard well and how to deal with surprises.
STEPHANIE: Yeah, I think that's a great idea. What about you, Joël? What's going on in your world?
JOËL: I'm doing an integration with a third-party gem, and I am really struggling. And I've gotten to the point where I'm reading through the source of the gem to try to figure out some weird errors, some things that come up that are not documented. I think that's a really valuable skill. Ideally, you're not having to bring it out too often, but when you do, being able to drop into the code can really help unblock you or at least make some amount of progress.
STEPHANIE: Are you having to dig into the gem's code because you weren't able to find what you needed from the documentation?
JOËL: That's correct. I'm getting some cryptic errors where the gem is crashing, and I'm finding some lines in my logs that point back to the gem. And now I'm trying to reconstruct what is happening. Why is it not behaving the way it should be based on the documentation that I read?
STEPHANIE: Oh, that's tough. Getting into gem code is uncharted territory.
JOËL: It's nice when you're working with an open-source gem because the source is there, and you can just follow the stack trace and go through the code. Sometimes it's long and tedious, but it generally gives you results. It definitely is intimidating.
STEPHANIE: Yeah. When you're facing this kind of problem where you have no idea where the light at the end of the tunnel might be, how long do you spend with it? At what point do you take away with what you've learned and try to figure out a different approach?
JOËL: That's a good observation because digging through the source of a gem can definitely be a time sink. I think how much time I want to budget depends on a variety of other factors. How big of a problem is this? If I can't figure it out through reading the source, do I have alternate approaches to debug the problem, such as asking for help, opening an issue, reaching out to somebody else who's used it, complaining about it on The Bike Shed and hoping someone responds with an answer?
There are other options that I can do that might leave me blocked but maybe eventually give me results. The advantage with reading the source is that you're at least feeling like you're making progress.
STEPHANIE: Nice. Well, I wish you luck on that journey. [laughs] It sounds pretty tough. I'm sure that you'll get to one of those solutions and figure out how to get unblocked.
JOËL: I hope so. I'm pursuing a few strategies in tandem. So I've asked for help, but I'm also reading the source code. And between the two of those, I hope I'll get to a good solution.
So, Stephanie, last time you were on the show, you talked about your experience creating talk proposals for RubyConf. Have you heard back from them since then?
STEPHANIE: I have. I will be speaking at RubyConf Mini this year. And I'm really excited because this will be my first IRL conference talk. So last time, I recorded my talk for RubyConf, and this time I will be up on a stage in front of real people.
JOËL: That's really exciting. Congratulations.
STEPHANIE: Thanks.
JOËL: What is the topic of your talk?
STEPHANIE: I will be talking about pair programming and specifically pair programming through the lens of a framework called Nonviolent Communication, which is a framework I learned about through a friend who recommended the canonical book on it. And it's a self-help book, to be totally frank, but I found it so illuminating. It really changed how I communicated in my relationships in my personal life.
And the more time I spent with it, the more I realized that it would be a great application in pair programming because it's so collaborative and intimate. I've experienced the highs and lows of pair programming. You can feel so good when you are super connected with your pair. You make a lot of progress. You meet whatever professional goals that you might be meeting, and you have someone along for the ride the whole time. It can be just so rewarding.
But it can also be really challenging when you are having more of those interpersonal conflicts, and navigating that can be tough. And so I'm really excited to share this style of communication that might help others who want to take their pair programming to the next level and get the most out of that experience no matter who they're pairing with.
JOËL: I'm excited to hear this talk because pair programming has always been an important part of what we do at thoughtbot. And I think now that we're remote, we do a lot of remote pair programming. And the interpersonal interactions are a little bit different there than when you're physically in a room with each other, or you have to maybe pay a little bit more attention to them. I'm really excited to hear that. I think that's going to be really useful, not just for me but for a lot of the audience who are there.
STEPHANIE: Thanks. Joël, you have a talk accepted at RubyConf Mini too.
JOËL: Yes, I also had a talk accepted titled Teaching Ruby to Count. And it's going to be all about series, enumerators, enumerables, and ranges in Ruby and the cool things that you can do with them. So I'm really excited to share about that. I've done some deep dives on these topics, and I'm excited to share that with the broader Ruby community.
STEPHANIE: Nice. I'm really excited to hear more about it.
JOËL: Did you submit more than one proposal this year?
STEPHANIE: This year, I didn't. But I would love to get to a point where I have a lot of content on the backburner and can pull it out when CFP season rolls around to just have some more options. Yeah, I have all these ideas in my head. I think we talked about how we come up with content in our last episode. But yeah, having a content bank sounds really nice for the future, so maybe when that season rolls around, it is a lot easier to get the ball rolling on submitting. What about you? Did you submit more than one?
JOËL: I submitted two, but this is the one I was most excited about. I think the other idea was maybe a little bit more polished, but this one was a newer one I came up with towards the end of the CFP period. And I was like, ooh, I'm excited about this. I've just done a deep dive on enumerators, and I think there are some cool things to share with the community. And so that's what I'm excited to share about, and maybe that came through the proposal because that is what the committee picked. So I'm super happy to be talking about that.
STEPHANIE: Nice. Yeah, I was just thinking the same, that your excitement about it was probably palpable to the committee.
JOËL: For any of our viewers who are interested in coming to watch the talks by Stephanie and myself and plenty of other gifted speakers, this will be at RubyConf Mini in Providence, Rhode Island, from November 15th to 17th. And if you can't make it in person, the videos will be published online early in 2023. And we'll definitely share the links to that when they come out.
So as we mentioned in your last episode, thoughtbot has a book club where we've been discussing the book Ruby Science, which goes through a lot of code smells and talks about some various refactoring patterns that can be used to fix them. Most recently, we looked at a code smell that has a very evocative name; it's called shotgun surgery.
STEPHANIE: Yeah, it's a very visceral name for sure. I think that is what prompted this next topic that we're about to discuss because someone in the book club, another thoughtboter, mentioned that they were learning this term for the first time. But it made a lot of sense to them because they had experienced shotgun surgery and didn't have the term for it previously. Joël, do you mind giving the listeners a recap of what shotgun surgery is?
JOËL: So shotgun surgery is when, in order to make a single change to a codebase, you have to make a bunch of little edits in a lot of different files and locations. It means all of these little pieces are weirdly coupled to each other, and a common symptom is a very high-churn diff for what should be a small change.
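(As a rough illustration of the smell, not an example from the book club: in the hypothetical Rails snippets below, adding one new order status forces edits in three unrelated files because knowledge of the statuses is scattered. All class, file, and method names are made up.)

```ruby
# app/models/order.rb
class Order < ApplicationRecord
  STATUSES = %w[pending shipped refunded].freeze # edit 1: add "refunded"
end

# app/helpers/orders_helper.rb
module OrdersHelper
  def status_badge_class(order)
    case order.status
    when "pending" then "badge-yellow"
    when "shipped" then "badge-green"
    when "refunded" then "badge-red" # edit 2: another case branch to remember
    end
  end
end

# app/services/order_notifier.rb
class OrderNotifier
  def notify(order)
    # edit 3: yet another place that lists the statuses
    return unless %w[shipped refunded].include?(order.status)

    Rails.logger.info("Order #{order.id} is now #{order.status}")
  end
end
```

A typical refactor for this smell is to centralize that knowledge in one place, say a single status object that the other files delegate to, so the next new status is a one-file change.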
STEPHANIE: Nice. Thanks for explaining.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPHANIE: I think I came away from that conversation thinking about the idea of learning new terms, especially technical ones, and the power that learning those terms can give you as a developer, especially when you're communicating with other people on your team.
JOËL: So you mentioned the value in communication there. Some terms have a very precise meaning, and so that allows you to communicate a very specific idea. How do you balance having some jargon and some terminology that allows you to speak very precisely versus having to learn all the terms? Because the more narrow the term is, the more terms you need to talk about all the different things.
STEPHANIE: That's a great question. I don't know if I have a great answer because I think I'm still on my journey. I have always noticed when developers I work with have that really precise, articulate technical vocabulary, probably because I don't. I am constantly referring to functions or classes as things, like, that thingy over there talks to this thing over here, and then does something. [laughs]
And I think it's because I maybe didn't always have that exposure to very precise technical vocabulary. And so I had an understanding of how things worked in my head, but I couldn't necessarily map that to words. And I'm also from California, so, I don't know, maybe some of that is showing through a little bit. [laughs]
But I've been trying to incorporate more technical terms when I speak and also in written form, too, such as in code review, because I want to be able to communicate more clearly my intentions and leave less room for ambiguity. Sometimes I've noticed when you do speak more casually about code, turns out other people can interpret it in different ways. And if you are using, like you said, that narrower technical term for it, that leaves less room for misunderstanding.
But in the same vein, I think a lot of people are like me, where they might not know the technical terms for things. And when you start using a lot of jargon like that, it can be a bit exclusive to folks earlier in their career, especially since, as an industry, software has folks from all different backgrounds. We don't necessarily have the expectation of shared formal training. We want to be inclusive of people who came to this career from different places and make sure that we are speaking the same language. And so it's been top of mind for me thinking about how we can balance those two things. I don't know, what do you think?
JOËL: I want to speak to some of the value of precision first because I think there are a few different points. We have the value of precision, then we have the difficulty of learning vocabulary, and how are we inclusive of everyone. But on the topic of precision, I don't know if you saw not too long ago, another fellow thoughtboter, Matheus Sales, published an article on the thoughtbot blog about the concept of connascence. And he introduces this as a new set of vocabulary, not vocabulary that he's created but a vocabulary that is out there that not that many developers are aware of for different ways to talk about coupling.
So it's easy in a pull request to just say, "Oh, well, that thing looks coupled. How about this other way?" And then I respond, "Well, that's also coupled in a different way." And then we just go back and forth like, "Well, mine is more coupled than yours is," or whatever. And connascence provides a more precise, narrow vocabulary to talk about the different ways that things are coupled and which ones are more coupled than others. And so it allows us to break out of maybe those unproductive discussions because now we can talk about things in a more granular way.
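(For listeners new to the vocabulary, here is a small hedged sketch, not taken from Matheus' article, of two of the forms: connascence of position, where call sites depend on argument order, versus the weaker connascence of name, where they only depend on agreed-upon names.)

```ruby
# Connascence of position: every caller must know that width comes first
# and height second; reordering the parameters silently breaks call sites.
def area(width, height)
  width * height
end
area(3, 4) # => 12

# Connascence of name: callers only need to agree on the parameter names,
# which is a weaker (and usually preferable) form of coupling.
def area_with_keywords(width:, height:)
  width * height
end
area_with_keywords(height: 4, width: 3) # => 12
```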
STEPHANIE: Yeah, I loved that blog post. It was really exciting for me to pick up a new term to describe something that I had experienced, or seen in codebases, or felt the pain of, and be able to describe it more accurately. I'm curious, Joël, if you were to use that term next time, how would you make sure that folks also have the same level of familiarity with it?
JOËL: I think on a pull request, I would link to Matheus' article depending on...I might give a little bit of context in a comment. So I might say something like, "This area here is coupled. Here's a suggested refactor. It's also coupled but in a different way. It's because we've moved up this hierarchy of connascence from, you know, connascence of names to some other form" (I don't have them all memorized.) and then link to the article. And hopefully, that becomes the start of a productive discussion.
But yeah, having the resources you can link to people is great. And that's one of the nice things about text communication on a pull request is that you can just link to external resources that people can find helpful.
STEPHANIE: To continue talking about the value of precision and specialized vocabulary, Joël, I think you are a very articulate communicator. And I'm curious from your perspective if you have always been this way, if you've always wanted to collect technical terms to describe exactly what you want to convey, or if this was a bit of a journey for you to get to this level of clear communication in your technical speaking and writing.
JOËL: It's definitely been a journey. I think there are sort of two components to this; one is being able to communicate clearly to others, making sure that they understand what you're talking about. So for that, it's really important to be able to put yourself in somebody else's shoes.
So when I'm building a conference talk or writing up a blog post, I will try to read it or go through my slide deck and try to pretend that I am the audience. And then I ask myself the questions: where do I get confused? Where am I going to have questions? Maybe even where am I going to roll my eyes a little bit and be like, eh, I didn't agree with that leap of logic there; where are you going? And then shift back in author mode and say, how can I address these? How can I make my content speak to you in an area where maybe you disagreed, or you were confused?
So I kind of jump between moving from the audience seat to back to the author and try to make that material as much as possible resonate with those people.
STEPHANIE: Do you do that in more real-time communication, such as in meetings or in pairing?
JOËL: I think that's a little bit harder to do. And then it's maybe a little bit more of asking directly, either pausing to let people interject, or you can ask the question directly and say, "Are you familiar with this term?" That can also sometimes be tricky to manage because you don't want to make it sound like you think they don't know anything.
But you can also make it sound really natural in a conversation where you're like, "Oh, we're going to do this thing with a strategy pattern. Have you seen a strategy pattern before? Are you familiar with this? Great, let's keep moving." And if not, maybe it's like, "Hey, let's take a few minutes to talk about what the strategy pattern means."
STEPHANIE: I think you are really great at asking the audience about their level of familiarity with the content, especially in book club. I have definitely experienced just as a developer pairing, or in meetings, or whatnot times when people don't pause and ask. And usually, I have to muster up the courage to interrupt and ask, "Hey, what is X, Y, and Z?" And that is tough sometimes.
I am certainly comfortable with it in a space where there is trust developed, where I don't feel worried that people might question my level of familiarity or experience. And I can very enthusiastically say, "Hey, I don't know what this means. Could you please explain it?" But sometimes it can be a little tough when you might not have that relationship with someone, or you haven't talked upfront about assumptions around your knowledge or experience level.
And so I have found that a really good way to build that trust and make sure we aren't excluding folks is to just talk about some of that stuff, even before we start pairing or before a meeting. That can really help prevent some of those miscommunications that might come up later in the process.
JOËL: It's interesting that you bring up miscommunication because I think sometimes, even though certain jargon can be very precise, sometimes people will not use it to mean exactly what its dictionary definition is. And so sometimes two people are using the same term, and you're not meaning quite the same thing.
And so sometimes I'll be pairing with someone, and I'll have to sort of pause and say, "Hey, wait a minute, you're using the term adapter in a certain way that seems to be a little bit different than the way I'm using it. Can you maybe tell me what your personal definition is? And I'll tell you mine, and we can reconcile those two together."
Sometimes that can also feel like a situation where maybe I'm hazy on the topic. Like, I have a vague sense of it, and maybe it does or does not align with the way the other person is using it. And so that's an opportunity for me to ask them to define the term for me without completely having to say, "I have no idea what this term is. Please, oh, great sage, explain the meaning."
STEPHANIE: Are there times that you feel more or less comfortable doing that kind of reset?
JOËL: I think sometimes the fear is in breaking flow. And so you're doing a thing, and then somebody is trying to explain something, and you don't want to break out of that. Or you're trying to explain something, and you have to decide, is it worth making sure to explain a term, or do you keep moving? So I think that is a big concern.
And there is just the interpersonal concern: if there is less trust, do I want to put myself out there? Might somebody else not feel comfortable being asked to explain a term? Maybe they're using it wrong. It's not always good in a pairing situation to just come out and say, "Hey, that's not technically the adapter pattern; you're wrong. Let me pull out the Gang of Four book. You see, on page 54..." That's not productive.
STEPHANIE: Yeah, for sure.
JOËL: So a lot of it, I think...and maybe this ties into your topic of communication while pairing. But ideally, you're working constructively with a person. And so debating definitions is not generally productive but asking someone, "What do you mean when you say this?" I find is a very helpful way to lead into that type of conversation.
STEPHANIE: Yeah, that's a great strategy because you're coming from a place of curiosity rather than a place of this is my definition, and it's the right definition, and so, therefore, you are wrong. [laughs]
JOËL: It's interesting the place that jargon occupies in our imagination of expertise. If you've ever seen any movie where they're trying to show that somebody is technically competent, they usually demonstrate it by having the person spout out a long chain of jargon, and that makes them sound smart. But I think, to a certain extent, maybe we believe it in the industry as well. If somebody can use a lot of terms and talk about a system using this very specific jargon, we tend to think that they're smart or at least look up to them a little bit.
STEPHANIE: Yeah, which I think isn't always the best assumption because I've certainly worked with folks who did throw out a lot of jargon but weren't necessarily, like you were saying, using it the way that I understood it, and that made communicating with them challenging.
I also think what true expertise really is is having the knowledge that when you use a jargony term that not everyone might be familiar with it, having the awareness to pause and ask someone how they're doing with the vocabulary and be able to tailor how you explain that term to that other person. I think that demonstrates a really deep level of understanding that doesn't get enough credit.
JOËL: I 100% agree. Jargon, vocabulary, it's a means to an end, not an end in and of itself. So the goal is to communicate clearly to others and maybe to help yourself in your own learning. And if you're not accomplishing those goals, then what's the point? I guess maybe there is another personal goal, which is to sound smart, but that's not really a good goal, [laughs] especially not when the way you do it is by confusing everybody else in the room because they don't understand you, just to make yourself feel smarter than them. Like, that's bad communication.
STEPHANIE: Yeah, for sure. I've definitely experienced listening to someone explain something and have to really think very hard about every single word that they're saying because they were using terms that are just less common. And so, in my brain, I had to map them to things that made sense to me, and things that I was familiar with that were the same concepts.
Like, I was experienced enough to have that shared understanding, but just the words that they used required another layer of brain work. Maybe we could have found a happy medium between them communicating the way that they expressed themselves the best with my ability to understand easily and quickly so that we could get on the same page.
JOËL: So you mentioned that there are sometimes situations where you're aware of a particular concept, but maybe you're just not aware that the term that somebody else is using maps to this concept you already understand. And I know that for me, oftentimes, being able to give a name to something that I understand is an incredibly powerful thing.
Even though I already know the idea of passing objects to another object in this particular configuration, or of wrapping things in some way or whatever the thing that I'm trying to do, all of a sudden, instead of it being a more nebulous concept in my head or a list of 10 steps or something like that, now I have one thing I can just point to and say it is this.
So that's been really helpful for me in my learning to be able to take a label and put it on something that I already know. And somehow, it cements the idea in my head and also then allows me to build on it to the next things that I want to learn.
STEPHANIE: Yeah, absolutely. It's really exciting when you're able to have that light-bulb moment when you have that precise term, or you learn that precise term for something that you have been wrestling with or experiencing for a while now.
I was just reminded of reading documentation. I have a very vivid memory of the first time I read, I don't know, even the official Rails docs, and all of these terms that I didn't understand at the time. But then once I started digging in, exploring, and just doing the work, when I revisited those docs, I could understand them a lot more comprehensively because I had experience with the things (There I am using things again.) [laughs], and seeing the terms for them helped solidify my understanding.
JOËL: I'm curious, in your personal learning, do you find it easier to encounter a term first and then learn what it means, or do the reverse, learn the concept first and then cap it off by being able to give it a name?
STEPHANIE: That's a good question. I think the latter because I've certainly spent a lot of time Googling terms and then reading whatever first search results came up and being like, okay, I think I got it, and then Googling the same term like two weeks later because I didn't really get it the first time. But whenever I come across a term for a concept I already am familiar with, it is like, oh yes, uh-huh! That really ends up sticking with me.
Matheus Sales' blog post that you mentioned earlier is a really great example of that term really standing out to me because I didn't know it at the time, but I suppose was seeking out something to describe the concept of connascence. So that was really cool and really memorable. What about you? Do you have a preferred way of learning new technical terms?
JOËL: I think there can be value to both approaches. But I'm with you; I think it generally is easier to add a name to a concept you already understand. And I experienced this pretty dramatically when I tried to get into functional programming.
So several years ago, I tried to learn the language Haskell which is notorious for being difficult to learn and very abstract and technical. And the way that the Haskell community typically tries to teach things is learn the fundamentals first, very top-down, learn the theory, and then, later on, you can do things in practice. So it's like before you can write an actual program, let us teach you about applicatives, and monads, and all these things that are really difficult to learn. And they're kind of scary technical terms.
So I choked out partway through, gave up on Haskell. A year later, got back into it, tried it again, choked out again. And then, eventually, I pivoted. I started getting into a similar language called Elm, which is similar syntax but compiles to JavaScript for the front end. And that community has the opposite philosophy when it comes to teaching. They want to get you productive as soon as possible. And you can learn some of the theory as you go along. And so with that, I felt like I was learning something new all the time and being productive as well, like, constantly adding new features to things in an application, and that's really exciting.
And what's really beautiful there is that you eventually learn a lot of the same concepts that you would learn in something like Haskell because the two languages share a lot of similar concepts. But instead of saying first, you need to learn about monads as a general concept, and then you can build a program; Elm says, build a bunch of programs first. We'll show you the basic syntax. And after you've built a bunch of them, you'll start realizing, wait a minute, these things all kind of look alike. There are patterns I'm starting to recognize.
And then you can just point to that and say, hey, that pattern that you started recognizing, and you see a bunch of times that's monad. You've known it all along, and now you can put a label on it. And you've gotten there. And so that's the way that I learned those concepts. And that was much easier for me than the approach of trying to learn the abstract concept first.
STEPHANIE: Monad is literally the word I just Googled earlier this week and still have a very, very hazy understanding of. So maybe I'll have to go learn Elm now. [chuckles]
JOËL: I recommend a lot of people to use that as their entry point into the statically typed functional programming world, just because of how much more shallow the learning curve is compared to alternatives. And I think a lot of it has to do with that approach of saying, let's get you productive quickly. Let's get you doing things. And eventually, patterns will emerge, and you can put names on them later. But we'll not make you learn all of the theory upfront, all the jargon.
STEPHANIE: Now that you do understand all the technical jargon around functional programming, how do you approach communicating about it when you do talk about Elm or those kinds of concepts?
JOËL: A lot of it depends on your audience. If you have an audience that already knows these concepts, then being able to use those names is really valuable because it's a shortcut. You can just say, oh yeah, this thing is a monad, and so, therefore, we can do these actions with it. And everybody in the audience just already knows monads have these properties. That's wonderful. Now I can move on to step two instead of having to do a slow build-up.
So if I'm writing an article or giving a talk, or even just having a conversation with someone, if I knew they didn't know the term, I would have to really build up to it, and maybe I wouldn't introduce the term at all. I would just talk about some of the properties that are interesting for the purpose of this particular demo.
But I would probably have to work up to it and say, "See, we have this simpler thing, and then this more complex thing. But here are the problems that we have with it. Here's a change we can make to our code that will make it work." And you walk through the process without necessarily getting into all of the theory. But with somebody else who did know, I could just say, "Oh, what we need here is monad." And they look at me, and they're like, "Oh, of course," and then we do it.
STEPHANIE: What you just described reminds me a lot of the WIRED Video Series, five levels of teaching where they have an expert come in and teach the same concept to different-aged people starting from young kids to an expert in their field as well. And I really liked how you answered that question just with the awareness that you tailor how you explain something to your audience because we could all benefit from just having that intentionality when we communicate in order to get the most value out of our interactions and knowledge sharing, and collaborative working.
JOËL: I think a theme that underlies a lot of what you and I have talked about today is just that communication, good communication is the fundamental value that we're going for here. And jargon and vocabulary can be something that empowers that but used poorly; it can also defeat that purpose. And most importantly, good communication starts with the audience, not with you. So when you work back from the audience, you can use the appropriate vocabulary and words that serve everybody and your ultimate goal of communicating.
STEPHANIE: I love that.
JOËL: So, Stephanie, thank you so much for joining us on The Bike Shed today. And as we wrap up, I wanted to ask you, what is a really fun piece of vocabulary that you’ve learned that you might want to share with the audience?
STEPHANIE: So lately, I learned the term WYSIWYG, which stands for What You See Is What You Get, to describe text editing software that lets you see and edit the content as it would actually be displayed. So that was a fun, little term that someone brought up when we were pairing and looking at some text editing code. And I was really excited because it sounds fun, and also, now I had just an opportunity to say it on a podcast. [laughs]
JOËL: It's amazing that an acronym that is that long has enough vowels in the right places that you can just pronounce it.
STEPHANIE: Oh yeah.
JOËL: WYSIWYG. That's a fun word to say.
STEPHANIE: 100%. I also try to pronounce all acronyms, regardless of how pronounceable they actually are. [laughs] Thanks for asking.
JOËL: With that, shall we wrap up?
STEPHANIE: Let's wrap up.
JOËL: The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed, or reach me at @joelquen on Twitter, or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Guest Geoff Harcourt, CTO of CommonLit, joins Joël to talk about a thing that comes up a lot with clients: the performance of their test suite. It's often a concern because, until it becomes a problem, people tend not to treat their test suite very well, and then they ask for help making it faster. Geoff shares how he handles this scenario at CommonLit.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Geoff Harcourt, who is the CTO of CommonLit.
GEOFF: Hi, Joël.
JOËL: And together, we're here to share a little bit of what we've learned along the way. Geoff, can you briefly tell us what is CommonLit? What do you do?
GEOFF: CommonLit is a 501(c)(3) non-profit that delivers a literacy curriculum in English and Spanish to millions of students around the world. Most of our tools are free. So we take a lot of pride in delivering great tools to teachers and students who need them the most.
JOËL: And what does your role as CTO look like there?
GEOFF: So we have a small engineering team. There are nine of us, and we run a Rails monolith. I'd say a fair amount of the time, I'm hands down in the code. But I also do the things that an engineering head has to do, so working with vendors, and figuring out infrastructure, and hiring, and things like that.
JOËL: So that's quite a variety of things that you have to do. What is new in your world? What's something that you've encountered recently that's been fun or interesting?
GEOFF: It's the start of the school year in America, so traffic has gone from a very tiny amount over the summer to almost the highest load that we'll encounter all year. So we're at a new hosting provider this fall. So we're watching our infrastructure and keeping an eye on it.
The analogy that we've been using to describe this is like when you set up a bunch of plumbing, it looks like it all works, but until you really pump water through it, you don't see if there are any leaks. So things are in good shape right now, but it's a very exciting time of year for us.
JOËL: Have you ever done some actual plumbing yourself?
GEOFF: I am very, very bad at home repair. But I have fixed a toilet or two. I've installed a water filter but nothing else. What about you?
JOËL: I've done a little bit of it when I was younger with my dad. Like, I actually welded copper pipes and that kind of thing.
GEOFF: Oh, that's amazing. That's cool. Nice.
JOËL: So I've definitely felt that thing where you turn the water source back on, and it's like, huh, let's see, is this joint going to leak, or are we good?
GEOFF: Yeah, they don't have CI for plumbing, right?
JOËL: [laughs] You know, test it in production, right?
GEOFF: Yeah. [laughs] So we're really watching right now traffic starting to rise as students and teachers are coming back. And we're also figuring out all kinds of things that we want to do to do better monitoring of our application, so some of this is watching metrics to see if things happen. But some of this is also doing some simulated user activity after we do deploys. So we're using some automated browsers with Cypress to log into our application and do some user flows, and then report back on the results.
JOËL: So is this kind of like a feature test in CI, except that you're running it in production?
GEOFF: Yeah. Smoke test is the word that we've settled on for it, but we run it against our production server every time we deploy. And it's a small suite. It's nowhere as big as our big Capybara suite that we run in CI, but we're trying to get feedback in less than six minutes. That's sort of the goal.
In addition to running tests, we also take screenshots with a tool called Percy, and that's a visual regression testing tool. So we get to see the screenshots, and if they differ by more than one pixel, we get a ping that lets us know that maybe our CSS has moved around or something like that.
JOËL: Has that caught some visual bugs for you?
GEOFF: Definitely. The state of CSS at CommonLit was very messy when I arrived, and it's gotten better, but it still definitely needs some love. There are some false positives, but it's been really, really nice to be able to see visual changes on our production pages and then be able to approve them or know that there's something we have to go back and fix.
JOËL: I'm curious, for this smoke test suite, how long does it take to run?
GEOFF: We run it in parallel. It runs on Buildkite, which is the same tool that we use to orchestrate our CI, and the longest test takes about five minutes. It creates a teacher account and signs in as the teacher. It creates a class and invites a student to that class. It then logs out, creates the student account, signs in as the student, and joins the class.
It then assigns a lesson to the student then the student goes and takes the lesson. And then, when the student submits the lesson, then the test is over. And that confirms all of the most critical flows that we would want someone to drop what they were doing if it's broken, you know, account creation, class creation, lesson creation, and students taking a lesson.
JOËL: So you're compressing the first few weeks of school into five minutes.
GEOFF: Yes. And I pity the school that has thousands of fake teachers, all named Aaron McCarronson at the school.
JOËL: [laughs]
GEOFF: But we go through and delete that data every once in a while. But we have a marketer who just started at CommonLit maybe a few weeks ago, and she thought that someone was spamming our signup form because she said, "I see hundreds of teachers named Aaron McCarronson in our user list."
JOËL: You had to admit that you were the spammer?
GEOFF: Yes, I did. [laughs] We now have some controls to filter those people out of reports. But it's always funny when you look at the list, and you see all these fake people there.
JOËL: Do you have any rate limiting on your site?
GEOFF: Yeah, we do quite a bit of it, actually. Some of it we do through Cloudflare. We have tools that limit certain flows, like people trying credential stuffing against our user sign-in forms. But we also do some further stuff to prevent people from hitting key endpoints. We use Rack::Attack, which is a really nice framework. Have you had to set that stuff up with clients in your client work?
JOËL: I've used Rack::Attack before.
GEOFF: Yeah, it's got a reasonably nice interface that you can work with. And I always worry about accidentally setting those things up to be too sensitive, and then you get lots of stuff back. One issue that we sometimes find is that lots of kids at the same school are sharing an IP address. So that's not the thing that we want to use for rate limiting. We want to use some other criteria for rate limiting.
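(A minimal sketch of a Rack::Attack throttle along the lines Geoff describes, keying on something other than the IP address; the path, limits, and parameter names are illustrative assumptions, not CommonLit's configuration.)

```ruby
# config/initializers/rack_attack.rb
class Rack::Attack
  # Throttle sign-in attempts per email rather than per IP, since many
  # students at one school may share a single IP address.
  throttle("logins/email", limit: 5, period: 60) do |req|
    if req.path == "/session" && req.post?
      # Returning a value counts the request against that key;
      # returning nil leaves the request alone.
      req.params.dig("user", "email").to_s.downcase.strip.presence
    end
  end
end
```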
JOËL: Right, right. Do you ever find that you rate limit your smoke tests? Or have you had to bypass the rate limiting in the smoke tests?
GEOFF: Our smoke tests bypass our rate limiting and our bot detection. So they've got some fingerprints they use to bypass that.
JOËL: That must have been an interesting day at the office.
GEOFF: Yes. [laughter] With all of these things, I think the big challenge, and it's similar when you're writing tests during development, is figuring out how to make tests that are high signal. If a test is failing really frequently, even if it's testing something worthwhile, people start ignoring it, and then it stops having value as a piece of signal. So we've invested a ton of time in making our test suite as reliable as possible, but you sometimes do have these things that just require a change.
I've become a really big fan of...there's a Ruby driver for Capybara called Cuprite, and it doesn't control chrome with Chrome Driver or with Selenium. It controls it with the Chrome DevTools protocol, so it's like a direct connection into the browser. And we find that it's very, very fast and very, very reliable. So we saw that our Capybara specs got significantly more reliable when we started using this as our driver.
JOËL: Is this because it's not actually moving the mouse around and clicking but instead issuing commands in the background?
GEOFF: Yeah. My understanding of this is a little bit hazy. But I think that Selenium and ChromeDriver are communicating over a network pipe, and sometimes that network pipe is a little bit lossy. And so it results in asynchronous commands where maybe you don't get the feedback back after something happens. And CDP is what Chrome's team built and, I think, what Puppeteer uses to control things directly. So it's great.
And you can even do things with it. Like, you can simulate a different time zone for a user almost natively. You can speed up or slow down the traveling of time and the direction of time in the browser and all kinds of things like that. You can flip it into mobile mode so that the device reports that it's a touch browser, even though it's not. We have a set of mobile specs where we flip it with CDP into mobile mode, and that's been really good too.
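(For anyone curious about trying Cuprite, a hedged sketch of registering it as the Capybara JavaScript driver; the options shown are common ones from Cuprite's documentation, not necessarily CommonLit's settings.)

```ruby
# spec/support/capybara.rb
require "capybara/cuprite"

Capybara.register_driver(:cuprite) do |app|
  Capybara::Cuprite::Driver.new(
    app,
    window_size: [1200, 800],
    headless: true # flip to false locally to watch the browser drive itself
  )
end

Capybara.javascript_driver = :cuprite
```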
Do you find when you're doing client work that you have a demand to build mobile-specific specs for system tests?
JOËL: Generally not, no.
GEOFF: You've managed to escape it.
JOËL: For something that's specific to mobile, maybe one or two tests that have a weird interaction that we know is different on mobile. But in general, we're not doing the whole suite under mobile and the whole suite under desktop.
GEOFF: When you hand off a project...it's been a while since you and I have worked together.
JOËL: For those who don't know, Geoff used to be with us at thoughtbot. We were colleagues.
GEOFF: Yeah, for a while. I remember my very first thoughtbot Summer Summit; you gave a really cool lightning talk about Eleanor of Aquitaine.
JOËL: [laughs]
GEOFF: That was great. So when you're handing a project off to a client as your engagement is ending, do you find that there's a transition period where you're educating them about the norms of the test suite before you leave it in their hands?
JOËL: It depends a lot on the client. With many clients, we're working alongside an existing dev team. And so it's not so much one big handoff at the end as it is just building that in the day-to-day, making sure that we are integrating with the team from the outset of the engagement.
So one thing that does come up a lot with clients is the performance of their test suite. That's often a concern because, until it becomes a problem, people tend not to treat the test suite very well. And by the time you're bringing on an external consultant to help, that's generally one of the areas of the code that's been a little bit neglected. And so people ask for help making their test suite faster. Is that something that you've had to deal with at CommonLit as well?
GEOFF: Yeah, that's a great question. We have struggled a lot with the speed that our test suite...the time it takes for our test suite to run. We've done a few things to improve it. The first is that we have quite a bit of caching that we do in our CI suite around dependencies. So gems get cached separately from NPM packages and browser assets. So all three of those things are independently cached.
And then, we run our suites in parallel. Our Jest specs get split up into eight containers. Our Ruby non-system tests...I'd like to say unit tests, but we all know that some of those are actually integration tests.
JOËL: [laughs]
GEOFF: But those tests run in 15 containers, and they start the moment gems are built. So they don't wait for NPM packages. They don't wait for assets. They immediately start going. And then our system specs kick off and start running as soon as the assets are built. And we actually run those in 40 parallel containers so we can get everything finished.
So our CI suite can finish...if there are no dependency bumps and no asset bumps, our spec suite can finish in just under five minutes. But if you add up all of that time cumulatively, it's something like 75 minutes of total execution. Have you tried FactoryDoctor before for speeding up test suites?
JOËL: This is the gem from Evil Martians?
GEOFF: Yeah, it's part of TestProf, which is their really, really unbelievable toolkit for improving specs, and they have a whole bunch of things. But one of them will tell you how many invocations of FactoryBot factories each factory got. So you can see a user factory was fired 13,000 times in the test suite. It can even do some tagging where it can go in and add metadata to your specs to show which ones might be candidates for optimization.
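(For reference, FactoryProf ships as part of the test-prof gem and is usually switched on with an environment variable; a quick sketch, assuming RSpec.)

```ruby
# Gemfile
group :test do
  gem "test-prof"
end

# Then run the suite with profiling enabled to get per-factory counts:
#   FPROF=1 bundle exec rspec
#
# The report lists how many times each factory ran and how much time it took,
# which points at the candidates worth optimizing.
```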
JOËL: I gave a talk at RailsConf this year titled Your Tests Are Making Too Many Database Calls.
GEOFF: Nice.
JOËL: And one of the things I talked about was creating a lot more data via factories than you think that you are. And I should give a shout-out to FactoryProf for finding those.
GEOFF: Yeah, it's kind of a silent killer with the test suite; you really don't think that you're doing a whole lot with it, and then you see how many associations get created. How do you fight that tension between creating enough data that things are realistic versus streamlining things so you're not creating extraneous records or introducing mystery guests via associations?
JOËL: I try to have my base factories be as minimal as possible. So if there's a line in there that I can remove, and the factory or the model still saves, then it should be removed. Some associations, you can't do that if there's a foreign key constraint, and so then I'll leave it in. But I am a very hardcore minimalist, at least with the base factory.
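(A hedged sketch of that minimalist-base-factory approach with FactoryBot; the models and attributes are hypothetical stand-ins, not CommonLit's schema.)

```ruby
FactoryBot.define do
  factory :teacher do
    sequence(:email) { |n| "teacher#{n}@example.com" }
  end

  factory :roster do
    name { "Period 1" }
    teacher # kept only because the foreign key is NOT NULL
  end

  # Base factory stays as small as it can while still saving.
  # Anything beyond the minimum (richer data, extra associations)
  # belongs in opt-in traits rather than the base factory.
  factory :student do
    name { "Test Student" }
    roster
  end
end
```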
GEOFF: I think that makes a lot of sense. We use foreign keys all over the place because we're always worried about somehow inserting student data that we can't recover with a bug. So we'd rather blow up than think we recorded it. And as a result, sometimes setting up specs for things like a student answering a multiple choice question on a quiz ends up being this sort of if-you-give-a-mouse-a-cookie thing where you need the answer options. You need the question. You need the quiz. You need the activity. You need the roster, the students to be in the roster. There has to be a teacher for the roster. It just balloons out because everything has a foreign key.
JOËL: The database requires it, but the test doesn't really care. It's just like, give me a student and make it valid.
GEOFF: Yes, yeah. And I find that that challenge is really hard. And sometimes, you don't see how hard it is to enforce things like database integrity until you have a lot of concurrency going on in your application. It was a very rude surprise to me to find out that browser requests if you have multiple servers going on might not necessarily be served in the order that they were made.
JOËL: [laughs] So you're talking about a scenario where you're running multiple instances of your app. You make two requests from, say, two browser tabs, and somehow they get served from two different instances?
GEOFF: Or not even two browser tabs. Imagine you have a situation where you're auto-saving.
JOËL: Oooh, background requests.
GEOFF: Yeah. So one of the coolest features we have at CommonLit is that students can annotate and highlight a text. And then, the teachers can see the annotations and highlights they've made, and it's actually part of their assignment often to highlight key evidence in a passage. And those things all fire in the background asynchronously so that it doesn't block the student from doing more stuff.
But it also means that potentially if they make two changes to a highlight really quickly that they might arrive out of order. So we've had to do some things to make sure that we're receiving in the right order and that we're not blowing away data that was supposed to be there.
Just think about a Heroku environment, for example, which is where we used to be: you'd have four dynos running. If dyno one takes longer to serve its request than dyno two does, request one may finish after request two. It was a very, very rude surprise to learn that the world was not as clean and neat as I thought.
JOËL: I've had to do something similar where I'm making a bunch of background requests to a server. And even with a single dyno, it is possible for your request to come back out of order just because of how TCP works. So if it's waiting for a packet and you have two of these requests that went out not too long before each other, there's no guarantee that all the packets for request one come back before all the packets from request two.
GEOFF: Yeah, what are the strategies for on the client side for dealing with that kind of out-of-order response?
JOËL: Find some way to effectively version the requests that you make. Timestamp is an easy one. Whenever a request comes in, you take the response from the latest timestamp, and that wins out.
GEOFF: Yeah, we've started doing some unique IDs. And part of the unique ID is the browser's timestamp. We figure that no one would try to hack themselves and intentionally screw up their own data by submitting out of order.
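(A hedged server-side sketch of the "latest version wins" guard being described, assuming the client sends a client_updated_at value with each autosave; the controller and column names are hypothetical, not CommonLit's code.)

```ruby
class HighlightsController < ApplicationController
  def update
    highlight = current_user.highlights.find(params[:id])
    incoming_version = Time.zone.parse(params.fetch(:client_updated_at))

    # Only apply the change if it is newer than the last version we stored,
    # so requests that arrive out of order can't clobber newer data.
    if highlight.client_updated_at.nil? || incoming_version > highlight.client_updated_at
      highlight.update!(body: params[:body], client_updated_at: incoming_version)
    end

    head :ok
  end
end
```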
JOËL: Right, right.
GEOFF: It's funny how you have to pick something to trust. [laughs]
JOËL: I'd imagine, in this case, if somebody did mess around with it, they would really only just be screwing up their own UI. It's not like that's going to then potentially crash the server because of something, and then you've got a potential vector for a denial of service.
GEOFF: Yeah, yeah, that's always what we're worried about, and we have to figure out how to trust these sorts of requests as what's a valid thing and what is, as you're saying, is just the user hurting themselves as opposed to hurting someone else's stuff?
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
GEOFF: You were talking about test suites. What are some things that you have found are consistently problems in real-world apps, but they're really, really hard to test in a test suite?
JOËL: Difficult to test or difficult to optimize for performance?
GEOFF: Maybe difficult to test.
JOËL: Third-party integrations. Anything that's over the network that's going to be difficult. Complex interactions that involve some heavy frontend but then also need a lot of backend processing potentially with asynchronous workers or something like that, there are a lot of techniques that we can use to make all those play together, but that means there's a lot of complexity in that test.
GEOFF: Yeah, definitely. I've taken a deep interest in what I'm sure there's a better technical term for this, but what I call network hostile environments or bandwidth hostile environments. And we see this a lot with kids. Especially during the pandemic, kids would often be trying to do their assignments from home. And maybe there are five kids in the house, and they're all trying to do their homework at the same time. And they're all sharing a home internet connection.
Maybe they're in the basement because they're trying to get some peace and quiet so they can do their assignment or something like that. And maybe they're not strongly connected. And the challenge of dealing with intermittent connectivity is such an interesting problem, very frustrating but very interesting to deal with.
JOËL: Have you explored at all the concept of Formal Methods to model or verify situations like that?
GEOFF: No, but I'm intrigued. Tell me more.
JOËL: I've not tried it myself. But I've read some articles on the topic. Hillel Wayne is a good person to follow for this.
GEOFF: Oh yeah.
JOËL: But it's really fascinating when you'll see, okay, here are some invariants and things. And then here are some things that you set up some basic properties for a system. And then some of these modeling languages will then poke holes and say, hey, it's possible for this 10-step sequence of events to happen that will then crash your server. Because you didn't think that it's possible for five people to be making concurrent requests, and then one of them fails and retries, whatever the steps are. So it's really good at modeling situations that, as developers, we don't always have great intuition, things like parallelism.
GEOFF: Yeah, that sounds so interesting. I'm going to add that to my list of reading for the fall. Once the school year calms down, I feel like I can dig into some technical topics again. I've got this book sitting right next to my desk, Designing Data-Intensive Applications. I saw it referenced somewhere on Twitter, and I did the thing where I got really excited about the book, bought it, and then didn't have time to read it. So it's just sitting there unopened next to my desk, taunting me.
JOËL: What's the 30-second spiel for what is a data-intensive app, and why should we design for it differently?
GEOFF: You know, that's a great question. I'd probably find out if I'd dug further into the book.
JOËL: [laughs]
GEOFF: I have found at CommonLit that we...I had a couple of clients at thoughtbot that dealt with data at the scale that we deal with here. And I'm sure there are bigger teams doing, quote, "bigger data" than we're doing. But it really does seem like one of our key challenges is making sure that we just move data around fast enough that nothing becomes a bottleneck.
We made a really key optimization in our application last year where we changed the way that we autosave students' answers as they go. And it resulted in a massive increase in throughput for us because we went from trying to store updated versions of the students' final answers to just storing essentially a draft and often storing that draft in local storage in the browser and then updating it on the server when we could.
And then, as a result of this, we're making key updates to the table where we store a student's answers much less frequently. And that has a huge impact because, in addition to being one of the biggest tables at CommonLit...it's got almost a billion recorded answers that we've gotten from students over the years. But because we're not writing to it as often, it also means that reads that are made from the table, like when the teacher is getting a report for how the students are doing in a class or when a principal is looking at how a school is doing, now, those queries are seeing less contention from ongoing writes. And so we've seen a nice improvement.
JOËL: One strategy I've seen for that sort of problem, especially when you have a very write-heavy table that also has a different set of users needing to read from it, is to set up a read replica. So you have your primary database that is being written to, and then the read replica is used for reports and for people who need to look at the data without contending with the ongoing writes.
GEOFF: Yeah, Rails multi-DB support now that it's native to the framework is excellent. It's so nice to be able to just drop that in and fire it up and have it work. We used to use a solution that Instacart had built. It was great for our needs, but it wasn't native to the framework.
So every single time we upgraded Rails, we had to cross our fingers and hope that whatever private APIs of ActiveRecord it was using hadn't broken. Now that that stuff, which I think was open sourced from GitHub's multi-database implementation, is all native in Rails, it's really, really nice to be able to use it.
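(A hedged sketch of the native Rails multi-database setup being described, assuming a replica named primary_replica is defined in database.yml; this follows the generic Rails 6+ API rather than CommonLit's exact configuration.)

```ruby
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Writes go to the primary; reads can be routed to the replica.
  connects_to database: { writing: :primary, reading: :primary_replica }
end

# A heavy report query can then be pinned to the replica explicitly:
ActiveRecord::Base.connected_to(role: :reading) do
  # Hypothetical report object standing in for a real reporting query.
  ClassReport.generate_for(teacher)
end
```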
JOËL: So these kinds of database tricks can help make the application much more performant. You'd mentioned earlier that when you were trying to make your test performant that you had introduced parallelism, and I feel like that's maybe a bit of an intimidating thing for a lot of people. How would you go about converting a test suite that's just vanilla RSpec, single-threaded, and then moving it in a direction of being more parallel?
GEOFF: There's a really, really nice tool called Knapsack, which has a free version. But the pro version, I feel like if you're spending any money at all on CI, it's immediately worth the cost. I think it's something like $75 a month for each suite that you run on it. And Knapsack does this dynamic allocation of tests across containers.
And it interfaces with several of the popular CI providers so that it looks at environment variables and can tell how many containers you're splitting across. It'll do some things, like if some of your containers start early and some of them start late, it will distribute the work so that they all end at the same time, which is really nice.
We've preferred CI providers that charge by the minute. So rather than just paying for a service that we might not be using, we've used services like Semaphore, and right now, we're on Buildkite, which charge by the minute, which means that you can decide to do as much parallelism as you want. You're just paying for the compute time as you run things.
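(A hedged sketch of wiring Knapsack Pro into a per-minute CI provider like Buildkite; the environment variables follow Knapsack Pro's documented conventions, but the exact values and the Buildkite variable are assumptions about a typical setup.)

```ruby
# Gemfile
group :test do
  gem "knapsack_pro"
end

# Each parallel CI container then runs something like:
#   KNAPSACK_PRO_CI_NODE_TOTAL=15 \
#   KNAPSACK_PRO_CI_NODE_INDEX=$BUILDKITE_PARALLEL_JOB \
#   bundle exec rake knapsack_pro:queue:rspec
#
# Queue mode hands each container its next batch of specs dynamically,
# so slow and fast containers still finish at roughly the same time.
```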
JOËL: So that would mean that two minutes of sequential build time costs just the same as splitting it up in parallel and doing two simultaneous minutes of build time.
GEOFF: Yeah, that is almost true. There's a little bit of setup time when a container spins up, and that's one of the key things that we optimize. I guess if we ran 200 containers, like if we were Shopify or something like that, we could technically make our CI suite finish faster, but it might cost us three times as much.
Because if it takes a container 30 seconds to spin up and get ready, that's 30 seconds of dead time when you're not testing but you're still paying for the compute. So one of the key optimizations we make is figuring out how many containers we need to finish fast without just blowing time on starting and stopping.
JOËL: Right, because there is a startup cost for each container.
GEOFF: Yeah, and during the work day when our engineers are working along, we spin up 200 EC2 machines or 150 EC2 machines, and they're there in the fleet, and they're ready to go to run CI jobs for us. But if you don't have enough machines, then you have jobs that sit around waiting to start, that sort of thing. So there's definitely a tension between figuring out how much parallelism you're going to do. But I feel like to start; you could always break your test suite into four pieces or two pieces and just see if you get some benefit to running a smaller number of tests in parallel.
JOËL: So, manually splitting up the test suite.
GEOFF: No, no, using something like Knapsack Pro where you're feeding it the suite, and then it's dividing up the tests for you. I think manually splitting up the suite is probably not a good practice overall because I'm guessing you'll probably spend more engineering time on fiddling with which tests go where such that it wouldn't be cost-effective.
JOËL: So I've spent a lot of time recently working to improve a parallel test suite. And one of the big problems you have is trying to make sure that all of your parallel workers are being used efficiently, so you have to split the work evenly. If you have 70 minutes' worth of work and you give 50 minutes to one worker and 20 minutes to the other, your total test suite still takes 50 minutes, and that's not good.
So ideally, you split it as evenly as possible. So I think there are three evolutionary steps on the path here. So you start off, and you're going to manually split things out. So you're going to say our biggest chunk of tests by time are the feature specs. We'll make them almost like a separate suite. Then we'll make the models and controllers and views their own thing, and that's roughly half and half, and run those. And maybe you're off by a little bit, but it's still better than putting them all in one.
It becomes difficult, though, to balance all of these because then one might get significantly longer than the other then, you have to manually rebalance it. It works okay if you're only splitting it among two workers. But if you're having to split it among 4, 8, 16, and more, it's not manageable to do this, at least not by hand.
If you want to get fancy, you can try to automate that process and record a timing file of how long every file takes. And then when you kick off the build process, look at that timing file and say, okay, we have 70 minutes, and then we'll just split the file so that we have roughly 70 divided by number of workers' files or minutes of work in each process. And that's what gems like parallel_tests do. And Knapsack's Classic mode works like this as well. That's decently good.
But the problem is you're working off of past information. And so if the test has changed or just if it's highly variable, you might not get a balanced set of workers. And as you mentioned, there's a startup cost, and so not all of your workers boot up at the same time. And so you might still have a very uneven amount of work done by each worker by statically determining the work to be done via a timing file.
So the third evolution here is a dynamic or a self-balancing approach where you just put all of the tests or the files in a queue and then just have every worker pull one or two tests when it's ready to work. So that way, if something takes a lot longer than expected, well, it's just not pulling more from the queue. And everybody else still pulls, and they end up all balancing each other out. And then ideally, every worker finishes work at exactly the same time. And that's how you know you got the most value you could out of your parallel processes.
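(A conceptual sketch of that third, self-balancing approach, boiled down to plain Ruby threads rather than any particular gem or CI setup; real suites would use separate containers polling a shared queue, but the balancing idea is the same.)

```ruby
# All spec files go into one shared queue.
queue = Queue.new
Dir.glob("spec/**/*_spec.rb").each { |file| queue << file }

# Each worker pulls its next file only when it finishes the previous one,
# so a slow file never leaves the other workers sitting idle.
workers = 4.times.map do
  Thread.new do
    loop do
      file = queue.pop(true) rescue break # queue empty: this worker is done
      system("bundle exec rspec #{file}")
    end
  end
end

workers.each(&:join)
```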
GEOFF: Yeah, there's something about watching all the jobs finish in almost exactly, you know, within 10 seconds of each other. It just feels very, very satisfying. I think in addition to getting this dynamic splitting where you're getting either per file or per example split across to get things finishing at the same time, we've really valued getting fast feedback.
So I mentioned before that our Jest specs start the moment NPM packages get built. So as soon as there's JavaScript that can be executed in tests, those kick off. As soon as our gems are ready, the RSpec non-system tests go off, and they start running specs immediately. So we get that really, really fast feedback.
Unfortunately, the browser tests take the longest because they have to wait for the most setup. They have the most dependencies. And then they also run the slowest because they run in the browser and everything. But I think when things are really well-oiled, you watch all of those containers end at roughly the same time, and it feels very satisfying.
JOËL: So, a few weeks ago, on an episode of The Bike Shed, I talked with Eebs Kobeissi about dependency graphs and how I'm super excited about it. And I think I see a dependency graph in what you're describing here in that some things only depend on the gem file, and so they can start working. But other things also depend on the NPM packages. And so your build pipeline is not one linear process or one linear process that forks into other linear processes; it's actually a dependency graph.
GEOFF: That is very true. And the CI tool we used to use called Semaphore actually does a nice job of drawing the dependency graph between all of your steps. Buildkite does not have that, but we do have a bunch of steps that have to wait for other steps to finish. And in the wiki on our repo, we do have a diagram of how all of this works.
We found that one of the things that was most wasteful for us in CI was rebuilding gems, reinstalling NPM packages (we use Yarn, but same thing), and then rebuilding browser assets. So at the very start of every CI run, we build hashes of a bunch of files in the repository. And then, we use those hashes to name Docker images that contain the outputs of those files so that we are able to skip huge parts of our CI suite if things have already happened.
So I'll give an example: if Ruby gems have not changed, which we would know by the Gemfile.lock not having changed, then we know that we can reuse a previously built gems image that has the gems that just gets melted in, same thing with yarn.lock. If yarn.lock hasn't changed, then we don't have to build NPM packages. We know that that already exists somewhere in our Docker registry.
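A hypothetical sketch of keying a prebuilt dependency image off a lockfile hash, in the spirit of what Geoff describes; the registry, image name, and Dockerfile here are made up, not CommonLit's actual setup:

    require "digest"

    # Name the image after a hash of the lockfile: same lockfile, same image.
    gems_tag = Digest::SHA256.file("Gemfile.lock").hexdigest[0, 12]
    image    = "registry.example.com/myapp/gems:#{gems_tag}"

    if system("docker manifest inspect #{image} > /dev/null 2>&1")
      puts "Gemfile.lock unchanged; reusing #{image}"
    else
      system("docker build -f Dockerfile.gems -t #{image} .") or abort("gem image build failed")
      system("docker push #{image}") or abort("gem image push failed")
    end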
In addition to skipping steps by not redoing work, we also have started to experiment...actually, in response to a comment that Chris Toomey made in a prior Bike Shed episode, we've started to experiment with skipping irrelevant steps. So I'll give an example of this: if no Ruby files have changed in our repository, we don't run our RSpec unit tests. We just know that those are valid. There's nothing that needs to be rerun.
Similarly, if no JavaScript has changed, we don't run our Jest tests because we assume that everything is good. We don't lint our views with erb-lint if our view files haven't changed. We don't lint our factories if the model or the database hasn't changed. So we've got all these things to skip key types of processing.
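A rough sketch of that skip-irrelevant-steps idea, comparing the branch against main and only running a step when the files it cares about have changed (the branch name, file patterns, and commands are illustrative):

    changed = `git diff --name-only origin/main...HEAD`.split("\n")

    ruby_changed = changed.any? { |path| path.end_with?(".rb") || path == "Gemfile.lock" }
    js_changed   = changed.any? { |path| path.end_with?(".js", ".ts", ".tsx") || path == "yarn.lock" }

    if ruby_changed
      system("bundle exec rspec") or abort("RSpec failed")
    else
      puts "No Ruby changes; skipping RSpec"
    end

    if js_changed
      system("yarn jest") or abort("Jest failed")
    else
      puts "No JavaScript changes; skipping Jest"
    end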
I always try to err on the side of not having a false pass. So I'm sure we could shave this even tighter and do even less work and sometimes finish the build even faster. But I don't want to ever have a thing where the build passes and we get false confidence.
JOËL: Right. Right. So you're using a heuristic that eliminates the really obvious tests that don't need to be run but the ones that maybe are a little bit more borderline, you keep them in. Shaving two seconds is not worth missing a failure.
GEOFF: Yeah. And I've read things about big enterprises doing very sophisticated versions of this where they're guessing at which CI specs might be most relevant and things like that. We're nowhere near that level of sophistication right now.
But I do think that once you get your test suite parallelized and you're not doing wasted work in the form of rebuilding dependencies or rebuilding assets that don't need to be rebuilt, there is some maybe not low, maybe medium hanging fruit that you can use to get some extra oomph out of your test suite.
JOËL: I really like that you brought up this idea of infrastructure and skipping. I think in my own way of thinking about improving test suites, there are three broad categories of approaches you can take. One variable you get to work with is the total single-threaded time, so you mentioned 70 minutes. You can make that 70 minutes shorter by avoiding database writes where you don't need them, all the common tricks that we would do to actually change the tests themselves. Then, as another variable, we get to work with parallelism; we talked about that.
And then finally, there's all that other stuff that's not actually executing RSpec: like you said, loading the gems, installing NPM packages, building Docker images. All of that work, running migrations, setting up a database, if there are situations where we can skip it or improve the speed there, that also improves the total time.
GEOFF: Yeah, there are so many little things that you can pick at to...like, one of the slowest things for us is Elasticsearch. And so we really try to limit the number of specs that use Elasticsearch if we can. You actually have to opt-in to using Elasticsearch on a spec, or else we silently mock and disable all of the things that happen there.
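A hypothetical RSpec configuration for that opt-in pattern; the SearchClient class and the metadata tag are invented to show the shape of it, not the actual CommonLit code:

    RSpec.configure do |config|
      config.before(:each) do |example|
        if example.metadata[:elasticsearch]
          # This example opted in, so let it talk to the test Elasticsearch cluster.
          allow(SearchClient).to receive(:enabled?).and_return(true)
        else
          # By default, silently stub out indexing so specs never hit Elasticsearch.
          allow(SearchClient).to receive(:enabled?).and_return(false)
          allow(SearchClient).to receive(:index_document)
        end
      end
    end

    # Usage: it "reindexes the document", :elasticsearch do ... end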
When you're looking at that first variable that you were talking about, just sort of the overall time, beyond using FactoryDoctor and FactoryProf, is there anything else that you've used to just identify the most egregious offenders in a test suite and then figure out if they're worth it?
JOËL: One thing you can do is hook into Active Support notifications to try to find database writes. And so you can find, oh, this test is making way too many database writes for some reason, or it's making a lot; maybe I should take a look at it; it's a hotspot.
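A minimal sketch of that idea, subscribing to Active Support's sql.active_record notifications and flagging examples that do a suspicious number of writes; the regexp and the threshold are arbitrary:

    RSpec.configure do |config|
      config.around(:each) do |example|
        writes = 0
        subscriber = ActiveSupport::Notifications.subscribe("sql.active_record") do |*_, payload|
          writes += 1 if payload[:sql].to_s.match?(/\A\s*(INSERT|UPDATE|DELETE)/i)
        end

        begin
          example.run
        ensure
          ActiveSupport::Notifications.unsubscribe(subscriber)
        end

        warn "#{example.full_description}: #{writes} database writes" if writes > 50
      end
    end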
GEOFF: Oh, that's really nice. There's one that I've always found is like a big offender, which is people doing negative expectations in system specs.
JOËL: Oh, for their Capybara wait time.
GEOFF: Yeah. So there's a really cool gem, and the name of it is eluding me right now. But there's a gem that raises a special exception if Capybara waits the full time for something to happen. So it lets you know that those things exist. And so we've done a lot of like hunting for...Knapsack will report the slowest examples in your test suite. So we've done some stuff to look for the slowest files and then look to see if there are examples of these negative expectations that are waiting 10 seconds or waiting 8 seconds before they fail.
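As an illustration of the kind of negative expectation that can eat a full Capybara wait (the selectors here are made up), and one way to bound it:

    # A negative assertion retries until the element is gone or the wait expires,
    # so if ".spinner" is unexpectedly present this waits the full default time
    # (often several seconds) before failing.
    expect(page).not_to have_css(".spinner")

    # One mitigation: bound the wait explicitly for checks like this.
    expect(page).to have_no_css(".spinner", wait: 1)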
JOËL: Right. Some files are slow, but they're slow for a reason. Like, a feature spec is going to be much slower than a model test. But the model tests might be very wasteful and because you have so many of them, if you're doing the same pattern in a bunch of them or if it's a factory that's reused across a lot of them, then a small fix there can have some pretty big ripple effects.
GEOFF: Yeah, I think that's true. Have you ever done any evaluation of test suite to see what files or examples you could throw away?
JOËL: Not holistically. I think it's more on an ad hoc basis. You find a place, and you're like, oh, these tests we probably don't need them. We can throw them out. I have found dead tests, tests that are not executed but still committed to the repo.
GEOFF: [laughs]
JOËL: It's just like, hey, I'm going to get a lot of red in my diff today.
GEOFF: It always feels good to have that kind of diff to check in: 250 lines or 1,000 lines of red and 1 line of green.
JOËL: So that's been a pretty good overview of a lot of different areas related to performance and infrastructure around tests. Thank you so much, Geoff, for joining us today on The Bike Shed to talk about your experience at CommonLit doing this. Do you have any final words for our listeners?
GEOFF: Yeah. CommonLit is hiring a senior full-stack engineer, so if you'd like to work on Rails and TypeScript, it's a place with a great test suite and a great team. I've been here for five years, and it's a really, really excellent place to work. And also, it's been really a pleasure to catch up with you again, Joël.
JOËL: And, Geoff, where can people find you online?
GEOFF: I'm Geoff with a G, G-E-O-F-F Harcourt, @geoffharcourt. And that's my name on Twitter, and it's my name on GitHub, so you can find me there.
JOËL: And we'll make sure to include a link to your Twitter profile in the show notes.
The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Why does the history of computing matter? Joël and thoughtbot Developer Sara Jackson ponder this and share some cool stories (and trivia!) behind the tools we use in the industry.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by fellow thoughtboter, Team Lead, and Developer Sara Jackson.
SARA: Hello, happy to be here.
JOËL: Together, we're here to share a little bit of what we've learned along the way. So, Sara, what's new in your world?
SARA: Well, Joël, you might know that recently our team had a small get-together in Toronto.
JOËL: And our team, for those who are not aware, is fully remote distributed across multiple countries. So this was a chance to get together in person.
SARA: Yes, correct. This was a chance for those on the Boost team to get together and work together as if we had a physical office.
JOËL: Was this your first time meeting some members of the team?
SARA: It was my second time, for the most part. I joined thoughtbot after it had already gone remote. Fortunately, I was able to meet many other thoughtboters in May at our summit.
JOËL: Had you worked at a remote company before coming to thoughtbot?
SARA: Yes, I actually started working remotely in 2019, but even then, that wasn't my first time working remotely. I actually had a full year of internship in college that was remote.
JOËL: So you were a pro at this long before the pandemic made us all try it out.
SARA: I don't know about that, but I've certainly dealt with the idiosyncrasies that come with remote work for longer.
JOËL: What do you think are some of the challenges of remote work as opposed to working in person in an office?
SARA: I think definitely growing and maintaining a culture. When you're in an office, it's easy to create ad hoc conversations and have events that are small that build on the culture. But when you're remote, it has to be a lot more intentional.
JOËL: That definitely rings true for me. One of the things that I really appreciated about in-person office culture was the serendipity: those sort of random meetings at the water cooler, those conversations while waiting for coffee with people who are not necessarily on the same team or the same project as you are.
SARA: I also really miss being able to have lunch in person with folks where I can casually gripe about an issue I might be having, and almost certainly, someone would have the answer. Now, if I'm having an issue, I have to intentionally seek help. [chuckles]
JOËL: One of the funny things that often happened, at least the office where I worked at, was that lunches would often devolve into taxonomy conversations.
SARA: I wish I had been there for that.
[laughter]
JOËL: Well, we do have a taxonomy channel on Slack to somewhat continue that legacy.
SARA: Do you have a favorite taxonomy lunch discussion that you recall?
JOËL: I definitely got to the point where I hated the classifying-a-sandwich one. That one has been way overdone.
SARA: Absolutely.
JOËL: There was an interesting one about motorcycles, and mopeds, and bicycles, and e-bikes, and trying to see how do you distinguish one from the other. Is it an electric motor? Is it the power of the engine that you have? Is it the size?
SARA: My brain is already turning on those thoughts. I feel like I could get lost down that rabbit hole very easily.
[laughter]
JOËL: Maybe that should be like a special anniversary episode for The Bike Shed, just one long taxonomy ramble.
SARA: Where we talk about bikes.
JOËL: Ooh, that's so perfect. I love it. One thing that I really appreciated during our time in Toronto was that we actually got to have lunch in person again.
SARA: Yeah, that was so wonderful. Having folks coming together that had maybe never worked together directly on clients just getting to sit down and talk about our day.
JOËL: Yeah, and talk about maybe it's work-related, maybe it's not. There's a lot of power to having some amount of deeper interpersonal connection with your co-workers beyond just the we work on a project together.
SARA: Yeah, it's like camaraderie beyond the shared mission of the company. It's the shared interpersonal mission, like you say. Did you have any in-person pairing sessions in Toronto?
JOËL: I did. It was actually kind of serendipitous. Someone was stuck with a weird failing test because somehow the order the factories were getting created in was not behaving in the expected way, and we paired on it, dug into it, found some weird thing with composite primary keys, and solved the issue.
SARA: That's wonderful. I love that. I wonder if that interaction would have happened or gotten solved as quickly if we hadn't been in person.
JOËL: I don't know about you, but I feel like I sometimes struggle to ask for help or ask for a pair more when I'm online.
SARA: Yeah, I agree. It's easier to feel like you're not as big of an impediment when you're in person. You tap someone on the shoulder, "Hey, can you take a look at this?"
JOËL: Especially when they're on the same team as you, they're sitting at the next desk over. I don't know; it just felt easier. Even though it's literally one button press to get Tuple to make a call, somehow, I feel like I'm interrupting more.
SARA: To combat that, I've been trying to pair more frequently and consistently regardless of if I'm struggling with a problem.
JOËL: Has that worked pretty well?
SARA: It's been wonderful. The only downside has been pairing fatigue.
JOËL: Pairing fatigue is real.
SARA: But other than that, problems have gotten solved quickly. Everyone I've paired with has learned something, and so have I. It goes faster.
JOËL: So it was really great that we had this experience of doing our daily work but co-located in person; we have these experiences of working together. What would you say has been one of the highlights for you of that time?
SARA: 100% karaoke.
JOËL: [laughs]
SARA: Only two folks did not attend. Many of the folks that did attend told me they weren't going to sing, but they were just going to watch. By the end of the night, everyone had sung. We were there for nearly three and a half hours. [laughs]
JOËL: It was a good time all around.
SARA: I saw a different side to Chad.
JOËL: [laughs]
SARA: And everyone, honestly. Were there any musical choices that surprised you?
JOËL: Not particularly. Karaoke is always fun when you have a group of people that you trust to be a little bit foolish in front of to put yourself out there. I really appreciated the style that we went for, where we have a private room for just the people who were there as opposed to a stage in a bar somewhere. I think that makes it a little bit more accessible to pick up the mic and try to sing a song.
SARA: I agree. That style of karaoke is a lot more popular in Asia, having your private room. Sometimes you can find it in major cities. But I also prefer it for that reason.
JOËL: One of my highlights of this trip was this very sort of serendipitous moment that happened. Someone was asking a question about the difference between a Mac and Linux operating systems. And then just an impromptu gathering happened. And you pulled up a chair, and you're like, gather around, everyone. In the beginning, there was Multics. It was amazing.
SARA: I felt like some kind of historian or librarian coming out from the deep. Let me tell you about this random operating system knowledge that I have. [laughs]
JOËL: The ancient lore.
SARA: The ancient lore in the year 1969.
JOËL: [laughs] And then yeah, we had a conversation walking the history of operating systems, and why we have macOS and Linux, and why they're different, and why Windows is a totally different kind of family there.
SARA: Yeah, macOS and Linux are sort of like cousins coming from the same tree.
JOËL: Is that because they're both related through Unix?
SARA: Yes. Linux and macOS are both built based off of different versions of Unix. Over the years, there's almost like a family tree of these different Nix operating systems as they're called.
JOËL: I've sometimes seen asterisk N-I-X. This is what you're referring to as Nix.
SARA: Yes, where the asterisk is like the RegEx catch-all.
JOËL: So this might be Unix. It might be Linux. It might be...
SARA: Minix.
JOËL: All of those.
SARA: Do you know the origin of the name Unix?
JOËL: I do not.
SARA: It's kind of a fun trivia piece. So, in the beginning, there was Multics spelled M-U-L-T-I-C-S, standing for the Multiplexed Information and Computing Service. Dennis Ritchie and Ken Thompson of Bell Labs famous for the C programming language...
JOËL: You may have heard of it.
SARA: You may have heard of it maybe on a different podcast. They were employees at Bell Labs when Multics was being created. They felt that Multics was very bulky and heavy. It was trying to do too many things at once. It did have a few good concepts. So they developed their own, smaller system: Unix, originally spelled Unics, the Uniplexed Information and Computing Service. Uniplexed versus Multiplexed: we do one thing really well.
JOËL: And that's the Unix philosophy.
SARA: It absolutely is. The Unix philosophy developed out of the creation of Unix and C. Do you know the four main points?
JOËL: No, is it small sharp tools? It's the main one I hear.
SARA: Yes, that is the kind of quippy version that has come out for sure.
JOËL: But there is a formal four-point manifesto.
SARA: I believe it's evolved over the years. But it's interesting looking at the Unix philosophy and seeing how relevant it is today in web development. The four points being: make each program do one thing well. To this end, don't add features; make a new program. I feel like we see this a lot in encapsulation.
JOËL: Hmm, maybe even the open-closed principle.
SARA: Absolutely.
JOËL: Similar idea.
SARA: Another part of the philosophy is expecting output of your program to become input of another program that is yet unknown. The key being don't clutter your output; don't have extraneous text. This feels very similar to how we develop APIs.
JOËL: With a focus on composability.
SARA: Absolutely. Being able to chain commands together like you see in Ruby all the time.
JOËL: I love being able to do this, for example, the enumerable API in Ruby and just being able to chain all these methods together to just very nicely do some pretty big transformations on an array or some other data structure.
SARA: 100% agree there. That ability almost certainly came out of following the tenets of this philosophy, maybe not knowingly so but maybe knowingly so. [chuckles]
JOËL: So is that three or four?
SARA: So that was two. The third being what we know as agile.
JOËL: Really?
SARA: Yeah, right? The '70s brought us agile. Design and build software to be tried early, and don't hesitate to throw away clumsy parts and rebuild.
JOËL: Hmmm.
SARA: Even in those days, with waterfall style still coming on the horizon, it was known to those writing software that it was important to iterate quickly.
JOËL: Wow, I would never have known.
SARA: It's neat having this history available to us. It's sort of like a lens at where we came from.
Another piece of this history that might seem like a more modern concept but was a very big part of the movement in the '70s and the '80s was using tools, rather than unskilled help or struggling through something yourself, to lighten a programming task. We see this all the time at thoughtbot. Folks do this: many times there is an issue in a client codebase, and we are able to generalize the solution and extract it into a tool that can then be reused.
JOËL: So that's the same kind of genesis as a lot of thoughtbot's open-source gems, so I'm thinking of FactoryBot, Clearance, Paperclip, the old-timey file upload gem, Suspenders, the Rails app generator, and the list goes on.
SARA: I love that in this last point of the Unix philosophy, they specifically call out that you should create a new tool, even if it means detouring, even if it means throwing the tools out later.
JOËL: What impact do you think that has had on the way that tooling in the Unix, or maybe I should say *Nix, ecosystem has developed?
SARA: It was a major aspect of the Nix community because Unix was available, not free, but very inexpensively, to educational institutions. And because of how lightweight it was, its focus on single-use programs designed to do one thing, and the way the shell let you use commands directly, with the scripting language being the same language as the shell itself, users, students, and amateurs, and I say that in a loving way, were able to create their own tools very quickly. It was almost like a renaissance of Homebrew.
JOËL: Not Homebrew as in the macOS package manager.
SARA: [laughs] And also not Homebrew as in the alcoholic beverage.
JOËL: [laughs] So, this kind of history is fun trivia to know. Is it really something valuable for us as jobbing developers in 2022?
SARA: I would say it's a difficult question. If you are someone that doesn't dive into the why of something, especially when something goes wrong, maybe it wouldn't be important or useful.
But what sparked the conversation in Toronto was trying to determine why we as thoughtbot tend to prefer using Macs to develop on versus Linux or Windows. There is a reason, and the reason is in the history. Knowing that can clarify decisions and can give meaning where it feels like an arbitrary decision.
JOËL: Right. We're not just picking Macs because they're shiny.
SARA: They are certainly shiny. And the first thing I did was to put a matte case on it.
JOËL: [laughs] So no shiny in your office.
SARA: If there were too many shiny things in my office, boy, I would never get work done. The cats would be all over me.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: So we've talked a little bit about Unix or *Nix, this evolution of systems. I've also heard the term POSIX thrown around when talking about things that seem to encompass both macOS and Linux. How does that fit into this history?
SARA: POSIX is sort of an umbrella of standards around operating systems that was based on Unix and the things that were standard in Unix. It stands for the Portable Operating System Interface. This allowed for compatibility between OSs, very similar to USB being the standard for peripherals.
JOËL: So, if I was implementing my own Unix-like operating system in the '80s, I would try to conform to the POSIX standard.
SARA: Absolutely. Now, not every Nix operating system is POSIX-compliant, but most are or at least 90% of the way there.
JOËL: Are any of the big ones that people tend to think about not compliant?
SARA: A major player in the operating system space that is not generally considered POSIX-compliant is Microsoft Windows.
JOËL: [laughs] It doesn't even try to be Unix-like, right? It's just its own thing,
SARA: It is completely its own thing. I don't think it even has a standard necessarily that it conforms to.
JOËL: It is its own standard, its own branch of the family tree.
SARA: And that's what happens when your operating system is very proprietary. This has caused folks pain, I'm sure, in the past that may have tried to develop software on their computers using languages that are more readily compatible with POSIX operating systems.
JOËL: So would you say that a language like Ruby is more compatible with one of the POSIX-compatible operating systems?
SARA: 100% yes. In fact, to even use Ruby as a development tool on Windows prior to Windows 10, you needed an additional tool. You needed something like Cygwin or MinGW, which were POSIX-compliant environments; it was almost like having a shell in your Windows computer that would allow you to run those commands.
JOËL: Really? For some reason, I thought that they had some executables that you could run just on Windows by itself.
SARA: Now they do, fortunately, to the benefit of Ruby developers everywhere. As of Windows 10, we now have WSL, the Windows Subsystem for Linux that's built-in. You don't have to worry about installing or configuring some third-party software.
JOËL: I guess that kind of almost cheats by just having a POSIX system embedded in your non-POSIX system.
SARA: It does feel like a cheat, but I think it was born out of demand. The Windows NT kernel, for example, is mostly POSIX-compliant.
JOËL: Really?
SARA: As a result of it being used primarily for servers.
JOËL: So you mentioned that Ruby and the Rails ecosystem tend to run better and much more frequently on the various Nix systems. Did it have to be that way? Or is it just kind of an accident of history that we happened to end up with Ruby and Rails in this ecosystem, but just as easily, it could have evolved in the Windows world?
SARA: I think it is an amalgam of things. For example, Unix and Nix operating systems being developed earlier, being widely spread due to being license-free oftentimes, and being widely used in the education space. Also, because it is so lightweight, it is the operating system of choice; most servers in the world are running some form of Unix, Linux, or macOS.
JOËL: I don't think I've ever seen a server that runs macOS; I've exclusively seen it on dev machines.
SARA: If you go to an animation company, they have server farms of macOS machines because they're really good at rendering. This might not be the case anymore, but it was at one point.
JOËL: That's a whole other world that I've not interacted with a whole lot.
SARA: [chuckles]
JOËL: It's a fun intersection between software, and design, and storytelling. That is an important part of the software field.
SARA: Yeah, it's definitely an aspect that deserves its own deep dive of sorts. If you have a server that's running a Windows-based operating system like NT and you have a website or a program that's designed to be served under a Unix-based server, it can easily be hosted on the Windows server; it's not an issue. The reverse is not true.
JOËL: Oh.
SARA: And this is why programming on a Nix system is the better choice.
JOËL: It's more broadly compatible.
SARA: Absolutely. Significantly more compatible with more things.
JOËL: So today, when I develop, a lot of the tooling that I use is open source. The open-source movement has created a lot of the languages that we know and love, including Ruby, including Rails. Do you think there's some connection between a lot of that tooling being open source and maybe some of the Unix family of operating systems and movements that came out of that branch of the operating system family tree?
SARA: I think that there is a lot of tie-in with today's open-source culture and the computing history that we've been talking about, for example, people finding something that they dislike about the tools that are available and then rolling their own. That's what Ken Thompson and Dennis Ritchie did. Unix was not an official Bell development. It was a side project for them.
JOËL: I love that.
SARA: You see this happen a lot in the software world where a program gets shared widely, and due to this, it gains traction and gains buy-in from the community. If your software is easily accessible to students, folks that are learning, and breaking things, and rebuilding, and trying, and inventing, it's going to persist. And we saw that with Unix.
JOËL: I feel like this background on where a lot of these operating systems came but then also the ecosystems, the values that evolved with them has given me a deeper appreciation of the tooling, the systems that we work with today. Are there any other advantages, do you think, to trying to learn a little bit of computing history?
SARA: I think the main benefit that I mentioned before of if you're a person that wants to know why, then there is a great benefit in knowing some of these details. That being said, you don't need to deep dive or read multiple books or write papers on it. You can get enough information from reading or skimming some Wikipedia pages.
But it's interesting to know where we came from and how it still affects us today. Ruby was written in C, for example. Unix was written in C as well, originally Assembly Language, but it got rewritten in C. And understanding the underlying tooling that goes into that means that when things go wrong, you know where to look.
JOËL: I guess that that is the next question is where do you look if you're kind of interested? Is Wikipedia good enough? You just sort of look up operating system, and it tells you where to go? Or do you have other sources you like to search for or start pulling at those threads to understand history?
SARA: That's a great question. And Wikipedia is a wonderful starting point for sure. It has a lot of the abbreviated history and links to better references. I don't have them off the top of my head. So I will find them for you for the show notes. But there are some old esoteric websites with some of this history more thoroughly documented by the people that lived it.
JOËL: I feel like those websites always end up being in HTML 2; just very basic text, horizontal rules, no CSS.
SARA: Mm-hmm. And those are the sites that have many wonderful kernels of knowledge.
JOËL: Uh-huh! Great pun.
SARA: [chuckles] Thank you.
JOËL: Do you read any content by Hillel Wayne?
SARA: I have not.
JOËL: So Hillel produces a lot of deep dives into computing history, oftentimes trying to answer very particular questions such as when and why did we start using reversing a linked list as the canonical interview question? And there are often urban legends around like, oh, it's because of this. And then Hillel will do some research and go through actual archives of messages on message boards or...what is that protocol?
SARA: BBS.
JOËL: Yes. And then find the real answer, like, do actual historical methodology, and I love that.
SARA: I had not heard of this before. I don't know how. And that is all I'm going to be doing this weekend is reading these. That kind of history speaks to my heart. I have a random fun fact along those lines that I wanted to bring to the show, which was that the echo command that we know and love in the terminal was first introduced by the Multics operating system.
JOËL: Wow. So that's like the most common piece of Multics; as everyday users of a modern operating system, we still touch a little bit of that history every day when we work.
SARA: Yeah, it's one of those things that we don't think about too much. Where did it come from? How long has it been around? I'm sure the implementation today is very different. But it's like etymology, and like taxonomy, pulling those threads.
JOËL: Two fantastic topics. On that wonderful little nugget of knowledge, let's wrap up. Sara, where can people find you online?
SARA: You can find me on Twitter at @csarajackson.
JOËL: And we will include a link to that in the show notes.
SARA: Thank you so much for having me on the show and letting me nerd out about operating system history.
JOËL: It's been a pleasure.
The show notes for this episode can be found at bikeshed.fm.
This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at [email protected] via email.
Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeee!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Mental models are metaphors that help us understand complex problems we work on. They can be a simplified roadmap over an infinite area of complexity.
How does one come up with mental models? How are they useful? Are they primarily a solo thing, or can they be used to communicate with the team? What happens when your model is inaccurate? Today, Joël is joined by Eebs Kobeissi, a Developer and Dev Manager at You Need a Budget, to discuss.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Eebs on Twitter
You Need a Budget
Skill floors and skill ceilings
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And today, I'm joined by Eebs Kobeissi, a Developer and Dev Manager at You Need a Budget.
EEBS: Hi, Joël. It's really good to be here.
JOËL: And together, we're here to share a little bit about what we've learned along the way. So, Eebs, what's new in your world?
EEBS: Oh, a whole lot. I'm a new dad, so I'm getting to experience all those things. But in the developer world, I've recently picked up programming on an ESP32, which controls LED lights. And so I'm having fun lighting up my office.
JOËL: Is that like one of those little microboards, kind of like a Raspberry Pi?
EEBS: Yeah, exactly. It's a little board that's compatible with the Arduino IDE. And I literally only played with it last weekend, so it's still very new to me.
JOËL: Nice. Have you done any Arduino development or Raspberry Pi or anything like that before?
EEBS: No, I have a Raspberry Pi that I run like a DNS server on, but I haven't done any actual programming. I did make an LED blink, which is pretty cool.
JOËL: What kind of programming is required for a board like that?
EEBS: From my understanding, it's either in Python or C. Those are, I think, the two languages that you can program on it. I definitely do not know C. And so I'm just going through a bunch of tutorials and reading some sample code. But I think if I ever end up trying to implement something more complex, I'll probably switch over to Python because that's a little more familiar.
JOËL: So the coding feels fairly high level even though you're writing controller code for LEDs.
EEBS: I hope so. I'd love to be able to take advantage of whatever abstractions I can.
JOËL: Do you have any fun goals you're trying to do with this? Or is this just for the fun of trying a completely different environment than web development?
EEBS: No, it's actually rooted in something visual. So I have these shelves behind me that are in my webcam when I'm in meetings or whatever. And so I want to be able to put a light strip across these shelves and have some sort of visual thing in the background.
JOËL: Like LED mood ring?
EEBS: Yeah, kind of. My eventual goal would be that as I'm talking, a little equalizer display pops up behind me. I thought that would be pretty neat.
JOËL: That is amazing. That will give you all of the cred in the meetings.
EEBS: Right? I thought that'd be pretty cool. What have you been thinking about recently, Joël?
JOËL: I've been submitting to the RubyConf call for proposals which, as of the recording of this episode, has just closed this week. And like many people, I submitted on the last day.
EEBS: [laughs]
JOËL: And it was really fun trying to take some ideas that I'm excited about and then turn them into a proposal that is accessible to other people.
EEBS: Nice. Do you want to share a little bit about what the talk is, or is it under wraps for now?
JOËL: I don't know if anyone on the committee will listen to this before the review goes out. This might break the anonymity of the proposal.
EEBS: Oh, right, right.
JOËL: One thing I will share that's interesting is that there are topics that I'm excited about. It's like, oh, here are a bunch of cool things about something, some technical topic. But talks that are just ten cool things about X are not that great. And so I needed to find some sort of unifying idea that I could use to share that. And that generally is in the form of trying to find a story that I can tell. What unifies all of these things together? What tells a compelling story? Is there some metaphor I can lean into?
EEBS: Nice. I think that's a really powerful way of communicating something deeper is through telling something that people can relate to.
JOËL: One way that thinking about metaphors has been really impactful for me recently is the idea of mental models and how those can help us in development. I'm curious; we've thrown around the phrase a little bit you and I in past conversations; what does a mental model mean to you?
EEBS: I tend to be a visual thinker. And from talking to others, I've heard similar statements. So for me, a mental model is how I think about a particular domain or how I think about code flow or structure. And for me, it's usually either as two-dimensional objects or occasionally three-dimensional objects that I have floating in my visual space.
So, for example, if we had two classes that are collaborators in some way, I often think of them maybe as two rectangles that are side by side. And when they interact, there's some little amorphous blob from one of those rectangles that reaches out into the other one or passes a message from one to the other.
And I sort of have this idea of how many connections are there between these two physical things. Or, if I'm thinking about code flow and the path of execution that code might take, sometimes I visualize it as maybe a tree or potentially loops if there are such cases.
JOËL: So when you think of these concepts, just in general, you're seeing in your mind's eye squares and rectangles floating in the air.
EEBS: Yeah, pretty often. Sometimes it takes those shapes, and as I build up a mental model of some code, I'm usually adding new shapes into that picture I have in my mind. I tend to view things sort of top-down. So like, the start of code or the start of execution is usually at the top or maybe the far left or far right.
And as execution happens, I usually view that as moving in towards the middle and potentially going back out when a response is returned. If it's a web request, something like that, I view it as this sort of outside in. And there's a bunch of pieces in there that are all talking to each other.
JOËL: That's really cool. So not only is there a geometric aspect to it, but there's a spatial aspect to it as well.
EEBS: Yeah. And it's interesting, like, I haven't actually thought about it [laughs] in this level of detail before. But yeah, there certainly is a spatial aspect to it. And I have this idea in my mind of like things and domain objects kind of belong at the bottom, and they should have well-defined boundaries. But the pieces that are a little bit towards the outer edges may be a little more fuzzy and may have less definition around them.
JOËL: That's really interesting because I also have this sort of in my mind's eye see these things when I'm thinking about concepts like that. But I've talked to other people, and some people don't even have much of a mind's eye at all. They don't tend to visualize things in their mind in that way.
EEBS: Yeah, it's really interesting how different people approach this thinking about code. A lot of people write things down. And I write things down, too, and draw little arrows that don't really make any sense. But it helps me do something physically sometimes as well as just thinking about it.
JOËL: Have you ever tried to convert these pictures you see in your mind and actually draw them on paper?
EEBS: Occasionally. And for the most part, that usually takes the form of some kind of domain modeling, whether it's based on database tables or just domain objects. And sometimes I will try and draw them out and then specify the relationships between them like, oh, you know, this one model talks to this other model in this particular way. And I'll define a relationship between them, which helps me think about them and how they interact.
JOËL: I've found that even though for some things I can see it very vividly in my mind's eye, I struggle to then concretely translate that onto paper or digital paper if you will. It's almost like trying to, say, translate an emotion into words and that even though I feel like I see a visual picture, I can't reproduce it by drawing necessarily.
EEBS: It's interesting you brought up feeling because a lot of the times, I have this gut feeling about a mental model, like whether I think it is correct or not. And sometimes I have this uneasy feeling of, like, that doesn't feel right to me, but it's hard to articulate why. And I think sometimes that's when I have to pull out something physical, start making those relations, start connecting things.
And that's when I might uncover, like, oh, this feels odd because I have a circle here or a cycle or something. Or I've sort of represented the truth of something in two different places. Do you have any techniques for getting it out of your head and into something physical that you could share with someone else, maybe it's text or a picture?
JOËL: I think I do struggle with that conversion sometimes. Practice definitely helps. I think maybe there is a metaphor here between converting these, let's call them, pictures that I see in my mind's eye and then drawing diagrams with trying to take feelings and expressing them in words. In the same way that, maybe I might have some feelings, and then I want to journal how I feel, and I struggle to express that.
But finding a way to express that gives me a certain amount of precision and a more concrete thing. In the same way, these things that flash in front of my mind's eye, if I can take the time to put them on paper, they're now more real. They're more concrete. I think you can probe the edges, the ways that it kind of falls apart more easily.
EEBS: Yeah, that makes a lot of sense. There's a lot of value in writing that down and going through those details in a methodical way because oftentimes, you'll catch inconsistencies, or you'll find better ways to describe it. And being able to share your mental model with someone else is often...well, it can be really tricky.
And I think that's why it's important to go through and maybe find a common medium that you can share because I can't see into your brain. You can't see into mine. But if we can share our mental models, then hopefully, we have a better chance of agreeing on the solution or finding inconsistencies.
JOËL: Exactly. I think, in many ways, there are almost multiple layers of mental models and that you might have an abstraction or a metaphor for a concept that you're working with separate from the diagram. And then the diagram is yet another metaphor, but now we're going geometric to represent a broader idea.
EEBS: Yeah. Are there any other ways that you take that picture from your mind's eye besides written documents or conversations? Do you use any diagramming tools that specifically help with that? Or is it just kind of free-form?
JOËL: I do a mix. I am a big fan of draw.io, which allows you to just free-hand or pull shapes together, things like that. There are some more structured tools that I will use. I'll use Mermaid.js.
EEBS: Yeah, I've been using that a lot too.
JOËL: Yeah, that's great. I've been digging into more structured diagrams recently, particularly the idea of graphs, directed graphs. And those have interesting properties.
EEBS: Can you share a little more detail about what you mean?
JOËL: So a graph in the computer science sense is a bunch of nodes. They are typically represented as circles and then edges which are the connections between them. A directed graph is now there's an arrow pointing in a particular direction. A really interesting property that you can have with directed graphs is whether or not they include cycles. So can you only by following the arrows effectively create a loop? Or will the arrows always lead you to some kind of terminal node?
EEBS: Gotcha. Is that a directed acyclic graph?
JOËL: If there are no cycles, yes, it is a directed acyclic graph or DAG, as you'll often see it abbreviated.
EEBS: [laughs] How do you relate that graph to code? And what benefits do you get from expressing it that way?
JOËL: So this shows up in a lot of places. And I'd even say that thinking of certain aspects of my code as a graph and a potentially directed acyclic graph is itself a mental model or a metaphor that helps bring clarity to the way I think about things. So, for example, code, you know, you invoke some main function at some point to call the code, and then that's going to call out some other functions, which call out some other functions, and so on. You may have heard that referred to as a call graph. But that is a graph of calls.
There might be cycles in there for co-recursive functions and things like that. But that is one way you can sift through and analyze how control or logic flows through your application: through a function call graph. You mentioned earlier the idea of objects and how they're connected to each other. That's an object graph.
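A small sketch of a directed graph as an adjacency hash, with a naive check for cycles (the node names are invented to suggest a call graph):

    GRAPH = {
      main:      [:load_user, :render],
      load_user: [:query_db],
      render:    [:query_db],
      query_db:  [],
    }

    # A node starts a cycle if, by following edges, we can reach it again.
    def cycle_from?(node, graph, visiting = [])
      return true if visiting.include?(node)

      graph.fetch(node, []).any? { |neighbor| cycle_from?(neighbor, graph, visiting + [node]) }
    end

    def acyclic?(graph)
      graph.keys.none? { |node| cycle_from?(node, graph) }
    end

    acyclic?(GRAPH) # => true, so this call graph is a DAG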
EEBS: Right. Recently, I had to work through a state transition problem where a customer has some billing, and they can go through many different states, whether it's active, or canceled, or past due, those sorts of things. And so actually, I reached for Mermaid.js and built a graph of, okay, they start here in this empty state. And then they subscribe, which then they become active. They might cancel their subscription, which moves them to a different state.
And by listing out all the states and the transitions between them, it helped me to understand what methods I might need to define on which objects in order to allow those transitions to happen and what checks I might need to make before allowing those transitions depending on the state of the system.
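A hypothetical sketch of those billing states as a hash of allowed transitions; the exact states and rules are invented for illustration:

    TRANSITIONS = {
      nil        => [:active],               # empty state -> subscribe
      :active    => [:past_due, :canceled],
      :past_due  => [:active, :canceled],
      :canceled  => [:active],               # resubscribe
    }

    def can_transition?(from, to)
      TRANSITIONS.fetch(from, []).include?(to)
    end

    can_transition?(nil, :active)          # => true
    can_transition?(:canceled, :past_due)  # => false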
JOËL: I'm hearing the keywords states and transitions. And that's making me think of finite-state machines. Are you drawing a finite-state machine graph or something a bit more free-handed?
EEBS: It's a bit more free-handed. I don't think I've actually drawn out a state machine since college, but just representing the different states as different boxes and the transitions that are possible from those states. I mean, I guess that kind of is a state machine in some way. So graphs are great visual approaches. Are there any non-visual approaches that you take?
JOËL: That's a great question because not all mental models have to be visual. I think the power of a mental model exists in a metaphor. And one that's kind of broad but that I've applied to a lot of different areas is the general idea of something being parallel or in series. I think I first came across this concept talking about electric circuits. And are we talking about two little light bulbs that are in parallel, and if the electricity to one is cut, the other one still lights up? Or are they chained together in series?
EEBS: Yeah, like my LEDs.
JOËL: Exactly, going back to Arduino, but it can also be applied to a bunch of other things. We can talk about code being in parallel or in series. We can talk about work being in parallel or in series. Interestingly, I took that mental model as a sort of quick shortcut when I was digging into some functional programming ideas. Monads and applicatives are the fancy terms here.
EEBS: Oh boy, I'm ready.
JOËL: In general, and there's a hefty asterisk here, I think of monads as being serial, so you're chaining something; one thing happens, then another. So you can think of, for example, chaining promises in JavaScript, promise one, then promise two, as opposed to applicatives which are parallel. So you might think of maybe zipping two lists or two arrays in Ruby. The two arrays, there are no dependencies between the two of them. They get processed side by side as you're traversing both of them together.
EEBS: Interesting. I've heard the term monad a lot, but I haven't heard the term applicative. Are there any other details you can share about them and what makes them different or how they might be seen in our code?
JOËL: I think that the key difference is that distinction in how they're processed. Applicatives are a way of combining two independent, let's call them data sources, and then you find a way to combine them together. So it could be two independent arrays, and you're zipping through them. It might be two independent HTTP requests, and they can both fire in parallel. But then you want to combine their outputs. So you say wait until both are successful and then combine their output.
EEBS: Oh, okay, gotcha.
JOËL: It could even be nullable values. So you say do this thing if both values are present. But you're not...the value of one or the fact that one is null or not is not dependent on whether the other one is null or not. They're independently null or not as opposed to something...Monads are, again, a different way of combining. You might call them data sources or operations. But in this case, there is a clear dependency one, and then its output influences the next one.
You might say check the value is null or present or not. And then, if it is present, take that value and then put it as the input of my next operation. And then, if it is null or not, do another thing. See, now you have a sort of chain.
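A loose Ruby illustration of that distinction, not a faithful encoding of applicatives or monads, just the independent-versus-dependent flavor; the data and methods are made up:

    # "Applicative-ish": two independent sources, combined after the fact.
    names  = ["order-1", "order-2"]
    totals = [25, 40]
    names.zip(totals).map { |name, total| "#{name}: $#{total}" }
    # => ["order-1: $25", "order-2: $40"]

    # "Monad-ish": each step depends on the previous step's output, and a nil
    # short-circuits the rest of the chain.
    def find_user(id)
      { name: "Ada", account_id: 7 } if id == 1
    end

    def find_account(id)
      { balance: 100 } if id == 7
    end

    find_user(1)&.then { |user| find_account(user[:account_id]) }&.dig(:balance)
    # => 100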
EEBS: Where do you see these chains happening in code? Or is it everywhere?
JOËL: Once you know that pattern which, again, could be thought of as another mental model, you start seeing it everywhere. So promises in JavaScript chaining together that's effectively monads. Don't @ me, all the functional programming people.
EEBS: [laughs]
JOËL: I know that's not quite true. Anything dealing with multiple operations that could succeed or fail depending on, again, whether you're treating them as dependent or independent, that's probably going to look very similar to either monad or applicative.
EEBS: So the first thing that actually comes to mind here is things like background jobs. Using Sidekiq or Resque or other job processors, you can have a queue of jobs that need to be executed, and they might need to run in serial, or potentially you have multiple workers pulling from a single queue, and thus the work is happening in parallel. Is that a reasonable analogy?
JOËL: I think it's good for the serial versus parallel, but it's not necessarily a good analogy for understanding monads and applicatives.
EEBS: Gotcha.
JOËL: So with two workers, you can process a queue in parallel, and a bunch of things happen.
EEBS: But there's not necessarily anything that is bringing those two workers together to produce a single output.
JOËL: Yes. And there's no dependency between the tasks in the queue.
EEBS: Right, right, gotcha.
JOËL: So if you have a task that says execute this task and then only if this task succeeds, then do the second task, now you've created a dependency. And you couldn't process that in parallel because if task one, which has to be executed first, is executed by worker one, task two should not get processed unless task one is successful. You can't just say, oh, I've got another worker free. I haven't processed task two because it's waiting to know does task one succeed.
EEBS: Right. So an example in code would be a user creates a new order. And when they create a new order, we send them a confirmation email. That would be an example of that happening in serial or a monad-like thing. [chuckle]
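A hypothetical Active Job sketch of that dependency, where the confirmation email is only enqueued after the order is successfully created; the class names are invented:

    class CreateOrderJob < ApplicationJob
      def perform(user_id, cart_id)
        order = Order.create!(user_id: user_id, cart_id: cart_id)

        # Only reached if create! succeeded, so this step depends on the first.
        OrderConfirmationJob.perform_later(order.id)
      end
    end

    class OrderConfirmationJob < ApplicationJob
      def perform(order_id)
        OrderMailer.confirmation(Order.find(order_id)).deliver_now
      end
    end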
JOËL: Yes, I found that thinking of things as serial or parallel is a good shortcut for thinking about monads and applicatives. I don't know that the reverse is necessarily true. They don't necessarily transfer one-to-one with each other. And maybe that's a danger of mental models, right? You find a mental model that describes a situation, and then you try to reverse it, and then you make false assumptions about the world.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: Another mental model that is not necessarily visual that I like actually comes from the video game community, and that's thinking of skill ceilings and skill floors. So in, I think, particularly the MOBA Community, that's a Multiplayer Online Battle Arena, they'll talk about characters as having a high skill floor or a low skill ceiling.
And generally, what that means, and again, the meaning varies a little bit by community, is that a character with a low skill floor is an easily accessible character. They might not have a lot of skill shots; like, you press a button, and things happen around your character. You don't need to aim, things like that. A high skill ceiling means that there's a lot of room for you to grow, and as you get more skilled, you can get significantly better with that character.
EEBS: Gotcha. So the opportunity is greater with a higher skill ceiling.
JOËL: Correct. And depending on how the character is set up, you might have a very narrow range. That range could sit at the low end, with a low skill floor and a low skill ceiling, which means that the character is easy to learn. But once you've learned it, there's not really a lot you can do with it. It's a fairly basic character. So getting better at the game is not necessarily going to make you that much more impactful.

And then you could have one that's the opposite, where both the skill floor and the skill ceiling are high: the character is very hard to learn, but once you learn it, that's kind of all there is to it. And then you might have one that has a large range somewhere; maybe it's easy to learn, but it's hard to master, so there's a lot of room for growth.
And so, taking this framework for analyzing characters and video games, I think we can apply that to technology in general. This could be language design. This could be just API design. And you might say, well, I want this to be very accessible. People can jump in very easily. You might say I want this to be very powerful and have a lot of high-end features that make your power users very happy and very productive.
EEBS: That's interesting. When you first were talking about it, I was actually less thinking about it from a user's perspective of what maybe they could do in the application but potentially from the standpoint of a developer writing the system itself.
One of the pieces I always come back to in software development is that change is inevitable. And so, making something easy to change often pays great benefits down the road. And so I wonder how that fits into this idea of a low skill ceiling or a high skill ceiling in terms of perhaps flexibility or being decoupled such that you can take one idea and easily extend it or easily get more from it than you originally set out to build.
JOËL: There's often a trade-off. So you make something easy to change. It's highly decoupled. But you maybe introduce more indirection to the system. So while it's easier to change one single piece, it's harder to understand the system as a whole.
EEBS: Yeah, that's true. And sometimes, you bake in assumptions that you make about the future, which turn out not to be true.
JOËL: [laughs] Yes, that is definitely something I'm guilty of.
EEBS: I think we all are.
JOËL: One thing that I find interesting is as you evolve the design of an architecture pattern, a system, a whole language, you might want to move one of those if I think of them like two independent sliders on a one-dimensional scale. So maybe you want to move the upper boundary a little bit and say I want a higher skill ceiling for this, but they don't actually move completely independently.
So introducing some advanced features might inadvertently also raise the skill floor. And conversely, making the language super accessible so that it has a low-skill floor, you might have to decide I will not introduce certain features.
EEBS: One thing I wanted to ask you about is, do you view different languages as having different skill floors and ceilings? And, you know, I love Ruby. I know you love Elm. I've played with Elm. It's been a great learning tool for me. How do you view those two languages in terms of skill ceilings and skill floors, in terms of, I guess, what you can do with them?
JOËL: That's a great question. And I think you can definitely apply that to languages. Admittedly, I think you could probably start a lot of flame wars with that.
EEBS: [chuckles] Let's not do that.
JOËL: I wrote an article a while back where I applied that mental model to look at the F# programming language. And there was a debate in that community about certain features to add and whether they would allow advanced programming but potentially at the cost of accessibility to newer members of the community and how to balance those. And so I thought, hey, let's throw this video game metaphor at the problem and talk about it through that lens.
EEBS: That's really cool. Did you draw any conclusions, or was it as a way to start a conversation?
JOËL: It is a way to start a conversation. I don't think there is a single correct or best distribution of your skill ceiling and floor. It has to match the goals you set out for your project. Just like in games, people love to rank which characters are best and not. And sometimes you can show that, in general, this character is better.
But oftentimes, in a balanced game, you can talk about this character being easier to get started with or this character working very well if you're a pro. But the fact that you have a higher or lower skill ceiling or floor doesn't necessarily make the character better or worse.
EEBS: So, this conversation about differing mental models, I think I hadn't realized that there can be so many different types of mental models. And some things that I do in my thinking I haven't classified as a mental model. But now that you bring it up, I think one that I think about fairly often is this idea of two objects that are collaborators and reaching into the internals of one of those objects from the other object.
So A and B are two separate things. And if A reaches into B's bucket and messes with the state of B, I view that as sort of a bad practice. You're not really adhering to maybe the public API that that object is exposing. You're kind of reaching in and going around behind its back and changing some stuff that it may not expect.
JOËL: Would you refer to that maybe as tight coupling?
EEBS: Yeah, it's definitely tight coupling. It's not just tightly coupled; it's almost worse than that. It's almost like going behind somebody's back and making a change without them knowing. And so when I see that in code or when I write code that does that, I have this really intense desire to separate that and to say, no, no, you can't go in and update this record directly in the database. You have to send it a message and say, "Hey, I would like you to be aware of something," and then it goes and changes its own internal state as a response to that.
And so I have this very vivid sort of mental feeling of it being wrong, of it being like, I'm being sneaky, or I'm not being gracious to the person I'm interacting with as though I were one of these objects.
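A tiny Ruby sketch of the contrast Eebs is describing, with made-up Checkout and Inventory objects: the first version reaches into the collaborator's internals, while the second sends it a message and lets it manage its own state.

```ruby
# Reaching into a collaborator's internals: Checkout mutates state that
# Inventory owns, behind its back.
class SneakyCheckout
  def complete(order, inventory)
    order.items.each do |item|
      inventory.counts[item.sku] -= item.quantity
    end
  end
end

# Sending a message instead: Inventory updates its own internal state.
class PoliteCheckout
  def complete(order, inventory)
    order.items.each do |item|
      inventory.reserve(sku: item.sku, quantity: item.quantity)
    end
  end
end

class Inventory
  attr_reader :counts

  def initialize(counts)
    @counts = counts
  end

  def reserve(sku:, quantity:)
    @counts[sku] -= quantity
  end
end
```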
JOËL: That's fascinating. You've practically anthropomorphized these objects.
EEBS: I do. I view them as little people.
JOËL: You describe this interaction as going behind someone's back. That is the thing that I, a person, do to someone else. It's not a function making a direct call. And yet, it's such a strong...we use a social mental model to talk about objects and interactions.
EEBS: Yeah, I almost want them to be friends. And I think that applies to real-life relationships, right? If you have a nice dialogue back and forth, there's an understanding. There's commonality that you can find. But if I were to go do something behind your back without chatting with you about it first, you might not be so happy with me.
JOËL: I'd feel betrayed.
EEBS: Right.
JOËL: I feel like there's probably a really fun conference talk to be done about that. We often use that metaphor; I think when talking about objects sort of subconsciously but making it explicit and just being, hey, let's talk about these objects as if they were people. Why don't we want to do this? Because this one here is betraying the other object there. This one here is being impolite.
EEBS: We could have two people get on stage and talk to each other. And I might then go and reach in your pocket and pull out some change without you knowing, and you might be upset with me.
JOËL: That would be great. Get a little skit going up on stage. Or even if you're artistically inclined, you could probably draw some really fun little characters to illustrate this.
EEBS: That would be really cool. I, unfortunately, don't have the artistic talent to do that.
JOËL: Well, free conference talk idea to all listeners of the podcast. I expect to see this for RailsConf 2023, maybe.
EEBS: I'll be looking for it. So I've shared a mental model that I didn't really know was a mental model. Are there other mental models that you want to share that I may not be thinking of?
JOËL: Here's one that I've just come to realize recently that I'm actually quite excited about: when you think about the word refactoring, how would you describe that idea?
EEBS: Well, refactoring to me is changing the implementation without changing the behavior.
JOËL: Yes, I think that is the classical definition. You should be able to change the implementation of a method without changing the tests, and they are still green after you've done that.
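A contrived example of that definition, assuming a made-up total_price method: the spec stays the same and stays green while the implementation underneath it changes.

```ruby
# The test pins down the behavior...
RSpec.describe "#total_price" do
  it "sums the line item prices" do
    expect(total_price([10, 20, 5])).to eq(35)
  end
end

# ...the original implementation...
def total_price(prices)
  total = 0
  prices.each { |price| total += price }
  total
end

# ...and the refactored one: same inputs, same outputs, same green test.
def total_price(prices)
  prices.sum
end
```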
EEBS: I guess, mentally, I think about that as perhaps drawing a box around some of the objects that are floating in my mind's eye, rearranging how they exist within that box, and then the box dissolves. And the tests still pass, but the structure of the objects or the code has changed in my mind.
JOËL: I love that you immediately went to a visual approach there. And I think I have something similar, but I'm coming at it from a slightly more domain modeling perspective. So thinking maybe less from an individual method approach but looking at maybe a larger system, what you're trying to do is use code to describe some version of reality. So it might be a business process that you have. It might be trying to describe some aspect of your customer's life that you're trying to automate for them.
Oftentimes, this thing you're trying to describe in code terms is going to be a simplification because life has a ton of edge cases, and many of them we don't care about. So if we go with a visual metaphor here, you're trying to draw some kind of shape using only straight lines to approximate some weird curve.
And so, let's say you draw something with only four lines. It's really simple, how you have a diamond. That's the shape you're trying to create. And then you're going to fill it in with little other shapes that approximate a diamond. And those are your different models and functions and all the other components that we use to build software.
At some point, your understanding of the underlying reality might change. Maybe you need more precision, or maybe the actual feature requirements have changed. The thing you're trying to approximate with your code is not a diamond anymore. Maybe you've added a few more sides to it. It's a pentagon. So we've gone from four sides to five. And the little components, and modules, and things that you have there were built to approximate that diamond.
They still mostly approximate your pentagon, but it's really clunky because the initial design was to approximate something else. They were really good for fitting in really tightly and being very loosely coupled to each other when we were trying to do a diamond, but then they don't work as well in the pentagon.
EEBS: So maybe some of the internal shapes need to change or adjust to fill the space that the pentagon has now created.
JOËL: To fill the space or maybe even just to fill it in a way that's less clunky. And so the idea here in this metaphor is that the reality we're targeting in software is always changing. And so the underlying reality changes, and so we're changing that shape that we're creating all the time.
But also, we're getting more precision as we decide; oh, we care about this edge case now. We didn't in version one, and so as part of that, we're constantly having to take the modules that maybe were very well designed initially but then restructure them to fit the new requirements because now there's a fourth object coming in, and it's kind of clunky with our current configuration.
EEBS: That's interesting. One of the first things that jumps to mind is that maybe there are better ways or worse ways to do that refactoring to fit that new shape. Do you think there's any truth to that in the sense that you might initially design a system that perfectly fits that diamond or very closely fits that diamond but then as it changes to a pentagon, do you need to simply add a new piece to fill in that empty space? Or do you need to restructure everything within the diamond now to fit the shape of the pentagon?
JOËL: Oftentimes, you do need to restructure. And I think there's this wonderful little phrase from; I believe it's Kent Beck that says, "Make the change easy, and then make the easy change."
EEBS: Yep.
JOËL: And so, to me, that makes the change easy is that initial restructuring that you need to do of those first shapes so that you can finally bring in the new one.
EEBS: Oh, that's a cool visual. I immediately can imagine the pieces in the pentagon moving around to make space for a new piece that you need to now bring in. And that movement of all those pieces can be really difficult.
Have you ever played that game where it's a square, and you're trying to get a ship out of a port, but there's a whole bunch of other ships, and you can only move them left and right and up and down? And you can do that. And that's what I'm picturing right now is moving shapes within that pentagon to then make space for either a new shape or to allow a shape to escape that is no longer relevant.
JOËL: I played a version of that that had cars and trucks.
EEBS: Gotcha. Yeah, I think I played that too.
JOËL: That would also be a fun conference talk, right? Like, start with that game as your initial metaphor. And then you use that as a way to talk about refactoring.
EEBS: That would be really cool.
JOËL: I would watch that talk. To anybody listening who wants to give that talk, I want to see you at RailsConf 2023.
EEBS: [laughs] Are we just a talk factory now?
[laughter]
JOËL: I love talk ideas. Maybe this should become a segment. Just have Eebs come in for five minutes once a month and give us a talk idea. It could even be fun to see a talk idea that multiple people implemented differently.
EEBS: That would be really cool, actually. I always get nervous about giving talks or being on podcasts like this one. I would love to be the person that gets to sit there and throw out random ideas and have other people fulfill my dreams.
JOËL: Well, thank you so much, Eebs, for joining us to talk about mental models. And to all of our listeners, I'd love to hear about what mental models you find are helpful, and so please share them with us. On Twitter, you can reach us at @_bikeshed.
EEBS: Thanks for having me, Joël. This has been super fun.
JOËL: And on that note, let's wrap up. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at [email protected] via email.
Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
As developers, we care a lot about code quality. How do we know how good is good enough? When do we stop improving code? Alternatively, when working on code that's really bad, how much do you improve it before calling it a day? thoughtbot's Stephanie Minn joins Joël to chat about this, as well as case expressions, which we recently discussed as part of thoughtbot's Ruby Science reading group. Are case expressions bad? Are they equivalent to multi-way conditionals? When do you use polymorphism?
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
RubyConf 2022
RubyConf Mini
Stephanie's talk at RubyConf 2021
WNB.rb
Joël's RailsConf 2022 talk
Ruby Science
older episode on wizards
TCR
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Joël Quenneville. And we're here to share a little bit about what we've learned along the way. Today, I'm joined by a fellow thoughtboter, Stephanie Minn.
STEPHANIE: Hi, Joël.
JOËL: Welcome to the show.
STEPHANIE: Thanks. Happy to be here.
JOËL: Stephanie, what's new in your world?
STEPHANIE: Thanks for asking. I've been working on writing a CFP for RubyConf, which you have been plugging internally at thoughtbot. I wasn't really sure if I wanted to do it, and then I found out about RubyConf Mini, which is happening as an alternative to the main conference in Houston. And that got me really excited to have some more options and just got me thinking about what I might have sitting on the back-burner that I might want to give a talk about.
JOËL: That's really exciting. I'm curious, what is your process for coming up with an idea for a talk?
STEPHANIE: I think they come in seasons, ideas, for me, so not even necessarily when it's conference time. But if something has been sticking in my brain for a really long time, especially as it relates to processes on teams that I'm on or my day-to-day workflow, and it's something that I keep coming back to and trying to figure out what it is about it that my brain wants to work through a process, that usually tips me off that this might be something that other people are thinking about or working through.
In this case, I am planning to write a CFP about pair programming. And yeah, it's something that I've been thinking about doing for a while, and it seems kind of an evergreen topic. So I thought I would pull it out for this conference. And if it doesn't end up getting accepted, I can always resubmit again in the future.
JOËL: I love that. So it sounds like you have a note or maybe an actual written notepad somewhere where you just, over the course of the year, build up ideas, and then you take a look at that when conference time comes around.
STEPHANIE: Yeah, that's about it. I have just a very long-running note of half-formed thoughts. And then, when I give myself time to really reflect on how things have been going at work, I usually revisit it, and if any of them still resonate or stand out to me, I will go through and try to see if there's any content to come out of that. What about you? I know you are an extensive note-taker and ideas blogger.
JOËL: I have to say I really like your approach of gathering ideas throughout the year. I've worked with many people who would love to give a talk at a conference as a professional goal but then get stuck in the I don't have any ideas. I don't know what I would talk about. And most people have a thing they could talk about. They just don't know it.
And it sounds like you've done a really great job of gathering this info throughout the year so that when the time does come, you don't just freeze. You're like, no, here are the 10-20 things that I experienced or that I am an expert in or that I would love to share. And maybe there are two or three in there that would be very well-fitted for the conference you're looking at. So I love that idea. I have not done that myself personally, but maybe I should start doing that.
STEPHANIE: What about you, Joël? What's up in your world?
JOËL: So I've also been thinking a lot about RubyConf coming up, working on a few ideas for myself. And then, there are a few people that have reached out to me to help them craft ideas or get a little bit of feedback on their proposal. So I've been doing a lot of proposal reviews as well.
STEPHANIE: What do you enjoy about reviewing other people's proposals?
JOËL: I think for many people speaking at a conference is a really big, ambitious professional goal, and so helping people achieve that is really fulfilling for me. Some people might feel almost inadequate or unprepared. But because I know them, I know they've got good things to share.
And so it's almost seeing the greatness in them that they don't quite see yet or that they don't feel confident about. And so being able to see that in their proposal and say, "Oh, there’s a core of a great idea right here, tweak it a little bit, and that'll give you a slightly better chance with the committee and help you towards that path of being on stage for the first time," is really exciting.
STEPHANIE: I spoke at RubyConf last year, in 2021, virtually. And I remember that was my first time speaking at a conference. And I was worried that my talk was not super hardcore, technical enough. But my goal for my talk was to aim it towards other developers like me who are maybe mid-level and wanting to reach this whole audience of people who are attending these conferences to learn and to level up who aren't necessarily super senior experienced developers.
And it was a really great experience. People seemed to really resonate with that. So I really encourage folks to speak about things that are resonating with them at whatever point in their careers because there are so many people out there who are probably in the same boat and want to hear what you have to say.
JOËL: Absolutely. I'm curious, now that you have experienced the full cycle at least once, from ideas to crafting a submission, to getting accepted, preparing a talk, delivering the talk, and then recovering from that, what are maybe some lessons learned or some things you weren't expecting the first time you went into that that now you do know going into another cycle?
STEPHANIE: Yeah, the power of community; I had a lot of support from WNB.rb, a woman and non-binary Ruby community. We crafted our CFPs together and then practiced our talks together and had a working group that met every couple of weeks to give feedback on our talks as we were working on them. And it was really awesome to have that accountability, to have that support, people to tell you that your talk is good and give you a thumbs-up.
And I really want to continue investing in my community that way. And I really appreciate you asking this question because I guess I do have things I've learned and would want to share that with other people in my community, and yeah, just continue to encourage folks who may not have been traditionally encouraged to speak at conferences.
JOËL: Community is so powerful. Even though I've spoken at multiple conferences, I still get nervous about my talks. I have a lot of self-doubt about whether my topic is good, whether I'm sharing it in a way that's going to be impactful. And I had a magical experience at RailsConf this year where a group of us were at a hotel lobby practicing our talks the night before. And I was just still so unsure about my talk.
And the feedback that I got there gave me a huge boost of confidence that I was able to ride into the next day and give a talk that I think turned out rather well. But honestly, that was my favorite moment of the conference was 11:30 p.m., a group of people in the hotel lobby taking turns practicing their talks.
STEPHANIE: Yeah, I love that.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: One thing we've been doing recently at thoughtbot is an every-other-week book club discussion, and we've been looking through the book Ruby Science, published by thoughtbot. Every week, we'll have someone who is a facilitator, who has done the reading and has prepared some questions for the group. And you recently facilitated a session on the topic of CASE expressions and why they might be a code smell. That sounds like a really controversial statement. How did you approach that topic?
STEPHANIE: It's funny because I was looking at the upcoming chapters to pick a topic to facilitate for this book club, and CASE statements stood out to me because I was like, oh, I know what that is; that will be easy. [laughs] But it turned out to be a bit meatier than I thought it was going to be. I'd say that I didn't really consider them a code smell until I read the chapter of Ruby Science talking about them. So I was a bit surprised because they seem so common, which is probably also why I thought it would be an easy topic.
JOËL: Would you say that reading the book or that particular chapter changed your mind?
STEPHANIE: I think it did only because I hadn't necessarily given them a second thought or thought of them more deeply in that way. I think that, at least in my experience, you encounter CASE expressions pretty early in your career, and you think they're a cool tool for making your conditionals look a bit nicer.
And it takes probably a bit more experience, a little bit more pain using them or trying to extend them that you start to have a bit of a more higher level awareness of what might be problematic about a CASE expression. But the book club that I facilitated, we had a really engaging discussion where most folks agreed that it was a code smell but also said that it depends.
JOËL: Classic consultants.
STEPHANIE: Truly. One thing that someone said that was a really nice takeaway for me was that CASE expressions get a bad rap in object-oriented languages because there are typically other tools or options you can reach for that might be preferred. And someone else said that it's probably a sign that you might be doing too much in the method.
JOËL: Hmm. You mentioned that there are some tools and things that might be preferred over CASE expressions. What are the common alternatives that people say you should use instead of a CASE expression?
STEPHANIE: I think one simple solution that we discussed in the book club for more straightforward cases would be a hash lookup to use instead of checking for equality via CASE statement. Another solution we talked about was polymorphism, which I think might refer back to the idea of having a bit of a higher level understanding of abstractions in the codebase and what things might look like in the future, especially when you might not have too many conditionals yet in your CASE expression.
Another thing that really stuck out to me in our book club discussion was another thoughtboter mentioned Sandi Metz’s 99 Bottles of OOP. And in that initial solution, she presents a CASE statement as the perfectly fine solution for now. And I thought that was really interesting because, in some cases, that might be all we know about the problem, and that is perfectly fine. What do you think about that?
JOËL: I love the idea of starting simple. Don't try to start with abstraction, especially if you're doing test-driven development. You have a test, make it go green, use the simplest thing, use duplication, use all the dirty tricks. And then from there, now that you know I have a test that was red and this code makes it go green, now the question isn't how do I solve the problem? It's how do I improve the solution? So we've kind of separated those two steps out, and I really like that.
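As a rough illustration of the hash-lookup alternative Stephanie mentions, with a hypothetical method that maps an order status to a label: both versions do the same equality check, but the hash version is data you can extend without adding branches.

```ruby
# A CASE expression checking for equality...
def status_label(status)
  case status
  when :pending   then "Awaiting payment"
  when :shipped   then "On its way"
  when :delivered then "Delivered"
  else "Unknown"
  end
end

# ...and the equivalent hash lookup.
STATUS_LABELS = {
  pending:   "Awaiting payment",
  shipped:   "On its way",
  delivered: "Delivered",
}.freeze

def status_label(status)
  STATUS_LABELS.fetch(status, "Unknown")
end
```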
STEPHANIE: Yeah, I remember you mentioned that you had refactored something into using a CASE statement. And I'm curious if you want to share more about that.
JOËL: Oh boy, this was a "fun" problem. Fun is in air quotes, by the way. It was a multi-step form, AKA a wizard in a Rails app, where every step submitted to the same controller, and the controller was a huge mess. It had to handle submissions from four or five different forms, some of which shared fields.
And it was this huge, deeply nested conditional thing that checked if this field is present in params, that probably means we're on step three, except if this other field is also present, then it probably means we're on step four. But if this Boolean flag in the database is set to false, we might be on a variation of step three. And because there was branching, potentially, it was an absolute mess. By looking at the code, you could never know what step of the processing you were on.
What I did is instead of all of these nested if else conditions, I wrote a flat CASE expression that just said, if step one, do step one logic or process step one form. If step two, process step two form and so on. So it was nice and flat. I was able to reuse some parts of the work across by making private methods or other objects, things like that.
But you could easily tell in any part of the code what step you were processing. Which means if you get a new feature from the client that says, "Can you modify the behavior on step four?" Now you actually know where to go to. You go to this controller; you find the big CASE expression. You find the branch that says, "If step four," and then you drill down from there.
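A rough sketch of the shape Joël is describing, with hypothetical step names: one flat CASE expression on the current step, each branch delegating to a private method, instead of inferring the step from which params happen to be present.

```ruby
# Hypothetical wizard controller: the step is explicit, and each branch is a
# straight line to the code that handles that step's form.
class SignupWizardController < ApplicationController
  def update
    case params[:step]
    when "account" then process_account_form
    when "profile" then process_profile_form
    when "billing" then process_billing_form
    when "confirm" then process_confirmation
    else
      head :unprocessable_entity
    end
  end

  private

  def process_account_form
    # ...create the user, then redirect to the next step
  end

  # process_profile_form, process_billing_form, and process_confirmation
  # follow the same pattern.
end
```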
STEPHANIE: So after refactoring that into a flat structure, did you find the code more readable? Did other folks on the project think so too?
JOËL: Yes, it was unanimously loved because this is the part of the code that everybody feared to touch. It was the most awful, gnarliest code. And a few of us had touched it, and so if you did the git blame, our names would show up, which meant that anytime anybody got stuck in that code, they would reach out to us and say, "Hey, you're the last one who touched it. Can you fix this for me?" And it was a big game of not it. Cleaning it up made this code accessible to everybody on the team.
STEPHANIE: And why do you think a CASE statement was the right solution in this particular case?
JOËL: I think if it's a multi-step form that maybe had seven steps in it, you clearly have seven branches that you're working with. And so hiding that behind nested conditions where you try to reuse each other's branches just muddied the waters. We have a seven-way branching path; let's be honest about it upfront and do a seven-way CASE expression.
STEPHANIE: One thing that we didn't really talk about as much in our book club discussion was that CASE statements can be quite readable, especially for newer developers. And even though we all did think it was a bit of a code smell, I recently encountered on my client project in a code review someone saying that they preferred a CASE statement in that situation because it was easier for them to grok. I think that's a benefit worth considering before trying to do something fancier in some cases. And I'm curious what you think about that.
JOËL: I strongly agree that a CASE expression is a great place to start, especially when you have actually more than two branches. Your logic could go one of n ways. I generally like to branch earlier than later in a lot of code. It's better in my mind to have a seven-way branch at the top of your decision tree and then just straight lines down than this constantly looping back and branching again and looping back, trying to force everything down a single path when it really doesn't want to be.
STEPHANIE: So, in this case, did you know that you had those seven branching paths upfront, or did you have to tease that out?
JOËL: I did not know from the code. Honestly, it would be very difficult to infer that from the code. But from the product, I knew this is a multi-step form with seven steps. And so I knew what the branches were from the product description. But no, it was almost impossible to infer that from the code.
Long-time listeners of The Bike Shed may remember an older episode where Steph and Chris discussed multi-step forms and how best to approach them in Rails and also in JavaScript. And one thing that did come up is that an ideal way to work with a multi-step form in Rails is to have every step be its own controller. So you have a view, it submits to a controller, which renders another view or redirects to another view which submits to another controller. And that is the direction that we went with this multi-step form eventually.
Once the single controller had a big CASE expression in it, we slowly started moving each branch out to its own controller. And now we had the step one controller, the step two controller, the step three controller, and so on. And I think that was probably the best solution in the end. But we had to go through the CASE expression just to know what was safe to move out.
Interestingly, this refactor is effectively replacing a conditional with polymorphism because all of our controllers are controller objects. They respond to the same interface. And so this is the classic refactor that Ruby Science suggests, which is what we did and kind of what Steph and Chris recommended, if you had the luxury of starting from scratch, all those episodes ago.
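In routing terms, the per-step version Joël lands on might look something like the sketch below (step names hypothetical): each step gets its own controller responding to the same new/create interface, so the branching lives in the router instead of one big conditional.

```ruby
# config/routes.rb -- each wizard step routes to its own controller.
Rails.application.routes.draw do
  namespace :signup do
    resource :account, only: [:new, :create]
    resource :profile, only: [:new, :create]
    resource :billing, only: [:new, :create]
  end
end

# app/controllers/signup/profiles_controller.rb
module Signup
  class ProfilesController < ApplicationController
    def new
      # render this step's form
    end

    def create
      # save this step's fields, then hand off to the next step
      redirect_to new_signup_billing_path
    end
  end
end
```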
STEPHANIE: Nice, I'm glad it turned out that way and was a lot more manageable.
JOËL: It's really interesting when you're working with a situation like that where you've got really messy code, and you can make some improvements. And it's like, how far do you go? Especially because there's usually a backlog of new features that the customer wants you to implement.
So I'm curious for you, Stephanie, how do you know when you've gone far enough in improving code, either in a refactor step for your own code for a feature you're writing or maybe you're trying to take a break and say I'm going to take a little bit of time today to improve this area of the code. How do you know how good is good enough?
STEPHANIE: That's an interesting question. I think I encounter that in a couple of ways, either in my own work when I am tasked with a feature, and I start getting into the code, and it stresses me out and leaves me a bit confused and not sure where to go to work on my feature. That is usually a signal that I might need to pay some attention first and make the change easy, and then make the easy change.
The other common thing that I have experienced on teams is we collectively feel the pain of an area of the codebase. And maybe we talk about it at a developer meeting, and all agree that, yeah, we really want to give this part of the code some love and add it to the backlog. But it's tough in that case because, like you said, there are a lot of new features that stakeholders want. And we as developers want to be over here taking care of our little codebase [laughs], making sure that it is healthy and it feels good to work with.
JOËL: I feel like I don't just want my codebase healthy; I want it pristine.
STEPHANIE: Ooh, pristine. What does that mean to you?
JOËL: I want it perfect, nice, and shiny. And, of course, it's never that, which is why it's always tempting to toss out the old code and start over and do it right this time. That's not a good thing. You have to be able to live with the messiness of everyday life and the fact that, okay, here's an idea. I think if your codebase is perfect, you've put too much work into it. You've gone too far, and you're beyond that; when is good enough good enough?
STEPHANIE: Whoa, that's a big statement.
JOËL: [laughs] Feel free to disagree with me here.
STEPHANIE: I guess I'm curious what perfect is in this case.
JOËL: I think it's subjective for the developers who are writing it. But oftentimes, the ones who are looking for perfection go way too far in their quest for that.
STEPHANIE: Way too far at the expense of things like business value or other things?
JOËL: I think in two ways; one, you probably ended up overengineering things to try to make it so perfect. Your design needs to have some amount of flex in it for the unknown. It's okay to have some rough corners because it's going to change. And you're going to have to redo that corner next week anyway. So you need to not go all the way in making everything absolutely perfect.
The other thing is that if you are putting in the effort to make everything perfect, at some point, you hit diminishing returns. And that's not worth your time from a business perspective or even on a personal project where you're just trying to ship things. At some point, you need to make actual progress.
STEPHANIE: I'm curious if you mainly hold yourself to those very high standards or if you also think about that when reviewing other people's code.
JOËL: So I mentioned earlier that I want my code to be pristine. I want to be clear, that's a bad thing. I do not actually hold myself to that standard. And I try not to hold other people to that standard, either. It's sort of tempering idealism with pragmatism. So being able to say, look, can we cut scope and focus on just one thing? Or does this fulfill the need that we have? And will it hurt us if we leave it like this and come back to it later? Or that question I asked you at the beginning, is this good enough? And maybe we can come back to it eventually.
STEPHANIE: I really struggle with that question sometimes because, in some ways, people talk about software as a craft. And if we were building it in a vacuum, we could fine-tune and hone it until it's this beautiful, perfect, pristine thing. But because we write software for real things in the real world, we are constrained by the needs of our users, or the business, or just the purpose of building software is for folks to use it. And in that case, part of the job is evaluating trade-offs and deciding when is good enough.
But sometimes, when I'm by myself working or coding for a little while, I do get sucked into wanting to make this the best that it could be just for my own personal fulfillment and joy. And I have to pull myself out of it sometimes and take a step back and be like, is this good enough for now, good enough for other people to be able to understand, work with in the future? And sometimes, it also requires getting other people's input too.
JOËL: That's really valuable.
STEPHANIE: Yeah, the worst thing that could happen is squirreling away with your code, and then you emerge with something that was totally not what was asked for. [laughs]
JOËL: Especially on a work project. On a personal project, it's often good to know why you're doing a thing. And so maybe you want to see how far you can get away with pushing a particular metric, whether it's you want to go extreme on the decoupling or 100% TDD, or maybe you want to try something like test && commit || revert which is a development methodology. And those are all great as learning experiments, and then you go as deep as you want.
I'm going to make another hot take here, and again, feel free to tell me I'm wrong. I'm going to say that on your own personal projects, even when it's not for work, pursuing perfection dooms them to join the others on the pile of uncompleted projects on your GitHub.
STEPHANIE: That is an interesting take. I think it depends on the goal of your personal project because I personally like to have my projects be a bit of a sandbox, and I have no expectations that they will end up being anything that other people would necessarily look at, even though I guess they just end up public on my GitHub and are just sitting there in a weird, unfinished state.
But yeah, I like to use them as an opportunity to, like you said, practice those concepts that I am really excited to explore but might not necessarily have the opportunity to on whatever client project I'm currently working on. And sometimes, I end up just scrapping it, but the exercise itself was valuable for me. I'm curious, though, what types of personal projects you have that lead you to have that opinion.
JOËL: I think the way I use personal projects is very similar to you in that they're generally for my own personal growth and entertainment. It's about the journey, not the destination. So I generally have no intention of making this a thing that other people will use. It's typically a way to try out a technique, or a concept, or an idea. And so, for those, going really far on a performance or quality metric can be the goal. And that's completely okay with the knowledge that I probably will not complete or ship this project.
I've done a few others where I've done the opposite. I've joined Game Jam events where you typically have a hard deadline. This could be a longer one, like maybe a month or as short as a day or a weekend, and you have to build and ship something within that deadline. And then you have to really make some pragmatic choices.
STEPHANIE: Yeah, that sounds like a lot of pressure. I don't know if I would necessarily thrive in that kind of environment. I really like to spend a lot of time thinking about my code and looking over it again, sometimes to the point where I might be a little bit too precious about it.
I was reflecting on this recently, and I thought back to when I was earlier in my career and didn't have any idea of what clean or good code was or looked like. And I would just write the code that would make my feature work and just put it up for review. And I was very blissfully naive, I think, at that point in my career, where I wasn't self-conscious about it in any way.
And I think I'm trying to find a good middle ground between being comfortable with whatever comes out when I do some work or write some code while also having more knowledge and experience being able to revisit it and give it a deeper look after some space and feeling good about it without spending too much precious time on it.
JOËL: Yeah, it's that classic consulting; it depends. Learn to balance code quality idealism versus the pragmatic reality of your goal, which is I want to ship something, both on your personal project and at work. That perfect code is useless if you can't ship it for contexts where you actually care about shipping.
And on that note, let's wrap up. The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me at @joelquen on Twitter or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeeeee!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
It's Joël's first episode as host of The Bike Shed! 👋
Joël has fellow thoughtbotter Steve Polito join him to talk about the benefits and drawbacks of "learning in public" and how there are many, many different ways to do it.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Transcript:
JOËL: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot. I'm Joël Quenneville. And I'm joined today by fellow thoughtboter, Steve Polito.
STEVE: Hey, Joël. Thanks for having me; excited to be here. And congrats again on the new hosting gig.
JOËL: It's exciting to record with my first guest, and I'm excited that you get to be a part of this. And together, Steve and I are here to share a little bit of what we've learned along the way. So, Steve, what's new in your world?
STEVE: Well, on the professional side of things, I've been working on a Rails backend application that connects to a React frontend. And specifically, it's in the healthcare space. My biggest, I guess, struggle, but also [laughs] the thing that I've learned the most from on this project, is that working with an inconsistent API can be very challenging. And that's been the consistent theme with this project.
But from that, I'm learning a lot. Because prior to this, I'd done a lot of work with just traditional CRUD apps where it's just all-encompassed in the Rails application. So you're kind of in control of everything. This is my first time where we're very much dependent on a third-party API. I'm learning a lot, but it can be challenging at times. But we've gotten in a space now where it's a lot more predictable, and therefore, working with it is easier. That's on the professional side of things.
Personal side of things, I, for whatever reason, decided to run a marathon in September, which means all of the training has to happen in the heat of the summer. I live in New England, and it's been unbearably hot the past two or three weeks, which means training has been unbearable. [laughs]
JOËL: Ouch.
STEVE: Well, I mean, that's what I get for signing up for a marathon at the end of the summer. So I look forward to just working with the unreliable API versus [laughs] doing this right now. That's what's going on with me. What's going on in your world, Joël?
JOËL: For long-time listeners of the podcast, they'll know that former host, Steph Viccari, has been working on a slow test suite. And part of that work has been converting some old Test::Unit tests over to RSpec. I've been working on the same project with her, and so the saga continues.
One of the really frustrating aspects of this work has been the Test::Unit tests rely a lot on fixtures which are just full of mystery guests. The fixtures that are just loaded at the top of the file refer to a few thousand records in the database, most of which are not relevant to the test that I'm trying to convert over.
The problem is that I don't know which ten records, you know, which two users out of the 100 defined in the fixture file are relevant. They're not referenced directly anywhere in the test. But if the RSpec conversion that I do fails, it will break because some user is not present in the database. And so I need to reverse engineer the code and figure out what is missing, which user record is just assumed to be in the database.
STEVE: Yeah, that sounds frustrating. Honestly, until working at thoughtbot, I didn't quite understand the concept of mystery guest. Because when I learned Rails, I just did the Michael Hartl Rails Tutorial, which, in an effort to make it as easy as possible, he just kind of does vanilla Rails. So there's no Factory Bot or RSpec, for example, and it's all fixtures. And it works very well for teaching you how to build and test an application without getting bogged down with too many of the extra things that come along with that.
So I always thought, okay, fixtures, cool, no big deal. Why does everyone always use Factory Bot? [laughs] Like, what problem is this solving? And I'm realizing now, because I've run into this too, that one of the issues it's solving is this mystery guest problem. And that's just one of those things that I didn't really appreciate until I would run up against it in a similar situation you're describing now where you're writing a test. It might even be a very simple test, right? Like five lines or something, and you're just expecting something trivial to happen.
And it's failing, and it's failing for the wrong reason. The message is so cryptic. And you're just like, what is this thing talking about? It's like referencing something that has nothing to do with the test. And that pain right there is the pain of a mystery guest. I just didn't have a name for it until listening to these episodes. And now I can appreciate why you want to avoid that type of stuff and also why Factory Bot is helpful for that.
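For anyone who hasn't hit this yet, a contrived before-and-after (model, fixture, and factory names all made up): the fixture-based test silently depends on records defined in a YAML file somewhere else, while the Factory Bot version creates exactly the data it needs.

```ruby
# Fixture-based: this only passes because test/fixtures/users.yml happens to
# define two active users somewhere else entirely -- the mystery guests.
class ReportTest < ActiveSupport::TestCase
  test "counts active users" do
    assert_equal 2, Report.new.active_user_count
  end
end

# Factory-based: the setup is explicit, so nothing is hiding off-screen.
# Assumes a :user factory with :active and :inactive traits.
RSpec.describe Report do
  it "counts active users" do
    create_list(:user, 2, :active)
    create(:user, :inactive)

    expect(Report.new.active_user_count).to eq(2)
  end
end
```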
JOËL: I think it's the kind of pain that tends to bite you more when you're modifying the tests later on than when you're writing them upfront. And since now, with the work that I'm currently doing, it's all modifying existing tests, I'm feeling this pain on a daily basis. Does that track with your experience as well?
STEVE: Yeah, pretty much. And then what ends up happening is you're working on a feature, and the test fails for the wrong reason. And then you realize 30 other tests fail for the wrong reason. And then, before you know it, you've spent four hours going down a rabbit hole to clean up the fixtures, or the mystery guests, or the implied setup that might be shared across other tests. It's just such a momentum killer, first of all.
You're in this headspace of like, okay, here's the feature I'm working on. Let's just bang this out real quick; no big deal. I want to go to lunch in a couple of minutes. [laughs] And then you're trying to fix this test because it's failing for the wrong reason. And then you keep pulling the string, and you're like, oh, okay, well, there must be a mystery guest or something, but that took like 20 minutes to figure out.
And then you figure that part out but then maybe fixing that mystery guest involves either updating that particular fixture, which could then fan out and cause other tests to fail because it depended on that fixture to have certain properties. Or you have to create a new fixture. But if you create a new fixture, there's now an extra record in the database. And that could break other tests because they are maybe expecting there to be a certain set of users and other things.
But that's just one of those things that early on, I would listen to episodes like this or hear about mystery guests, and I would be like, I just don't get what they're talking about. If you have a few fixtures, how is that so hard to keep straight in your head? And sure, at first, it's not a big deal if you have maybe two fixtures or something.
But then it quickly just reaches an inflection point where either there's more than one person on your team, or you have to add more fixtures or whatever. And then it just reaches an inflection point where it's just not sustainable anymore. And that sounds like that's obviously the point at which this project is at, and that's where you're trying to rein it back in.
JOËL: Yes. So it's definitely making the conversion from Test::Unit over to RSpec more difficult. I've been trying something a little bit clever to try to figure out what data is actually needed because that's my core problem. I have a Test::Unit test that doesn't define any initial setup data. It just assumes that data has been created by fixtures at some point. But there are thousands of records in the database. So which ones do I need to port over to this setup phase of my RSpec test?
What I've been doing is hooking into Active Support notification and watching the records that get read from the database from the Test::Unit tests. And that can tell me, oh, it's these ten that this particular test is using. Those are the ones you're going to need to convert over to your setup block.
STEVE: That's clever. I like that. So you were just looking at the logs essentially. Or did you have to do any puts statements or anything? Or was it just the default internal logging mechanism that Rails has under the hood?
JOËL: The simple version of this would be to look at the logs, so tail the test log file. I'm trying to be a little bit fancier and hooking into Active Support notification, which is something built into Rails that allows you to just listen to certain events in the system and then do actions based off those events. So I can subscribe to any database read and then say call this block when a database read happens. And in that block, I can then update a stats object that, over the course of the test, will then tell me what objects have been read from the database.
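A minimal sketch of that subscription, assuming we only care about reads: "sql.active_record" is the event Rails publishes for every query, and the stats object here is just a hash keyed by table name.

```ruby
# Track which tables get read from while a test runs, to spot mystery guests.
reads = Hash.new(0)

ActiveSupport::Notifications.subscribe("sql.active_record") do |_name, _start, _finish, _id, payload|
  sql = payload[:sql]
  next unless sql.match?(/\ASELECT/i)

  # Very rough table extraction; good enough for finding implicit fixtures.
  if (table = sql[/FROM\s+"?(\w+)"?/i, 1])
    reads[table] += 1
  end
end

# ...run the legacy Test::Unit test here...

pp reads # => e.g. {"users" => 12, "orders" => 3}
```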
STEVE: Oh, okay. That's clever. I'm glad you shared that too. Based on this discussion of mystery guests, I feel like that's just a good use for that tool. I almost wonder if there's an opportunity not even to abstract that because that'd be too much work and overkill, but just, I don't know, make like a gist or something and just reference that for the future.
Because I feel like this is the type of thing that other people are going to run up against...or maybe even a blog post or something because it's just like, if nothing else, it would be good for future Joël to be like, how did I do that again? Oh, yeah, here it is. Like, it's in this blog post I wrote six months ago. So I could just copy and paste the code snippet and call it a day.
JOËL: It's funny you mention blog posts because we have a lot of these conversations internally at thoughtbot. And without fail, someone will eventually comment, "This is great content. You should turn it into a blog post," to the point where we now have an emoji reaction for you should make this into a blog post.
STEVE: Right. That's what's really maybe special about the software industry is there's just a lot of knowledge share built into it just with open-source software, for example. I mean, that's already a form of knowledge share. It's not a blog post, but it's a form of knowledge share. And I just think getting into a habit of just sharing these little artifacts, big or small, whether it's a blog post or just a code snippet, is really helpful for a variety of reasons.
But one, and in this case, to go back to the issue that you've been facing on the client work is you just explained...We talked about a lot of things; two of them were like, what's the mystery guest? That's helpful for some people to know because until very recently like I said, I didn't even really understand what the pains of mystery guests were. And then, we also talked about a potential solution to that.
So a naive approach is tailing the logs, but then you took it a step further with that clever solution to use that notification object. And if we weren't recording this right now, that might be lost in the ether forever. Maybe the people you're working with on your team would know about it, but that would kind of be it. So I think there's an opportunity for you to maybe abstract that into like a code snippet or blog post or something and just store it away for later so that future Joël or another developer can learn from that.
JOËL: That's a really good point, Steve. Creating public artifacts like that is a form of...I've heard it referred to as learning in public before. And that's actually a topic that I think you've really demonstrated mastery of. You are great at sharing the things you've learned or even the questions you have with your colleagues, and the team at thoughtbot, and the broader developer community. What was your journey into starting to share in public like this? Because I know it can be really intimidating, especially for someone who's early in their career.
STEVE: Yeah, that's a good question. So a very brief background on my career history is I'm not classically trained, so to speak, and I'm doing air quotes for those listening. When I say, I'm not classically trained, what I mean is I went to school for graphic design. And there was some overlap with web design, obviously.
But I ended my collegiate career really just knowing how to use Dreamweaver and knowing a little bit about HTML and CSS and barely anything about JavaScript, and I didn't know anything about server-side languages. That was my base. And I was fortunate enough to get a job at a small WordPress agency. I got really good at understanding WordPress and how to configure a website and then making it look like the Photoshop document.
JOËL: There's a shocking amount of the web that runs on WordPress.
STEVE: It's a huge amount. And what's nice is that it doesn't...as you just heard, I didn't have a lot of experience with making production websites. So WordPress made it easy enough for me to get my feet wet. But I would run into a lot of problems. And I was the only developer at this agency, so I couldn't turn to my co-worker and say, "Hey, can you take a look at this real quick?" It was just me and Stack Overflow. That was it.
The reason I'm saying this is because Stack Overflow and being the only person at the agency forced me to learn in public but from a different mindset. I wasn't necessarily learning in public; I was desperately trying to solve a problem by the end of the day. And it just happened to be in public because I would have to either go on Google or Stack Overflow or forums to find the answer.
JOËL: So were you asking questions on these sites then? Like, you were going into a chat room and asking questions or going to Stack Overflow and asking questions.
STEVE: Yeah. If I couldn't find a solution quickly, I would just go on there and shamelessly ask questions, which were, in some sense, naive questions. Looking back at them now, it clearly highlighted that I didn't understand the fundamentals, but that's okay because I know I didn't. [laughs] And I'm sharing that with everyone right now, so it's not like it's a secret. Because that was the only way I was going to figure it out, like I said. And I didn't have anyone at that agency to ask for help.
So that got me into the mindset of just ask for help. But it also got me into a mindset of...one thing was, okay, I can't just paste the entire error message, like, the entire 3,000-line error message from the logs onto Stack Overflow. That's not going to help anybody. No one's going to answer that question. I needed to start to get good at distilling down the problem into its smallest part to then be able to share it, so I would at least incentivize someone to answer it versus pushing them away because who wants to read a non-formatted log file dump?
JOËL: That is a skill in and of itself.
STEVE: Yeah. I mean, it took time, don't get me wrong. And at first, I was posting those [laughs] giant log files. I would just say, "Hey, can you help me?" [laughs] And it's like, there's no context, and it's just 3,000 lines of gibberish. So obviously, I quickly learned, well, I got to make these bite-size. But then, from there, I slowly learned over time thanks to the community, and just the advent of the internet, and searching and everything like that.
But then I got to a point where I was confident enough with the skills I was learning that I wanted to start giving back and if nothing else, it was really just to help future Steve. So when I would run into an issue that I couldn't solve, typically at this point, it was like WordPress or Drupal issues. Once I was able to solve it, I would then write up a blog post with that solution, and they were very simple posts.
And just by chance, they happened to be very search-engine friendly because I would just, like, the title of the post would be basically the error message or how to do X in Drupal. Obviously, as a software developer, no shortage of problems, right? Like, every day, you're going to run into something that you actually just do not know the answer to. So I would just amass dozens of these problems. And if I found one interesting enough, I would post about it. And I just got into a habit of that because, like I said, if nothing else, it helps me for the future.
But then it's also nice to know there's certainly going to be someone else out there who has the same issue. And it's kind of exciting to think someone on the other side of the globe is going to possibly search this thing and maybe land on my website or something, just like I have done countless times where I've put in something into a search engine, and I land on someone's website, typically the thoughtbot website, [laughs] and I read the solution there. So it's exciting to be part of that.
JOËL: Were you ever afraid that somebody else would come along and tell you your solution is wrong?
STEVE: I wasn't necessarily afraid because that comes with the territory. Honestly, fortunately, I've never really had a situation where someone was outright mean or disrespectful. For the most part, I find folks are very helpful. But it does help like I said, if you distill those questions down and make it simple for someone to help you with. But yeah, I mean, that is one of the...I don't want to say risks, but that comes with the territory of learning in public, which is you might face criticism.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all of your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM helps developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps to include modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
JOËL: One thing that I really appreciate with some of the things that you share on social media is kind of like what we say here on the show, sharing a little bit of what we've learned along the way. So you get to follow Steve's journey. And it's like, I'm trying this problem; here is the solution I have so far. It seems to solve some problems, not everything. Tune in tomorrow to hear how the problem keeps developing.
STEVE: Yeah, exactly. That's kind of why I try to make sure I'm giving back at least half as much as I'm taking, so to speak. So in these transactions, like on social media, I'm stumped on something or maybe not even stumped, but I'm just, like you said, exploring an idea. And I want to have it peer-reviewed, so to speak. I mean, in some ways, some of these things are almost like a Twitter code review, right? It's just like, instead of having the formality of doing it on GitHub or whatever, it's just like, here's a snippet, quick gut check here.
And then what's nice is that people are nice enough to respond with what they think. They'll reference other posts or other projects that might touch upon that. And what's nice and what I hope is happening with these exchanges is maybe someone learned a little something about what I just posted. Because I know that I'm certainly learning something from the feedback I get.
And then again, it's almost like code review. You get this nice history of what this idea is about and then different stances on it. And then it just sort of serves as a little bit of a learning tool right there. And then yeah, I'll try to follow up later on, like, hey, here's where I landed now. But maybe it's a little more fleshed out.
JOËL: What are maybe some drawbacks of this concept of learning in public? Are there some reasons you might not want to or not be able to do this?
STEVE: I do think there are...maybe not drawbacks, maybe just risks. An obvious one I can think of is if you're working on proprietary software, for example, you legally probably can't share anything that you're working on. So that makes it challenging because you can't just straight up take a screenshot of your editor [chuckles] and be like, "Hey, look at this cool thing I'm working on today."
That adds a little bit of a roadblock because then you probably have to simplify it or, I don't know, anonymize it in a way that it's just generic. So it just adds a little bit of extra work. But in some ways, that might actually be a good thing because then you've simplified the problem to its purest form, so to speak.
Another drawback we kind of touched upon is you're opening yourself up to criticism, that can be a challenge. Everyone communicates differently. People may not want to use Twitter, for example, to learn in public. They might just want to have a personal blog and do it that way.
JOËL: Turn off the comments.
STEVE: Exactly. Turn off the comments. If you're someone that is hesitant of criticism or just being on social media in general and all that, learning in public can be whatever you want it to be. So what I mean by that is we talked about Twitter and social media. That's kind of an obvious way to learn in public. But another way is you could just have some GitHub repos or your own personal blog, you know, things like that.
What's being implied here is learning in public means, like, public, anyone could access it. But I want to challenge that because public could mean different things in different contexts. So at thoughtbot, for example, we have our dev channel, and people post there all the time. And that's public in a sense, but it's not public to the world. So it's a little more controlled. You know that you're going to get helpful feedback. It's a safe space to do that.
So I would encourage folks listening now that work in an agency, or just work in software development in general, to see if you can create your own dev channel at work or something like that, if you don't already have one, because that's a good way to, I guess, encourage people to learn in public.
JOËL: I love that you're redefining public a little bit here and the idea that public could just mean your team at work or your company. That's a concept that I really like because now maybe it's a little bit less intense to share with them. And it can be something as simple as today I learned. It could be a question about a particular technical thing, or here's the thing I did; it works. Is there a better way to do it?
STEVE: Exactly. If you think about it, code review is a form of learning in public that's built into our day-to-day job because it encompasses a lot of these things. You have to be...I don't want to say ready to take criticism, but it's very common to open up a PR, and you're going to get feedback on it.
The reason I'm hesitant to say criticism is there's a connotation of just criticize me and the person is being rude or something; I don't mean that. I just mean someone who is being critical of your work in the sense that they're making sure it meets the requirement. And it is quote, unquote "good code" given the constraints. So you have to open yourself up to criticism that way.
You're also creating these little artifacts because, in code review, there's going to be a back and forth. Someone might suggest a change; someone else might praise or just give you a shout-out to be like, "Hey, I've never seen this before. I've never seen this method before or this pattern before. This is really neat. It reminds me of something I learned over here." And they might paste a link to something else.
So yeah, code review is a form of learning in public. It's like a very controlled, simplified version of that. And it can also be a good source for learning in public through social media. Because then from that, you get this distilled concept that you can then share to the world or just at work with other people that may not be on your team.
JOËL: thoughtbot has a few, I think, different cultural things that we like to do that all converge on some of these ideas, one being that we have dedicated investment time to try to improve ourselves. Two being that we try to share anything that we create as publicly as possible. So default to making something publicly available unless there's a good reason to keep it private which is the opposite of a lot of companies.
And then finally, bringing that all together, trying to, in the things that we learn, in the work that we do, pull out shareable artifacts. So if you're reading a book, if you're working on a project, is there something tangible you can pull out of it to share back with the team or even the broader world? And that might just be dropping, "Today I learned this," in our dev channel. It might be putting up a little proof of concept repo and publishing it publicly. It might be, as you mentioned earlier, writing a blog post about a cool technique that you found helpful on a project.
So we're constantly trying to find ways to take anything that we've learned and not just make it a personal thing but also try to sort of multiply that to, at the very least, our team but where it makes sense also the broader dev community.
STEVE: Yeah, exactly. I don't know about you, but I feel like there are a lot of similarities with learning in public in their many forms with open-source software. Because open-source software is basically learning in public, right? For folks listening who might be hesitant to start getting in the habit of this, I would just encourage you to look at any popular repository and look at all the open issues.
And what I mean by that is these popular repositories that are used by millions of people, they're not perfect. Like, they didn't get it right on their first try. And you can read the source code and you can see everything about it. And it kind of embodies learning in public in that way. So it opens itself up for criticism but also praise. And then it's also just a resource there where you can learn from it.
There are so many times where I'll open up the Rails, like, I'll just go to the Rails source code, not because I need to but because I'm curious, like, how do they do that particular thing? Like, the other day, we were working on something where we had an object or a class, and we wanted it to have two class methods, one called perform, and one called perform with an exclamation point. The details of that don't matter, but I was just kind of like, well, that reminds me of Rails with destroy and destroy with an exclamation point.
And I just want to see how do they do that under the hood? Like, not every single detail, but just how does the destroy method with the exclamation point, like, does the call destroy under the hood? What does it do? And I was just like, well, let's just see what Rails does. And we can kind of copy that pattern for what we're doing over here, which was great. And, again, that wouldn't have happened if we didn't have open-source software, which, again, I think is a form of collective learning in public. It's like, it's the source. It's a result of many people working on it.
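A minimal sketch of that pattern, with hypothetical names (Task, perform, and perform! are stand-ins here, not the actual client code or Rails internals), mirroring the destroy/destroy! shape Steve mentions:

class Task
  Failed = Class.new(StandardError)

  # Plain version: returns true on success, false on failure.
  def self.perform(record)
    new(record).run
  end

  # Bang version: delegates to the plain one and raises when it fails,
  # the same shape as ActiveRecord's destroy!, which calls destroy and
  # raises ActiveRecord::RecordNotDestroyed when destroy returns false.
  def self.perform!(record)
    perform(record) || raise(Failed, "task could not be performed")
  end

  def initialize(record)
    @record = record
  end

  def run
    # The real work would go here; return true or false.
    true
  end
end

The design choice is that the bang variant simply wraps the plain one and turns a falsy return into an exception, so callers can pick whichever failure mode fits.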
JOËL: Even for projects that have only a single author, I think there can be a lot of value there. Long-time listeners of the show will know that I'm a big fan of the Elm programming language. And I've participated in a few Game Jam events where you have a deadline, typically a few days or maybe a month, to create a game based on a theme. And I've built some games using Elm. Later on, people will ask me about particular patterns that can be used in Elm, maybe related to games, maybe not related to games.
And I've been able to link them to parts of that open-source code for the games that I built, which are built under pressure. They're not always great quality. But I can link to a particular section of the code and say, "Here's the pattern we were talking about." And that can spin off a whole conversation.
STEVE: Yeah, that's just one of the many advantages to doing these things. And I should also say, too, you say that I'm good at learning in public, but the same goes for you too. I mean, you're constantly sharing things in the dev channel, writing posts. I want to recognize that too because I think that's a skill that you've also mastered. So I appreciate that.
JOËL: Thank you.
STEVE: And I appreciate that you share as much as you do, especially because you have significantly more experience than me. So again, to circle back to the mystery guests, I would hear you talking about mystery guests. I've heard other experienced devs talk about it. But a year or two ago, I'm like, I trust these people. Like, I really trust them. They're smart. They're credible. They have more experience. But I just don't really get what the problem is because I haven't actually experienced it firsthand, but I at least knew to be aware of it. And it was in my back pocket, and I could take it out when I was ready to do a deep dive on that.
So if it weren't for things like this podcast or blog posts or other things like that, I feel like the dev community wouldn't be nearly as...it just wouldn't be at the level that it is now. And I don't mean necessarily even Rails; I just mean software development in general. Imagine if all programmers just worked in isolation and couldn't use information from other developers or imagine that in any career, right? Physicians...imagine if they couldn't do knowledge share.
So I just think being in the software industry, it's just easier to share what you're doing because we make the internet in a way. So it's like, we're already on the internet all day, so we might as well just sprinkle in what we're learning.
JOËL: I have a personal note in my notes. It says that the best knowledge is created in the connections between people. So if you're imagining a graph where people are nodes, and the connections between them are edges, all the best ideas are on those edges where interactions happen between people and not just solo geniuses.
STEVE: Exactly, exactly. I like that.
JOËL: Power of collaboration.
STEVE: Right. It's like a neural network or something. It's just like, everything coming together, passing knowledge along.
JOËL: So, Steve, we've talked about how learning in public can be really good for your own personal growth and learning. Are there any other advantages to this approach to work where you're learning in public?
STEVE: Yes, absolutely. I think learning in public is very beneficial for junior devs in particular. And there are a few reasons I think that one of which is I think it helps you stand out amongst other candidates that are applying for a job. I think that just because...if you're constantly sharing what you learn and what you know, and again, these can be very small things. I'm not talking about multi-part blog posts or something. I'm just talking about sharing simple code snippets but just being kind of consistent about it.
Doing that really helps hiring managers to get a sense for how you think, and how you communicate, and how you code because those are all very important aspects of software development. Like, it's not just coding. If it was just coding, I don't know, GitHub Copilot, that would be it, right? We could all just [laughs] pack up our bags and head home. But there's so much more. There's so much more communicating that is involved in the job.
And if you're constantly sharing what you learn, that just makes it easier for maybe a hiring manager or someone to get a sense of how you think, how you code, how you problem solve, and again, how you communicate too because maybe you'll face some criticism like in the comment section or something. I'm not saying that's justified, but also, maybe that's an opportunity to practice your communication skills and maybe ask that person, like, hey, how would you solve this problem? Or, what do you recommend?
Because again, to go back to code review, that back and forth, that exchange that happens every single day. And I just think that if you're learning in public, it's just going to make it that much more easy for someone to get a sense of what you're like before they've even met you.
JOËL: And I think it's a really virtuous cycle here because you mentioned how this is a great way to show your work for potential employers, but at the same time, it's a great way to practice that work. You're talking about how this will help you improve your communication. But at the same time, it's also proving to everyone that you are good at communicating or that you have grown a lot in your communication.
STEVE: Exactly. Yep, exactly. If you're consistent about it, too, you could just scroll through your old blog posts and see what was I talking about three years ago? Versus what am I talking about now? And hopefully, there'll be some improvement and more depth to the articles. And again, it's just a great way to let folks know how you think and how you solve problems.
JOËL: I found that it's not just valuable for junior developers. I think it can be really helpful throughout your career to have public artifacts to point to. I've found that for some of my clients, being able to point back to blog posts I've written, or even conference talks I've given helps build trust, helps to build credibility for some of the work that I'm trying to do.
STEVE: Exactly, yep. And what's really exciting about it is in that moment, when you send that link or send an artifact, that transaction took two seconds. But it just embodies so much of that credibility because it took you years to get all that knowledge. But now, it's just foundational. You have this big foundation of artifacts that you can share. I think that's just wonderful.
JOËL: Keep learning in public. You're building an archive of valuable resources that will just keep compounding in value over the course of your career.
STEVE: Exactly. That's a good way to put it. I like that.
JOËL: Well, Steve, thanks so much for joining us on the show to talk about learning in public. If people are curious to see some examples of how you do this, where can they find you online?
STEVE: If you just search Steve Polito Design, you'll find me, which is kind of a callback to when I was studying graphic design back in college. So that's the best way to find me.
JOËL: So this is a handle on multiple different social media sites?
STEVE: Yep, exactly.
JOËL: Excellent. We'll make sure to link a few of those in the show notes as well. Thank you so much, Steve, for joining us this week to talk about learning in public. Do you have any last words you'd like to share with our audience?
STEVE: Yeah, I just want to thank you, again, for having me on the show. Just for context, a lot of what I learned about software development came from The Bike Shed, so, again, plus-one for learning in public. It helps other people. So it's very exciting to actually be on the other side of the show right now as a guest. So thank you very much. And congrats again on the new hosting gig; so you'll be learning in public too now, so this is great. [laughs]
JOËL: The show notes for this episode can be found at bikeshed.fm. This show is produced and edited by Mandy Moore.
If you enjoyed listening, one really easy way to support the show is to leave a quick rating or even a review in iTunes. It really helps other folks find the show.
If you have any feedback, you can reach us at @_bikeshed or reach me @joelquen on Twitter or at [email protected] via email. Thank you so much for listening to The Bike Shed, and we'll see you next week. Byeeeee!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
It's Steph and Chris' last show.
Steph found a game, and if you've been following the journey, all of the Test::Unit test files are now live in RSpec. JWTs really grind Chris' gears.
They wrap up with things they've learned, takeaways they've had, and their proudest podcasting moments. They also thank all the folks who've helped make The Bike Shed happen.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Transcript:
CHRIS: One more round of golden roads, our golden. So here we go.
STEPH: Oh, one more round of golden roads. Okay, maybe that's going to get to me today. [laughs]
CHRIS: [singing] Golden roads take me home to the place.
STEPH: [singing] I belong.
CHRIS: Yeah, there you go.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together we're here to share a bit of what we've learned along the way, at least one more time. So with that [chuckles] as an intro, Steph, what would you say is new in your world?
STEPH: Hey, Chris. Well, today is the big day. It is the day that you and I are recording our final Bike Shed episode, which we have all the feels about, and we will definitely dive into. But to ignore some of that for now, I have another small fun update I can provide about a new game that I found. So one of the things that's new in my world is I started playing a new board game with Tim; it's called Ticket to Ride. Have you heard of that?
CHRIS: I have. I don't know if I've played it. I feel like it's a particularly popular one now. But I don't know if I've ever had the pleasure.
STEPH: It's a very cute game, so we have the smaller version of it. For anyone that's not familiar, it's essentially a map. And then there's a bunch of spots where you can build trains and connect them, and then you get tickets. So your goal is that you're going to connect one location to another location. And then you get points and yada yada, but it's so much fun and especially the two-player version. It's like this perfect 20, maybe 30-minute game.
I'll be honest; I'm not really a board game person. I always enjoy it. Once I get into it, then I'm like, this is great. I don't know why I was resistant to this. But every time someone's like, "Do you want to play a board game?" I'm like, "Not really." [laughs] I first have to get into it. But I have really enjoyed Ticket to Ride. That's been a really fun game to play. And it's been a nice way to, like, even during the day, we'll break for lunch and squeeze in a game.
CHRIS: Well, I love good two-player games. They're hard to find. But when you find a good one, and it's got that easy pickup and play...I believe I'm going to now purchase this. And thank you for the tip.
STEPH: Yeah, this is definitely one of those where it's easy to pick up, and then you can get the expanded board. So there's a two-player version, but then yeah, you can get one that's a map of the U.S. or a map of Europe. And I think it accommodates up to five players as the maximum, so not a huge group but definitely more than two.
On a slightly more technical note, I have something that I'm very excited to share. It is a journey that you have been on with me, that everybody listening has been on this journey with me. And I'm very excited. I see you nodding your head, so I'm guessing that you're going to know where I'm headed with this. But I'm very excited to announce that all of the Test::Unit test files now live in RSpec. So that is a big win.
I'm very, very excited for that to be a previous state of life and not an ongoing state of life. Because I have certainly developed too much niche knowledge around migrating these tests, and that became apparent to me when I was pairing with another developer that works with the client because they had offered...they had some time. They're like, "Hey, do you want help migrating a test file?" And I was like, "Sure." I was like, "But this is wonky enough, like, we should pair and work on this together because I just know some ins and outs. And I don't want you to have to learn a lot of the hard lessons that I've learned."
And the test that we happened to pick up was very gnarly. It had a lot of mystery guests. And we spent, I think it was a good two hours. And we only migrated one of the tests, so not even a full file but one of the tests. And at the end of it, I was like, I know way too much about some of the oddities and quirkiness of this. And we got through it, but we decided that wasn't a good use of their time for them to go at this alone. So that's why I'm extra excited and relieved because I didn't want this task to carry on to someone else. So, hooray, we did it.
CHRIS: Hooray. Just in time. You're Indiana Jones grabbing your hat right as you roll out and off to [laughs] be away from the project for a bit. So you stuck the landing. Well done, Steph.
STEPH: Thank you. Thank you. So that's some great news. And then also, everything else in life is pretty much focused around getting ready for maternity leave. That's about to happen soon, and I am so ready. I have thoroughly enjoyed a lot of the things that I'm doing, [laughs] but goodness, being pregnant is hard. And I am very much ready for that leave.
So also, a lot of the things that I'm doing right now are very focused on making sure everything's transitioned and communicated and that I just feel really good about that day of departure. That covers all the newness in my world other than the big thing that we're just not talking about yet. How about you? What's new in your world?
CHRIS: Well, continuing to skirt the bigger topic that we will certainly get to in the episode, what is new in my world? I'm actually quite excited workwise right now. We have a much larger body of work that finally we got the clarity. All the pieces fell into place, and now we're sort of everybody rowing in the same direction. There's interesting, I think, really impactful code that we're writing for Sagewell right now. So that's really fantastic. We've got the whole team back together on the engineering side. And so we're, I think, in the strongest and most interesting point that I have experienced thus far. So that's all really fantastic.
On a slight technical deep dive, you know what really grinds my gears? It's JWTs. JSON Web Tokens and I have never gotten along. It's never been a match made in heaven. And we have a webhook that comes from Plaid. Plaid is a vendor for connecting bank accounts and whatnot. And they have webhooks like many people do. So they can inform us when things change, lovely feature of how we build web apps these days. But often, there's a signature that says, "This is definitively from us, and you can trust us." And usually, it's some calculated signature, HMAC, or something like that.
For some reason, Plaid uses JWTs, and more than that, they use JWKs. So there's the JWT, which is the signature. That JWT itself is signed with a JWK. You have to fetch the JWK from their server based on the key ID in the header of the JWT. But how do you know if you can trust the JWT before you've gotten the JWK? All of this broke in a recent upgrade.
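A rough Ruby sketch of the flow Chris is describing, assuming the ruby-jwt gem; fetch_verification_key is a hypothetical helper standing in for fetching and parsing the sender's JWK, and ES256 is an assumed algorithm rather than Plaid's documented value:

require "jwt"

def verified_webhook?(token)
  # Decode without verifying, purely to read the key ID ("kid") from the header.
  _payload, header = JWT.decode(token, nil, false)

  # Hypothetical helper: fetches the JWK for this key ID from the sender
  # and returns it as an OpenSSL public key.
  public_key = fetch_verification_key(header["kid"])

  # Now verify the signature for real. ES256 is an assumption here; use
  # whatever algorithm the sender documents.
  JWT.decode(token, public_key, true, algorithm: "ES256")
  true
rescue JWT::DecodeError
  false
end

The awkward bit is that first unverified decode: you have to peek into an untrusted token just to learn which key to fetch before you can check the signature at all.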
We went from Heroku-20 to Heroku-22 to the new platform with Heroku, which bumped us to OpenSSL 3.0, and it turns out JWT doesn't work with it. And so that's sad. It's a no. It's going to be a no. It turns out the way that OpenSSL 3.0 works is incompatible with some of the code paths in JWT. And so I was like, wait, we just can't do this? And it's low-level cryptographic primitive stuff that I'm not comfortable messing around with. I'm not going to hop in there and roll up my sleeves.
And even just getting to the point that I understood what was broken about this took like an hour and a half just to sort of like, wait, which is okay...so the JWT signs and encodes. And this will be a theme that we come back to later, but I think web development should be simpler. I think we should strive for simplicity. And this is a perfect example where I'm guessing Plaid uses JWTs and that approach to communicating security things often, but I've not seen it used much for signing webhooks. And, oof, it led to a complicated day. And it's unfixable now as far as I can tell.
There is a commit on the JWT Ruby repo as of five days ago, but it doesn't build in our system. And it's not released. And it's just a mess. So yeah, engineering is complicated. I'm both wildly excited about what we're doing at Sagewell, and then today was this local minimum of like, oh, JWTs again. Again, we find ourselves battling. And you won today, but hopefully not for too long.
STEPH: Oof, how did this manifest that you first noticed? So is it because a webhook suddenly stopped working, and that was like the error that rose up, and that's what helped you dive into it?
CHRIS: Yeah, we have a little bit of code in the controller for where Plaid events come in. We calculate and verify the signature of the webhook to make sure that it's valid, and we reject it otherwise. And we alert ourselves via Sentry, and then we also have a Datadog scan that can show what's the status code of the response. Because these are incoming HTTP payloads or requests, and so we can see they were 200s up until this magical day when suddenly everything changed. And that was when we switched Heroku stacks.
And then we can see it also in Sentry. So we're able to look at it, and we're like, why are none of the Plaid webhooks able to verify the signature anymore? That seems weird. And so then Datadog confirmed that it consistently was broken from this point in time. And then we were able to track that back. It was also pretty easy to guess because the error was "pkeys are immutable in OpenSSL 3.0," and that was the data. And I was like, oh, cool, that sounds fun. Let me go figure out what that means.
STEPH: [laughs] Well, it's a nice use of Datadog. I remember in the past you were talking about adding it. And I was excited because I've never been at that point where a team has just introduced it; either a team doesn't have it, and they wish they had more insights, or they have it and don't use it. And nobody ever checks the board. So that's a nice anecdote for Datadog helping you out. Yeah, I'm not envious of your situation, friend.
CHRIS: I do love the cup half full take [laughs] that you have on the overall situation: "but that's nice how Datadog worked out for you." And you know what? It was. Thank you, Steph, for once again being that voice of positivity.
STEPH: I appreciate that you enjoy it because there are times that when someone points it out to me that I do that, I have to be like, "I'm sorry, I'm not trying to be toxic positivity over here. [chuckles] That's just how my brain works."
CHRIS: Oh, you are definitively not toxic positivity. That's a different thing. Because you ended with but also, I feel bad for you, and I'm glad that I'm not in your shoes. So you are the right level of positivity. I don't think I could have talked to you for three and a half years as co-host on a podcast if I didn't appreciate the level of positivity or the general approach that you bring to thinking about stuff.
STEPH: Okay. Well, to borrow a phrase from Matt Sumner, who has been a guest on the show, cool, cool, cool, cool. I'm glad my positivity has been well calibrated. And I was about to say I'm interested to hear how this turns out for the team. [laughs] But we're in an awkward spot where I mean, you and I, we can still totally chat. But listeners won't get to hear the rest of that particular saga. I mean, you can share. I mean, you do you. I'm setting all sorts of boundaries for you right now.
Okay. And now I'm just rambling, and I'm getting weird with it. Because the truth is that, you know, we won't be back. And this is our final episode together. So I think let's just go ahead and rip off the Band-Aid. Let's dive into it. Let's talk about it. Given that it's our last episode that we are recording, we thought of a couple of things that we'd like to talk about. You brought up a great idea that I'm excited to dive into. Do you want to lead us in?
CHRIS: Sure. Well, if we go back all the way to Episode 172, that is the first episode that you came on as a guest. I actually continue to really love the title of that episode, which is What I Believe About Software. And it both captured that conversation really well, but also, more generally, it's actually become the tagline of the show when we do our little introduction. What do we believe about building great software? Et cetera.
And I think that's been the throughline of the conversations that we've had is what remains true. What are the themes? Not necessarily the specific technologies, although we certainly talk about that. But what do we believe about building great software? And so today, I thought it would be fun for us to talk about what do we still believe about building great software? It's roughly three and a half years or so that we've been doing this. What's still true?
STEPH: Oh, well, I have the first unequivocal one, the thing that I still believe about building great software, and that's you should hire thoughtbot. That's definitely the way to go. We'll help you get it done, not that I'm biased in any way.
CHRIS: No. I'd say collectively between us; there's zero bias with regard to thoughtbot or any other web development shop out there. But thoughtbot is the best.
STEPH: All right, perfect. So we've got the first one, the clutch one of hire thoughtbot. And then I also really like this topic. And I still think back to that first episode that I recorded with you and how much fun that was and how that really got me to start thinking about this. Because it was something that, at the time, I didn't really reflect on a lot in terms of what does it take to build great software? I was often just doing the day-to-day actions but then not really going high-level think about it. So I'm excited this is one of the topics that we're revisiting.
So for the next one, this one is, I don't know, maybe it's a little cutesy, but I was trying to think of an alliteration that I enjoyed. And so this one is be an assumption assassin. So what assumptions are you making? And then how can you validate or disprove them? And that is something that I find myself doing constantly. And it always yields better work, better questions, better software, better code, better code reviews. And that's my first one is be an assumption assassin and identify what assumptions you have.
And I had a really good example come up today while I was having a conversation with Joël about something that I was looking to merge. But I was a little hesitant about it because there are some oddities that I won't dig in too deeply. But essentially, there's a test that I migrated that highlights an existing concern in the code. And I was like, should I go ahead and merge this test that documents it, or should I wait to fix that concern and address it?
And he brought up a good point. And he's like, "Well, we're assuming it's a bug and an issue, but it may not actually be depending on how the software is being used." And so then he was encouraging me to reevaluate that assumption that I had where I'm like, oh, this is definitely a problem to, like, I don't know, is it a problem? Let's ask somebody.
CHRIS: First off, I love that as a theme, as one of the things that you still believe about software. Second, I believe you correctly said that you were looking for an alliteration, but my brain heard acronym.
STEPH: [laughs]
CHRIS: And so then I was like, B-A-A-A. Is it BAAA? What are you going for there? Oh, you just wanted a bunch of As. Okay, I got it now. Secondly or thirdly, I think I'm on my third now. Apparently, within Sagewell team culture, one of the things that I'm most known for is...there are two phrases: one is "just to name it," and the other is "to be clear." And these are the two things that I apparently do constantly, so much that it's become a meme within the team.
It's just like, okay, everybody's been talking. But I just want to make sure we're on the same page here. So just to be clear, or just to name it, here's what I'm seeing. But I agree; I think taking those things...what are the implicit bits? What are the assumptions? And making them more explicit. Our job as developers is just to yell at computers all the time and make them try and do human stuff. And there's so much room for lossy conversions at every point in that conversation chain. And so yeah, being very clear, getting rid of assumptions, love it. It's all great stuff.
Actually, in a very related note, the first on my list is that code is for humans to read. This is one of the things that I believe most deeply and most impacts the way that I write software. Any given piece of functionality that we want to author in our code feels like 10, 20, 50, frankly, almost infinite different versions of the code that would produce nearly identical functionality. So at the end of the day, the actual symbols and strings of text that we bring together to write the code is all about other humans, other people on your team, you five months from now, you a week from now, frankly, or me. I'm going to say me, me a week from now.
I want to do future me and everyone else on the team a solid and spend that extra 10% of okay, I have something that works now, but let me try and push it around and try and massage it into a shape that is a little more representative of how we're actually thinking about the code, how we talk about it as an organization. Is that the word that we use to describe that domain concept? Maybe we could change that just a little bit. Can I push more of this into the private API? What actually needs to be known here?
And I think that's where I'm happiest is in those moments because that's where all of the parts of the job come together, the bit where I trick a computer into doing what I want and simultaneously making it so that that code is revisitable, clear, expressive, all of those things. So yeah, code is for humans. And that's true across every language, and framework, and domain that I have worked in. And I've only believed it more and more so over time. So yeah, that's mine.
STEPH: Yeah, I love that one. That's one of the things that comes to mind when people talk about disliking code reviews. And I can imagine there are a number of reasons that people may have had a poor experience with a code review process. But at the end of the day, if you're not getting that feedback or validation from fellow humans, then how do you know that you've been successful, that you've written something that other people can follow up on? Which goes back to the assumptions in terms of like, you're assuming that you have written something that your future self or that other people are going to be able to read and maintain down the road. So yeah, I love that one.
One of the other things that I still hold really true to building great software is prioritize early and often. So always be checking in to understand with your users, with your tech concerns, with data that you may have, new insights, and then just confirm that yes, you and the team are constantly working on the thing that has been prioritized and that is the most important.
And also, be ready to let go. That can be really hard. I have definitely had those moments in my career where I've spent two weeks working really hard on something. And then we've realized that the thing that we were pursuing isn't that valuable, or it's something that users don't need or actually want. And so it was better to let go of it than to pursue it and ship it anyways. So that's one of my other mantras that I have adopted now is prioritize, prioritize, prioritize.
CHRIS: Unsurprisingly, I agree wholeheartedly with all of that. We're still searching for that thing, that core thing that we disagree on other than Pop-Tarts and IPAs. But I don't know that today is the episode that we're actually going to find that. But yeah, prioritizing is such a critical activity. And it is this interesting collaboration point. It gets different groups together. It's this trade-off. It's this balance. And it's a way to focus on and make explicit the choices that we're making. And we're always making choices. We're always making trade-offs. And so being more explicit, being more connected and collaborative around those I believe in so, so, so much. So love that that was something on your list.
Let's see, next up on my list is reduce complexity, just sort of as an adage, just always be reducing complexity. It is amazing to me in my time, particularly as a consultant, but even now, this is something that I hold very true is just it's so easy to grow a system in anticipation of future complexity or imagine that the performance concerns that we're going to run into will be so large that we must switch from Postgres and a nice, simple atomic database into a sharded, clustered Kafka queue adventure. And there are absolutely cases that make sense for that sort of thing.
But at a minimum, I beg of you, anyone starting a new system, don't start with microservices. Don't start with an event queue-based system. These are wildly complex versions of what often can be done with so much simpler of an application. And this scales through to everything. What's the complexity of an API? Do we need caching in that API layer? Or can we just be a little bit inefficient for a little while and avoid the complexity and the overhead of caching?
Turns out caching is a tricky thing to get right, just as an aside. And so the idea of like, oh, let's just sprinkle in a little bit of caching. It'll be easy, and then we'll get better performance, like, yeah, but did you get it right? Or did you introduce a subtle bug into your program that's going to be really hard to debug later? Because do you cache in development? Well, maybe, I'm not sure, could be.
So over time, this is something that I've sort of always felt, but I've only ratcheted it up. It's only something that I've come to believe in more and to hold more firmly to. I think earlier in my career, it was something that I felt, but I would more easily be swayed by aspirational ideas of the staggering amounts of traffic that we would be getting soon or the nine different ways that the data model will expand. And so, we should code the current version in anticipation of that. And I have become somewhat the old man on his lawn yelling at the clouds like, "Nah, we don't need it yet. We can grow to that."
And there's a certain category of things that are useful to try and get out in front of and don't introduce additional complexity, but they're a tiny, tiny list. And so, for most things, my stance is what's the simplest thing that we can get away with right now, that still provides a meaningful experience to our users, that doesn't compromise on security or robustness or correctness but just solves the problem we have right now? And over and over and over again, that has served me incredibly well. So yeah, keep that complexity at bay.
STEPH: That is one that I've definitely struggled with. And frankly, it works in my favor, that idea of keeping things simple. Because I'm terrible when it comes to predicting the future or trying to build things in a way that I just don't have enough information to really drive the architecture or the application that I'm building. So anytime I'm trying to then stretch and reach for the future in those ways unless I really have a concrete understanding of I am building for these particular scenarios, it's really hard to do. So I very much like keeping it simple and not optimizing before you need to.
And it reminds me of I think it's Mark Twain, who has a quote, "Worrying is like paying a debt that you don't owe." And that's something that comes to mind for me when also writing code and building features and software is that I tend to be someone who will worry about stuff. And I'm like, oh, is this going to be easy to extend? Is it going to be what it needs to be six months from now if we need to add more features to this and build on top of it? And I have to remind myself it's like, well, let's just wait. Let's wait till we get there and we know more.
One of my other ideas that couples nicely with the one that you just shared in regards to keeping things simple and then waiting for those needs to arise is that mistakes are going to happen. They are a part of the process. As we are learning and growing and we're stretching our skills and trying things out, things are going to go wrong. We're going to introduce bugs. And to take those opportunities, that's when we start to use that feedback to then improve things like observability, like capturing logs, and how we handle error reporting or having a plan for emergencies.
So maybe that's the part of worrying that can pay off is thinking through, all right, if something does break, or if something gets shipped that shouldn't, then what is our plan in how we handle that? How do we roll back? Or how do we get things back to a stable build?
CHRIS: It's funny. I was actually visiting with a friend this past weekend, and we were chatting more generally about life things but the idea of worrying and anticipation and trying to prepare for every bad outcome. And there's the adage of an ounce of prevention is worth a pound of cure. But increasingly, both in life, depending on the context, and in code, I've found that I've shifted to the opposite of it's impossible to stop everything.
There are going to be bugs that are going to get out there. There are going to be places where we code things incorrectly. And I would rather...I still want to try as hard as I can to get things right, to be clear. I'm not giving up on trying. But I'm all the more focused on how do we know and how do we recover when those things happen? So it's interesting that you just described exactly that, which, again, is a very human life conversation, and yet it applies to the code.
STEPH: I love that rephrasing of it. Instead of the mistakes are going to happen, it's, like, how do we know, and how do we recover? I think that's perfect. I've also found that by answering the how do we know and how do we recover, that really helps you build trust with clients as well. Because again, things are going to happen, things are going to break.
And the more prepared you are for that and then the better plan that you have, and then they can watch how you execute that plan, and it’s going to establish a lot of deep trust with other engineers and also the team that you're working with, that you have been thoughtful and that you have ideas on how are we going to address this? Instead of waiting for that moment to happen.
That's going to happen too. You're going to make decisions in the heat of the moment. But I have found that to be a really useful way to establish yourself with a team in terms of I care about this team and these processes and this application. So how do we handle the bad times, not just the good times?
I do want to circle back because you alluded to the fact that you and I, we've tried to find things that we disagree on. And so far, Pop-Tarts and beer have been the two things that we disagree on. But I do have a question for you that maybe I will disagree with you on. But I need to know some more about it first.
You have alluded to there's the Brussels snack, (Oh, I'm going to get this wrong.) Brussels sprout snack hour or working lunch, something combination of those words. [laughs] And it's the working lunch that has stuck out to me, and I've wanted to ask you about it. So here I am. I'm asking you about it. What's a working lunch? What's the Brussels snack happy hour, snackariffic working lunch look like?
CHRIS: This is fantastic. I love that you waited until the last episode that this was rolling around in the back of your head. And you're like, are you making the team work through lunch? And now, on this final episode, we get to address the controversy that has been brewing in the back of your head. Spoiler alert, no, this is just ridiculous nomenclature. These are two meetings that we have that are more like, let's get the dev team together and talk about stuff that's in our platform sort of developer experience. Or stuff in observability often is talked about in this context because it doesn't quite impact users, but it's how we think about the work.
And so there are two different meetings that alternate every other week. So every Friday afternoon, we do this, but it's one of two meetings depending on the day. So there's a crispy Brussels snack hour that was the first one that was named, which was named purely for nonsense reasons because we don't have anything else that's named nonsensically in our organization.
And so I was like, oh when we name this meeting, we should make it nonsense because we don't have any other...We don't have, you know, an SOA microservices fleet with Barbie doll and Galactus and all of the other wonderful names. Those are references to the greatest video ever about microservices; if you've not seen it, that will be in the show notes. It's required reading.
But anyway, we don't have that. And so we thought, let's be funny with the name of this. So the crispy Brussels snack hour is one, and the crispy Brussels we wanted something that was...the first one is a planning meeting. The second is like, let's actually sort of ensemble program. Let's get the four of us together, and we'll work on some of the stuff that we're talking about here but as a group. And so I wanted the idea of we're working, and so I was like, oh, this will be the crispy Brussels work lunch. But it's purely a name. It's the same time slot. It's 3:00 o'clock on a Friday afternoon. [laughs]
So it is not at all us working through lunch. I don't think we should work through lunch. I'm concerned that you thought that for a while, and you were just like, I'm a little worried, but I'm not going to bring it up. But I'm glad we got to cover this before we wrapped up this whole Bike Shed co-hosting adventure together.
STEPH: I feel relieved and also a little robbed of an opportunity for us to have something that we disagree on because I thought this might be a thing. [laughs]
CHRIS: We can continue searching for that thing. But maybe it's okay that we agreed on most stuff for the run [laughs] of this fun, little show that we did together.
STEPH: Yeah, that's gone on quite a time. We've got like three years together that we have managed to really only find two, I mean, very important of course, two things. But yeah, it's been pretty limited to those two areas. And each time that you'd mentioned the work lunch, I was like, huh, I need to ask about that because I have feelings about it. But then, you always would dive into very interesting stories of things that came out of it, and I quickly forgot about it.
So this feels good. This feels like very good important closure. I'm glad that this finally surfaced. But circling back, since I took us on a detour for a little bit, what are some other things that you still hold deeply about building great software?
CHRIS: I've really got one last thing on the list. It's interesting, there's not a ton technically in this list, which I think represents broadly how I feel about software, and I think how you feel about software. It’s like, it's actually mostly about how the people interact at the end of the day. And you can program in any language or framework, and you can get the job done. We certainly have our preferences and things that we enjoy.
But the last one really rounds us out, which is think about the users. I always want to be anchoring the conversations that we're having, the approach that we're taking to building the software in what do the users think? Who are our users? What do we know about them? What do they care about? How are they using this technology? How is it impacting their lives? We've talked a number of times about potentially actually watching the sales demo as an engineering team, trying to understand what's the messaging that we're putting out into the world for this piece of software that we're building?
Or ride along with customer support and understand what are the pain points that people are hitting? And really, like, real humans, what are they experiencing? Potentially with a name attached. And that just changes the way that you think about the software. There's also even the lower-level version of it. As we're building classes or modules, what are the public facets of that, and what are the private API? What's the stuff that we're hiding away? And what's the shape that we are exposing to the outside world for varying definitions of outside?
And how can we just bring in a little bit of empathy to try and think about, again, in the case of like the API for a class, it's probably you on the other side of it, but it's future you in a slightly different mindset with a little bit less information and context on the current problem that you're working on. And so, how can we make things easier for ourselves in the code, for our users at the end of the day?
How can we deliver real value that is not mired in the minutiae of technical complexity and whatnot but really is trying to help people live better lives? That's a little too fancy as I say it out loud. But it is kind of the core of what I believe, so I'm not going to take it back.
STEPH: I love how you've expanded users where more traditionally, it's people that are then using the software. But then you've expanded it to include developers because that is something that is often on my mind and something that I just agree with wholeheartedly in terms of when you're writing software; as you mentioned before, software is for people. And so we want to include others.
And it does improve people's lives. People show up to work every day, and if your past self has been thoughtful, it's either going to give your future self a better day, or it's going to give other people a better day. So I think that's a very fair statement, improving lives by being thoughtful in regards to focusing on the users, people consuming software, and working in the codebase.
CHRIS: I know we've talked about this before, but I was having a conversation with one of the developers on the team at Sagewell just last week, and they were mentioning how they really loved working on admin features. And I was like, oh, that's interesting. Let's talk more about that. And it was really it's that same thing that I think you and I have discussed of like there's that immediacy. There's that connection. These are actually colleagues, but you can build software to make their day better. You can understand in detail what the pain points are.
What's the workflow that as you watch it, you're like, oh, I could put a button up in the corner of the screen that would automate almost all of this and your day would be that much faster? Oh, let me do that. That's exciting. And so I love that as another variation of it, like, yeah, there's building for other developers. There's also building for the admin team or other users in the organization of the software. There are so many different versions of users, but I think we build a better thing if we think about them more.
STEPH: I have definitely worked with teams where I can tell that certain people are demoralized, and it comes down to the fact that they feel frustrated and often disconnected from the people that they are building for. And so then you really feel isolated. I'm pushing code around, but I don't really see the benefit or the purpose of it. And I think that's very hard for developers who typically want to build something that's going to be useful and not feel like it's just going to be thrown away. So connecting your team to those users, I certainly understand. Getting to build something for your colleagues and then they get to say how much they like it is an incredibly rewarding experience.
You also touched on something that I really appreciate, where you highlighted that a lot of the technical decisions that we make are important, but they're not at the center of the things that we believe when it comes to building great software. And that's something that I will often reflect on. Like, as we were thinking through these particular ideas that we still hold true today, how mine are more people and process-focused and rarely deep in the technical weeds. And there are times that I think, well, shouldn't there be something that's more technical, something that's very concrete? Yes, you should build your code this way or build your application or use a specific technology.
But after all the projects and teams that I've been a part of, that's just usually not the most important part. And so I appreciate that you highlighted that because sometimes I have to remind myself that, yes, those things can be challenging, but it's often with people and process. That's where the heart of great software lies.
CHRIS: That's a fantastic phrase, I think, that really encapsulates all of the conversations that we're having here.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half.
So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
CHRIS: Actually shifting gears a little bit, so we've just talked about what we still believe about building great software. I'm intrigued. We've been chatting for a number of years here on this microphone, these microphones. We have separate ones because we're in different states. But I'm interested; what have we changed our minds about? What have you changed your mind about, Steph? I got a couple of ideas, but I'm intrigued to hear yours.
STEPH: Nothing. I've never been wrong. I've stuck to everything that I've ever thought.
CHRIS: That must be boring.
STEPH: [laughs] Yeah, that's totally not true there. There are definitely things that I've changed my mind about. One of the things that I've changed my mind about is that people who know the most will ask the fewest questions. That's something that I used to consider the trademark of someone who is a more experienced senior developer in terms of you really know what you're doing. And so you typically don't ask for help or need help very often. And so, I'm going way back in terms of things that I have changed my mind about.
But I have definitely changed my mind where people who know the most are actually the ones that do a really great job of constantly asking questions and asking for feedback. And I think that is still a misconception that people still carry forward. The idea that if you're asking a lot of questions or asking for help that you are not as skilled in your work, and I view it as quite the opposite, that you are very good at what you do and that you know precisely the value of your time.
And then also reaching out to others for help, and then also just getting validation on things that you may have concerns around. So that's one I've changed my mind on is that I think the more experienced you are, the more questions you tend to ask.
CHRIS: Oh, I love that one. It's a behavior that I know...I think we've talked about this before. But as consultants, we try and model that it's totally fine to ask questions. And because we often come in with less context, it makes sense for us to be asking questions, but I will definitely intentionally lean into it in those contexts to be like, everybody keeps throwing around this acronym. I don't actually know what that is. Let me raise my hand.
And my favorite moment is when people disagree on what the acronym or what the particular word or what the particular project is. Like, I ask the question, and people are like, "Oh, it's this," and someone across the room is like, "Wait, that's what it means? I thought it was this totally other thing." I'm like, cool, glad that we sorted that out. Glad that we got that one up in the air.
But I actually remember many, many, many years ago, at this point, there was a video series of...PeepCode was the company, and there was the Play by Play series. And so there were particular prominent developers, particularly in the Ruby community. And they would come and sort of be interviewed and pair program. And it was amazing getting to watch these big names that you had heard of, like Yehuda Katz is the one that stands out in my mind. He was one of the authors of merb, which was a framework that was merged with Rails, I want to say around the 3.0 time.
And just an absolute, very big name in this world and someone that I looked up to and respected. And watching this video, they had to Google for particular API signatures and Rails methods. They were like, "Oh, how does that work? Is it link to and then you pass the name?" I forget what it was specifically. But it was just this very human normalizing moment of this person who has demonstrably done incredible work in our community and produced very complex software still needs to Google for the order of arguments to a particular method within Rails. I was like, oh, okay, that's good to know.
And with complete humility in the moment, I was just like, yeah, this is normal. Like, it's impossible to hold all of that in your head. And seeing that early on shook me off the idea that the thing to do is just memorize everything. It's like no, no, get good at asking the questions. Get good at debugging. Get good, yeah, at asking questions. It's a core skill rather than a thing that you grow out of. But I definitely felt early on like I wasn't allowed to ask questions, that it would be scary.
STEPH: I love that example. Because counterintuitively, to me, it demonstrates confidence when someone can say, "Oh, I don't remember how this works," or "Let me go look it up." And so I just very much appreciate when I see someone demonstrating that level of confidence of let's keep going. Let's keep making progress. I'm going to ask for help because that is totally fine, and we are in a safe space. Or I'm going to create a safe space for us to do that.
One of my favorite versions of this where you shared like if you ask about an acronym and then people disagree, one of my favorite versions is to ask about a particular area of the codebase and be like, what would you say this code is doing here? What do you think users do here? Like, what is the purpose? What's the point of this? [chuckles] And then having people be able to say, "Oh, yeah, this definitively does this thing." Or people are like, "You know, I'm not sure. I don't even know if that code is getting run." That's one of my favorite outcomes of asking questions. How about you? What's something you've changed your mind about?
CHRIS: I made a list of a couple of things like remote is on there. I didn't know if I'd like remote. I wanted to try it for a while. Tried it, turns out I like it a lot. It's complex. You got to manage it, whatever. But that I think everybody's talked about that a bunch.
I think probably the most interesting one is deadlines. Initially, in my career, I didn't really feel anything about them. And then I experienced the badness of deadlines. Deadlines are bad. Deadlines are things that come down from on high and then you fail to hit them, and then you're sad. And maybe along the way, you're very stressed and work long hours to try and get there. But they're perhaps arbitrary. And what do they even mean? And also, we have this fixed scope, and they're just bad. And then there was a period of my time where, like, deadlines are bad. The only thing that we do is we show up, and we make the software as quickly as we can.
But in reality, there are times that we need that constraint. And in fact, I have found a ton of value in deadlines when used intentionally. So we can draw a line in the sand, and we can say, at this point in time, we will have a version of the software. We have a marketing campaign that we need to align with this. So we got to have something at that point. And critically, if you're going to have a deadline, you've now fixed a point in time. You need to flex other things.
And critically, I think the thing to flex is the scope. So we need to have team management. We have user accounts right now, but now we need to organize them into teams. That is like a category of functionality. It's not a singular feature. And so yeah, we can ship teams in the next quarter. What exactly that means is up in the air.
And as long as we're able to have conversations essentially on a day-to-day at least weekly cadence as to what will make it in by that deadline and what won't, and we're able to have sometimes the hardest conversations but the very necessary conversations of the trade-offs that we have to make as we're building that software, then I find deadlines are absolutely fantastic tools for focusing and for actually reducing scope but in a really useful way.
And getting something out there in the hands of users so that you start to get real feedback so that you start to learn, is this useful? What are the ways that people are using this? What should we lean into and do more of? What maybe should we roll back, actually? So yeah, deadlines. First, I didn't know them, then I feared them. Now I love them but only under the right circumstances. It's a double-edged sword, definitely.
STEPH: I, too, have felt the terribleness of deadlines and railed against them pretty hard because I had gone through a negative experience with them but have also shifted my feelings about them where they can be incredibly useful. So I really liked that's one of the things that you've changed your mind about.
It also reminds me of one of the other things...I'm going to circle back for a moment to one of the things that I believe about creating great software is to not wait for perfection, and deadlines are a really good tool that helps you not wait for perfection. Because I have also seen teams really struggle or sometimes fail because they waited until there was something perfect to present, and then you realize that you've built the wrong thing.
So I do want to transition and talk a bit about the show because it's our last episode, and we should talk about it, and the fun adventures that we've had and some of the things that we've learned or things that we're feeling in the moment. So given that it's been a wonderful three years for me, it's been four years for you since you've been a host on the show. How are you feeling?
CHRIS: I'm feeling a bunch of different things sort of all at once. I am definitely going to miss this immensely. Particularly, I loved when I started, and I got to interview a bunch of thoughtboters and other people from the community. But frankly, three-plus years of getting to chat with you has been just such a delight. There's been an ease to it. We kind of just show up and talk about what we're doing. And yet there are these themes that have run through it. And it has definitely helped me hone and shape my thinking and my ability to communicate about what I'm thinking.
I've learned that you have a literal superpower to remember the last thing that you were talking about. Listeners, you may not know this, but we are not quite the put-together folks that we may sound like in these recordings. We have a wonderful editor, Mandy Moore, who makes us sound so much better than we are. But we'll often pause and stop and then discuss what we want to talk about next. And Steph always knows the exact phrase that she or I left off on. And it has been so valuable to the team.
But really, it's been just such a pleasure getting to have these conversations. It's also been something that has just gently been in the back of my mind at all times. And so, I'm observing the work in any given week as I'm doing it. It's almost like meditation in a certain way, whereas I'm working on something, like, oh, this is actually really cool. I want to take a note about this and talk about it on The Bike Shed with Steph.
And having this outlet, having this platform to be able to have those conversations and knowing that there are people out there is fantastic, although it's very weird because really, every one of these recordings is just you and I on a video call. And so there is an audience, I'm pretty sure. I think people listen to the show; I don't know, occasionally they write in, so it seems like they do. But at the end of the day, this really just feels like a conversation with a friend, and that has been so valuable to have. And yeah, I'm definitely going to miss that.
It's been a wonderful run, you know, four years is a long time. It's about as long as I've done most things in my career. And so I'm very happy with what we have done here. And there's a trite saying that isn't...yeah, whatever; I'll just say it, which is, "Don't be sad that it's over. Be glad that it happened." And I guess I'm still going to be sad that it's over. But I am so glad that I got the opportunity to do this, that you joined in this adventure and that we got to chat each week. It's been really delightful.
STEPH: I really liked how you refer to this as being a meditative state. And that is something that I have certainly picked up from you and thoroughly enjoyed that I have this space that I get to show up and bring these ideas and topics and then get to talk them out with you. And that has been such a nice way to either end the week or start a week. I mean, it doesn't matter. Anytime that we record, it's this very nice moment of the week where we get to come together and talk through some of the difficulties and share our stories.
And that's been one of my favorite moments is because you and I get to show up and share everything that's going on. But then when someone writes into the show or if they send a tweet or something and they share their story or their version of something that happened, or if they said that we made them laugh, that was one of my favorite accomplishments is the idea that something that we have done was silly enough or fun enough that it has brought them joy and made them laugh. So I, too, I'm very, very much going to miss this. It has been a wonderful adventure.
And I thank you for encouraging me to come on this adventure because I was quite nervous in the beginning. And this has definitely been an aspect of my life that started out as something that was very challenging and stretching my limits, and now it has become this very fun aspect and something that I get to show up and do and then get to share with everyone. And I do feel very proud of it, very much in part to Thom Obarski, who was our initial producer and helped us have that safe space to chat about things. And now Mandy, who keeps the show running smoothly and helps us sound our best week to week.
So it's been a wonderful adventure. This is going to be hard to let go. And I think it's going to hit me most. Like, this was one of those things as we're talking about it, it's, like, I'll see you next week. This will be fine. But I think it's going to hit me when there's something that I want to talk about where I'm like, oh, this would be great to talk about, and I'll add it to The Bike Shed Trello board. And I'll be like, oh yeah, that's not a thing anymore, at least not quite in the same way that it was.
CHRIS: So what I'm taking away from this is that you're immediately going to delete my phone number the minute we hang up this call and stop recording. [laughs]
STEPH: Oh yeah. I preemptively deleted. So that's already done. Friendship is over at this point.
CHRIS: That's smart. Yeah, because you might forget otherwise in the heat of the moment as we're wrapping this whole thing up.
STEPH: [laughs]
CHRIS: But actually, on that note, in a slightly more serious vein, again, there's this weird aspect where the audience is out there. But we're very sort of disconnected, particularly at the moment in time where we're recording. But it has been so wonderful getting various notes from listeners, listener questions, but also just commentary and the occasional thanks, like, oh, you pointed me in the right direction, or you helped me think through a complicated piece of work or process a problem that we were having. And so that has been just so, so rewarding.
And one of the facets of this that has been so interesting to me is being able to connect to people and basically put out there what we believe about software, and for the folks that resonate with it and be able to have that connection instantly. And meeting people, and they're like, "Oh, I've listened to The Bike Shed. I like all these things." I'm like, oh, cool, we get to skip way further into the conversation because I've already said a bunch, and you say you like that thing. So, cool, we're halfway through our introductory chat.
And I know that we agree about a bunch of things, and that's really wonderful. And frankly, I'm going to miss that immensely. So for anyone out there who's found something valuable in this, who's enjoyed listening week to week, or perhaps even back to Upcase for things, I would love to hear from you. I'd love to connect to folks. Send me an email, Twitter. I'm on all the places. I'm Chris Toomey in various spots or ctoomey.com on the internet. Chris Toomey on GitHub. I'm findable, I think. Chris Toomey developer will probably get you there.
But I would really love to hear from folks, to connect to folks, you know, someday down the road; I think I'll be hiring again. And that'll be fun. I would love to work with some of the folks that have listened to this show or meet you at a conference, or if I happen to be traveling to a city or you're traveling to Boston. Really for me, so much of what this show is about is connecting with people around how we think about building great software. And so, I would love to continue that forward into the future. So yeah, say hi, if you're interested.
STEPH: I agree. That's been one of the most fun aspects of being co-host of the show. And I'll be honest, you are welcome to contact me, but I am going to be off-grid for probably six months. [laughs] So just know that there will be a bit of a delay before you hear back from me. But I would definitely love to hear from you.
I also want to say a very heartfelt thanks to a couple of people, just folks that have made this journey incredible and have made it so much fun. One, in particular, is everyone at thoughtbot for their continuous stream of knowledge. I mean, frankly, my software opinions wouldn't be half as interesting if it wasn't for everyone at thoughtbot constantly sharing their knowledge and being a source of inspiration. So I deeply appreciate everyone that has contributed to topics and ideas and just constantly churning out blog posts because those are phenomenal.
And I also want to give a shout-out to my husband, Tim, because he has listened to The Bike Shed for many years and even helped out with a number of show notes when that was something that you and I used to do before Mandy made our life so much easier and took that over for us. And has intervened a number of times when Utah mid-recording would decide it's time to play. So I want to give a very special thank you to him because he has been a very big supporter of the show and frankly helped me manage through a lot of the recordings for when I had an 80-pound dog that was demanding my attention.
CHRIS: I think continuing on the note of thanks; similarly, I'm so grateful to thoughtbot as an organization for everything that is represented in my career. It's a decade-plus that I have been following and then listening to the podcasts and then joining the organization, and then getting so many wonderful opportunities to learn about this thing called web development. And then, even after I left the organization, I was able to stay on here on The Bike Shed and hang out and still chat with you, Steph, which has been really wonderful. So thank you, thoughtbot, so much.
Thank you to Joël Quenneville, who will be the continuing host of the show. This show is not going anywhere. And, Steph, you and I aren't really going anywhere, but we won't be around anymore. But we are leaving it in the very, very capable hands of Joël, and I'm super excited to hear the direction that he takes it and Joël's incredibly thoughtful and nuanced approach to thinking about programming and communicating. So I think that will be really wonderful.
And lastly, I definitely want to thank Derek Prior and Sage Griffin, the two original hosts of this show, who really produced something wonderful, and for many years, I think it was about four years that they hosted together. I was an avid listener despite actually working at the company the whole time and really loved the thing that they produced and was so grateful that they entrusted me with continuing it forward.
And hopefully myself and then with the help of you along the way, we've...I think we've done an okay job, but now it is time to pass the torch or the green lantern. That's the adage I've been going with. Gotta pass the lantern, pass the mantle on to the next one. So, Joël, it's going to be in your hands now.
STEPH: Yeah, I'm so looking forward to future episodes with Joël Quenneville. They are going to be fabulous.
So I've been thinking in terms of this being our finale episode and then a fun ending for it, so there's a thing called the 21-gun salute, which is the military honor that's performed by firing cannons or artillery. Not to be confused with the three-volley salute, which I definitely confused earlier and which is reserved for funerals, which this is not. So using the 21-gun salute, I was like, hmm, it is The Bike Shed, and we have this cute ring ring that goes. So I think for our finale, we should have a 21-bell salute as we exit the shed and ride off into the sunset.
CHRIS: I love it. I couldn't imagine a more perfect send-off. So with that, what do you think? Should we wrap up?
STEPH: Yes, but I have one more silly thing to add. I've thought of a new software idiom that I'm excited about. And so, this may be my final send-off into glory that I'd like to share with you. And I think that we should make like a shard and split.
CHRIS: [laughs] I so appreciate that in this moment, this final moment that we have together, you choose to go with a punny joke. It is so on brand for the show. It is absolutely perfect. And I think with that note, shall we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph and Chris announce Joël Quenneville as the new host of the show! 🎉 Joël talks about his grand plans for where The Bike Shed is going to go from here. (Okay, maybe not grand plans...!)
Together, the group chats about unpopular opinions and hot programming takes.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Follow Joël on Twitter! Welcome him to the show.
Joël Quenneville - DRY is harmful for intermediate devs
Become a Sponsor of The Bike Shed!
Transcript
CHRIS: Thank you. No brown M&M'S. No asking me weird questions. I ask very little.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: I'm Chris Toomey.
JOËL: And I'm Joël Quenneville.
STEPH: And together, we're here to share a bit of what we learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What is new in my world? There is a new friend on the show with us today, Joël, developer extraordinaire, principal developer at thoughtbot, former coach of the Chicago Blackhawks (That one's not true, but it's funny to say.) and friend of the show, many time guest. Joël, so great to see you. How are you today?
JOËL: I'm excited to be joining the show.
CHRIS: Fantastic. So Steph and I shared with the audience in the previous episode that we have decided...we've made the heavy decision that it is time for us to hand over the hosting mantle. But we had yet to discuss exactly where that was going to go. And today, we are happy to share with you the specifics of that, which is that Joël will be taking over the hosting detail. So, Joël, do you have grand plans for where The Bike Shed is going to go from here?
JOËL: Maybe not in grand plans, but I'm definitely already planning content, lining up a few guests. I think the next few weeks are going to be a lot of interviews with some guests, a lot of co-workers at thoughtbot will want to share their stories or some exciting things that they're working on, things they are specialized in.
One thing that I really appreciate at thoughtbot is that we're all pretty full-stack developers. Everybody tends to have a specialization or an area they're really good at. And so whenever you're stuck, and you need advanced Git help, you go to, historically, Chris. If you want some help on security, you go to Mike Burns, et cetera. And so yeah, I'd love to bring in some of them and get to talk a little bit about some of the areas where they're either doing something interesting, or this is just an area where they have deep expertise.
STEPH: I have to say that having you take over as host of the show feels like such a nice continuation, given that you've been involved with The Bike Shed since you were a guest on the show back in 2018. And then, since then, you've been a repeat guest. And even when you're not here, Chris and I still frequently bring up your name and mention some of the talks and the blog posts that you have written.
So I'm super excited for everything that you just said, for all the guests that you're going to bring on the show, and the thoughtbot voices. And I'm looking forward to tuning into future episodes and hearing what happens next.
JOËL: Yeah, this is a really exciting transition for me. I am a long-time fan of the show. I've been a guest a few times. And now, coming on as a host is just taking it to the next level.
CHRIS: Yeah, Joël, I'm also super excited to see the new perspective that you bring to the show. And frankly, to Steph's point, you've been on the show a bunch of times. You've been here in spirit. In fact, there was a tweet that someone sent to us which was "Hey, @joelquen," which is your Twitter handle, "I feel as if I know you and your work after listening to @SViccari and @christoomey. They definitely appreciate you," which was true then and is true now. So I couldn't be happier for you to be taking over this hosting slot.
JOËL: Creating content for developers and sharing the things I've learned, or the things that I've experienced with other people has always been something that's been really important to me. And so, getting a chance to bring that to The Bike Shed is a really exciting opportunity.
My past work has intersected with the show several times, either that's been conference talks or blog posts, or even just conversations we've all had. Steph and I pair almost every day. So there are a lot of good conversations. A lot of conversations that end with me saying, "You should talk about that in The Bike Shed. That's a good topic."
STEPH: You have been an excellent source for topics in terms of you've literally added stuff to our Trello board that we use to manage the show and then yes, and all those conversations that we've had. You're like, "Oh, this will be a good Bike Shed topic." And I'm like, "Hold please," while I go and then add it to our board.
JOËL: The fun of being an employee at thoughtbot is that I have access to the Trello board. And I can just add whatever ideas I have there for you all to talk about. [laughs]
CHRIS: That is the most direct way to send in a listener question is just to write it into the Trello board directly. Skip all the forms, and the Twitter, and the whatnot.
JOËL: Yeah, speaking of topics on our Trello board, let's go out on a really spicy one. I see we have a card about unpopular opinions. Chris, you created this. What is your hot programming take?
CHRIS: This was so long ago, I don't even remember. But this one has interestingly sat on the Trello board for a while. I was like, Steph, let's really lean into it. Let's go out there. What are our extreme takes? And I think we say the stuff that's in our heart most of the time anyway. But we're also, I don't know, pragmatic, kind of boring, up the middle.
Someone recently described my tech choices as like a Subaru. Like the architecture, the way I built it, you know, it's stable. It'll get to where you want to go. It's not going to be too fancy or too flashy. And I was like, you know, actually, humorously, my wife and I just bought a Subaru. So I was like, I guess I can't say no to that.
Anyway, though, I think I have one spicy take, which I have shared on The Bike Shed before. But it is the hill that I will die on. I care about this. It feels like I'm just being persnickety, but I think it matters, which is that the phrase single-page application or the idea of an S-P-A or a SPA, which I've heard some people say it, is just a terrible framing. I am so unhappy with it as a concept.
I think, technically, the implementation of them has often led to some really complicated things, which is why I've spent so much time exploring Inertia, or LiveView or Livewire, or all of the other options that are out there. I think there are some really interesting novel ways. Remix.run is the most recent thing that I've been talking about, which takes a pretty traditional SPA type of build and then makes it behave more like a traditional server-rendered application.
But just the idea single-page application really grinds my gears that that is a thing that we talk about. Because it's the web, we have URLs; these should all be different pages. I don't care if there's one bundle of JavaScript that we send down or that it begins as a single HTML file that we send down, and then we repopulate constantly. There still should be URLs. Those are a honkin' good idea, and we should use more of those. And we shouldn't anchor to a technology...like, no, SPAs, I don't like it. I'm not a fan. So that is my unpopular opinion, I'm pretty sure.
STEPH: I think we have a full episode, too, where we focused a lot on that topic where you shake your fist at the SPAs in the world [laughs] and why you don't like them.
JOËL: I feel like the industry pendulum might be swinging back. I think we've hit peak SPA, and now we're slowly moving back, anecdotally anyway.
CHRIS: That is definitely what I'm seeing more and more of, and I'm very happy for it. Because I think there's some interesting stuff that came out of SPAs and the ideas of like, oh, let's animate, and let's have more continuity and page transitions and things that I think can really enhance the end-user experience. But I think the cost has been too high. It's broken us from some of the norm.
Like, how does the web work? That's a thing that we should talk about. Links, they're awesome. You can link between pages. It's so cool. And we just kind of threw that away. And we're like, div onclick. It'll be fun. Don't worry about it. Screen readers, who cares about those? Doesn't even matter. I care. That's who cares. I'm going to calm down now. I'm fine. It's fine. But yes, I agree. I do think the pendulum is swinging back, and I'm very happy to see that.
JOËL: Long-time listeners of the show will know that I'm a big fan of the Elm language. And for the longest time, I wanted to do a client project at thoughtbot with it. But I agree with you, Chris, that a lot of things shouldn't be SPAs. A lot of things should be just boring Rails apps. And so, every time an opportunity came, I just couldn't justify doing a front-end app. So I would be like, yay, I would love to do Elm here, but this should just be a vanilla Rails app.
And then, one day, we had a project that came in that was actually a single-page app. It was one page where you loaded up a bunch of data streams and then could interact with them and get all these cool visualizations and things. There was no clicking away. There were no other pages. It was just load some data, and you've got a playground. And that was the moment I knew, okay, this is the app I want to do in Elm. And that was my introduction to bringing Elm to colleagues at thoughtbot to work on a project together.
STEPH: You put out the signal kind of like the Batman signal, but you put out the Elm signal calling everybody into the project.
JOËL: [laughs]
CHRIS: You put a small tree on your desk, and everyone came together around it.
JOËL: [laughs]
CHRIS: Also, I want to applaud your pragmatic restraint of I want to use this. This would be fun to use, but it is not the right choice for many particular applications or projects that came through the door. But then you found it. Then you found that magic moment.
To be extra clear, I'm not opposed; like, my distinction is an SPA versus traditional server-rendered HTML generation on the server...I got redundant in there, but that'll be fine. It's more the bundle of JavaScript that goes down and that there's no routing on the server-side, et cetera, that all logic is pushed to the client-side.
I've harped on about Inertia and my love for that framework over many, many an episode on the show. But I feel like that, and many other solutions that are in that similar space, allow us to have the sort of experiences that are traditionally associated with an SPA but don't give up on the idea of auth being simply managed via a cookie on the server sort of thing. Cookie on the server is a phrase that doesn't really make sense. But y'all get what I'm saying, I hope. If not, assume that I said the right thing. It'll be more fun that way.
JOËL: Steph, I'm curious; what's maybe one of your unpopular opinions or hot takes?
STEPH: I have a couple, and since Chris used a phrase that has now helped anchor me in terms of like the hill that I will die on, I'm now looking through that list to pick the one that I feel like the most passionate about. So looking through that list, I might just have to go a couple. It's hard to really choose.
But the first one is I'm going to say you don't need a side project. I can't tell you how much that frustrates me when people just always say you have to do something on the side. You have to stay up late and code. You have to do coding on the weekends. I feel very strongly that software development is a job, and it doesn't have to be your passion; if it is, that is fabulous.
But it is, at the end of the day, still a job, and you don't need to know three additional languages to be good at your job. And you should be able to focus on learning what makes your day-to-day easier and then learn that during your work hours. So that's something I feel very strongly about.
JOËL: Do you think that's something that is a current reality or something that is aspirational? As in, an employer shouldn't require it, or it is currently possible to have a completely fine career and never have a side project.
STEPH: I think it's very possible currently and aspirational for some teams. I think there are some companies and teams that will turn you down because you don't fit that mold. And I think that's what then puts us in that unpopular opinion category. Because there are enough people that still think that that is an important part of being a software developer.
But I do think that there are still plenty of teams and people that are starting to agree with the idea that it shouldn't be that way and that that is not a requirement for an interview, or for joining the team, or for being a good developer, or for progressing in your skills. So a little bit of both, currently possible and also aspirational.
JOËL: I'm going to throw a question out, and I think this may be its own complete topic. So feel free to tell me that this is not the day to talk about this, but I'm going to put a question out. Recently, there was a really good conversation that happened internally at thoughtbot where one of our newer developers was asking, is it possible to progress our whole career ladder without ever doing any side projects? So just 32 hours of client work, eight hours investment time every week.
And maybe a little bit beyond that, maybe it's technically possible. But it takes an excruciatingly long time. Is it possible to progress in a reasonable amount of time through the career ladder without doing any extra work outside of our standard working hours? What are your thoughts on that for thoughtbot?
STEPH: I think the answer has to be yes. Because I mean, who's creating that ladder that you are climbing? It's going to come down to the company and the managers and the people who are deciding on how you advance, and not everybody has time. So what you're telling me is that if someone can't advance during the normal work hours that they have then...because they have families; they have other priorities in their life; they have other responsibilities, that then they're just stuck? And that feels like an unacceptable answer to me. So my answer is absolutely you can progress during the work hours that you have. And if you can't, then that is a problem for managers and leadership to fix.
CHRIS: I definitely agree with the assertion that you're making, Steph. And I like the ire that you're bringing to this. This is good. This is the sort of fire that we should have in this particular segment. I do think there was an aspect of the question that is subtle and really interesting to me, which is in a reasonable amount of time. And then you mentioned, or it might take an excruciatingly long amount of time.
A thing that I observe about our industry is that it is rather young overall, and the expectations of progression are incredibly rapid. This is sort of my second career. I spent three years as a mechanical engineer working in industry. That was the first thing that I did. And that world looks wildly different.
The idea of achieving principal engineer, which is a little bit more formalized of a concept, but I saw that as 30 years down the road kind of thing, or it may be 20, or something, but a significant chunk of my career. That is something I might achieve towards the end of my career after having put in a lot of time.
And development is just such a young industry. Like, the idea of going to a bootcamp and then two years later being a senior engineer and continuing to progress just ever so rapidly is interesting. I don't want to slow anyone down by any means. And I don't want to say that like, well, when I was over there, it was slower, and so it should be slower here. But I don't think it's realistic, frankly.
And I think some of what is at the heart of this question is like, no, this is an industry where you get in there, and in five years, you have achieved the pinnacle of your career. And then you go retire, and you go to a cabin in the woods, and you never talk to humans or touch a computer again. That's the dream. It's like, well, maybe that's not realistic. And what would it actually look like if we were a little more chill about these sorts of things? That is a question that I have.
STEPH: It's funny that you mentioned that one because that was one that I almost put on my list of unpopular opinions where I think the progression in which we change our titles as developers is silly, [chuckles] is probably the best word that I have for it. Because I agree with you, I don't want to slow people down. And we often change our titles to reflect that, yes, we want more responsibilities. We want more pay, and so that feels like the best way to then achieve those goals.
But it's so rapid in how we expect people to progress to different levels of engineering. I do think it loses a bit of its meaning because then we progress people so quickly through those different roles. Because then senior developer means so much. I mean, are you a senior developer with two years of experience or ten years of experience? Like, you could be in both of those groups. And so, it just loses some of its meaning because of that.
JOËL: It's also hard because years of experience aren't really a good way to compare two developers. I mean, two versus 10 is probably something you can compare very roughly. But five versus 7 or 5 versus 10, someone might be much more experienced or be better at solving problems after five years than somebody else in 10.
CHRIS: I will argue that two years in a consultancy is like five or maybe even ten years in a middle-of-the-road product company that's kind of got its stuff figured out. And just the volume of new and novel that will come at you is quite large. And I strongly recommend working with a lovely company called thoughtbot to get that experience because it'll be a fun time while you're there. And, man, will you learn a lot.
MID-ROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half.
So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPH: Joël, I have another potentially unpopular opinion I'd love to share. But I'm curious, what's one of yours?
JOËL: So here's a controversial opinion that I have: DRY, Don't Repeat Yourself, is dangerous, I'd say even a harmful rule of thumb for intermediate developers. It's one of our favorite kind of aphorisms, principles that we try to live by as developers. But I think it often does more harm than good at a certain point in your career.
When you start off as a new developer, one of the most common sorts of bugs you'll do is where you duplicate code, and then you change it in one place and don't change it in the other. And they're out of sync, and then your code breaks. And then you discover DRY, Don't Repeat Yourself. And it's amazing because this simple rule fixes 80% of the bugs you were creating. And of course, now there are other types of bugs you find out about. But it's a whole class that gets eliminated by that, and it's wonderful.
But then you get to the intermediate developer stage, and you start to get clever, and you try to apply this in a lot of places. And you start trying to abstract things that are similar but not the same, and then they start to diverge. And then you've got a mess on your hands. And this happens in application code; it happens in test code.
Yeah, you end up doing that, prematurely abstracting, particularly when it's based on simple substring matching. So you're saying this string of 100 characters is identical in two files, so we need an abstraction that shares these two, when the fact that the two strings are similar is just coincidental. And so that's where you cause more harm than good. And then eventually, you kind of transcend that, hopefully, and get back to the point where you can maybe apply DRY more judiciously.
Now, DRY is no longer for you about similar characters. It's about similar or actually same concepts because similar is not good enough. And hopefully, you have the discernment to distinguish between similar and same. And when you're not quite sure, maybe you leave them separate to see will they diverge in the next few months?
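(A quick aside for readers of the transcript: here's a small Ruby sketch of the "similar but not the same" trap Joël describes. The method and attribute names are invented purely for illustration, not taken from any real codebase.)

```ruby
# Two methods whose bodies happen to be textually identical today.
def invoice_summary(invoice)
  "#{invoice.number} - #{invoice.total}"
end

def receipt_summary(receipt)
  "#{receipt.number} - #{receipt.total}"
end

# A premature DRY-ing based only on that textual match:
def document_summary(document)
  "#{document.number} - #{document.total}"
end
# When receipts later need refund notes and invoices need due dates, this
# shared method has nowhere to go but branching or being split back apart,
# because the two callers were never the same concept to begin with.
```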
STEPH: It's almost like you and I are on the same project, and we have felt similar pains in the world.
JOËL: [laughs] I think this is a thing that a lot of developers eventually get to the point where they can do it decently well in production code. But at least when it comes to RSpec code, the Ruby community has just refused to learn this lesson. We're still stuck in that intermediate developer phase where everything's got to be extracted out as shared setup. And I would argue that shared setup on most tests is similar and not same. And the identical thing is if you're just copying two strings of code that look similar in two files and creating an abstraction that really doesn't need to be there.
CHRIS: Friends, I have to tell a truth in this moment. I wrote a let within an RSpec file this week. It happened.
JOËL: [Gasps]
STEPH: This is actually why you're moving off the show, Chris. This right here. [laughs]
CHRIS: I'm getting kicked off the island. I've broken rule number one [chuckles]; let's not. This was...I think it was a reasonable one; at least I couldn't figure out a different way to do it. I was defining a class within an RSpec file, a representative implementation of a subclass. And I tried to do it by just defining a class in line, but RuboCop came along; we recently added RuboCop to the app as well. And RuboCop said, no, no, no, you are leaking a constant. And I said, oh, that's true, RuboCop.
And then, thankfully, the documentation page for that rule had a pointer to what to do instead. And so it ended up with a let. I don't actually think I need the let now that I think about it. You can dynamically define a class within a spec example. So I then, in the rest of the example, started doing that. So I'll probably remove the one let that I actually added. [laughs] But it happened.
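(For transcript readers who haven't hit this cop: a rough sketch of the pattern Chris describes. The rule is most likely rubocop-rspec's RSpec/LeakyConstantDeclaration, and the class names below are made up.)

```ruby
# Flagged shape: `class FakeProcessor < BaseProcessor; end` inside a spec file
# defines a top-level constant that leaks into every other example.

# One alternative: build an anonymous subclass inline, so no constant leaks.
RSpec.describe BaseProcessor do
  it "lets subclasses customize #process" do
    fake_processor = Class.new(BaseProcessor) do
      def process
        :handled
      end
    end

    expect(fake_processor.new.process).to eq(:handled)
  end
end
```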
JOËL: That honestly seems reasonable to me. I think there are use cases where let can be done correctly. It's just really hard to have the discipline to not stray off that very narrow path of safety. I recently gave a talk at RailsConf about how your test suite is making too many database calls. And most of them are all related to doing way too much during your setup phase.
And one of the reasons I gave that this happens too often is shared setup and let in particular. And I thought that I might get booed off the stage. But in fact, several people came up to me afterwards and told me that I had actually given them a whole new perspective they had never seen, and that was interesting to them.
STEPH: That's awesome. That's so nice that people came up and shared that with you. Yeah, I think one of the areas that we don't highlight enough, whenever you and I happen to gripe about the use of let or over-DRYing and extracting setup for tests specifically, is the trade-off: you're coupling all your other tests to that extracted setup.
And so it's not so much that I care that you DRYed up your test setup; it's that now the rest of my tests are likely reliant on that extracted shared setup. And that's where we've introduced a trade-off, and it's a painful trade-off that I have worked through a number of times. So just to add a bit of persnicketiness to our discussion: it's not so much the DRYing and the extraction that bother me, it's the trade-off of now everything is coupled. And it becomes much harder to then create independent scenarios that are still easy to read and then modify.
JOËL: Because you're coupling two things that are not the same, that want to diverge. And now you're forcing them down a single path. And I think probably the biggest, reddest flag you can get that you've misDRY-ed is where you try to introduce conditionals to your shared extraction. Because the whole point of a shared extraction is that everything is the same. And so once you start introducing branching in there, you know something's gone horribly wrong with your abstraction.
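(A hedged illustration of that red flag, with invented spec and factory names: this is roughly what it looks like when a shared let grows a branch because the examples funneled through it never really wanted the same setup.)

```ruby
# Shared setup that several examples were funneled through:
let(:admin) { false }
let(:user) do
  if admin                                    # branching inside shared setup is
    create(:user, :admin, onboarded: true)    # the sign that these examples
  else                                        # wanted different worlds,
    create(:user, onboarded: false)           # not the same one
  end
end

# Often clearer: let each example build exactly the data it needs.
it "shows the admin dashboard" do
  admin_user = create(:user, :admin, onboarded: true)

  expect(dashboard_for(admin_user)).to be_admin
end
```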
STEPH: Okay, so I think we've established that we've got very strong, maybe popular, unpopular opinions about DRY and especially using DRY in test setup. What's another unpopular opinion that you have, Joël?
JOËL: This one is a stylistic one. But I think that in Ruby, at least, every if should come with an else. You want something to happen when your condition doesn't trigger. I think it generally makes for more readable code and also prevents some implicit nils from getting through. And nil errors are probably at the top of your bug backlog if you were to look at it right now (For all of you listeners, I'm pretty sure that's true.), excluding JavaScript.
CHRIS: Someday, we're going to make undefined a function, and it's going to be a great day. This is an interesting one, Joël. I'm inclined to agree with you, but I know that in my code, I don't necessarily follow this adage. So I find it interesting. I would call them intentional nils; I'm sometimes fine with that, or particularly in side effect-y code, you know, if this condition, then do something, otherwise, nothing.
But yeah, it is interesting. The explicitness, the nil, is a big mistake that seems true. You hang out in Elm for long enough, and you don't have one, and then you got to think about stuff harder. But yeah, I don't find myself doing this. So I find it interesting. I conceptually agree with what you're saying, and yet my code tells other stories.
JOËL: So I would argue that Ruby is an expression-oriented language; all methods auto return something as opposed to a statement-driven language like JavaScript, where nothing returns unless you explicitly return it. Ruby is all about those return values. And so you need to cover all branches, and if you don't, Ruby will implicitly have some branches that you don't have there. And so, I prefer to make visible the branching logic that's happening.
STEPH: I also like this one. I think I'm on the same page as Chris, where I like the opinion and this statement. And then also, you'd asked me earlier about whether something is capable now versus aspirational. And this feels like in the aspirational area for me where I agree with it, but I don't necessarily do it. But I think it makes a lot of sense because then it does force you to think about what is the return value? And to be very explicit and say, yep, no, I want a nil here. That's just what I'm going to return.
JOËL: And maybe even think about in that else case maybe you don't always want nil, maybe you'd rather return an empty string, or a null object, or something like that. And forcing you to actually manually type that value in instead of just being like yeah, Ruby knows. It'll put a nil there. It's easy to not think about the error edge cases, which I know is the opposite of how Chris thinks. Chris thinks about all the error edge cases.
CHRIS: My brain has been shifted by experience.
JOËL: One thing that I think works really well with this is a stylistic approach that I use where I separate branching code from doing code. So in a particular method, if there is a conditional, then the body of the conditional calls out to another method. It doesn't implement logic there directly. And if a method does a calculation, or does the side effect, or does some kind of work, it's not allowed to have a conditional in it. So an individual method either gets to branch, or it gets to do a thing but not both.
STEPH: Is that so you can quickly see all of the branching that's involved so you can see it at a very high level versus if you have a very large branching in your method and then it's hard to see how many possible return values that you have?
JOËL: Yes. I like it because when you read code, especially when you're reading code that has a conditional in it, typically, you're reading it at a higher level of abstraction. You just want to know what are the possible ways that it can branch, and then what are the paths it can go down? And you probably don't care about all the nitty gritty implementation details that happen at each branch.
And then you might care about one particular branch and decide to go down that path. At that point, you will jump to that method. But you don't need to have all 2 or 3 or 10 branches expanded for you. It keeps the method at a single level of abstraction and also allows each method, to a certain extent, to have a single responsibility. It does one thing. It's either branching between n choices, or it does one thing, but it doesn't try to do multiple things.
So that's less of a hot take opinion, like, everyone should write code that way. That's just a style I've adopted for myself that works really well. But that particularly dovetails nicely with my hot take, which is if should come with an else.
STEPH: I like it.
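(A rough sketch of that branch-versus-do split, again with hypothetical names: the top method only routes, and the actual work lives in methods with no conditionals in them.)

def deliver(notification)
  if notification.urgent?
    deliver_by_sms(notification)
  else
    deliver_by_email(notification)
  end
end

def deliver_by_sms(notification)
  # The actual SMS-sending work lives here, with no branching.
end

def deliver_by_email(notification)
  # The actual email-sending work lives here, with no branching.
end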
JOËL: Yeah, so, Steph, do you have another juicy opinion you'd like to share with our listeners?
STEPH: Yeah, I've got one more to share, and this one is going to show my consulting roots. And it's that developers should be included in the design and planning phases, and not just in the execution phase of the work. So that's something that I've seen a number of teams struggle with where they wait until either design is considered completed or perfect or someone else has figured out the way that they want a feature to be built, and then they just essentially throw it over the wall and kick it over to the development team.
And so then it's laid out with very specific criteria. But it's not clear the problem that's actually being solved. And so developers have done a very narrow focus of what they're working on. And you just often end up building the wrong thing when that happens because people don't have the context. They don't know the questions to ask if they do run into some questions regarding the requirements.
And so I strongly advocate that developers shouldn't just pick up tickets and then write code, that they should be part of the planning and the product process as well. Turns out that developers often have really good ideas when it comes to features. And they also have a lot of knowledge about the application and can ask good questions. So why not include them in that process?
JOËL: You mentioned that this is inspired by your experience as a consultant. I've sometimes heard the contrast between the terms contractor versus consultant. Is that a distinction that you make?
STEPH: Yes and no. Yes, because I've heard other people make that distinction, but I personally wouldn't make that distinction. And even for people who are not a contractor or consultant but are full-time, I still love when people can adopt the mentality that you have the same responsibility for this product and where it's headed and its technical decisions.
And we all should be mindful of the time that we're spending as we're working on something. And I think being a consultant helps you be more mindful in terms of like, oh, I've spent two hours on this problem. I should probably reach out for help. And I feel like people who haven't had an opportunity to be in that mindset don't think that way. But I think it's a really wonderful state to be in all the time just to think through; okay, I've been stuck for an hour, now's the right time that I should reach out for help. Versus spending like a full day on something and then reaching out for help.
So I have heard that distinction, but I personally wouldn't make that distinction. I would advocate that even if you are a consultant, or full-time, or contractor that ideally, you would still be part of those different processes to then help build a valuable product.
JOËL: Steph, I found it really interesting when you introed this idea. It was about developers bringing their ideas to the business and design side of things. But then, when you dug into the idea a little bit more, you kind of flipped it on its head and said that being involved in those meetings helps developers do their job better because now they can more appropriately timebox a feature or decide when they've hit that 80% point that's good enough to ship, and then they need to reprioritize something else. So it sounds like there are advantages to both sides, both the business side and the dev side, from a tighter integration there.
STEPH: Yeah, thanks for calling that out. I think that's an excellent point that I hadn't really even considered as I was just rambling about a strong feeling. But yes, I think it's beneficial to all sides. It's beneficial to developers who are getting the work done. And then I also think it's really beneficial to the product team and design just because that way, everybody essentially has the same context. They're on the same page.
And then you also have more camaraderie that way too in terms of people know the problems that they're trying to solve. And you can have more opinions. And people can surface those ideas versus if you are kept in the dark as to perhaps why a feature is being built or who it's geared towards, then it's more likely that you're not going to be as connected with the rest of the team and be able to provide helpful ideas because you won't have the fuller context to then surface that to the rest of the team.
So this is definitely coming just from all of my experience in the projects that I've been on that the most successful projects include design, and developers, and product management. And they all get together, and they talk about the problems that are being solved. Versus the projects that I've been on where essentially, all of that work is done separately, and then there's just a Trello board or Jira tickets or whatever tool that you're using. And then developers go and pick up tickets.
Because then you often end up having those discussions anyway because developers are then going to have to check in to say, hey, you said this thing. Did you mean this? What exact requirements am I looking for? By siloing that process elsewhere, you end up just duplicating your efforts because that conversation is ideally going to happen anyways.
JOËL: I recently had a conversation with someone who had been promoted to senior developer and was asking me for some advice on how do you be a senior at your job. And this is at a product company. And basically, what you were saying, Steph, is what I recommended. If, as a senior developer, you are just a machine that converts tickets into code, they're not using you to your full potential. And honestly, they're overpaying for what they're getting.
You need to be in those meetings, in those conversations, so that, as you mentioned, you can be that nexus between business and tech and tell them, look, this is your strategic goal. You think this is the technology thing you want to do. Yes, it will solve your problem, but it will take twice as long.
And that is incredibly valuable to the business people, because a solution that is doable and easy and a very similar solution that is near impossible to do in tech sit side by side all the time. There's a classic xkcd about this situation. And so having someone who knows that nuance and can recommend and say, look, we can do this in two weeks; that will take six months, or even just talking through trade-offs, is incredibly valuable.
And then, as you said, bring that back in your own work, knowing when to say, look, we thought this is going to take two weeks, and it's taking more. We think this is going further. We know that the business goal is to get to here. So I'm going to pause on this and propose a different technical solution that will get us most of the way towards that goal while still respecting the deadlines we're working under.
I might argue that this sort of mindset is not just a senior developer thing. It's valuable at mid-level and junior as well. And I think that as a consulting firm at thoughtbot, that's something that we bring in from the beginning for everyone that we try to build this. But definitely, as you move up into your career, this is going to become more and more important.
CHRIS: Yeah, I'm surprised. I'm actually totally on the opposite side of this one. I fundamentally disagree with...no, I totally agree with everything you're saying and how important these sorts of conversations are. We're actually working on something at Sagewell right now. It's a new, reasonably large integration with an external platform.
And we're very, very intentionally pushing such that product and design are going off and doing some work, and engineering is going off and doing exploratory what do the APIs actually look like? What's the data that we need? What's the object model in this system? What can we get away with not providing? What do we need to provide?
And continually, we're just trying to encourage communication across those different tracks. So design is thinking about different flows and what's the experience a user is going to have. And ideally, getting engineering to review that and say, "Oh, that's going to be easy. That'll be hard. I think we could do that, but I actually have to go look it up and see if that's possible," and vice versa.
Engineering now being in the depths of saying like, "Oh, actually, this has to be done in this way for legal reasons or whatever it is," and then providing that back to design and product to think about how do we structure it? How do we sequence things? What's part of the MVP? And I've been really happy with the nature of that, that back and forth communication.
But it is definitely something that requires intentional work of making sure that we're not just falling into our own silos, but we're coming up for air, having those conversations, passing ideas back and forth, including each other in the conversations. But without question, it will produce a better outcome at the end of the day. So yeah, I actually do, in fact, agree 100%.
STEPH: Yeah, to add on to that just a bit, there is a particular example that comes to mind where I felt the most pain for when developers and product and design aren't in the same room and then discussing the work to be done. And that's essentially where someone ends up building the wrong thing. And it's because someone decided that they knew the best solution for something, but they didn't really collaborate with the rest of the team. They turned that into a ticket.
So then a developer sees that, and then they start implementing, and it's not until later that someone comes along and starts asking questions to say, "Hey, so what problem are we solving?" Or "Why are we adding this code?" And Joël, I think you said it perfectly and that this is something that I do expect of a more senior developer where they have that experience, and they ask those types of questions that then you realize that, oh, we've actually built the wrong thing, or the thing that we're building doesn't solve the problem that we had in mind.
And at that point, you have someone who has invested a week, maybe a couple of weeks into something, so then that feels really terrible to then say, "We actually need to scrap this" or "We need to totally rethink this." So then you start making trade-offs of like, well, maybe we can keep this portion or preserve some of the work that you've done. And so you just end up in a really messy state. And that can be avoided if people are collaborating at an earlier stage of that process.
JOËL: Yes, you and I, Steph, were in a very particular situation where something like this had happened. Some developer had been working on a ticket for a couple of weeks. And it was one of those tickets that just kept growing. The code kept growing, but then the client kept adding new requests and features onto it. And then, at some point, you and I were brought in to help, and the first thing we had to do is like, let's stop and ask what is the problem that is actually being solved here?
And actually got everyone together with the client. And that conversation helped us do a big reset and helped us find a way that was more focused that actually solved the underlying problem, what the client actually wanted rather than what they said they wanted.
I noticed that for most of these unpopular opinions, it sounds like we pretty much all agree with each other. So either we all have a very similar set of unpopular opinions, or maybe these opinions might be a little bit more mainstream than we give them credit for. I find that oftentimes, at least on Twitter, when people tag things as "unpopular opinion" in quotes, they may be a little bit more popular than people give credit for.
CHRIS: I feel like the RSpec one is unpopular. And also, have you asked Steph about Pop-Tarts?
JOËL: [laughs]
CHRIS: Because we are capable of having sincerely unpopular opinions.
JOËL: Let's save that for the next episode. And on that note, shall we wrap up?
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph and Chris share some big news about the future of The Bike Shed.
Steph shares an update about integrating with Knapsack Pro. Chris is excited for larger projects that will begin in the next few weeks. They answer a listener's question on keeping backlogs connected to the product vision.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Linear
RailsConf 2022 YouTube Playlist
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: We don't need Skype anymore. We live in a post-Skype world, audio flapjacks.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. So let's see, recently, Utah, who's my dog for anyone that's not familiar, Utah had his very first beach trip. So we went down to Myrtle Beach area in South Carolina. And he had a grand old time because he's a water baby. Like, if there's water, he is in it. He loves it. So I was confident that he was going to like the beach, but I wasn't sure what he was going to think of the waves.
And luckily, there's an inlet area so that he could splash around and not be too worried about waves and had a wonderful time. He did take a huge gulp of saltwater because he's not used to that, and that threw him for a loop for a while. [chuckles] But overall, it was a lot of fun. He had a wonderful first time at the beach.
CHRIS: A live raptor on the beach. For continuity for anyone listening, I tend to call Utah Raptor because there are things called Utahraptors, and I can't call things by their normal name. It's just an affliction I have. But yeah, a raptor on the beach.
STEPH: He has a couple of nicknames going for him. There's someone else in my family that always refers to him as Tank; I think because they remind him of another dog named Tank. So yeah, he's got all the nicknames.
In some more tech-related news, I'm super excited that the Rails conference talks are now public. There are a number of talks that I'm interested in watching. And there's just such a killer lineup of topics and presenters, including a number of thoughtboters that presented this year. There are also several talks that focus on testing legacy code, which, as you know, is very relevant to my life right now. So I'm particularly interested in those talks. And frankly, I'm running out of doomsday movies to watch. So it's really good timing that I have these talks to help with my evenings where I need something to watch.
CHRIS: Now you say you're running out of doomsday movies, but have you watched Armageddon?
STEPH: [chuckles] No, it is on the list. So I have not completely run out. I have a very good one to still watch.
CHRIS: I'm not even that big a fan of the movie. It's just now a part of my public persona, apparently, and so I got to hold that line. But I didn't realize the RailsConf talks were out. That is super exciting. I will definitely have to check through those and pick out a few at a minimum, watching all of the wonderful talks by thoughtbot folks. So yeah, that's very exciting.
STEPH: Yeah, I'll be sure to include a link in the show notes so that way it's easy for folks to find and watch along. You and I also have some big news to share with everyone. As most listeners know, I've been prepping for maternity leave. And as part of that preparation, you and I have discussed ways to handle that period where I'm away and focused on being a new mom for six months.
We talked through a couple of ideas, and ultimately, you and I came to the conclusion that the timing feels right to end our season as host of The Bike Shed and transition the show to a new host, so a passing of the torch or a passing of the handlebars if you will. So even though you and I are leaving the show, The Bike Shed will continue to exist, and you and I will be here for the next couple of episodes. And the show will continue to be the wonderful show that it is today. And we'll share some more details about that in a future episode.
So while it's really exciting that someone new is going to take over the show, I think I can speak for both of us when I say that this definitely wasn't an easy decision. I know that I've really enjoyed this part of my life where we show up and share our development adventures. Although, to be honest, it's really all the nonsense; that's what I'm here for. That's been my favorite part, like our poor attempts to use sports analogies and renting goats to mow the grass. And I particularly love when you lean into segments about what grinds your gears. There's something about a spirited Chris Toomey "You know what grinds my gears?" rant [laughs] That really brings me a lot of joy.
CHRIS: Oh, well, that feels like it's too kind of a thing for you to say. And, well, this is already an emotional topic now. I'm feeling the feels. But yes, this has been such a joy to record the show with you. And again, we'll be here for a couple more episodes just to sort of segue over and provide some continuity. But yeah, it is, I think, the right time. We've both done this for a good bit of time now. I think we've said a lot of the things that we have to say.
I appreciate both the consistency in what we've had to say and also the way things have changed and the new elements that have come in and out. But yeah, I am excited for the next host, and we will introduce you to them in the very near future, dear listeners. At a minimum, and we'll get another chance to say this, but, Steph, it's been a real pleasure recording this podcast with you.
STEPH: Thank you. It has been a real pleasure. And I'm with you, this is hard to do, and it's hard to announce. But like you said, we'll have some more episodes, and then we will also have more of a finale episode where we can dig into all the feelings. But keeping some of those feelings at bay for now just because we will still have a couple more episodes to chat and then another episode where we can really dig into all of those feelings and then also reflect on our time as host of the show.
But returning to a technical note, I have an update I can share that's related to some of the testing work that Joël and I are doing. Specifically, we started integrating with a service called Knapsack Pro, which is a service that helps you parallelize your test suite across CI nodes to then speed up your test suite. And ultimately, what we really want to use are Knapsack Pro's queue mode and automatic splitting of large test files to help us then distribute those files across all of the available nodes. And we're still working on setting that up. So I don't have any cool or sparkly stats to share just yet.
But I have noticed some other wonderful features about Knapsack, specifically some of the reporting structure that they have. So a lot of this data Joël and I were collecting manually. So we were having to go through and figure out, okay, how long are test files taking? Which files are running on which process? Where do we have, like a tentpole, a particular file that's taking a lot longer than other files to run? And with the Knapsack UI, they're just telling us all of that data where they're showing us how long do the test files take? Which process is completing first versus completing last?
They also show what's the time span between the finished times of the CI node that started first versus the one that finished last? So we can see like, are we balancing well across all of these nodes or workers? So there's been already a lot of really good stuff that I've been seeing from Knapsack Pro that I'm really excited about. And we'll just have to see what comes next with the queue, what kind of time improvements that we can also see by taking this approach.
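(Roughly what that integration looks like, for anyone curious. This is a sketch from memory of the Knapsack Pro docs, not the project's actual config, so double-check the task and variable names against the current documentation.)

# Gemfile
group :test do
  gem "knapsack_pro"
end

# Then on each CI node, instead of running `bundle exec rspec`, you run
# something like:
#
#   KNAPSACK_PRO_TEST_SUITE_TOKEN_RSPEC=<your token> \
#     bundle exec rake knapsack_pro:queue:rspec
#
# In queue mode, each node pulls the next batch of spec files from a shared
# queue as it finishes, so one slow file can't strand a node long after the
# others are done.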
CHRIS: Oh, that's cool. I didn't realize you had started working with Knapsack Pro. That's definitely on my short list of things to consider, particularly as our test suite grows. Actually, on a quasi-related note, we this week had another developer who had been off for a couple of months in the summer; they just came back this week. And so we're sort of at full capacity.
And I've also been writing a couple of PRs this week. It's been exciting. I actually still remember how to code, which is cool; glad that that's still in there. I'm actually operating as point dev this week, which means that I am the support for our admin team wherever they need changes to customer accounts or things like that happening in the background. I'm also triaging all the bugs and things that are coming in, so there are a bunch of little PRs that I pushed through.
But interestingly, I've just not really looked at or optimized our CircleCI plan. But we keep going over our budget. And then it turns out that their pricing is structured in probably a reasonable way where we buy a plan that is this many build minutes for a month or this many build points or whatever it is, and we keep going over. And each of the overage charges is actually kind of expensive. And we're now doing it on a weekly basis, which means I should probably rethink some stuff, figure out a more optimal strategy there. But there's a certain pride of like, oh yeah, look at us. We're burning through CI, really making a lot of PRs happen.
But I do remember there was a particular week where one of the developers was on vacation; somebody else was elsewhere. And so, our throughput on engineering dropped significantly. And we got this email from CircleCI. It was an automated email, but it was like they were negging us and they were like, "Hey, do you need some help unsticking your CI pipeline? Looks like your build minutes have dropped way down."
And I was like, whoa, CircleCI, I do not like the vibe that you're putting out there. So now there's this perverse pleasure in like yeah, that's right. We keep going over our limits, and then you charge us a bunch. But actually, I think you're winning in all of these cases, never mind [laughs] I'm just losing. But yeah, it was a fun sequence of emails from CircleCI.
STEPH: [laughs] That's one of those are you okay? kind of emails that you get from a service. [laughs] That is fascinating. And yes, I think they're winning because they have then encouraged you to keep it up in terms of the spend. You made me think of one of the nice features that we've also noticed with Knapsack, or not so much a feature, but a process that they have for when you start your free trial, so you can integrate and see what the results look like.
I believe it's 14 days that they give you. But those 14 days only count for build time. So it's not just 14 calendar days or business days from the day that you sign up; it is specifically allocated to the days that you're actually running builds. I don't think they break it down into hours within those 14 days.
But it's like, hey, did you run a build today? Then we'll calculate that. Which has been nice because then there have been some side adventures that we've been interested in, and we've been allowed to pursue those side adventures because we know we can pause on Knapsack for just a little bit, but we're not going to lose that day as part of our free trial.
Their customer support has also been really nice where they've already...because Joël has been chatting with them with a couple of questions, and they've been very nice with like, hey, we know there are some issues that you're working through in terms of getting queue mode up and running. So if you do need a couple of extra days, let us know. I wonder if Knapsack Pro will be cool with me sharing that inside baseball, but [laughs] their customer service has been very helpful and nice in that regard.
CHRIS: I feel like there's such a subtle art to structuring a free trial and particularly the thing you're describing of the way they pace it out. And like, we don't really count it if you're not using it, which makes a lot of sense. There have been so many free trials that I've tried in my life. But like, I started the free trial, and then I forgot about you for a few days. And then I didn't integrate or use. If we're being honest, my free trial has expired, and I have no idea if I want to use you as a service.
And I know many platforms will offer to restart your trial or things like that or extend it, or the metered trial days that Knapsack has is an interesting one that I don't think I've heard of before, but I like it as an approach. And I think, frankly, it serves them. They want to give people an actual opportunity to try out the platform and decide if it works for them, and that takes a day or two, it turns out.
STEPH: Yeah, I agree. Some of the additional information that they've shown us, as I've been talking about, like the niceties that they've included around the build metrics. They also show you your longest-running test files. So they also have an auto-splitting feature that we haven't tweaked correctly just yet, so I'm waiting to see that happen. But even then, just knowing the build metrics, because again, this is information that Joël and I were working on manually to collect from all the stats that we could from RSpec and using parallel_tests.
But now we can just go to the build metrics, and we can see like, oh, these are our top files that take the longest, and then Knapsack Pro tells us your ideal test time for a file is like this number. So to give a concrete number for us, it'd be around 6 minutes and 20 seconds as the ideal time that each node runs. Now, that's probably going to require splitting some of those files because we have a couple of files that take more like eight, nine minutes. But it's so nice to be able to see that and not have to run scripts that we have crafted together to then be able to identify our slowest tests.
But once we get queue mode working and then also the automatic splitting of files, I'll be sure to keep you updated because I'm hoping we're going to see some sparkly stats in terms of then how the tests are getting distributed and that we'll be able to bring the CI time down, at least for this portion; it won't be for the whole build but for running the RSpec suite.
If we can hit that ideal time, the one Knapsack Pro says we'd get if we balance everything and split files, then the run time is around six or seven minutes. So here's hoping. But that's really what's going on in my world. What's going on in your world?
CHRIS: My world is, I would say, largely the same as it has been. We've been sort of between large projects for a bit now. We've been taking that time to build out some infrastructure, get some smaller things done, some niceties, enhancement, tweaks, et cetera. But at this point, I would say the storm's brewing. The larger projects that we are planning for, mostly Q3 sort of thing, are all kind of coming to a head right now. And so I'm kind of excited.
I'm ready. I'm ready for a bigger challenge, something to sink our teeth into and really dive into. So yeah, it's been, I would say, very calm of late in a very positive way but almost maybe too calm. And so I'm ready for, yeah, for different things to try out and some stuff to really dig into and grow the Sagewell platform. But yeah, that's most of what's up in my world.
MIDROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half.
So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPH: Well, that's kind of a perfect topic or a jumping-off point for a question that we received recently from a listener that's very focused around when you have a lot of work. And since you've got a lot of work, it sounds like, coming down the pipeline, then how to manage a lot of small projects, and then keep the team in sync with each other. So let me back up and read their question so that way we don't miss any of the good bits.
But this question comes from Brian, and Brian writes in, "I've got a listener question for y'all. How do you like to or have seen teams keep the vision of the product clear yet connected to the backlog such that tasks don't feel like a disconnected set of work, especially when the team stream count grows?
To expand on my question, the common situation I've seen is when a team has multiple work streams in flight, so using like a backlog tool like Jira or Trello shows a stream of work, but it's hard to associate the stories and tasks either to the bigger picture of the goals or the system's end state. That has slowed down new teammates' onboarding.
It's made identifying necessary but missing stories harder earlier on. It's also made reviewing work require more context switching or made it harder for interested parties to track progress. It's also caused tension on teams when individuals have different ideas of the goals and the end state of the work. I can see some obvious options to help but curious what your general and specific experiences and advice might be."
All right, so speaking of lots of work coming down the pipeline and managing some small projects, Chris, do you have any initial thoughts around lots of big questions here in terms of keeping the backlog connected to the product vision?
CHRIS: Yeah, I have a bunch of little thoughts that I'm happy to share. But to sort of go bigger picture on this, I think the question that Brian is asking here is one of the hard questions of the work that we do. It's like, how do we keep everybody in sync and understanding the big picture while working on the small individual pieces? Like, that's the hard stuff.
That's the place where communication problems can happen or where the larger team you have, you start to feel those growing pains, et cetera. So to name it, this is just a hard thing. This is one of those hard things that I don't have any silver bullet by any means. But I do have a couple of things that have worked for me in the past.
So the first is having some notion within whatever tool that you're working on of the bigger picture of the projects. And so recently, just to use Sagewell as an example, we started on Trello. And when we were using Trello, we had a projects list off to the side. So there's the actual backlog of next-up work. But next to it, we had a projects list that was a little higher level zoomed out, trying to group things into sections.
Pretty quickly, even with our small team, that became insufficient to actually track these things and try and tie small pieces of work back to the bigger project that they were associated with. So we introduced something called Hello Epics, I'm pretty sure that's the name of the extension for Trello. But it's very much a grafting on of functionality. It's like a Chrome extension, I want to say, that takes advantage of card linking in Trello and tries to make some version of this happen.
It was okay, but we, again, sort of hit the ceiling of that. We somewhat recently, I'm going to say, a couple of months ago, moved to Linear, and Linear has a more formal idea of projects within certain functional areas. And there are ways that projects sort of span different teams, and individual tasks can associate back to projects. And I would say, for us, it's been just the right amount of structure. I want to have that continuity and the linking between things. But I don't need it to be too fancy.
There is a Gantt chart view in Linear, and I look at it, and I'm like, wow, those are dates. You made some guesses there, Linear. Good luck, we'll see if it happens; I don't know. But overall, that functionality has been great and sufficient for where we're at. So that's one thing.
Another thing that comes to mind is trying to keep those scoped. So those projects that I'm talking about this is one of the things that I push on a lot is I want that list to be turning over semi-regularly. So we shouldn't have projects that are just like for the next two years, we're building the admin UI, and that's just this sort of open-ended amorphous, unclear project.
I like to really push for let's get to some deliverable, some doable line, some perhaps arbitrary MVP definition that we define. But let's make it so that each of these projects is doable such that if something stays in that list for too long, our attention is drawn to that. We don't just become numb to the idea that I don't know, there's a list of projects, but some of them are kind of dead in the water, kind of just hanging around and not quite complete. I want that to be a mechanism for reviewing the work that we have in process.
And so often, it almost feels somewhat artificial, but we'll break a bigger body of work down into smaller projects. And so there's like V1 of a thing, and then there's V1.2. And we get somewhat cheeky with the names at times. But I found a lot of value in having that sort of idea of let's define a boundary around a portion of this work, give it a name, and decide are we done with that or not? And each week, we get asked that question, particularly around our product planning meeting. And that has been really useful, particularly to make sure that stuff has a broader context that it's connected to.
The last thing that I'll say that has been super useful is retro because what you're describing, again, is one of the harder things. This is really difficult to get right. It's going to be different in each team at different team sizes, at different complexities of product and platform that you're building. You're going to feel this in different ways. And so retro, by far, is the most effective tool I know of to ensure that you are naming and responding to the pain points that you have in your own workflows.
There's no one size fits all for this sort of thing. But if you have a process that regularly has you come up for air, take a look at what you're doing and decide is this going well or is this not well? Are we missing critical features? Are developers lacking context? Is it hard to move people between teams or whatever the pain point may be? Then you can try and focus in and actually find solutions specific to that.
But again, I can't answer it for you. All I can say is like, by far, retro is the most effective thing I've ever seen for trying to answer that question. So yeah, a couple of thoughts. I don't know. What do you think, Steph?
STEPH: I think this is one of those rare moments that you hear someone in a leadership position express that they want high turnover; that is something that they're shooting for.
CHRIS: Turnover projects, not people. I like people.
STEPH: [laughs] I know.
CHRIS: People are moral. People are great. As you said it, I was like, wait, what did I say? Oh, right. No, I didn't. Okay, I got it now.
STEPH: [laughs] I left out that important word, high turnover of projects and tickets, yes. [laughs] It amused me as I was thinking about it as you mentioned that it's nice to have that consistent turnover in terms of like, you know that something may have gotten scoped too largely if it's sitting there for a while. Unsurprisingly, all the things that you said are wonderful. I love that you sprinkled retro in there at the end since, as you mentioned, this is hard, and it's going to be hard to get it right. So keep checking in with your team to see where improvements can be made.
I'm going to share a recommendation that actually starts with a pain point and then kind of walk it through from there. So one of the areas that Brian highlighted about that is that it makes it harder for interested parties to track progress. I really liked that one because I've also felt that pain. One way that I've experienced teams manage it that I wasn't a fan of is where people would just go to developers where they saw someone's name on a ticket, and they would ask for updates.
And the reason I didn't care for that is because it just felt too isolated. And then it felt like work where someone then had to identify who's working on this and getting an update. And then it felt almost stressful to then have someone checking in with you in that regard to be like, hey, how's this ticket going? And then what's the update on this? Versus having a more formal process of like, this is how we update our work.
So that sort of one-off behavior, where someone who's interested in this has to go find the person that's working on it and then check in with them, I think wasn't great for that person. And then it's also not great for the developer, who then needs to switch context and provide a high-level overview of when they think something might be done or how it's going. Because then they need to translate from their developer focus to something more product-focused.
A slight improvement on that process was at least to keep it public. So then, if there was someone that was in more of like a sales role or a customer support role, and then they were curious about something, is at least don't make that a private conversation. So instead of messaging that developer directly, at least put it in a channel where then anybody could respond, which is then nice because then other people can see that someone is checking in on this; they're interested in it. If that person's out, maybe someone else can respond on their behalf, but at least at a minimum, keep it public.
Even better is if you can have just a point person, so this is probably your product manager who then someone that's in sales or customer service can go to and say, "Hey, I'd love an update." And then maybe that product manager turns around and goes to the developer and asks questions, but at least they know who to go to, and so they don't have to find the person to follow up with.
Another approach that I'm currently experiencing with my team is we do have a number of small projects that are going on within the same team. So there is an important Ruby and Rails upgrade that's going on. There's the normal day-to-day work that needs to get done. And then there's also the CI performance improvement that Joël and I are working on. And this goes back to your point in terms of use all the tools that can then help you promote the work that's getting done.
So in our case, with the tool we're using, you can see everything in one board, but then you can have subsets, or you can have streams inside those boards. So then you can have a board that is titled...board is kind of a fancy term. It's more like a line item on one larger board for a team, to use some weird terminology. [laughs] But I can't think of a more correct term for it.
So there's a line item that focuses on Rails and Ruby upgrade. And then there's a line that focuses on other work that's being done, and then that's RSpec-specific performance improvements. And that has felt very nice because then you section each thing together, and you can focus on one at a time.
It does expand the context that you have with the work that's going on with that team. And I do have mixed feelings about it because, on one hand, it does make your daily sync longer because then there are people sharing updates or looking for help on things that you frankly don't have a lot of context and that you're not working on. So it can feel a little wasteful to then sit there while they're going through updates for work that you're not part of.
But then it's also been really nice because then you get that high-level context. So you don't really have to know the details, but you're at least aware of that work that's being done. So then that way, if a question does come up, you have enough context that you can say, "Oh, I know a little bit about that," or "I know who's working on it," or "I know who to follow up with." So I think the time trade-off is worth it. Even if it may feel a little painful some days, I think it's still really nice.
And then also, if you can look for opportunities to form sub-teams where if there's a group and there's work that you can group together. So maybe if you look at the type of work that's coming downstream, like, if you are working on one portion of the application and you see that another big feature is working on that similar area, then maybe grouping that stuff together to reduce some of the context switching is really nice.
The buddy system also works really well, and maybe that's pairing, maybe it's not, but it's at least having two people that are working on a larger project. So using that RSpec CI performance improvement as an example, Joël and I both have a lot of context on this. And so that way, one, we always have a buddy, someone to reach out to and talk to. But then for like the CRs or PRs that I'm pushing up, then I usually will tag Joël on those because I know he already has the context.
So I'm trying not to bring other people in unless they just want to, but that way, Joël's there, and it makes for a quicker review. So that's one nice benefit of the buddy system is because there's at least one person that has enough context where it's not as big of a hurdle for them. And they don't have to ask as many questions and then get caught up to speed before reviewing your changes.
There's another area that Brian identified, which is struggling to identify necessary but missing stories. I'm intrigued by that one because I'm trying to think of where a team is working on a project, but because you've got so many small projects, you may have forgotten about a particular task that needs to get done, or maybe you need to collaborate with another team, and that's something that slipped through the cracks. I don't think I have a great solution for that one, except that with time and experience, you'll start to identify some of those areas, like, have we thought through the different teams, the communication, and the different services that we may need to integrate with?
And then I also think it's fine, like, if you've got someone working on a project, it shouldn't be so thoroughly scoped upfront that there's not room for discovery. So if you have one or two people on that project and they're like, hey, actually, I need to create a ticket for this, I think that feels totally reasonable that that's part of that discovery process.
CHRIS: Yeah, I think I share that take, or I guess a different way to say it is I don't feel the pain of stuff falling through. But that's because we have kind of a continuous process of as we're working on things, we're like, oh, didn't think of X. And then it's very much a part of our process to throw a ticket into the inbox. And our product manager will then triage that and decide relatively where it goes. But everyone is empowered to do that, everyone that's working on the product, such that we have no expectation of being able to fully scope things out in advance.
And thus the idea of missing a ticket...if it gets to a user and like we forgot to build a critical feature, then, well, that would be sad, but that tends to not happen because just the nature of the process is we're in there. And the idea of the fog of war, like, you can't know until you know. You got to go out there in the forest, and then the fog starts to clear. And then you can see, oh, there's a goldmine there. I'm going to stop with my analogy now.
But more generally, this is just sort of agile in my mind. I'm a big fan of limiting the specificity the further out a project plan is. So right now, we're working on the concrete implementation of a thing. Cool, that's real. Let's have detailed tickets that describe as much as necessary to actually do the work. And then like a little further out the projects, let's talk about the feature level. What's the screen that we want to build but not necessarily all of the nitty-gritty details?
And then, further than that, the roadmap, I want to get away from either implementation or feature level and go to benefits. What's the user going to be able to do as a result of this? And user can be a little bit abstract where your customers are developers right now on the team. You're trying to empower them. So it's like CI needs to be faster. And so that's the benefit that we're trying to get to. And then the features are what we want to do — splitting into queues, and then the actual implementation is integrate Knapsack Pro and whatnot.
And the closer the work is to actually being something that you're tackling, the more specificity I think is useful. But very purposefully, I will avoid specificity for things that are more than a month or two out just to be like, can those just be bullet items on a list that kind of talk about how happy a user will be when we do it but not actually constrain ourselves to a particular feature or implementation? Because we're going to figure some stuff out and it's going to change.
And also, it's so much easier, like the idea of the team not being able to picture all of the different moving pieces of work. If your roadmap is simple in a couple of bullet items, then it's much easier for everyone to be like, yeah, I kind of know all the stuff that we're going to be doing, that we think we'll be doing in six months. Could change turns out, but that's a thing that I'm a big fan of is limiting the specificity further out.
And I've worked on many projects where like, no, no, we absolutely need to know down to the day how long this roadmap will take that we think is about a year. And it's like, that's not real. That's not a real thing. Stuff's going to change. I assure you.
STEPH: I love that. The further out that you get, you focus more on benefits versus implementation. I think that hits home for so many reasons and helps address a number of the concerns that Brian brought up.
I did think of one more strategy that I've seen that I've also enjoyed is where if you do have an interested party who is like, hey, I've got, I don't know, maybe you've got a customer that's really mad about something, and there's a bug fix. And so this person in customer support really wants to be able to know exactly how this bug is going. Invite them to your daily sync. Let them attend, and they can come, and they'll know who's working on it. They can get a daily update as to how things are going. I think that's a really nice addition.
There's also the idea that if you are collaborating closely with another team and so your work is very important to theirs or vice versa, also have one person from their team join your daily sync or vice versa. But then that way, you have that communication. And that way, you can check in with each other, and you're at least aware of that high-level context in terms of like, oh, we're waiting on this team for designs or for a new server or something, things like that. Then they can provide an update, or you can provide an update to them.
CHRIS: I probably shouldn't be surprised, but we had a surprising amount to say about that question. [laughs] It's really in our wheelhouse, the sort of stuff that we like to dig into. How do we do the work? You know, that's the question here on The Bike Shed. But I think with all of those notes, Brian, I hope that helps with what you're doing. And yeah, what do you think, Steph? Should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris talks about a small toy app he maintains on the side and working with a project called capybara_table. Steph is getting ready for maternity leave and wonders how you track velocity and know if you're working quickly enough?
They answer a listener's question about where to get started testing a legacy app.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
jnicklas/capybara_table: Capybara selectors and matchers for working with HTML tables
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Just gotta hold on. Fly this thing straight to the crash site.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
CHRIS: And I'm Steph Viccari.
STEPH: And together, we're here to share a bit of what we've learned along the way. I love that you rolled with that. [laughs]
CHRIS: No, actually, it was the only thing I could do. I [laughs] was frozen into action is a weird way to describe it, but there we are.
STEPH: I mentioned to you a while back that I've always wanted to do that. Today was the day. It happened.
CHRIS: Today was the day. It wasn't even that long ago that you told me. I feel like you could have waited another week or two. I feel like maybe I was too prepared. But yeah, for anyone listening, you may be surprised to find out that I am not, in fact, Steph Viccari.
STEPH: And they'll be surprised to find out that I actually am Chris Toomey. This is just a solo monologue. And you've done a great job of two voices [laughs] this whole time and been tricking everybody.
CHRIS: It has been a struggle. But I'm glad to now get the proper recognition for the fact that I have actually [laughs] been both sides of this thing the whole time.
STEPH: It's been a very impressive talent in how you've run both sides of the conversation. Well, on that note, [laughs] switching gears just a bit, what's new in your world?
CHRIS: What's new in my world? Answering now as Chris Toomey. Let's see; I got two small updates, one a very positive update, one a less positive update. As is the correct order, I'm going to lead with the less positive thing. So I have a small toy app that I maintain on the side. I used to have a bunch of these little purpose-built singular apps, typically Rails app sort of things where I would play with a new technology, but it was some sort of like, oh, it's a tracker. It's a counter. We talked about breakable toys in the past. These were those, for me, serve different purposes, productivity things, or whatever.
But at some point, I was like, this is too much work, so I consolidated them all. And I kept like, there was a handful of features that I liked, smashed them all together into one Rails app that I maintain. And that's just like my Rails app. It turns out it's useful to be able to program the internet. So I was like, cool, I'll do that for myself. I have this little app that I maintain.
It's got like a journal in it and other things. I think I've talked about the journal in the past. But I don't actually take that good care of it. I haven't added any features in a while. It mostly just does what it's supposed to, but it had...entropy had gotten the better of it.
And so, I had a very small feature that I wanted to add. It was actually just a Rake task that should run in the background on a schedule. And if something is out of order, then it should send me an email. Basically, just an update of like, you need to do something. It seemed like such a simple task. And then, oh goodness, the failure modes that I fell into.
First, I was on Heroku-18. Heroku is currently on their Heroku-22 stack. 18 being the year, so it was like 2018, and then there's a 2020 stack, and then the 2022. That's the current one. So I was two stacks behind, and they were yelling at me about that. So I was like, okay, but whatever. Can I ignore that for a little while?
Turns out no, because I couldn't even get the app to boot locally, something about some gems or some I think Webpacker was broken locally. So I was trying to fix things, finally got that to work. But then I couldn't get it to build on CircleCI because Node needed Python, Python 2 specifically, not Python 3, in order to build Node dependencies, particularly LibSass, I want to say, or node-sass.
So node-sass needed Python 2, which I believe is end of life-d, to build a CSS authoring tool. And I kind of took a step back at that moment, and I was like, what did we do, everybody? What is going on here? And thankfully, I feel like there was more sort of unification of tools and simplification of the build tool space and whatnot. But I patched it, and I fixed some things, then finally I got it working. But then Memcache wasn't working, and I had to de-provision that and reprovision something.
The amount of little...like, each thing that I fixed broke something else. I was like, the only thing I can do at this point is just burn the entire app down and rebuild it. Thankfully, I found a working version of things. But I think at some point, I've got to roll up my sleeves some weekend and do the full Rails, Ruby, everything upgrade, just get back to fresh. But my goodness, it was rough.
STEPH: I feel like this is one of those reasons where we've talked in the past about you want to do something, and you keep putting it off. And it's like, if I had just sat down and done it, I could have knocked it out. Like, oh, it only took me like 5-10 minutes.
But then there's this where you get excited, and then you want to dive in. And then suddenly, you do spend an hour or however long, and you're just focused on trying to get to the point where you can break ground and start building. I think that's the resistance that we're often fighting when we think about, oh, I'm going to keep delaying this because I don't know how long it's going to take.
CHRIS: There's something that I see in certain programming communities, which is sort of a beginner-friendliness or a beginner's mindset or a welcomingness to beginners. I see it, particularly in the Svelte world, where they have a strong focus on being able to pick something up and run with it immediately. The entire tutorial is built as there's the tutorial on the one side, like the text, and then on the right side is an interactive REPL. And you're just playing with the Svelte REPL and poking around. And it's so tangible and immediate.
And they're working on a similar thing now for SvelteKit, which is the meta-framework that does server-side rendering and all the fancy stuff. But I love the idea that that is so core to how the Svelte community works. And I'll be honest that other times, I've looked at it, and I've been like, I don't care as much about the first run experience; I care much more about the long-term maintainability of something.
But it turns out that I think those two are more coupled than I had initially...like, how easy is it for a beginner to get started is closely related to or is, you know, the flip side of how easy is it for me to maintain that over time, to find the documentation, to not have a weird builder that no one else has ever seen.
There's that wonderful XKCD where it's like, what's the saddest thing on the internet? Seeing the question you have asked by exactly one other person on Stack Overflow, with no answers, from four years ago. It's like, yeah, that's painful. You actually want to be part of the boring, mundane, everybody's getting the same errors, and we have solutions to them. So I really appreciate when frameworks and communities care a lot about both that first-run experience but also the maintainability, the error messages, the how okay is it for this system to segfault? Because it turns out segfaults print some funny characters to your terminal.
And so, like the range from human-friendly error message all the way through to binary character dump, I'm interested in folks that care about that space. But yeah, so that's just a bit of griping. I got through it. I made things work. I appreciate, again, the efforts that people are putting in to make that sad situation that I experienced not as common.
But to highlight something that's really great and wonderful that I've been working with, there is a project called capybara_table. capybara_table is the gem name. And it is just this delightful little set of matchers that you can use with Capybara, particularly within a feature spec. So if you have a table, you can now make an assertion that's like, expect the table to have a table row. And then you can basically pass it a hash of the column name and the value, but you can pass it any of the columns that you want. And you can pass it...basically, it reads exactly like the user would read it.
And then, if there's an error, if it actually doesn't find it, if it misses the assertion, it will actually print out a little ASCII table for you, which is so nice. It's like, here's the table row that I saw. It didn't have what you were looking for, friend, sorry about that. And it's just so expressive. It forces accessibility because it basically looks at the semantic structure of a table. And if your table is not properly semantically structured, if you're not using TDs and TRs, and all that kind of stuff, then it will not find it.
And so it's another one of those cases where testing can be a really useful constraint from the usability and accessibility of your application. And so, just in every way, I found this project works so well. Error messages are great. It forces you into a better way of building applications. It is just a wonderful little tool that I found.
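For anyone who wants to picture what that assertion looks like, here is a rough sketch based purely on the description above; the matcher name, argument format, and factory setup are inferred rather than copied from the gem's documentation, so treat it as illustrative, not authoritative.

```ruby
# spec/features/orders_index_spec.rb
# Illustrative sketch only: matcher and helper names are inferred from the
# conversation above, not from the gem's actual docs.
require "rails_helper"

RSpec.feature "Orders index" do
  scenario "shows each order with its status" do
    create(:order, number: "1234", status: "Shipped") # assumes FactoryBot

    visit orders_path

    # Asserts against the semantic table structure (th/td), naming only the
    # columns we care about; on a miss, the gem prints the table it found.
    expect(page).to have_table_row("Number" => "1234", "Status" => "Shipped")
  end
end
```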
STEPH: That's awesome. I've definitely seen other thoughtboters when working in codebases that then they'll add really nice helper methods around that for like checking does this data exist in the table? And so I'm used to seeing that type of approach or taking that type of approach myself. But the ASCII table printout is lovely. That's so...yeah, that's just a nice cherry on top. I will have to lock that one away and use that in the future.
CHRIS: Yeah, really, just such a delightful thing. And again, in contrast to the troubles of my weekend, it was very nice to have this one tool that was just like, oh, here's an error, and it's so easy to follow, and yeah. So it's good that there are good things in the world. But speaking of good things, what's new in your world? I hope good things. And I hope you're not about to be like, everything's terrible. But what's up with you?
[laughter]
STEPH: Everything's on fire. No, I do have some good things. So the good thing is that I'm preparing for...I have maternity leave that's coming up. So I am going to take maternity leave in about four-ish weeks. I know the date, but I'm saying the ish because I don't know when people are listening. [laughs] So I'm taking maternity leave coming up soon. I'm very excited, a little panicked mostly about baby preparedness, because, oh my goodness, it is such an overwhelming world, and what everyone thinks you should or shouldn't have and things that you need to do.
So I've been ramping up heavily in that area. And then also planning for when I'm gone and then what that's going to look like for the team, and for clients, and for making sure I've got work wrapped up nicely. So that's a big project. It's just something that's on my mind, something that I am working through and making plans for.
On the weird side, I ran into something because I'm still in test migration world. That is one of like, this is my mountain. This is my Everest. I am determined to get all of these tests. Thank you to everyone who has listened to me, especially you, listen to me talk about this test migration path I've been on and the journey that it's been. This is the goal that I have in mind that I really want to get done.
CHRIS: I know that when you said, "Especially you," you were talking to me, Chris Toomey. But I want to imagine that every listener out there is just like, aww, you're welcome, Steph. So I'm going to pretend for my own sake that that's what you meant by, especially you. It's especially every one of you out there in the audience.
STEPH: Yes, I love either version. And good point, because you're right, I'm looking at you. So I can say especially you since you've been on this journey with me, but everybody listening has been on this journey with me. So I've got a number of files left that I'm working through. And one of the funky things that I ran into, well, it's really not funky; it was a little bit more of an educational rabbit hole for me because it's something that I hadn't considered.
So migrating over a controller test over from Test::Unit to then RSpec, there are a number of controller tests that issue requests or they call the same controller method multiple times. And at first, I didn't think too much about it. I was like, okay, well, I'm just going to move this over to RSpec, and everything is going to be fine.
But based on the way a lot of the information is getting set around logging in a user and then performing an action, and then trying to log in a different user, and then perform another action, that was causing mayhem. Because then the second user was never getting logged in because the first user wasn't getting logged out. And it was causing enough problems that Joël and I both sat back, and we're like, this should really be a request spec because that way, we're going through the full Rails routing. We're going through more of the sessions that get set, and then we can emulate that full request and response cycle.
And that was something that I just hadn't, I guess, I hadn't done before. I've never written a controller spec where then I was making multiple calls. And so it took a little while for me to realize, like, oh, yeah, controller specs are really just unit tests. And they're not going to emulate or give us the full lifecycle that a request spec does.
And it's something that I've always known, but I've never actually felt that pain point to then push me over to like, hey, move this to a request spec. So that was kind of a nice reminder to go through to be like, this is why we have controller specs. You can unit test a specific action; it is just hitting that controller method. And then, if you want to do something that simulates more of a user flow, then go ahead and move over to the request spec land.
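Here is a minimal sketch of the kind of flow Steph is describing, written as a request spec so the full routing and session stack is exercised; the `sign_in` helper stands in for whatever authentication test helper the app provides (for example, Devise's integration helpers), and the model, route, and factory names are all made up for illustration.

```ruby
# spec/requests/posts_spec.rb
require "rails_helper"

RSpec.describe "Posts", type: :request do
  it "handles two different users acting one after the other" do
    author = create(:user)
    reader = create(:user)

    sign_in author
    post posts_path, params: { post: { title: "Hello" } }
    expect(response).to redirect_to(post_path(Post.last))

    # Signing in as a second user replaces the session, which is exactly the
    # part the controller-spec shortcut around the middleware didn't emulate.
    sign_in reader
    get post_path(Post.last)
    expect(response).to have_http_status(:ok)
  end
end
```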
CHRIS: I don't know what the current status is, but am I remembering correctly that the controller specs aren't really a thing anymore and that you're supposed to just use request specs? And then there are feature specs. I feel like I'm conflating...there's like controller, request, and feature, but feature maybe doesn't...no, system, that's what I'm thinking of. So request specs, I think, are supposed to be the way that you do controller-like things anymore. And the true controller spec unit level thing doesn't exist anymore. It can still be done but isn't recommended or common. Does that sound true to you, or am I making stuff up?
STEPH: No, that sounds true to me. So I think controller specs are something that you can still do and still access. But they are very much at that unit layer focus of a test versus request specs are now more encouraged. Request specs have also been around for a while, but they used to be incredibly slow.
I think it was more around Rails 5 that then they received a big increase in performance. And so that's when RSpec and Rails were like, hey, we've improved request specs. They test more of the framework. So if you're going to test these actions, we recommend going for request specs, but controller specs are still there. I think for smaller things that you may want to test, like perhaps you want to test that an endpoint returns a particular status that shows that you're not authorized or forbidden, something that's very specific, I think I would still reach for a controller spec in that case.
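As a picture of that narrower, unit-level case Steph mentions, here's a hedged sketch of a controller spec asserting a single status; the controller name, the `admin` flag, and the `sign_in` call (which would need Devise's controller test helpers or an equivalent) are all hypothetical.

```ruby
# spec/controllers/admin/reports_controller_spec.rb
require "rails_helper"

RSpec.describe Admin::ReportsController, type: :controller do
  describe "GET #index" do
    it "returns forbidden for a non-admin user" do
      sign_in create(:user, admin: false)

      get :index

      # Only the single action is exercised; no routing or middleware stack.
      expect(response).to have_http_status(:forbidden)
    end
  end
end
```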
CHRIS: I feel like I have that slight inclination to the unit spec level thing. But I've been caught enough by different things. Like, there was a case where CSRF wasn't working. Like, we made some switch in the application, and suddenly CSRF was broken, and I was like, well, that's bad. And the request spec would have caught it, but the controller spec wouldn't. And there's lots of the middleware stack and all of the before actions.
There is so much hidden complexity in there that I think I'm increasingly of the opinion, although I was definitely resistant to it at first, but like, yeah, maybe just go the request spec route and just like, sure. And they'll be a little more costly, but I think it's worth that trade-off because it's the stuff that you're not thinking about that is probably the stuff that you're going to break. It's not the stuff that you're like, definitely, if true, then do that. Like, that's the easier stuff to get right. But it's the sneaky stuff that you want your tests to tell you when you did something wrong. And that's where they're going to sneak in.
STEPH: I agree. And yeah, by going with the request specs, then you're really leaning into more of an integration test since you are testing more of that request/response lifecycle, and you're not as likely to get caught up on the sneaky stuff that you mentioned. So yeah, overall, it was just one of those nice reminders of I know I use request specs. I know there's a reason that I favor them. But it was one of those like; this is why we lean into request specs. And here's a really good use case of where something had been finagled to work as a controller test but really rightfully lived in more of an integration request spec.
MIDROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers that can actually help you cut your debugging time in half.
So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPH: Changing gears just a bit, I have something that I'd love to chat with you about. It came up while I was having a conversation with another thoughtboter as we were discussing how do you track velocity and know if you're working quickly enough?
So since we often change projects about every six months, there's the question of how do I adapt to this team? Or maybe I'm still newish to thoughtbot or to a team; how do I know that I am producing the amount of work that the client or the team expects of me and then also still balancing that and making sure that I'm working at a sustainable pace?
And I think that's such a wonderful, thoughtful question. And I have some initial thoughts around it as to how someone could track velocity. I also think there are two layers to this; there could be are we looking to track an individual's velocity, or are we looking to track team velocity? I think there are a couple of different ways to look at this question. But I'm curious, what are your thoughts around tracking velocity?
CHRIS: Ooh, interesting. I have never found a formal method that worked in this space, no metric, no analysis, no tool, no technique that really could boil this down and tell a truth, a useful truth about, quote, unquote, "Velocity." I think the question of individual velocity is really interesting.
There's the case of an individual who joins a team who's mostly working to try and support others on the team, so doing a lot of pairing, doing a lot of other things. And their individual velocity, the actual output of lines of code, let's say, is very low, but they are helping the overall team move faster. And so I think you'll see some of that.
There was an episode a while back where we talked about heuristics of a team that's moving reasonably well. And I threw out the like; I don't know, like a pull request a day sort of thing feels like the only arbitrary number that I feel comfortable throwing out there in the world. And ideally, these pull requests are relatively small, individual deployable things. But any other version of it, like, are we thinking lines of code? That doesn't make sense. Is it tickets? Well, it depends on how you size your tickets.
And I think it's really hard. And I think it does boil down to it's sort of a feeling. Do we feel like we're moving at a comfortable clip? Do I feel like I'm roughly keeping pace with the rest of the team, especially given seniority and who's been on the team longer? And all of those sorts of things. So I think it's incredibly difficult to ask about an individual.
I have, I think, some more pointed thoughts around as a team how we would think about it and communicate about velocity. But I'm interested what came to mind for you when you thought about it, particularly for the individual side or for the team if you want to go in that direction.
STEPH: Yeah, most of my initial thoughts were more around the individual because I think that's where this person was coming from because they were more interested in, like, how do I know that I'm producing as much as the team would expect of me? But I think there's also the really interesting element of tracking a team's velocity as well.
For the individual, I think it depends a lot on that particular team and their goals and what pace they're moving at. So when I do join a new team, I will look around to see, okay, well, what's the cadence? What's the standard bar for when someone picks up a ticket and then is able to push it through? How much cruft are we working with in the codebase?
Because then that will change the team's expectations of yes, we know that we have a lot of legacy code that we're working with, and so it does take us longer to get through things. And that is totally fine because we are looking more to optimize our sustainability and improving the code as we go versus just trying to get new features in.
I think there's also an important cultural aspect. So some teams may, unfortunately, work a lot of extra hours. And that's something that I won't bend for. I'm still going to stick to my sustainable hours. But I do keep it in mind: if some other people are working a lot of evenings or just working extra hours, and they have a higher velocity because of it, I try not to weigh that as heavily in my calculation.
I also really liked how you highlighted that certain individuals often their velocity is unblocking others. So it's less about the specific code or features or tickets that they're producing, but it's how many people can they help? And then they're increasing the velocity of those individuals.
And then there are other metrics that unfortunately can be gamified but are still something to look at, like how many hours you're spending on a particular feature or ticket. But I like that phrasing that you used earlier of what's your progress? So if someone comes to daily sync and they mention that they're working on the same thing and we're on like day three, or four, but they haven't given an update around, like, oh, I have this new thing that I'm focused on, or this new area that I'm exploring, that's when I'll start to have alarm bells go off.
And I'm like, okay, you've been working on the same thing. I can't quite tell if you've made progress. It sounds like you're still in the depths of the original thing that you were on a couple of days ago. So at that point, I'm going to want to check in to see how you're doing.
But yeah, I think that's why this question fascinates me so much is because I don't think there's one answer that fits for everybody. There's not a way to tell one person to say, "Hey, this is your output that you should be producing, and this applies to all teams." It's really going to vary from team to team as to what that looks like.
I remember there was one team that I joined that initially; I panicked because I noticed that their team was moving at a slower rate in terms of the number of tickets and PRs and stuff that were getting pushed up, reviewed, and then merged. That was moving at a slower pace than I was used to with previous clients. And I just thought, oh, what's going on? What's slowing us down? Like, why aren't we moving faster?
And I actually realized it's just because they were working at a really sustainable pace. They showed up to the office. This was back in the day when I used to go to an office, and people showed up at like 9:00 a.m. and then 5:00 o'clock; it was a ghost town, and people were gone. So they were doing really solid, great work, but they were sticking to very sustainable hours.
Versus, a previous team that I had been on had more of like a rushed feeling, and so there was more output for it. And that was a really nice reset for me to watch this team and see them do such great work in a sustainable fashion and be like, oh, yeah, not everything has to be a fire, not everything has to be rushed.
I think the biggest thing that I'd look at is if velocity is being called into question, so if someone is concerned that someone's not producing enough or if the team is not producing enough, the first place I'm going to look is what's our priorities and see are we prioritizing correctly? Or are people getting pulled into a lot of work that's not supporting the priorities, and then that's why suddenly it feels like we're not producing at the level that we need to?
I feel like that's the common disconnect between how much work we're getting done versus then what's actually causing people or product managers, or management stress. And so reevaluating to make sure that they're on the same page is where I would look first before then thinking, oh, someone's not working hard enough.
CHRIS: Yeah, I definitely resonate with all of that. That was a mini masterclass that you just gave right there in all of those different facets. The one other thing that comes to mind for me is the question is often about velocity or speed or how fast can we go. But I increasingly am of the opinion that it's less about the actual speed. So it's less about like, if you think about it in terms of the average pace, the average number of features that we're going through, I'm more interested in the standard deviation.
So some days you pick up a ticket, and it takes you a day; some days you pick up a ticket, and suddenly, seven days later, you're still working on it. And both at the individual level and at the team level, I'm really interested in decreasing that standard deviation and making it so that we are more consistently delivering whatever amount of output it is but very consistently doing that. And that really helps with our ability to estimate overall bodies of work with our ability for others to know and for us to be able to sort of uphold expectations.
Versus if randomly someone might pick up a piece of code or might pick up a ticket that happens to hit a landmine in the code, it's like, yeah, we've been meaning to refactor that for a while. And it turns out that thing that you thought would be super easy is really hard because we've been kicking the can on this refactoring of the fundamental data model. Sorry about that. But today's your day; you lose.
Those are the sort of things that I see can be really problematic. And then similarly, on an individual side, maybe there's some stuff that you can work on that is super easy for you. But then there's other stuff that you kind of hit a wall. And I think the dangerous mode to get into is just going internal and not really communicating about that, and struggling and trying to get there on your own rather than asking for help. And it can be very difficult to ask for help in those sorts of situations.
But ideally, if you're focusing on I want to be delivering in that same pace, you probably might need some help in that situation. And I think having a team that really...what you're talking about of like, if I notice someone saying the same thing at daily sync for a couple of days in a row, I will typically reach out in a very friendly, collegial way, hey, do you want someone else to take a look at that with you? Because ideally, we want to unblock those situations.
And then if we do have a team that is pretty consistently delivering whatever overall velocity but it's very consistent at that velocity, it's not like 3 one day and then 0, and then 12, and then 2; it's more of like, 6,5,6,5 sort of thing, to pick random numbers out of the air, then I feel so much more able to grow that, to increase that.
If the question comes to me of like, hey, we're looking at the budget for the next quarter; do we think we want to hire another developer? I think I can answer that much more accurately at that point and say what do I think that additional individual would be able to do on the team. Versus if development is kind of this sporadic thing all over the place, then it's so much harder to understand what someone new joining that team would be able to do.
So it's really the slow is smooth, smooth is fast adage that I've talked about in the past that really captured my mind a while back that just continues to feel true to me. And then yeah, I can work with that so much better than occasional days of wild productivity and then weeks of sadness in the swamp of refactoring. So it's a different way to think about the question, but it is where my mind initially went when I read this question.
STEPH: I'm going to start using that description for when I'm refactoring. I'm in the refactoring swamp. That's where I'm spending my time. [laughs] Talking about this particular question is helping me realize that I do think less in terms of like what is my output in the strict terms of tickets, and PRs, and things like that. But I do think more about my progress and how can I constantly show progress, not just to the world but show it to myself.
So if there are tickets that then maybe the ticket was scoped too big at first and I've definitely made some really solid progress, maybe I'm able to ship something or at least identified some other work that could be broken out, then I'm going to do that. Because then I want everybody to know, like, hey, this is the progress that was made here. And I may even be able to make myself feel good and move something over to the done column. So there's that aspect of the work that I focus on more heavily.
And I feel like that also gives us more opportunities to then iterate on what's the goal? Like, we're not looking to just churn out work. That's not the point. But we really want to focus on meaningful work to get done. So if we're constantly giving an update on this as the progress that I've made in this direction, that gives people more opportunities to then respond to that progress and say, "Oh, actually, I think the work was supposed to do this," or "I have questions about some of the things that you've uncovered." So it's less about just getting something done. But it's still about making sure that we're working on the right thing.
CHRIS: Yeah, it doesn't matter how fast we're going if we're going in the wrong direction, so another critical aspect. You can be that person on the team who actually doesn't ship much code at all. Just make sure that we don't ship the wrong code, and you will be a critical member of that team.
But shifting gears just a little bit, we have another listener question here that I'd love to get into. This one is about testing a legacy app. So reading this question, it starts off with a very nice note to us, Steph: "I want to start by saying thanks for putting out great content week after week." We are very happy to do so. "So, a question for you two. I just took over a legacy Rails app. It's about 12 years old, and it's a bit of a mess. There was some testing in place, but it was completely broken and hadn't been touched in over seven years. So I decided to just delete it all.
My question is, where do I even start with testing? There are so many callbacks on the models and so many controller hooks that I feel like I somehow need to have a factory for every model in our repo. I need to get testing in place ASAP because that is how I develop. But we are also still on Ruby 2 and Rails 4.0. So we desperately have to upgrade. Thanks in advance for any advice." So Steph, I actually replied in an email to this kind listener who sent this. And so, I definitely have some thoughts, but I'm interested in where would you start with this.
STEPH: Legacy code, I wouldn't know anything about working in legacy code. [laughs] This is a fabulous question. And yeah, the response that you provided is incredible. So I'm very excited for you to share the message that you replied with. So I'm going to try not to steal any of those because they're wonderful.
But to add to that list that is soon to come, often where I start with applications like these where I need some testing in place because, as this person mentioned, that's how they work. And then also, at that point, you're just scared to ship anything because you just don't know what's going to break. So one area that you could start with is what's your rollback strategy? So if you don't have any tests in place and you send something out into the world, then what's your plan to then be able to either roll back to a safe point or perhaps it's using feature flags for anything new that you're adding so that way you can quickly turn something on and off.
But having a strategy there, I think, will help alleviate some of that stress of I need to immediately add tests. It's like, yes, that's wonderful, but that's going to take time. So until you can actually write those tests, then let's figure out a plan to mitigate some of that pain. So that's where I would initially start.
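One common way to get that quick on/off safety net is a feature flag library; the sketch below uses Flipper as an example, and the flag name, service objects, and params method are all invented for illustration, so treat it as a shape rather than a prescription.

```ruby
# app/controllers/checkouts_controller.rb
class CheckoutsController < ApplicationController
  def create
    # Route new, lightly tested behavior through a flag so it can be switched
    # off instantly in production without a deploy or rollback.
    if Flipper.enabled?(:new_checkout_flow, current_user)
      NewCheckout.call(current_user, checkout_params)    # hypothetical service
    else
      LegacyCheckout.call(current_user, checkout_params) # hypothetical service
    end
    head :ok
  end
end

# If the new path misbehaves, disabling the flag is the rollback:
#   Flipper.disable(:new_checkout_flow)
```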
And then, as for adding the test, typically, you start with testing as you go. So I would add tests for the code that I'm adding that I'm working on because that's where I'm going to have the most context. And I'm going to start very high. So I might have really slow tests that test everything that is going to be feature level, integration level specs because I'm at the point that I'm just trying to document the most crucial user flows. And then once I have some of those in place, then even if they are slow, at least I'm like, okay, I know that the most crucial user flows are protected and are still working with this change that I'm making.
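As a concrete shape for one of those high-level, admittedly slow specs covering a crucial flow, here's a hedged sketch; the paths, form labels, factory attributes, and expected copy are all placeholders for whatever the real application uses.

```ruby
# spec/system/sign_in_and_view_dashboard_spec.rb
require "rails_helper"

RSpec.describe "Signing in and viewing the dashboard", type: :system do
  it "lets an existing user sign in and see their account summary" do
    user = create(:user, email: "person@example.com", password: "password123")

    visit new_session_path
    fill_in "Email", with: user.email
    fill_in "Password", with: "password123"
    click_button "Sign in"

    # One coarse assertion on the most important outcome of the flow.
    expect(page).to have_content("Account summary")
  end
end
```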
And in a recent episode, we were talking about how to get to know a Rails app. You highlighted a really good way to get to know those crucial user flows or the most common user flows by using something like New Relic and then seeing what are the paths that people are using. Maybe there's a product manager or just someone that you're taking the app over that could also give you some help in letting you know what's the most crucial features that users are relying on day to day and then prioritizing writing tests for those particular flows.
So then, at this point, you've got a rollback strategy. And then you've also highlighted what are your most crucial user flows, and then you've added some really high level probably slow tests. Something that I've also done in the past and seen others do at thoughtbot when working on a legacy project or just working on a project, it wasn't even legacy, but it just didn't have any test coverage because the team that had built it before hadn't added test coverage. We would often duplicate a lot of the tests as well.
So you would have some integration tests that, yes, frankly, were very similar to others, which felt like a bad choice. But there was just some slight variation where a user-provided some different input or clicked on some small different field or something else happened. But we found that it was better to have that duplication in the test coverage with those small variations versus spending too much time in finessing those tests. Because then we could always go back and start to improve those tests as we went.
So it really depends. Are you in fire mode, and maybe you need to duplicate some stuff? Or are you in a state where you can be more considerate with your tests, and you don't need to just get something in place right away? Those are some of the initial thoughts I have. I'm very excited for the thoughts that you're about to share. So I'm going to turn it over to you.
CHRIS: It's sneaky in this case. You have advanced notice of what I'm about to say. But yeah, this is a super interesting topic and one of those scary places to find yourself in. Very similar to you, the first thing that I recommended was feature specs, starting at that very high level, particularly as the listener wrote in and saying there are a lot of model callbacks and controller callbacks.
And before filters and all of this, it's very indirect how this application works. And so, really, it's only when the whole thing is integrated together that you're going to have a reasonable sense of what's going on. And so trying to write those high-level feature specs, having a handful of them that give you some confidence when you're deploying that those core workflows are still working as expected.
Beyond that, the other things that I talked about one was observability. As an aside, I didn't mention feature flags or anything like that. And I really loved that that was something you highlighted as a different way to get to confidence, so both feature flags and rollbacks. Testing at the end of the day, the goal is to have confidence that we're deploying software that works, and a different way to get that is feature flags and rollbacks. So I really love that you highlighted that.
Something that goes really well hand in hand with those is observability. This has been a thing that I've been exploring more and more and just having some tooling that at runtime will tell you if your application is behaving as expected or is not. So these can be APM-type tools, but it can also be things like Sentry or Honeybadger error monitoring, those sorts of things.
And in a system like this, I wouldn't be surprised if maybe there was an existing error monitoring tool, but it had just kind of decayed over time and now just has perhaps thousands of different entries in it that have been ignored and whatnot. On more than one occasion, I've declared Sentry bankruptcy working with clients and just saying like, listen; this thing can't tell us any truths anymore. So let's burn it down and restart it.
So I would recommend that and having that as a tool such that much as tests are really wonderful before the code gets out there into the wild; it turns out it's only when users start using it that the real stuff happens. And so, having observability, having tooling in place that will tell you when something breaks is equally critical in my mind.
One of the other things I said, and this is probably the spiciest take on my list, is questioning the trade-off space that you're in. Is this an application that actually has a relatively low defect rate that users use and are quite happy with, and expect that level of performance and correctness, and all of those sorts of things, and so you, frankly, need to be careful with it? Or, is it potentially something that has a handful of bugs and that users are used to a certain lower fidelity experience, let's call it? And can you take advantage of that if that happens to be true?
Like, I would be very careful to break something that has never been broken before that there's no expectation of that. But if we can get away with moving fast and breaking things for a little while just to try and get ourselves out of the spot that we're in, I would at least want to consider that trade-off space. Because caution slows you down, it means that your progress is going to be limited.
And so, if we're able to reduce the caution filter just a little bit and move a little bit more rapidly, then ideally, we can get out of this place that we're in a little more quickly. Again, I think that's a really subtle one and one that you'd have to get buy-in from product managers and probably be very explicit in the conversations and sort of that trade-off space. But it is something that I would want to explore if I found myself in this sort of situation.
The last thing that I highlighted was the fact that the versions of Ruby and Rails that were listed in the question are, I think, both end of life at this point. And so from a security perspective, that is just a giant glaring warning sign in the corner because the day that your app gets hacked, well, that's a bad day.
So testing, unfortunately, I think that's the main way that you're going to get by on that as you're going through upgrades. You can deploy a new version of the application and see what happens and see if your observability can get you there. But really, testing is what you want to do. So that's where building out that testing is all the more critical so that you can perform those security upgrades because they are now truly critical to get done.
And so it gives sort of more than a nice to have, more than this makes me feel comfortable. It is pretty much a necessity if you want to go through that, and you absolutely need to go through the security upgrades because otherwise, you're going to get hacked. There are just automated scanners out there. They're going to find you. You don't need to be a high vulnerability target to get taken down on the internet these days. So if it hasn't happened yet, it's going to. And I think that's an easy business case to sell is, I guess, the way that I would frame it. So those were some of my thoughts.
STEPH: You bring up a really good point about needing to focus on the security upgrades. And I'm thinking that through a little bit further in regards to what trade-offs would I make? Would I wait till I have tests in place to then start the upgrades, or would I start the upgrades now but just know I'm going to spend more time manual testing on staging? Or maybe I'm solo on the project.
If I have a product manager or someone else that can also help the testing with me, I think I would go for that latter approach where I would start the upgrades today and then just do more manual testing of those crucial flows and then have that rollback strategy. And as you mentioned, it's a trade-off in terms of, like, how important is it that we don't break anything?
CHRIS: I think similar to the thing that both of us hit on early on is like, have some feature specs that just kick the whole application as one connected piece of code. Have that in place for the security upgrade, testing. But I agree, I wouldn't want to hold off on that because I think that's probably the scariest part of all of this. But yeah, it is, again, trade-offs. As always, it depends. But I think those are my thoughts. Anything else you want to add, Steph?
STEPH: I think those are fabulous thoughts. I think you covered it all.
CHRIS: Sounds good. Well, in that case, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Natural disaster movies, anyone? It's what Steph's been into, and Chris has THOUGHTS on the drilling in Armageddon.
Additionally, a chat around RuboCop RSpec rules happens, and they answer a listener's question, "how do you get acquainted with a new code base?"
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
This episode is brought to you by Airbrake: frictionless error monitoring and performance insight for your app stack.
Become a Sponsor of The Bike Shed!
Transcript:
AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow, and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. So I've been watching more movies lately. So evenings aren't always great; I don't always feel good being around 33 weeks pregnant now. Evenings I can be just kind of exhausted from the day, and I just need to chill and prop my feet up and all that good stuff. And I've been really drawn to natural disaster like end-of-the-world-type movies, and I'm not sure what that says about me. But it's my truth; it's where I'm at. [chuckles]
I watched Greenland recently, which I really enjoyed. I feel like they ended it well. I won't share any spoilers, but I feel like they ended it well. And they didn't take an easy shortcut out that I kind of thought that they might do, so that one was enjoyable. Geostorm, I watched that one just last night. San Andreas, I feel like that's one that I also watched recently. So yeah, that's what's new in my world, you know, your typical natural disaster end-of-the-world flicks. That's my new evening hobby.
CHRIS: I feel like I haven't heard of any of the three that you just listed, which is wild to me because this is a category that I find enthralling.
STEPH: Well, definitely start with Greenland. I feel like that one was the better of the three that I just mentioned. I don't know Geostorm or San Andreas which one you would prefer there. I feel like they're probably on par with each other in terms of like you're there for entertainment. We're not there to judge and be hypercritical of a storyline. You're there purely for the visual effects and for the ride.
CHRIS: Gotcha. Interesting. So quick question then, since this seems like the category you're interested in, Armageddon or Deep Impact?
STEPH: Ooh, I'm going to have to walk through the differences because I always get those mixed up. Armageddon is where they take Bruce Willis up to an asteroid, and they have to drill and drop a nuke, right?
CHRIS: They sure do.
STEPH: [laughs] And then what's Deep Impact about? I guess the fact that I know Armageddon better means I'm favoring that one. I can't place what...how does Deep Impact go?
CHRIS: Deep Impact is just there's an asteroid coming, and it's the story and what the people do. So it's got less...it doesn't have the same pop. I believe Armageddon was a Michael Bay movie. And so it's got that Michael Bay special bit of something on it. But the interesting thing is they came out the same year; I want to say.
It's one of those like Burger King and McDonald's being right next door to each other. It's like, what are you doing there? Why are you...like, asteroid devastation movies two of you at the same time, really? But yeah, Armageddon is the correct answer. Deep Impact is like a fine movie, but Armageddon is like, all right, we're going to have a movie about asteroids. Let's really go for it. Blow it out. Why not?
STEPH: Yeah, I'm with you. Armageddon definitely sticks out in my memory, so I'd vote that one. Also, for your other question that you didn't ask, but you kind of implicitly asked, I'm going to go McDonald's because Burger King fries are trash, and also, McDonald's has better ice cream cones.
CHRIS: Okay, so McDonald's fries. Oh no, I was thinking Wendy's, get a frosty from there, and then you make that combination because the frostys are great.
STEPH: Oh yeah, that's a good combo.
CHRIS: And you need the french fries to go with it, but then it's a third option that I'm introducing. Also, this wasn't a question, but I want to loop back briefly to Armageddon because it's an important piece of cinema. There's a really great...like it's DVD commentary, and it's Ben Affleck talking with Michael Bay about, "Hey, so in the movie, the premise is that the only way to possibly get this done is to train a bunch of oil drillers to be astronauts. Did we consider it all just having some astronauts learn to do oil drilling?" And Michael Bay's response is not safe for radio is how I would describe it. But it's very humorous hearing Ben Affleck describe Michael Bay responding to that.
STEPH: I think they addressed that in the movie, though. They mentioned like, we're going to train them, but they're like, no, drilling is such an art and a science. There's no way. We don't have time to teach these astronauts how to drill. So instead, it's easier to teach them to be astronauts.
CHRIS: Right. That is what they say in the movie.
STEPH: [laughs] Okay.
CHRIS: But just spending a minute teasing that one apart is like, being an astronaut is easy. You just sit in the spaceship, and it goes, boom. [laughs] It's like; actually, there's a little bit more to being an astronaut. Yes, drilling is very subtle science and art fusion. But the idea that being an astronaut [laughs] is just like, just push the go-to space button, then you go to space.
STEPH: The training montage is definitely better if we get to watch people learn how to be astronauts than if we watch people learn how to drill. [laughs] So that might have also played a role.
CHRIS: No question, it is the correct cinematic choice. But whether or not it's the true answer...say we were actually faced with this problem, I don't know that this is exactly how it would play out.
STEPH: I think we should A/B test it. We'll have one group train to be drill experts and one group train to be astronauts, and we'll send them both up.
CHRIS: This is smart. That's the way you got to do it. The one other thing that I'm going to go...you know what really grinds my gears? In the movie Armageddon, they have this robotic vehicle thing, the armadillo; I believe it's called. I know more than I thought I would remember about this movie. [chuckles] Anyway, continuing on, the armadillo, the vehicle that they use to do the drilling, has the drill arm on it that extends out and drills down into the asteroid. And it has gears on the end of it. It has three gears specifically.
And the first gear is intermeshed with the second gear, which is intermeshed with the third gear, which is intermeshed with the first gear, so imagine which direction the first gear is turning, then imagine the second gear turning, then imagine the third gear turning. They can't. It's a physically impossible object. One tries to turn clockwise, and the other one is trying to go counterclockwise, and they're intermeshed. So the whole thing would just seize up. It just doesn't work.
I've looked at it a bunch of times, and I want to just be wrong about this. I want to be like; I don't know what's going on. But I think the gears on the drilling machine just fundamentally at a very simple mechanical level cannot work. And again, if you're going to do it, really go for it, Michael Bay. I kind of like that, and I really hate it at the same time.
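For what it's worth, the impossibility Chris is describing falls out of a simple parity argument, sketched here in rough notation (assuming ordinary external gears, where meshing neighbors must spin in opposite directions):

```latex
% Label each gear's direction d_k as +1 (clockwise) or -1 (counterclockwise).
% Meshing forces neighbors to alternate:
\[
  d_{k+1} = -\,d_k \quad\Rightarrow\quad d_k = (-1)^{\,k-1} d_1 .
\]
% Closing a ring of n gears requires d_1 = (-1)^n d_1, which only holds for
% even n. With n = 3 we would need d_1 = -d_1, so a triangle of mutually
% meshed gears cannot rotate at all.
```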
STEPH: I have never noticed this. I'm intrigued. You know what? Maybe Armageddon will be the movie of choice tonight. [chuckles] Maybe that's what I'm going to watch. And I'm going to wait for the armadillo to come out so I can evaluate the gears. And I'm highly amused that this is the thing that grinds your gears are the gears on the armadillo.
CHRIS: Yeah. I was a young child at the time, and I remember I actually went to Disney World, and I saw they had the prop vehicle there. And I just kind of looked up at it, and I was like, no, that's not how gears work. I may have been naive and wrong as a child, and now I've just anchored this memory deep within me.
In a similar way, so I had a moment while traveling; actually, that reminded me of something that I said on a recent podcast episode where I was talking about names and pronunciation. And I was like, yeah, sometimes people ask me how to pronounce my name. And I can't imagine any variation. That was the thing I was just wrong about because 'Toomay' is a perfectly reasonable pronunciation of my name that I didn't even think...
I was just so anchored to the one truth that I know in the world that my name is Toomey. And that's the only possible way anyone could pronounce it. Nope, totally wrong. So maybe the gears in Armageddon actually work really, really well, and maybe I'm just wrong. I'm willing to be wrong on the internet, which I believe is the name of the first episode that we recorded with you formally as a co-host. [chuckles] So yeah.
STEPH: Yeah, that sounds true. So you're going to change the intro? It's now going to be like, and I'm Chris 'Toomay'.
CHRIS: I might change it each time I come up with a new subtle pronunciation. We'll see. So far, I've got two that I know of. I can't imagine a third, but I was wrong about one. So maybe I'm wrong about two.
STEPH: It would be fun to see who pays attention. As someone who deeply values pronouncing someone's name correctly, oh my goodness, that would stress me out to hear someone keep pronouncing their name differently. Or I would be like, okay, they're having fun, and they don't mind how it gets pronounced.
I can't remember if we've talked about this on air but early on, I pronounced my last name differently for like one of the first episodes that we recorded. So it's 'Vicceri,' but it could also be 'Viccari'. And I've defaulted at times to saying 'Viccari' because people can spell that. It seems more natural. They understand it's V-I-C-C-A-R-I. But if I say 'Vicceri', then people want to add two Rs, or they want a Y. I don't know why it just seems to have a difference. And so then I was like, nope, I said it wrong. I need to say it right. It's 'Vicceri' even if it's more challenging for people.
And I think Chad Pytel had just walked in at that moment when I was saying that to you that I had said my name differently. And he's like, "You can't do that." And I'm like, "Well, I did it. It's already out there in the world." [laughs] But also, I'm one of those people that's like, Viccari, 'Vicceri' I will accept either.
In a slightly different topic and something that's going on in my world, there was a small win today with a client team that I really appreciated where someone brought up the conversation around the RuboCop RSpec rules and how RuboCop was fussing at them because they had too many lines in their test example. And so they're like, I feel like I'm competing with, or I'm working against, RuboCop. RuboCop wants me to shorten my test example lines, but yet, I'm not sure what else to do about it.
And someone's like, "Well, you could extract more into before blocks and lets and helpers or things like that to then shorten the test." They're like, "But that does also work against the readability of the test if you do that." So then there was a nice, short conversation around, well, then we really need more flexibility. We shouldn't let the RuboCop metrics drive us in this particular decision when we really want to optimize for readability.
And so then it was a discussion of okay, well, how much flexibility do we add to it? And I was like, "Well, what if we just got rid of it? Because I don't think there's an ideal length for how long your test should be. And I'd rather empower test authors to use all the space that they need to show their test setup and even lean into duplication before they extract things, because this codebase has far more DRY tests than it does duplication concerns. So I'd rather lean into the duplication at this point."
And the others that happened to be in that conversation were like, "Yep, that sounds good." So then that person issued a PR that then removed the check for that particular; how long are the examples? And it was lovely. It was just like a nice, quick win and a wonderful discussion that someone had brought up.
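The cop being discussed is most likely rubocop-rspec's RSpec/ExampleLength, the one that flags examples with too many lines; assuming that's the one, removing the check in that PR would look roughly like this (raising Max instead of disabling outright is the gentler alternative).

```yaml
# .rubocop.yml
RSpec/ExampleLength:
  Enabled: false

# Or, to keep the check but give test authors more room:
# RSpec/ExampleLength:
#   Max: 25
```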
CHRIS: Ooh, I like that. That sounds like a great conversation that hit on why do we have this? What are the trade-offs? Let's actually remove it. And it’s also nice that you got to that place. I've seen a lot of folks have a lot of opinions in the past in this space. And opinions can be tricky to work around, and just deeply, deeply entrenched opinions is the thing that I find interesting. And I think I'm increasingly in the space of those sort of, thou shalt not type linter rules are not ideal in my mind. I want true correctness checks that really tell some truth about the codebase.
Like, we still don't have RuboCop on our project at Sagewell. I think that's true. Yeah, that's true. We have ESLint, but it's very minimal, what we have configured. And they more are in the what we deem to be true correctness checks, although that is a little bit of a blurry line there. But I really liked that idea. We turn on formatters. They just do the thing. We're not allowed to discuss the formatting, with the exception of that time that everybody snuck in and switched my 80-line length to a 120-line length, but I don't care. I'm obviously not still bitter about it. [chuckles] And then we've got a very minimal linting layer on top of that.
But like TypeScript, I care deeply, and I think I've talked in previous episodes where I'm like, dial up the strictness to 14 because TypeScript tends to tell me more truths I find, even though I have to jump through some hoops to be like TypeScript, I know that this is fine, but I can't prove it. And TypeScript makes me prove it, which I appreciate about it.
I also really liked the way you referred to RSpec's feedback to you was that RSpec was fussing at you. That was great. I like that. I'm going to internalize that. Whenever a linter or type system or anything like that when they tell me no, I'm going to be like, stop fussing, nope, nope. [chuckles]
STEPH: I don't remember saying that, but I'm going to trust you that that's what I said. That's just my true southern self coming through on the mic, fussing, and then go get a biscuit, and it'll just be a delightful day.
CHRIS: So if I give RuboCop a biscuit, it will stop fussing at me, potentially?
STEPH: No, the biscuit is just for you. You get fussed at; you go get a biscuit. It makes you feel better, and then you deal with the fussing.
CHRIS: Sold.
STEPH: Fussing and cussing, [laughs] that's most of my work life lately, fussing and cussing. [laughs]
CHRIS: And occasional biscuits, I hope.
STEPH: And occasional biscuits. You got it. But that's what's new in my world. What's going on in your world?
CHRIS: Let's see. In my world, it's a short week so far. So recording on Wednesday, Monday was a holiday. And I was out all last week, which very much enjoyed my vacation. It was lovely. Went over to Europe, hung out there for a bit, some time in Paris, some time in Amsterdam, precious little time on a computer, which is very rare for me. So it was very enjoyable. But yeah, back now trying to just get back into the swing of things.
Thankfully, this turned out to be a really great time to step away from the work for a little while because we're still in this calm before the storm, but in a good way, is how I would describe it. We have a major facet of the Sagewell platform that we are in planning mode for right now. But we need to work through a couple of different considerations, pick a partner vendor, et cetera, that sort of thing. So right now, we're not really in a position to break ground on what we know will be a very large body of work.
We're also not taking on anything else too big. We're using this time to shore up a lot of different things. As an example, one of the fun things that we've done in this period of time relates to webhooks. We have a lot of webhooks in the app, like a lot of webhooks coming into the app, just due to the fact that we integrate a lot of services under the hood.
And we have a pattern for how we interact with and process them: we persist the webhook data when it comes in, and then we have a background job that processes it. The pattern also watches to make sure we're not losing anything and gives us the ability to verify our local version against the remote version, a bunch of different things. Because it turns out webhooks are critical to how our app works. And so that's something that we really want to take very seriously and build out how we work with that.
I think we have eight different webhook integrations right now; maybe it's more. It's a lot. And with those, we've implemented the same pattern now eight times, I want to say. And in squinting at it from a distance, we're like, it is indeed identically the same pattern in all eight cases, or with the tiniest little variation in one of them. And so we've now accepted like, okay, that's true.
So the next one of them that we introduced, we opted to do it in a generic way. So we introduced the abstraction with the next iteration of this thing. And now we're in a position...we're very happy with what we ended up with there. It's like the best of all of the other versions of it. And now, the plan will be to slowly migrate each of the existing ones to be no longer a unique special version of webhook processing but use the generic webhook processing pattern that we have in the app. So that's nice.
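A rough sketch of the shape of that kind of pattern, persist the raw payload first, then process it in a background job; the model, job, and column names here are invented for illustration and aren't the actual Sagewell code:

    class IncomingWebhook < ApplicationRecord
      # Assumed columns: source:string, payload:jsonb, processed_at:datetime
    end

    class WebhooksController < ApplicationController
      # External services can't send a CSRF token, so skip that check here
      skip_before_action :verify_authenticity_token

      def create
        # Persist the raw payload immediately so nothing is lost...
        webhook = IncomingWebhook.create!(
          source: params[:source],
          payload: JSON.parse(request.raw_post)
        )

        # ...then do the real work asynchronously
        ProcessWebhookJob.perform_later(webhook.id)
        head :ok
      end
    end

    class ProcessWebhookJob < ApplicationJob
      def perform(webhook_id)
        webhook = IncomingWebhook.find(webhook_id)
        # Vendor-specific handling would be dispatched from here
        webhook.update!(processed_at: Time.current)
      end
    end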
I feel good about how long we waited as well because it's like, we have webhooks. Let's introduce the webhook framework to rule them all within our app. It's like, no, wait until you see. Check and make sure they are, in fact, the same and not just incidental duplication.
STEPH: I appreciate that so much. That's awesome. That sounds like a wonderful use of that in-between state that you're in where you still got to make progress but also introduce some refactoring and a new concept. And I also appreciate how long you waited because that's one of those areas where I've just learned, like, just wait. It's not going to hurt you. Just embrace the duplication and then make sure it's the right thing.
Because even if you have to go in and update it in a couple of places, okay, sure, that feels a little tedious, but it feels very safe too. If it doesn't feel safe...I could talk myself back and forth on this one. If it doesn't feel safe, that's a different discussion. But if you're going through and you have to update something in a couple of different places, that's quick. And sure, you had to repeat yourself a little bit, but that's fine. Versus if you have two or three of something and you're like, oh, I immediately must extract. That's probably going to cause more pain than it's worth at this point.
CHRIS: Yeah, exactly, exactly that. And we did get to that place where we were starting to feel a tiny bit of pain. We had a surprising bit of behavior that when we looked at it, we were like, oh, that's interesting, because of how we implemented the webhook pattern, this is happening. And so then we went to fix it, but we were like, oh, it would actually be really nice to have this fixed across everything.
We've had conversations about other refinements, enhancements, et cetera; that we could do in this space. That, again, would be really nice to be able to do holistically across all of the different webhook integration things that we have. And so it feels like we waited the right amount of time. But then we also started to...we're trying to be very responsive to the pressure that the system is pushing back on us.
As an aside, the crispy Brussels snack hour and the crispy Brussels work lunch continue to be utterly fantastic ways in which we work. For anyone that is unfamiliar or hasn't listened to episodes where I rambled about those nonsense phrases that I just said, they're basically just structured time where the engineering team at Sagewell looks at and discusses higher-level architecture, refactoring, developer experience, those sort of things that don't really belong on the core product board. So we have a separate place to organize them, to gather them.
And then also, we have a session where we vote on them, decide which ones feel important to take on but try and make sure we're being intentional about how much of that work we're taking on relative to how much of core product work and try and keep sort of a good ratio in between the two. And thus far, that's been really fantastic and continues to be, I think, really effective. And also the sort of thing that just keeps the developer team really happy. So it's like, I'm happy to work in this system because we know we have a way to change it and improve it where there's pain.
STEPH: I like the idea of this being a game show where it's like refactor island, and everybody gets together and gets to vote which refactor stays or gets booted off the island. I'm also going to go back and qualify something I said a moment ago, where if something feels safe in terms of duplication, where it starts to feel unsafe is if there's like an area that you forgot to update because you didn't realize it's duplicated in several areas and then that causes you pain. Then that's one of those areas where I'll start to say, "Okay, let's rethink the duplication and look to dry this up."
CHRIS: Yep, indeed. It's definitely like a correction early on in my career and overcorrection back and trying to find that happy medium place. But as an aside, just throwing this out there, so webhooks are an interesting space. I wish it were a more commoditized offering of platforms. Every vendor that we're integrating with that does webhooks does it slightly differently. It's like, "Oh, do you folks have retries?" They're like, "No." It's like, oh, what do you mean no? I would love it if you had retries because, I don't know, we might have some reason to not receive one of them. And there's polling, and there are lots of different variations.
But the one thing that I'm surprised by is webhook signing; I don't feel like people take it seriously enough. It is a case where it's not a huge security vulnerability in your app. But I was reading something by a security analyst at one point. And they were describing, sort of, I've done tons of in-the-code audits of security practices, and here are the things that I see. And so it's the normal OWASP Top 10 stuff, like Cross-Site Request Forgery, and SQL injection, and all that kind of thing.
But one of the other ones he highlighted is how often he finds webhooks that are not verified in any way. So it's just like anyone can post data into the system. And if you post it in the right shape, the system's going to do some stuff. And there's no way for the external system to enforce that you properly validate and verify a webhook coming in, verify that payload. It's an extra thing where you take the signature header and do the checksum math and whatnot. I've seen some where they just don't provide it. And it's like, what do you mean you don't provide it? You must provide it, please.
So it's either have an API key so that we have some way to verify that you are who you say you are, or add a signature, and then we'll calculate it. And it's a little bit of a dance, and everybody does it differently, but whatever. But the cases where they just don't have it, I'm like, I'm sorry, what now? You're going to say whom? But yeah, then it's our job to definitely implement that.
So this is just a notice out there to anyone that's listening. If you got a bunch of webhook handling code in your app, maybe spot-check that you're actually verifying the payloads because it's possible that you're not. And that's a weird, very open hole in the side of your application.
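For anyone spot-checking their own handlers, here is a minimal sketch of what that verification can look like, assuming an HMAC-SHA256 scheme; the header name and secret lookup are invented, and every vendor's scheme differs slightly:

    require "openssl"

    def verified_webhook?(request)
      secret    = Rails.application.credentials.dig(:vendor, :webhook_secret)
      expected  = OpenSSL::HMAC.hexdigest("SHA256", secret, request.raw_post)
      signature = request.headers["X-Vendor-Signature"].to_s

      # Constant-time comparison so the check itself doesn't leak timing info
      ActiveSupport::SecurityUtils.secure_compare(expected, signature)
    end

    # In the controller action:
    #   head :unauthorized and return unless verified_webhook?(request)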
STEPH: That's a really great point. I have not worked with webhooks recently. And in the past, I can't recall if that's something that I've really looked at closely. So I'm glad you shared that.
CHRIS: It's such an easy thing to skip. Like, it's one of those things that there's no way to enforce it. And so, I'd be interested in a survey that can't be done because this is all proprietary data. But what percentage of webhook integrations are unverified? Is it 50%? Is it 10%? Is it 100%? It's definitely not 100. But it's somewhere in there that I find interesting. It's not a terribly exploitable vulnerability because you have to have deep knowledge of the system.
In order to take advantage of it, you need to know what endpoint to hit to, what shape of data to send because otherwise, you're probably just going to cause an error or get a bunch of 404s. But like, it's, I don't know, it's discoverable. And yeah, it's an interesting one. So I will hop off my webhook soapbox now, but that's a thought.
MIDROLL AD:
Debugging errors can be a developer’s worst nightmare...but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half.
So why do developers love Airbrake? Well, it has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. So head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
CHRIS: But now that I'm off my soapbox, I believe we have a topic that was suggested. Do you want to provide a little bit of context here, Steph?
STEPH: Yeah, I'd love to. So this came up when I was having a conversation with another thoughtboter. And given that we change projects fairly frequently, on the Boost team, we typically change projects around every six months. They asked a really thoughtful question that was "How do you get acquainted with a new codebase? So given that you're changing projects so often, what are some of the tips and tricks for ways that you've learned to then quickly get up to speed with a new codebase?" Because, frankly, one of the thoughtbot superpowers is that we are really good at onboarding each other and then also getting up to speed with a new team, and their processes, and their codebase.
So I have a couple of ideas, and then I'd love to hear some of your thoughts as well. So I'll dive in with a couple. So the first one, this one's frankly my favorite. Like day one, if there's a team where I'm joining and they have someone that can walk me through the application from the users' perspective, maybe it's someone that's in sales, or maybe it's someone on the product team, maybe it's a recording that they've already done for other people, but that's my first and favorite way to get to know an application.
I really want to know what users experience as they're going through this app. That will help me focus on the more critical areas of the application based on usage. So if that's available, that's fabulous. I'm also going to tailor a lot of this more to like a Rails app since that's typically the type of project that I'm onboarding to. So the other types of questions that I like to find answers to are just like, what's my top-level structure? I like to look through the app and see how things are organized.
Chris, you've mentioned in a previous episode where you have your client structure that then highlights all the third-party clients that you're working with. Are we using engines in the app? Is there anything that seems a bit more unique to that application that I'm going to want to brush up on or look into? What's the test coverage like? Do they have something that's already highlighting how much test coverage they have? If not, is there something that then I can run locally that will then show me that test coverage?
I also really like to look at the routes file. That's one of my other favorite places because that also is very similar to getting an overview of the product. I get to see more from the user perspective. What are the common resources that people are going to, and what are the domain topics that I'm working with in this new application? I've got a couple more, but I'm going to pause there and see how you get acquainted with a new app.
CHRIS: Well, unsurprisingly, I agree with all of those. We're still searching for that dare to disagree beyond Pop-Tarts and IPAs situation. To reiterate or to emphasize some of the points you made, the sales demo thing? I absolutely love that one because, yes, absolutely. What's the most customer-centric point of view that I can have? Can I then log in to a staging version of the site so I can poke around and hopefully not break anything or move real money or anything like that? But understanding why this thing exists, not in code, but in actual practical, observable, interactable software?
Beyond that, your point about the routes, absolutely, that's one of my go-to's, although with the routes, there often is so much in there, and it's like some of those may actually be unused. So as a corollary to the routes, where available, if there's an APM tool like Scout, or New Relic, or something like that, taking a look at that and seeing what are the heavily trafficked endpoints within this app? I like to think about it as the entry points into this codebase. So the routes file enumerates all of them, but some of them matter, and some of them don't.
And so, an APM tool can actually tell you which are the ones that are seeing a ton of traffic. That's a really interesting question for me. Similarly, if we're on Heroku, I might look is there a scheduler? And if so, what are the tasks that are running in the background? That's another entry point into the app. And so I like to think about it from that idea of entry points.
If it's not on Heroku, and then there's some other system, like, I've used Cronic. I think it's Cronic, Whenever the Cron thing. Whenever, that's what it is, the Whenever gem that allows you to implement that, but it's in a file within the codebase, which as an aside, I really love that that's committed and expressive in the code.
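For reference, the Whenever gem's config/schedule.rb looks roughly like this; the specific tasks below are placeholders rather than anything from a real project:

    # config/schedule.rb
    every 1.day, at: "4:30 am" do
      rake "reports:generate_daily"
    end

    every :hour do
      runner "SyncJob.perform_later"
    end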
Then that's another interesting one to see. If it's more exotic than that, I may have to chase it down or ask someone, but I'll try and find what are all of the entry points and which are the ones that matter the most? I can drill down from there and see, okay, what code then supports these entry points into the application?
I want to give an answer that also includes something like, oh, I do fancy static analysis in the codebase, and I do a churn versus complexity graph, and I start to...but I never do that, if we're being honest. The thing that I do is after that initial cursory scan of the landscape, I try and work on something that is relatively through the layers of the app, so not like, oh, I'll fix the text in a button.
But like, give me something weird and ideally, let me pair with someone and then try and move through the layers of the app. So okay, here's our UI. We're rendering in this way. The controllers are integrated in this way, et cetera. This is our database. Try and get through all the layers if possible to try and get as holistic of a view of how the application works.
The other thing that I think is really interesting about what you just said is you're like, I'm going to give some answers that are somewhat specific to a Rails app. And that totally makes sense to me because I know how to answer this in the context of a Rails app because those organizational patterns are so useful that I can hop into different Rails apps. And I've certainly seen ones that I'm like, this is odd and unfamiliar to me, but most of them are so much more discoverable because of that consistency.
Whereas I have worked on a number of React apps, and every single one I come into, I'm like, okay, wait, what are we doing? How are we doing state management? What's the routing like? Are we server-side rendering, are we not? And it is a thing that...I see that community really moving in the direction of finding the meta frameworks that stitch the pieces together and provide more organizational structure and answer more of the questions out of the box.
But it continues to be something that I absolutely love about Rails is that Rails answers so many of the questions for me. New people joining the team are like, oh, it's a Rails app, cool. I know how to Rails, and we get to run with that. And so that's more of a pitch for Rails than an answer to the question, but it is a thing that I felt in answering this question. [laughs] But yeah, those are some thoughts. But interested, it sounds like you had some more as well. I would love to hear what else was in your mind when you were thinking about this.
STEPH: I do. And I want to highlight you said some really wonderful things. One that really stuck out to me that I had not considered is using Scout APM to look at heavily-trafficked endpoints. I have that on my list in regards as something that I want to know what's my error tracking, observability. Like, if I break something or if you give me a bug ticket to work on, what am I going to use? How am I going to understand what's going wrong? But I hadn't thought of it in terms of seeing which endpoints are heavily used. So I really liked that one.
I also liked how you highlighted that you wish you did something fancy around a churn versus complexity kind of graph because I thought of that too. I was like, oh, that would be such a nice answer. But the truth is I also don't do that. I think it's one of those things where it would be fun if it were easy enough that I'd do it with new applications. But I agree; I typically more just dive in like, hey, give me a ticket. Let me go from there.
I might do some simple command-line checking. So, for example, if I want to look through app models, let's find out which model is the largest. I may look for that to see do we have a God object or something like that? So I may look there. I just want to know how long are some of these files? But I also don't use a particular tool for that churn versus complexity.
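That kind of file-length check can be as simple as a few lines of Ruby run from the Rails root; this is just a sketch of the idea, not a recommendation of any particular tool:

    # List the ten longest files under app/models, longest first
    Dir.glob("app/models/**/*.rb")
       .map { |path| [File.foreach(path).count, path] }
       .sort
       .reverse
       .first(10)
       .each { |lines, path| puts format("%5d  %s", lines, path) }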
CHRIS: I think you hit the nail on the head with like, I wish that were easier or more in our toolset. But here on The Bike Shed, we tell the truth. And that is aspirational code flexing that we do not yet have. But I agree, that would be a really nice way to explore exactly what you're describing of, like, who are the God models? I'll definitely do that check, but not some of the more subtle and sophisticated show me the change over time of all these...like nah, that's not what I'm doing, much as I would like to be able to answer that way.
STEPH: But it also feels like one of those areas like, it would be nice, but I would be intrigued to see how much I use that. That might be a nice anecdote to have. But I find the diving into the codebase to be more fruitful because I guess it depends on what I'm really looking at. Am I looking to see how complicated of a codebase this is? Because then I need to give more of a high-level review to someone to say how long I think it's going to take for me to work on a particular feature or before I'm joining a team, like, who do I think are good teammates that would then enjoy working on this application?
That feels like a very different question to me versus the I'm already part of the team. I'm here. We're going to have complexity and churn. So I can just learn some of that over time. I don't have to know that upfront. Although it may be nice to just know at a high level, say like, okay, if I pick up a ticket, and then I look at that churn and complexity, to be like, okay, my ticket falls right smack-dab in the middle of that. So it's going to be a fun first week. That could be a fun fact. But otherwise, I'm not sure. I mean, yeah, I'd be intrigued to see how much it helps me.
One other place that I do browse is the Gemfile. I'm just always curious, what do people have in their tool bag? I want to see are there any gems that have been pulled in that are helping the team process some deprecated behavior? So something that's been pulled out of Rails but then pulled into a separate gem. So then that way, they don't have to upgrade just yet, or they can upgrade but then still keep some of that existing old deprecated behavior. That kind of stuff is interesting to me.
And also, you called it earlier pairing. That's my other favorite way. I want to hear how people talk about the codebase, how they navigate. What are they frustrated by? What brings them joy? All of that is really helpful too. I think that covers all the ways that I immediately will go to when getting acquainted with a new codebase.
CHRIS: I think that covers most of what I have in mind, although the question is framed in an interesting way that I think really speaks to the consultant mindset. How do I get acquainted with a new codebase? But if you take the question and flip it around sort of 180 degrees, I think the question can be reframed as how does an organization help people onboard into a codebase? And so everything we just described are like, here's what I do, here's how I would go about it, and pairing starts to get to collaboration.
I think we've talked in a number of episodes about our thoughts on onboarding and being intentional with that, pairing people up. A lot of things we described it's like, it's ideal actually if the organization is pushing this. And you and I both worked as consultants for long enough that we're really in the mindset of like, all right, let's assume I'm just showing up. There's no one else there. They give me a laptop and no documentation and no other humans I'm allowed to talk to. How do I figure this out and get the next feature out to production? And ideally, it's something slightly better than that that we experience, but we're ready for whatever it is.
Versus, most people are working within the context of an organization for a longer period of time. And most organizations should be thinking about it from the perspective of how do I help the new hires come into this codebase and become effective as quickly as possible? And so I think a lot of what we said can just be flipped around and said from the other way, like, pair them up, put them on a feature early, give them a walkthrough of the codebase, give them a sales-centric demo.
Yeah, I feel equally about those things when said from the other side, but I do want to emphasize that this shouldn't be you're out there in the middle of the jungle with only a machete, and you got to figure out this codebase. Ideally, the organization is actually like, no, no, we'll help you. It's ours, so we know it. We can help you find the weird stuff.
STEPH: That's a really nice distinction, though, because you're right; I hadn't really thought about this. I was thinking about this from more of the perspective of you're out in the jungle with a machete, minus we did mention pairing in there [laughs] and maybe a demo. I was approaching it more from you're isolated or more solo and then getting accustomed to the codebase versus if you have more people to lean on.
But then that also makes me think of all the other processes that I didn't mention that I would include in that onboarding that you're speaking of, of like, how does this team work in terms of where do I push my code? What hooks are going to run? And then what do I wait for? How many people need to review my code?
There are all those process-y questions that I think would ideally be included on the onboarding. But that has happened before, I mean, where we've joined projects, and it's been like, okay, good luck. Let us know if you need anything. And so then you do need those machete skills to then start hacking away. [laughs]
CHRIS: We've been burned before.
STEPH: They come in handy. [laughs] So when you are in that situation, and there's a comet that's coming to destroy earth, and there's a Rails application that is preventing this big doomsday, the question is, do you take astronauts and train them to be Rails experts, or do you take Rails developers and train them to be astronauts? I think that's the big question.
CHRIS: What would Michael Bay do?
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Chris is getting ready to travel, and of course, Sagewell started the day with an incident, a situation, if you will...
Steph talks books perfect for vacations and feels sufficiently scarred regarding still working with moving fixtures over to FactoryBot.
This episode is brought to you by Airbrake. Visit airbrake.io/try/bikeshed for frictionless error monitoring and performance insight for your app stack.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: All right, I am now officially recording as well. Let me make sure my microphone is in front of my face.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Today is an interesting day. We are recording on a Friday, which is not normal for us, was normal for a long time and then stopped, but now it's back to being normal. But it's the morning, which is confusing. Also, I am traveling this evening. I leave on a flight going to Europe. So I'm going to do a red-eye, that whole thing. So I got a lot to pack into today, literally packing being one of those things.
And then this morning, because obviously, this is the way the world should play out, we started the day with an incident at Sagewell, a situation. Some code had gotten out there that was doing some stuff that we didn't want it to do. And so we had to sort of call in the dev team. And we all huddled together and tried to figure it out. Thankfully, it was a series of edge cases. It was sort of one of those perfect storms. So when this edge case happens in this context, then a bad thing could happen. Luckily, we were able to review the logs; nothing bad happened.
While I'm unhappy that we had this situation play out... basically, it was a caching thing, just to throw that out there. Caching turns out to be very hard. And the particular way it played out could have manifested in behavior that would have been not good in our system, or an admin would have inadvertently done something that would have been incorrect.
But on the positive side, we have an incident review process that we've been slowly incubating within the team. One of our team members introduced it to us, and then we've been using it on a few different cases. And it's really great to just have a structured process. I think it's one of those things that will grow over time. It's a very simple; what's the timeline of what happened? What's the story as to why it happened and why it wasn't caught earlier? What are the actions that we're going to take? And then what's the appendix? What's the data that we have around it? And so it's really great to just have that structure to work within.
And then similarly, as far as I can tell, the first even observable instance of this behavior in our system was yesterday morning. We saw it, started to respond to it, saw one more. We were able to chase it down in the logs. Overall, the combination of the alerting that we have in Sentry and the way in which we respond to the alerting in Sentry, which I think is probably the most critical part. Datadog is our log metrics tool right now. So we're able to go through Datadog, and we have Lograge configured to add more detail to our log lines.
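A sketch of the kind of Lograge setup being described; the custom fields here are assumptions for illustration, not the actual Sagewell configuration:

    # config/environments/production.rb (or an initializer)
    Rails.application.configure do
      config.lograge.enabled = true

      # Fold extra context into each request's single log line
      config.lograge.custom_options = lambda do |event|
        {
          params: event.payload[:params].except("controller", "action"),
          time:   Time.current.iso8601
        }
      end
    end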
And so we're able to see a very robust story of exactly what happened and ask the question, did anything actually bad happen? Or was it just possible that something bad could happen? And it turns out just possible. Nothing actually happened. We were able to determine that. We were even able to get a more detailed picture of who were all the users who potentially could have been impacted. Again, I don't think there was any impact.
But all total, it was both a very stressful process, especially as I'm about to go on vacation. It's like, oh cool, start to the day where I'm trying to wrap up things, and instead, we're going to spend a couple of hours chasing down an incident. But that said, these things will happen. The way in which we were able to respond, the alerting and observability that we had in place make me feel good.
STEPH: I like the incident structure that you just laid out. That sounds really nice in clarifying what happened when it happened in the logs. And the fact that you're able to go through and confirm if anything really bad happened or not is really nice. And I was also just debating this is one of those things, right? Right when you're about to go on vacation, that's when something's going to break. And that's like, is that good or bad? Is it good that I was here to take care of it right before, or is it bad? Because I'd really like to not be here to take care of it. [laughs] You may have mixed feelings. I have mixed feelings.
CHRIS: I think I'm happy. Unsurprisingly, this exists in one of the most complex parts of our codebase. And it involves caching. And I remember when we introduced the caching, I looked at it, and I was like, hmm, we have a performance hotspot that involves us making a lot of requests to an external system. And so we thought about it a little bit, and we were like, well, if we do a little bit of caching here, we can actually reduce that from seven calls down to one over external HTTP. And so okay, that seems to make sense.
We had a pull request. We did a formal review. And even I looked at the pull request where this was introduced initially, and my comments on it were like, yep, this all looks good. Makes sense to me. But it's caching-related. So let's be very careful and look very closely at it and determine if there's anything, but it's so hard to know. And in fact, the code that actually was at play here was introduced a month ago.
And interestingly, the observable side effect only occurred in the past two days, which we find very surprising. But again, it's this weird like, if A happens and then within a short period after that B happens...and so it's not quite a race condition. But it was something where a lot of stuff had to happen in a short span of time for this to actually manifest. And so, again, we were able to look through the logs and see all of the instances where it could have happened and then what did happen. Everything was fine, but yeah, it was interesting.
I feel actually good to have seen it. And I think we've cleared everything up related to it and been very proactive in our response to it so that all feels good. And also, this is the sort of thing we've done this a few times now where we've had what I would call lesser incidents. There was no customer-facing impact to this. Similarly, previous incidents, we've had no or very minimal customer-facing impact.
So at one point, we had a situation where we weren't processing our background jobs for a little while. So we eventually caught up and did everything we needed to. It just meant that something may not have happened in as timely a fashion as necessary. But there were no deep ramifications to that.
But in each of those cases, we've pushed ourselves to go through the incident process to make sure that we're building the muscle as a team to like, actually, when the bad one comes, we want to be ready. We want to have done a couple of fire drills first. And so partly, I viewed this as that because again, there was smoke, but no fire is how we would describe it.
STEPH: Nice. And that also makes sense to me how you were saying y'all introduced this about a month ago, but you were just now seeing that observable side effect. I feel like that's also how it goes. Like, you implement, especially with caching, some performance improvement, and then you immediately see that. And it's like, yay, this is wonderful.
And then it's not until some time passes that you get that perfect storm of user interactions that triggers some flow you didn't consider or realize could create an issue with that caching behavior. So yeah, that resonates. That seems right. All caching problems usually show up about a month or two later, when you've just forgotten what you've done. And then you have to go back in.
CHRIS: Yep. Yep, yep, yep. So now we've done the obvious thing, which is we've removed every cache from the system whatsoever. There are no caches anymore because it turns out we just can't be trusted with caches in any form whatsoever. ActiveRecord, we turned off caching, Redis we threw it out. No, I'm kidding. We still have lots of caching in the app. But, man, caching is so hard.
STEPH: I would love if that's in the project README where it says, "We can't be trusted with caches. No caches allowed." [laughs]
CHRIS: Yeah, we have not gone all the way to forbid caching within the application. It's a trade-off. But this does have that effect where you get those scars over time. You have that incident that happens, and then forever you're like, no, no, no, we can't do X. And I feel like I'm just a collection of those. Again, I think we've talked about this in previous episodes. But consulting for as long as I did, I saw a lot of stuff. And a lot of it was not great. And so I basically just look at everything, and I'm like, urgh, no, this will be hard to maintain. This is going to go wrong. That's going to blow up someday.
And so, I'm having to work on trying to be a little more positive in my development work. But I do like that I have that inclination to be very cautious, be very pessimistic, assume the worst. I think it leads to safer code in general. There was actually a tweet by Sarah Drasner that was really wonderful. And it's basically a conversation between her and another developer. It's a pretend conversation. But it's like, "But why don't you like higher-order components?" And then it's Squints. "Well, in the summer of 2018, something bad happened, Takes a long drag of a cigarette. something very bad."
It's just written so well and captures the ethos just perfectly. Like, sit down. Let me tell you a tale of the time in 2018. [laughs] So I'll include a link to that in the show notes because she actually wrote it so well too. It's got like scene direction within a tweet and really fantastic stuff. But yeah, we'll allow some caching to continue within the app.
STEPH: That's amazing. So I was just thinking where you're talking about being more pessimistic versus optimistic. And there's an interesting nuance there for me because there's a difference in like if someone's pessimistic where if someone just brings up an idea and someone's like, "Nope, like, that's just not going to work," and they just always shoot it down, that level of being pessimistic is too much. And it's just going to prevent the team from having a collaborative and experimental environment.
But always asking the question of like, well, what's the worst that could happen? And what are the things that we should mitigate for? And what are the things that are probably so unlikely that we should just wait and see if that happens and then address it? That feels like a really nice balance. So it's not just leaning into saying no to everything. But sure, let's consider all the really bad things that could happen, make a plan for those, but still move forward with trying things out.
And I realized I do this in my own life, like when someone asks me a question around if there's something that we want to do that's a bit kind of risky. And the first thing I always think of is like, well, what's the worst that could happen? And I think that has confused people that I immediately go there because they think that I'm immediately saying no to the idea.
And so I have to explain like, no, no, no. I'm very intrigued, very interested. I just have to think through what's the worst that can happen. And if I'm okay with that, then I feel better about accepting it. But my emotional state, I have to think through what's the worst and then go from there.
CHRIS: Wow, it's a very bottom-up approach for your life planning there. [chuckles]
STEPH: Yep, I think that's, you know, it's from being a developer for so long. It has impacted now how I make other decisions. Good or bad? Who knows? Yeah, it turns out being a developer has leaked into my personal life. I've got leaky abstractions over here. So, good or bad? Who knows?
CHRIS: Leaky abstractions all the way down. Yeah, circling back to, like, I don't think I'm pessimistic per se. The way that I see this playing out often is there will be a discussion of an architectural approach, or there's a PR that goes up. And my reaction isn't no, or this has a known failure point; it is more of uh, this makes me uncomfortable. And it's that like; I can't even say exactly why, and that's what makes it so difficult.
And I think this is a place that can be really complicated for communication, particularly between developers who have been around for a little bit longer and have done this sort of thing and have gathered these battle scars and developers who are a bit newer. Having that conversation and being like, um, I can't say exactly why. I can tell you some weird stories. I might not even remember the stories. Some of it just feeds into just like, does this code make me uncomfortable? Or does this code make me happy? And I tend towards wildly explicit code for these reasons.
I want to make it as clear as possible and match as close as possible to the words that we're saying because I know that the bugs hide in the weird corners of our code. So I try and have as few corners. Make very rounded rooms of code is a weird analogy that doesn't play, but here we go. That's what I do on this show is I make weird analogies.
Actually, we were working on some code that was dealing with branching conditional things. So we had a record which has a boolean value on it, so we've got true or false, two states, and then we've got an enum with three states. So all total, we have six possible states. But as we were going through this conversation, I was pairing with another developer on the team. And I was like, something feels weird here. And I actually invoked the name of Joël Quenneville because much of the data structure thinking that I had here I associate with work that Joël has done around Maybe and things like that.
And then also, my suggestion was let's build a truth table because that seems like a fun way to manage this and look at it and see what's true. Because I know that there are spots on this two-by-three grid that should never happen. So let's name that and then put that in the code. We couldn't quite get it to map into the data type, like into that boolean and the enum. Because it's possible to get into those states, but we never should. And therefore, we should alert and handle that and understand, like, how did this even happen? This should never happen.
And so we ended up taking what was a larger method body with some of the logic in it and collapsing it down to very explicitly enumerate the branches of the conditional and then feed out to a method. Like, call a method that had a very explicit name to say, okay, if it's true and we're in this enum state, then it's bad, alert bad. And then the other case like, handle the good case.
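As a sketch of that kind of refactor, with every name invented for illustration rather than taken from the real code, the explicit enumeration of the truth table might look something like this:

    class ReviewOutcome
      def self.call(record)
        # Boolean x three-state enum: six cells in the truth table
        state = [record.approved?, record.review_state]

        case state
        when [true, "accepted"], [true, "pending"]
          process_approved(record)
        when [false, "pending"], [false, "rejected"]
          process_unapproved(record)
        else
          # Cells that should be unreachable: alert loudly instead of guessing
          raise "Impossible state: #{state.inspect} for record #{record.id}"
        end
      end

      def self.process_approved(record)
        # ...the happy path...
      end

      def self.process_unapproved(record)
        # ...the alternate path...
      end
    end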
And I was very happy with what we refactored down to because this is another one of those very complex parts of our code. Critical infrastructure-y is how I would describe it. And so, in my mind, it was worth the I'm going to go with pathological refactoring that we got to there. But yes, I was channeling Joël in that moment. I'm very happy to have had many conversations with him that help me think through these things.
STEPH: That's awesome. Yeah, those truth tables can be so helpful. There's a particular article that, of course, Joël has written that then describes how a truth table works and ways that you can implement it into your habits. It's called Back to Basics: Boolean Expressions. I will be sure to include a link in the show notes.
CHRIS: But yeah, I think that summarizes my day and probably the next couple of days as I prepare for an adventure over to Europe and chat about developer spidey sense. But yeah, what's new in your world?
STEPH: Yeah, that's a big day. There's a lot going on. Well, I actually want to circle back because you mentioned that you're packing and you're going on this trip. And I'm curious, do you have any books queued up for vacation?
CHRIS: I do, yeah. I'm currently reading Elantris by Brandon Sanderson. Folks might be aware of his work from the highest-funded Kickstarter of all time, which was absurd. Did you see this happen?
STEPH: I don't think so, uh-uh.
CHRIS: He did this fun, cheeky little Kickstarter. The video was sort of a fake around...oh, it almost sounded like he might be retiring or something like that. And then he's like, JK, I wrote five new books. And so the Kickstarter was for those books with different tiered packages and whatnot. I think he got just the right viral coefficient going on. And apologies for the spoiler if anyone's not seen the video, but it's been out there for a while.
So he wrote some books, and that's what the Kickstarter is for. You get some books. You sort of join a book club, and you'll get one a quarter. A million dollars seems like that will be a bunch for that. That'd be great. If he raised a million dollars, that'd be amazing. $40 million four-zero million dollars. [laughs] I'm just watching it play out in real-time as well. It just skyrocketed up. The video, I think, was structured just right. He got it onto the...it was on Reddit and Twitter and just bouncing around, and people were sharing it. And just everything about it seemed to go perfectly. And yes, the highest-funded Kickstarter of all time, I believe, certainly within the publishing world.
But yeah, Brandon Sanderson, prolific author, and his stuff ends up just being kind of light and fun. And so I was reading Elantris for that. It's been a little bit slower to pick up than I would like. So I'm now in the latter half. I'm hoping it'll go a little bit more quickly and be...I'm just kind of looking for a fun read, some fantasy thing to go on an adventure.
But as the next book, I downloaded a second one just to make sure I'm covered. I have a book by John Scalzi, who's a sci-fi, fantasy, more on the sci-fi end of the spectrum. And I've read some of his other stuff and enjoyed it. And this particular book has a very consistent set of reviews. I've read the reviews a few times. And everybody who reviews it is just like, "This isn't the greatest book I've ever read, but man was it a fun ride." Or "Yeah, no, best book? No. Fun book? Yes." And just like, "This book was a fun ride. This was great." And I was like, perfect. That is exactly what I'm looking for on a European vacation.
The book is called The Kaiju Preservation Society, which also plays on monsters, Pacific Rim Godzilla. Kaiju, I think, is the word for that category of giant dinosaur-like monster. And so it's the Kaiju Preservation Society, which, I don't know, means some stuff, and I'm going to go on a fun adventure. So yeah, those are my books.
STEPH: Nice. I've got one that I'm reading right now. It's called Clementine: The Life of Mrs. Winston Churchill, written by Sonia Purnell. And Sonia Purnell tends to focus on female historical figures. And so it's historical fiction, which is a sweet spot for me. The only thing I'm debating on is because I'm realizing as I'm reading through it, I'm questioning, okay, well, what's real and what's not? Because I don't want to be that person that's like, did you know? And then, I quote this fictional fact about somebody that was made up for the novel. [laughs]
So I'm realizing that maybe historical fiction is fun, but then I'm having to fact-check all the things because then I'm just curious. I'm like, oh, did this really happen, or how did it go down? So it's been pretty good so far. But then it makes me wish that historical fiction novels had at the back of them they're like, these are all the events that were real versus some of the stuff that we fictionalized or added a little flair to. I'm in that interesting space.
I also like how you highlighted that you chose a fun book. I was having a conversation with a colleague recently about downtime. And like, do you consume more tech during downtime? Like, are you actively looking for technical blog posts or technical books to read or podcasts, things like that? And I was like, I don't. My downtime is for fun. Like, I want it to be all the things that are not tech.
Maybe some tech sneaks in there here and there, but for the most part, I definitely prioritize stuff that's fun over more technical content in my spare time, which has taken me a little while to not feel guilty about. Earlier in my career, I definitely felt like I should be crunching technical content all the time. And now I'm just like, nope, this is a job. I'm very thankful that I really enjoy my job, but it's still a job.
CHRIS: It is an interesting aspect of the world that we work in where that's even a question. In my previous life as a mechanical engineer, the idea that I would go home and read about mechanical engineering...I could attend a conference, but I would do that for very particular reasons and not because, like, oh, it's fun. I'll go meet my friends. For me, this was a big reason that I moved into tech because I am one of those folks who will, like, I will probably watch a video about Remix in particular because that's my new thing that I like to play around with and think about.
But it needs to be a particular shape of thing I've found. It needs to be exploratory, puzzle-y. Fun code, reading, learning work that I do needs to be separated from my work-work in a certain way. Otherwise, then it feels like work, then it is sort of a drudgery. But yeah, my brain just seems to really like the puzzle of programming and trying to build things. And being able to come into a world where people share as much as they do blogs and conference talks and all of that is utterly fantastic.
But it is a double-edged sword because I 100% agree that the ability to disconnect to, like, work a nine-to-five and then go home at the end of the day. Yeah, go home, you know, because you remember when we went to an office and then we would go home afterwards? I have to commute every once in a while into the city and --
STEPH: You mean go downstairs or go to another room? That's what you mean? [laughs]
CHRIS: I used to commute every day, and it took a lot of time. And now when I do it, I feel that so viscerally because I'm like, it's just a lot easier to just walk to my office in my house. But yes, I 100% I'm aligned to that like, yeah, no, you're done with work for the day, walk away. That's that.
And learning a new technology or things like that, that's part of the job. There shouldn't be the expectation that that just happens. There's continuing education in every other field. It's like, oh, we'll pay for your master's degree so you can go learn a thing. That's the norm in every other...not in every other industry but many, many, many industries.
And yet the nature of our world the accessibility of it is one of the most wonderful things about it. But it can be a double-edged sword in that if there are the expectations that, oh yeah, and then, of course, you're going to go home and have side projects and be learning things. Like, no, that is an unreasonable expectation, and we got to cut that off. But then again, I do do that. So I'm saying two things at the same time, and that's always complicated.
STEPH: But I agree with what you're saying because you're basically respecting both sides. If people enjoy this as a hobby, more power to you.; that's great. This is what you enjoy doing. If you don't want to do this as a hobby and respect it as a job, then that's also great too. There can be both sides, and no side should feel guilty or judged for whichever path that they pursue.
And I absolutely agree, if there are new skills that you need to learn for a job, then there should be time that's carved out during your work hours that then you get to focus on those new skills. It shouldn't be an expectation that then you're going to work all day and then spend your evening hours learning something else. And same for interviews; there shouldn't be a field that says, "Hey, what are your side projects?" Or at least that should not be an important part of the interview.
There should be an alternative to be like, "Or what work code do you want to talk about?" Or something else that's more in that nine-to-five window that you want to talk about. That way, there's a balance between like, sure, if you have something that you want to talk about on the side, great, but if not, then let's focus on something that you've done during your actual work hours because that's more realistic.
CHRIS: I do think there's an interesting aspect at play because the world of development moves so rapidly and because it's constantly changing. And to frame it differently, I don't think we've got this thing figured out. And so many people lament how quickly it changes and that there's a new framework every other week. And there's a bit of churn that is perhaps unnecessary.
But at the same time, I do not feel like as a community, as a working population, that we're like, yeah, got it, crushed it. We know how to make great software, no question about it. It's going to be awesome. We're going to be able to maintain it for forever, don't even worry about it. New feature? We can get that in there. They're actually still pretty rare.
So we need to be learning, and evolving, and exploring new techniques. I think the amount of thinking is probably good mostly in the development world. But organizations have to make space for that with their teams. And thoughtbot obviously does that with investment days. That's just such a wonderful structure that embraces that reality and also brings happiness, and it's just a pleasant way to work. And frankly, my team does not have that right now.
We do the crispy Brussels snack hour, which also now has a corresponding crispy Brussels work lunch, which is one week we think about it, and the next week we do the thing. We're trying to make space for that. But even that is still more intentional and purposeful and less exploratory and learning. And so it's an interesting trade-off. I deeply believe in this thing, and also, the team that I'm leading isn't doing it right now. Granted, we're an early-stage startup. We got to build a bunch of stuff. I think that's fine for right now. But it is a thing that...again, I'm saying two things at the same time, always fun.
STEPH: Well, and there might be a nice incremental approach to this as well. So thoughtbot has the entire day, and maybe it's less than a full day. So perhaps it's just there's an hour or two hours or something like that where you start to introduce some of that self-improvement time and then blossom out from there. Because yeah, I understand that not all teams may feel like they have the space for that.
But then I agree with everything else you said that it really does improve team morale and gives people a space to then be able to get to explore some of those questions that they had earlier. So then they don't feel like they have to then dedicate some weekend time or off hours’ time to then look into a question.
And I admit, I'm totally guilty too. I am that person that then I've worked extra hours, but it's because, like you said, if there's a puzzle that my brain is stuck on and I just feel the need to get through it. But then I look at that as am I doing this because I want to? Yes. Okay, then as long as I'm happy and I don't feel like this is increasing any concern around burnout, then I don't worry about it.
MIDROLL AD:
Debugging errors can be a developer’s worst nightmare… but it doesn’t have to be. Airbrake is an award-winning error monitoring, performance, and deployment tracking tool created by developers for developers, that can actually help you cut your debugging time in half.
So why do developers love Airbrake? It has all of the information that web developers need to monitor their application - including error management, performance insights, and deploy tracking!
Airbrake’s debugging tool catches all your project errors, intelligently groups them, and points you to the issue in the code so you can quickly fix the bug before customers are impacted.
In addition to stellar error monitoring, Airbrake’s lightweight APM enables developers to track the performance and availability of their application through metrics like HTTP requests, response times, error occurrences, and user satisfaction.
Finally, Airbrake Deploy Tracking helps developers track trends, fix bad deploys, and improve code quality.
Since 2008, Airbrake has been a staple in the Ruby community and has grown to cover all major programming languages. Airbrake seamlessly integrates with your favorite apps and includes modern features like single sign-on and SDK-based installation. From testing to production, Airbrake notifiers have your back.
Your time is valuable, so why waste it combing through logs, waiting for user reports, or retrofitting other tools to monitor your application? You literally have nothing to lose. Head on over to airbrake.io/try/bikeshed to create your FREE developer account today!
STEPH: Circling back to your original question about what's going on in my world, you mentioned scarring earlier. I feel sufficiently scarred [laughs] in regards to still working on moving fixtures over to FactoryBot. This week has really confirmed that fixtures don't trigger a lot of the callbacks, the model callbacks that exist. And so this really means that you can just create bad data that your application doesn't actually allow you to create.
So there are tests that are exercising behavior that should never exist. And then porting that over to FactoryBot highlights that, because as soon as I move that record over and try to create it or do something with it, the app and the test do the right thing and let me know, saying no, no, no, we've added validations. You can't do that anymore. That has been grinding my gears in terms of trying to translate it. Because then I have to really dive into the code to understand it. And the goal here is to stay as high level as possible and not have to dive in too much. But then that means that I do have to dive in and understand more.
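A tiny illustration of that mismatch, with made-up model and attribute names: fixture rows are inserted directly into the database, so validations and callbacks never run for them, while FactoryBot builds records through ActiveRecord, so they do.

    class Subscription < ApplicationRecord
      validates :plan, presence: true
      before_create :set_default_renewal_date

      private

      def set_default_renewal_date
        self.renews_on ||= 1.year.from_now.to_date
      end
    end

    # test/fixtures/subscriptions.yml loads happily with no plan and no
    # renewal date, because fixtures bypass the model entirely:
    #
    #   missing_plan:
    #     plan: ~
    #
    # The FactoryBot equivalent, create(:subscription, plan: nil), raises
    # ActiveRecord::RecordInvalid because the validation actually runs.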
So this has frankly just been one of those times in my career where you just kind of have to slog through the work. It's important work to be done. It'll be great once it's done. But it's a painful process. And the best way that I've found to make it more enjoyable is to be in heavy communication with Joël, who's on the project with me, just so if we get stuck on something, then we can chat with each other.
And then also there's one file that's particularly gnarly. And so we moved over one test. We were successful, which felt great because then we could at least document, okay, when we come back to this, at least we have one example that highlights the wonkiness that we ran into. But we've decided, okay, we're done with that file. We're going to take a break. There's a lot there, but we're going to move on and give ourselves a break and do some of the easier ones, and then we'll circle back to the harder one.
Which was, I think, just a bit of bad luck in terms of, like, as we're going down the list, that happened to be the gnarliest one, and it was the first one that Joël picked up. And so I'm going through a couple of files, and Joël is like, "What? [laughs] How are you making progress?" And we realized it's just because in that file, in particular, it's very hard to find all the mystery guests and then to move everything over.
Finding a positive note through all of the cruft, I will say this is helping with some of my code sleuthing skills. So as I am running into these problems and then looking for mystery guests, I'm noticing ways that I can then, as quickly as possible, try to triage and identify as to why one test doesn't match another test. Some of it is more specific to the application setup, so it won't be as applicable to future projects. But then some other areas have been really helpful.
Like, I'm using caller a lot more to understand, like, I know this is getting called, but I don't know who's calling you. So I can put in a line that basically outputs, show me your stack trace of how you got here. So that's been really nice as well. So it has improved some of my code sleuthing skills and also my spidey sense in terms of it's typically mystery guests.
Like when a test isn't passing, it's because fixtures are creating extra data that are getting pulled in when there are queries that are being run. But they're not explicitly referenced in the test setup itself. So that's typically then where I start is looking for what record looks relevant to this test that I haven't pulled over to my test setup.
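(A minimal sketch of the caller trick Steph describes. The model and callback names here are hypothetical; the only real piece is Ruby's built-in caller method.)

```ruby
class Order < ApplicationRecord
  before_save :recalculate_totals

  private

  def recalculate_totals
    # Print the full stack trace so you can see who triggered this callback,
    # e.g. a fixture load or a query buried in far-away test setup.
    puts "recalculate_totals called from:"
    puts caller
    # ...existing callback logic...
  end
end
```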
CHRIS: I appreciate you finding the silver lining, the positive bit of this. Because as you're describing it, the work that you're doing sounds like, I think you used the word slog, which seems like a very accurate term. But sometimes we have to do that, for a variety of reasons. We end up either having to introduce new code or fix old code, but this is sometimes the work.
And this is something that I think you and I share about this show is we get to show all sides of the work. And the work can be glamorous and new. And oh, I've got this greenfield app that I'm building, and it's wonderful. Look at the architecture. And I know in the moment that I'm building someone else's legacy code three years from now. [laughs]
And so telling the other side of the story and providing that rounded point of view, because like, yeah, this is all part of it. Again, I don't believe that this is a solved problem, building robust software that we can maintain. And so yeah, you're doing the good work in there. And I thank you for sharing it with us.
STEPH: Thanks. Just don't use fixtures in your test, I beg of you. Please don't do that to the legacy code that you're writing for future developers. [laughs] That is my one request.
CHRIS: And I will maybe add on to that, sparingly use callbacks. Maybe don't use them at all, and certainly don't use the combination because, my goodness, that'll lead you into some fun times. But yeah, just two small recommendations there.
STEPH: Oh, there's something else I wanted to share. I saw that Slack added a new audio feature that allows you to record the pronunciation of your name, which is the feature that I was so excited about when we added it to our internal tool called Hub at thoughtbot. And now Slack has it on their profile so that way you can upload the pronunciation. And then anyone looking at your profile can then listen to how to pronounce your name.
There are a couple of other features that they released, I think just in June, so about a month ago from the recording of today. [laughs] That's weird to say, but here we are. So I'll include a link in the show notes so folks can see that feature in addition to others, but I'm super excited.
CHRIS: Oh, that is nice. I also like all right, so Slack now has it. Hub now has it. But I don't have access to Hub anymore. And I don't have access to every Slack in the world yet. But here's my suggestion. All right, everybody, stick with me here. I want you to own a domain. I want you to have a personal site on it. And I want the personal site to include the pronunciation of your name.
I get that that's a big ask. And I get that there are other platforms that are calling to you, and you may be writing on those. But you know what? Just stand up a little site, just a little place on the internet that you own. And if it includes the pronunciation of your name, I will be forever grateful.
STEPH: I like this idea. I initially was taking your idea and immediately running with it as you were speaking it because then I wondered if everyone had their own YouTube channel. But I don't know how hard it is to create a YouTube channel. I am not a YouTube channeler, so I don't know what that looks like. [laughs] But not everybody will know how to purchase a domain. So that might be another approach.
CHRIS: I think it's pretty easy to do a YouTube channel. I'm conflating a couple of things. This is my basket of beliefs about people on the internet, but I kind of think everybody should own their own little slice of the internet. And so totally, YouTube is a place where people make some stuff, make videos, put them on YouTube, absolutely. But ideally, you own something. I see a lot of people that are on YouTube, and that's it, and so their entire audience lives on YouTube. And then YouTube could someday decide to change or remove them. Or take Medium as an example; Medium actually, I think, does a more interesting version of this, where your identity kind of gets subsumed into Medium.
And I really think everybody should just have their own little, tiny slice of the internet that's there. It has their name that they own that no platform can decide; hey, we've shifted, and now your stuff is gone. Cool URIs don't change as they say, and that's what I want. And then yeah, if you can have the pronunciation of your name on there, that's extra nice.
Although I say that, and I don't know that I would do it because my name feels very obvious. One day someone was like, "Oh, how do you pronounce your last name?" I forget if I actually replied with the pronunciation. Or if I was like, "I need to know what options you're considering. I'm so interested because I've really only got the one." Maybe I'm anchored. Maybe I'm biased. [chuckles] I've been doing this for a while. But I really cannot think of another pronunciation of my name.
STEPH: You might hear another one that you really like, and you need to pivot.
CHRIS: Oh gosh.
STEPH: That's the point where you start pronouncing your name differently.
CHRIS: Wow, that would be a lot. And then, I could have a change log on my personal site where people can see this is the pronunciation, and this is what the pronunciation used to be.
STEPH: [laughs] I like this idea. I also like this idea that everybody has their own slice of internet land. I like this encouragement that you're providing for everyone.
On a slightly different note, there's a blog post that I'm really excited to talk about. It's written by Eric Bailey, who's a former thoughtboter. It's called The Optics of Pair Programming. And given how much pair programming that I'm doing, especially with Joël on the current project, it was a really wonderful read.
And it also helped me think about pairing from a different perspective because we do have a very strong pairing culture at thoughtbot. So there's a lot of nuance, especially social nuances that can go along with when you invite someone to pair with you that I had not considered until I read this wonderful post by Eric. And we'll be sure to include a link in the show notes.
But to provide an overview, essentially, Eric shares that given coming from thoughtbot where we do have a very open approach to pairing where pairing sessions are voluntary and then also last as long as the problem will last...but then when you're at a new company, you could experience pushback if you're inviting someone to pair and then to consider why that pushback may exist. And some of the high-level areas that Eric highlighted are power dynamics, assessment, privacy, and learning styles.
So to dive into each of some of those, there's a power dynamics of it's important to consider who's offering to pair. So if I've joined a team as a consultant, there may be a power dynamic there that someone is feeling where their team is paying for my time. So they may feel like they can't say no if I offered to pair. They feel like they need to say yes to the invitation, even if they don't really want to.
Or probably a more classic example would be like, what if your boss wants to pair or someone that's just more senior than you? Then it could leave you feeling like, well, I can't say no to this person, can I? Which yes, you totally can say no to that person, but it may leave you in a place where you feel like you can't. And so, it puts you in this sort of uncomfortable and powerless position.
The other one is assessment, so offering to pair with someone could feel like you are implying that you want to assess their skills or that you're implying that they're not up to the task and therefore they need your help. So then that could also place someone in an uncomfortable position. There's also privacy. So someone who isn't confident may not want someone to observe their behavior or observe how they're working. It could make them feel really anxious, which then I love that Eric points this out.
Ironically, pairing is really good at addressing that lack of confidence because then you get to see how other people work through their problems or how they think, or they may also have some anxiety. Or it just helps you become more comfortable in talking and thinking through with other people. So that one is a tough one where it's hard to get over that initial hurdle. But actually, the more you pair, then the less anxious you'll feel when you pair.
And then there's also learning styles, because pairing really involves a lot of deep thinking but also interpersonal time. And it can be hard to balance both of those, and it's just not as effective for some people. So I know that even as much as I really enjoy pairing, I just need to sit with code on my own sometimes. I need to think about it. I need to run it; I need to look at it.
So it's really nice to talk with someone. But then I also need that alone time to then just think through it on my own because I can't have that same deep focus if I'm also worried about how the other person is experiencing that session because then my mental energy is going towards them.
So that covers a number of the social nuances that can be included or running through someone's mind when you extend an invitation to them to pair. And it really resonated with me the areas that Eric highlights in this blog post. He also talks about a couple of strategies, which I'd love to dive into as well. But I'm going to pause here and see what thoughts you have.
CHRIS: Yeah, I love this post. And it got me thinking about pairing and the broader human backdrop of all of the processes and workflows that we have. Everything he highlighted about pairing feels true. Although similar to you and to Eric, I've worked in a context where pairing was a very natural, very regular part of the work and sort of from the very top-down. And so everyone pairing between developers of any different level or developers and designers or really anyone in the...it was just such a part of how we worked that no one really questioned it or at least not after the first couple of weeks.
I imagine joining thoughtbot those first weeks; you're like, oh God. As I shared, I think in the previous episode that we recorded, my pairing interview was with Joe Ferris, the CTO of thoughtbot, [laughs] writing a book about good and bad code. And I was like, I don't know what anything is here but very quickly getting over that hurdle. And having that normalizing experience was actually really great, and then have been comfortable with it since. But the idea that there are so many different social dynamics at play feels true.
And then as I think about other things, like stand-up is one that I think of as this very simple this is a way to communicate where we're at. And where necessary or where useful, allow people to interject or step in to say, "Oh, let me help you get unblocked there or whatever it is." But so often, I see stand-up being a ritual about demonstrating that you are, in fact, doing work, which is like, here's what I did yesterday. I don't know if it's useful. Then mention that you're working on this project. But the enumeration of look, obviously, work was done by me. You can see it; here are the receipts. It's very much this social dynamic at play.
And retro is another one where like, if retro is very much owned by one voice and not a place that change actually happens where people feel safe airing their opinions or their concerns, then it's going to be a terrible experience. But if you can structure it and enforce that it is a space that we can have a conversation, that everyone's voice is welcome and that real change happens as a result of, then it's a magical tool for making sure we're doing the right things. But always behind these are the people, and feelings, and the psychology at play. And so this was just such an interesting post to read and ruminate on that a little bit more.
STEPH: Yeah, I agree, especially with a comment that you made about those daily syncs where I really just want to focus on today and what you have that you're blocked on. So it's a really nice update in case there are any cross-collaboration opportunities. That's really what I'm looking for in a daily update. And so I appreciate when people don't go through a laundry list of what they did yesterday because it's like, that's great. But then, like you said, it's just like you're trying to prove here's what I've done, and I trust you; you're working. So just let me know what you're doing today, friend.
So Eric does a wonderful job of also including some strategies for ways that then you can address some of these concerns and then how there may be some extra anxiety that's increased when you're inviting somebody to pair. There are some wonderful strategies. I'll let folks read through the blog post itself.
There are a couple in particular that came to mind for me because I was then self-assessing how do I tend to approach pairing with someone? And some ways that I want them to feel very comfortable with that experience. And there's a couple. There's one where I recognize that I need to build trust with each person. I can't just go on to a team and expect everyone to know that I have good intentions and that I'm going to do my best to be a fun, helpful pairing partner, and that it's not a zone of judgment. And that has to be cultivated with each person.
Because especially as a consultant, if I'm joining a team, the people who hired me are not necessarily the people that I'm working with. It's someone that's probably in leadership or management that has then brought on thoughtbot. And so then the people that I'm working with they don't know me, and they don't know what my pairing style is going to be. So looking for ways to build trust with each person and then also inviting them or asking for help myself.
So there's a bit of vulnerability that has to be shown to build trust with someone to say," Hey, I'm stuck on a problem. I would love a second set of eyes. Would you be willing to help me out with this?" So then that way, they're coming in to help me initially versus I'm going in and saying, "Hey, can I help you?" I have found that to be an effective strategy.
And there's one that I do really want to talk about, and that's not everyone is going to pair well together. Like, you may find someone who always leaves you feeling just stressed or demoralized. And while it's important to consider your role and why that's true, that does not mean it's your fault and necessarily your problem to fix.
So similar to having to manage up, you may need to coach the person that you're pairing with in ways that help you feel comfortable pairing. But if they don't listen to your requests and implement any of that feedback, then just don't pair with that person. That is a very fine option to recognize people that are not receptive to your needs and, therefore, not someone that you need to then force into being a great pairing buddy.
And I emphasize that last one because it took me a little while to become comfortable with that and accepting that it wasn't my fault that I wasn't having a great pairing session with people. Similar to when I'm learning from someone that if someone is explaining something to me and they're making me feel inadequate while they're explaining it to me, that's not necessarily my fault. Like, I used to internalize that as like, oh, I just can't get this.
But I am now a very staunch believer in if you can't explain it to me in a way that I understand, then that's probably more on you than on me. And that has also taken me time to just really accept and embrace. But once you do, it is so freeing to realize that if someone's explaining a concept and you're still not getting it, it's like, hey, how can we try harder together versus you just making me try harder?
CHRIS: I like that right there of like, if I don't understand this, it may actually be you, not me, or something to that effect. Let's get that on a bumper sticker and put that in The Bike Shed store so that everybody can buy it and put it on their cars or at least just us. But yeah, that starting from the bottom sometimes it's just not going to work great. There are even...I think what you're describing sounds a little more complicated, individuals who are personally not great at communicating or pairing or things like that. And that's going to happen.
We're going to run into folks that...pairing is communication. That's just the core of it, and some folks, that may not be their strongest suit. But I think there's another category of just like different working styles. And whereas I might...judge is such a heavy word, but I'm going to use it. I might judge someone who is not doing a great job at communicating to someone else, or understanding their point of view, or striving to do that, or taking feedback. Like, those are not great things.
Whereas there may just be two different development styles or backgrounds, or there are other reasons that actually they may be not an ideal fit. That said, I have definitely found that in almost every variation of pairing, I've seen work at some point. Like, when I was very early on in my career pairing with folks that are very senior, I didn't get most of it, but I got some stuff. And then folks that are very much on the same level or folks that have a deep knowledge in framework, code base language, whatever and folks that are new to it but have a different set of experiences.
Basically, every version of that, I found that pairing is actually an incredibly powerful technique for knowledge sharing, for collaboration, for all of that. So although there are rare cases where there might be some misalignment, in general, I think pairing can work.
I do think you hit on something earlier of there are certain folks that are more private thinkers, is how I would describe it, where thinking out loud is complicated for them. I'm very much someone who talks. That's how I figure out what I think is I say stuff. And I'm like, oh, I agree with what I just said. That's good. But I find I actually struggle. There's something I think of...maybe I'm just a loudmouth is what I'm hearing as I say it, but that is how I process things. Other folks, that is not true.
Other folks, it's quite internal, and actually trying to vocalize that or trying to share the thought process as they're going may be uncomfortable. And I think that's perfectly reasonable and something that we should recognize and make space for. And so pairing should not be forced upon a team or an individual because there are just different mindsets, different ways of thinking that we need to account for.
But again, the vast majority of cases...I've seen plenty of cases where it's someone's like, "I don't like to pair. That's not my thing." And it's actually that they've had bad experiences. And then when they find a space that feels safe or they see the pattern demonstrated in a way that is collegial, and useful, and friendly, then they're like, oh, actually, I thought I didn't like pairing. I thought I didn't like retro. I thought I didn't like stand-up. But actually, all of these things can be good.
STEPH: Yeah, absolutely. It's a skill like anything else. You need to see value in it. And if you haven't seen value in it yet or if it's always made you anxious and uncomfortable, then it's something that you're going to avoid as much as possible until someone can provide a valuable, positive experience around how it can go.
I'm going to pull back the curtains just a little bit on our recording and share because you've mentioned that you are very much you think out loud, and that's how you decide that you agree with yourself. And I think already at least twice while we've been recording this episode, I have started to say something, and I'm like, no, wait, I don't agree with that and have backed myself up.
CHRIS: [laughs]
STEPH: And I'm like, no, I just thought through it; I'm going to cancel it out, [laughs] and then moved in a different direction. So I, too, seem to be someone that I start to say things, and I'm like, oh, wait, I don't actually agree with what I just said [laughs], so let's remove that.
CHRIS: Yep. You've described it as Michael Scott-ing on a handful of different episodes or maybe things that were cut from episodes. But where you start a sentence and then you're like, I don't know where I was going to end up there. I hoped I'd figure it out by the end, but then I did not get there. And yeah, I think we've all experienced that at various times.
STEPH: That’s some of my favorite advice from you is where you've been like, just lean into it, just see where it goes. Finish it out. We can always take it out later. [laughs] Because I stop myself because I immediately start editing what I'm trying to say and you're like, "No, no, just finish it, and then we'll see what happens." That's been fun.
CHRIS: This is how you find out what you think. You say it out loud, and then you're like, never mind. That was ridic –
STEPH: [laughs]
CHRIS: I do. Actually, now I'm thinking back, and I have plenty of those where I'll say a thing, and I'm like, nope, never mind, send that one back. [chuckles] As an aside, so we do this thing where we host a podcast, and we get to talk. But we're both now describing the pattern where we'll start to say something, and we’ll be like no, no, no, actually, not that. And I think, dear listeners out there, you probably don't hear any of this, the vast majority of it, because we have wonderful editors behind the scenes, Thom Obarski for many years, and now Mandy Moore, who's been with us for a while.
And so once again, thank you so much to the editor team that allows us to, I think, again, feel safe in this conversation that we can say whatever feels true and then know that we'll be able to switch that around. So thank you so much to the editors who help us out and make us sound better than we are.
STEPH: Yeah, that has made a big difference in my capabilities to podcast. If we were doing this live, ooh goodness, this might be a whole different, weird show. [laughs]
CHRIS: I mean, the same is true for code, right? I deeply value the ability to make an absolute mess in my local editor and have nine different commits that eventually I throw two out. And then I revert that file, and then eventually, the PR that I put up that's my Instagram selfie. That's like, I carefully curated this, but what's behind the scenes it's just a pile of trash. So yeah, the ability to separate the creation and the editing that's a meaningful thing to have in life.
STEPH: Oh, I can't unsee that now. [laughs] A pull request is now the equivalent of that curated Instagram selfie. That is beautiful. [laughs]
CHRIS: To be clear, I don't think I've ever taken an Instagram selfie. But I get the idea, and I felt like it was an analogy that would work. Again, I try out analogies on this show, and many of them do not stick. But I think that one is all right.
STEPH: It might even go back to pairing because then you've got help in taking that picture. So hey, you're making a mess with somebody until you get that right perfect thing, and then you push it up for the world to see. So safe spaces for all the activities, I think that's the takeaway. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph has an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec.
Chris got to do something exciting this week using dry-monads. Success or failure?
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Become a Sponsor of The Bike Shed!
Transcript:
AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow, and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix all of them today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all of the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
STEPH: What type of bird is the strongest bird?
CHRIS: I don't know.
STEPH: A crane.
[laughter]
STEPH: You're welcome. And on that note, shall we wrap up?
CHRIS: Let's wrap up.
[laughter]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris, I saw a good movie I'd like to tell you about. It was just over the weekend. It's called The Duke, and it's based on a real story. I should ask, have you seen it? Have you heard of this movie called The Duke?
CHRIS: I don't think so.
STEPH: Okay, cool. It's a true story, and it's based on an individual named Kempton Bunton, who stole a particular portrait, a Goya portrait; if you know your artists, I do not. But he stole a Goya portrait and then essentially held it for ransom because he was a big advocate that the BBC News channel should be free for people that are living on a pension or that are war veterans, because they're not able to afford that fee. But then, if you take the BBC channel away from them, it disconnects them from society. And it's a very good movie. I highly recommend it. So I really enjoyed watching that over the weekend.
CHRIS: All right. Excellent recommendation. We will, of course, add that to the show notes mostly so that I can find it again later.
STEPH: On a more technical note, I have a small update, or it's more of a question. It's an update and a question wrapped into one about the work that is being done to migrate the Test::Unit tests over to RSpec. This has been quite a journey that Joël and I have been on for a while now. And we're making progress, but we're realizing that we're spending like 95% of our time on the test setup and porting that over, specifically because we're mapping fixture data over to FactoryBot, and we're just realizing that's really painful. It's taking up a lot of time to do that.
And initially, when I realized we were just doing that, we hadn't even really talked about it, but we were moving it over to FactoryBot. I was like, oh, cool. We'll get to delete all these fixtures because there are around 208 files of them. And so that felt like a really good additional accomplishment to migrating the test over.
But now that we realize how much time we're spending migrating the data over for that test setup, we've reevaluated, and I shared with Joël in the Slack channel. I was like, crap. I was like, I have a bad idea, and I can't not say it now because it's crossed my mind. And my bad idea was: what if we stopped porting over fixtures to FactoryBot and instead just added the fixtures to a directory that RSpec looks in, so then we can rely on those fixtures? And then that way, we're literally, ideally, just copying over from Test::Unit to RSpec.
But it does mean a couple of things. Well, one, it means that we're now loading those fixtures at the beginning of the RSpec tests. We're introducing another pattern, where these tests are already using FactoryBot, but now they have fixtures at the top, and then we won't get to delete the fixtures. So we had a conversation around how to manage and mitigate some of those concerns. And we're still in that exploratory phase. We're going to test it out and see if referencing the fixtures really speeds us up.
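(Roughly what that looks like in rspec-rails configuration, as a sketch; option names vary a bit across rspec-rails versions, and the paths assume the Rails defaults.)

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  # Point RSpec at the existing Test::Unit fixture files...
  config.fixture_path = Rails.root.join("test/fixtures")
  # ...and load them all before each example, mirroring the old setup.
  config.global_fixtures = :all
  config.use_transactional_fixtures = true
end
```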
The question that's wrapped up in this is that there's something different between how fixtures generate data and how factories generate data. So I've run into this a couple of times now where I moved data over to just call a factory. But then I was hitting these callbacks or after_save hooks or weird things that were preventing me from creating the record, even though fixtures were creating them just fine. And then Joël pointed out today that he was running into something similar, where there were private methods getting called. And all sorts of additional code was getting run with factories versus fixtures. And I don't have an answer.
Like, I haven't looked into this. And it's frankly intentional because I was trying hard not to dive into understanding the mechanics. We really want to get through this. But now I'm starting to ponder a little more as to what is different between fixtures and factories. And I like that factories run these callbacks; that feels correct. But I'm surprised that fixtures don't, or at least that's the experience that I'm having.
So there's some funkiness there that I'd like to explore. I'll be honest; I don't know if I'm going to. But if anybody happens to know what that funkiness is or why fixtures and factories are different in that regard, I would be very intrigued because, at some point, I might look into it just because I would like to know.
CHRIS: Oh, that is interesting. I have not really worked with fixtures much at all. I've lived a factory life myself, and thus that's where almost all of my experience is. I'm not super surprised if this ends up being the case, like, the idea that fixtures are just some data that gets shoveled into the database directly as opposed to FactoryBot going through the model layer. And so it's sort of like that difference. But I don't know that for certain. That sounds like what this is and makes sense conceptually.
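(A rough illustration of the difference Chris is guessing at, with made-up model and attribute names: the fixture row is inserted directly, so validations and callbacks never run, while the factory goes through ActiveRecord and raises on the same data.)

```ruby
# test/fixtures/users.yml -- shoveled straight into the database:
#
#   sneaky_admin:
#     email: admin@example.com
#     role: superuser        # inserted even if the model forbids it
#
# The equivalent factory builds a real ActiveRecord object:
FactoryBot.define do
  factory :user do
    email { "admin@example.com" }
    role  { "superuser" }
  end
end

FactoryBot.create(:user)
# => raises ActiveRecord::RecordInvalid if a validation forbids that role
```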
But I think this is what you were saying like, that also kind of pushes me more in the direction of factories because it's like, oh, they're now representative. They're using our model layer, where we're defining certain truths. And I don't love callbacks as a mechanism. But if your app has them, then getting data that is representative is useful in tests. Like one of the things I add whenever I'm working with FactoryBot is the FactoryBot lint rake task RSpec thing that basically just says, "Are your factories valid?" which I think is a great baseline to have.
Because you may add a migration that adds a default constraint or something like that to the database that suddenly all your factories are invalid, and it's breaking tests, but you don't know it. Like subtly, you change it, and it doesn't actually break a test, but then it's harder later. So that idea of just having more correctness baked in is always nice, especially when it can be automated like that, so definitely a fan of that. But yeah, interested if you do figure out the distinction.
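(The lint task Chris mentions looks roughly like the one suggested in the factory_bot docs; the task name and wiring here are conventional rather than anything this particular app requires.)

```ruby
# lib/tasks/factory_bot.rake
namespace :factory_bot do
  desc "Verify that every factory builds a valid record"
  task lint: :environment do
    if Rails.env.test?
      ActiveRecord::Base.transaction do
        FactoryBot.lint              # raises FactoryBot::InvalidFactoryError for any broken factory
        raise ActiveRecord::Rollback # throw away the records the linter created
      end
    else
      system("bundle exec rake factory_bot:lint RAILS_ENV=test") || exit(1)
    end
  end
end
```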
I do like your take, though, of like, but also, maybe I just won't figure this out. Maybe this isn't worth figuring it out. Although you were in the interesting spot of, you could just port the fixtures over and then be done and call the larger body of work done. But it's done in sort of a half-complete way, so it's an interesting trade-off space. I'm also interested to hear where you end up on that.
STEPH: Yeah, it's a tough trade-off. It's one that we don't feel great about. But then it's also recognizing: what's the true value of what we're trying to deliver? And it also comes down to the idea of churn versus complexity. And I feel like we are porting over existing complexity and even adding a smidge, not of actual complexity, but a smidge of indirection, in that when someone sees this file, they're going to see a mixed use of fixtures and factories, and that doesn't feel good.
And so we've already talked about adding a giant comment above fixtures that just is very honest and says, "Hey, these were ported over. Please don't mimic this. But this is some legacy tests that we have brought over. And we haven't migrated the fixtures over to use factories." And then, in regards to the churn versus complexity, this code isn't likely to get touched like these tests. We really just need them to keep running and keep validating scenarios. But it's not likely that someone's going to come in here and really need to manage these anytime soon. At least, this is what I'm telling myself to make me feel better about it.
So there's also that idea of yes, we are porting this over. This is also how they already exist. So if someone did need to manage these tests, then going to Test::Unit, they would have the same experience that they're going to have in RSpec. So that's really the crux of it is that we're not improving that experience. We're just moving it over and then trying to communicate that; yes, we have muddied the waters a little bit by introducing this other pattern. So we're going to find a way to communicate why we've introduced this other pattern, but that way, we can stay focused on actually porting things over to RSpec.
As for the factories versus fixtures, I feel like you're onto something in terms of it's just skipping that model layer. And that's why a lot of that functionality isn't getting run. And I do appreciate the accuracy of factories. I'd much rather know is my data representative of real data that can get created in the world? And right now, it feels like some of the fixtures aren't.
Like, how they're getting created seemed to bypass really important checks and validations, and that is wrong. That's not what we want to have in our test is, where we're creating data that then the rest of the application can't truly create. But that's another problem for another day. So that's an update on a trade-off that we have made in regards to the testing journey that we are on. What's going on in your world?
CHRIS: Well, we got to do something exciting this week. I was working on some code. This is using dry-monads, from the dry-rb space. So we have these result objects that we use pretty pervasively throughout the app, and often, we're in a controller. We run one of these command objects. So it's create user, and create user actually encompasses a ton of logic in our app, and that object returns a result.
So it's either a success or a failure. And if it's a success, it'll be a success with that new user wrapped up inside of it, or if it's a failure, it's a specific error message. Actually, different structured error messages in different ways, some that would be pushed to the form, some that would be a flash message. There are actually fun, different things that we do there.
But in the controller, when we interact with those result objects, typically what we'll do is we'll say result equals create user dot run (result = CreateUser.run), and then pass it whatever data it needs. And then on the next line, we'll say result dot either (result.either), which is a method on these result objects. It's on both the success and the failure, so you can treat them the same. And then you pass what ends up being a lambda or a stabby proc, or I forget what they are. But one of those sort of inline function type things in Ruby that always feel kind of weird.
But you pass one of those, and you actually pass two of them, one for the success case and one for the failure case. And so in the success case, we redirect back with a notice of congratulations, your user was created. Or, in the failure case, we potentially do a flash message of an alert, or we send the errors down, or whatever it ends up being. But it allows us to handle both of those cases. But it's always been syntactically terrible, is how I would describe it. It's, yeah, I'm just going to leave it at that. We are now living in a wonderful, new world.
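(A sketch of the shape Chris is describing. CreateUser and the messages are hypothetical; the real pieces are the dry-monads result object and its either method, which takes one callable for the success case and one for the failure case.)

```ruby
class UsersController < ApplicationController
  def create
    result = CreateUser.run(user_params)

    result.either(
      ->(user)    { redirect_to user, notice: "Welcome aboard!" },
      ->(message) { redirect_to new_user_path, alert: message }
    )
  end
end
```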
This has been something that I've wanted to try for a while. But I finally realized we're actually on Ruby 2.7, and thus we have access to pattern matching in Ruby. So I get to take it for a spin for the first time, having realized that we were already on the correct version. And in particular, dry-monads has a page in their docs specific to how we can take advantage of pattern matching with the result objects that they provide us. There's nothing specific in the library as far as I understand it. This is just them showing a bunch of examples of how one might want to do it if they're working with these result objects.
But it's really great because it gives the ability to interact with, you know, success is typically going to be a singular case. There's one success branch to this whole logic, but there are like seven different ways it can fail. And that's the whole idea as to why we use these command objects and the whole Railway Oriented Programming and that whole thing which I have...what is this word? [laughs] I feel like I should know it. It's a positive rant. I have raved; that is how our users kindly pointed that out to us. I have raved about the Railway Oriented Programming that allows us to do.
But it's that idea that they're actually, you know, there's one happy path, and there are seven distinct failure modes, seven unhappy paths. And now, using pattern matching, we actually get a really expressive, readable, useful way to destructure each of those distinct failures to work with the particular bits of data that we need. So it was a very happy day, and I got to explore it. This is, again, a feature of Ruby, not a feature of dry-monads. But dry-monads just happens to embrace it and work really well with it. So that was awesome.
STEPH: That is awesome. I've seen one or two; I don't know, I've seen a couple of tweets where people are like, yeah, Ruby pattern matching. I haven't found a way to use it. So I'm excited that you just shared a way that you found to use it. I'm also worried what it says about our developer culture that we know the word rant so well, but rave, we always have to reach back into our memory to be like, what's that positive word or something that we like? [laughs]
CHRIS: And especially here on The Bike Shed, where we try to gravitate towards the positive. But yeah, it's an interesting point that you make.
STEPH: We're a bunch of ranters. It's what we do, pranting ranters. I don't know why we're pranting. [laughs]
CHRIS: Because it's that exciting. That's what it is. Actually, there was an interesting thing as we were playing around with the pattern matching code, just poking around in the console session with it, and it prints out a deprecation warning. It's like, warning: this is an experimental feature. Do not use it, be careful. But in the back of my head, I was like, I actually know how this whole thing plays out, Ruby 2.7, and I assure you, it's going to be fine. I have been to the future, at least I'm pretty sure.
I think the version that is in Ruby 2.7 did end up getting adopted basically as it stands. And so, I think there is also a setting to turn off that deprecation warning. I haven't done it yet, but I mostly just enjoyed the conversation that I had with this deprecation message of like, listen, I've been to the future, and it's great. Well, it's complicated, but specific to this pattern matching [laughs] in Ruby 3+ versions, it went awesome. And I'm really excited about that future that we now live in.
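(The setting Chris alludes to is most likely plain Ruby's warning categories rather than anything in dry-monads; this is a standalone sketch.)

```ruby
# Silence Ruby 2.7's "pattern matching is experimental" warning for the whole
# process. The -W:no-experimental CLI flag does the same thing.
Warning[:experimental] = false

config = { adapter: "postgresql", pool: 5 }

case config
in { adapter: "postgresql", pool: Integer => pool }
  puts "Postgres with a pool of #{pool}" # no experimental-feature warning printed
end
```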
STEPH: I wish we had that for so many more things in our life [laughs] of like, here's a warning, and it's like, no, no, I've seen the future. It's all right. Or you're totally right; I should avoid and back out of this now.
CHRIS: If only we could know how the things would play out, you know. But yeah, so pattern matching, very cool. I'll include a link in the show notes to the particular page in the dry-monads docs. But there are also other cool things on the internet. In an unrelated but also cool thing that I found this week, we use Tuple a lot within our organization for pair programming. For anyone who's not familiar with it, it's a really wonderful piece of technology that allows you to pair program pretty seamlessly, better video quality, all of those nice things that we want.
But I found there was just the tiniest bit of friction in starting a Tuple call. I know I want to pair with this person. And I have to go up and click on the little menu bar, and then I have to find their name, then I have to click a button. That's just too much. That's not how...I want to live my life at the keyboard. I have a thing called Bartender, which is a little menu bar manager utility app that will collapse down and hide the icons. But it's also got a nice, little hotkey accessible pop-up window that allows me to filter down and open one of the menu bar pop-out menus.
But unfortunately, when that happens, the Tuple window isn't interactive at that point. I can't use the arrow keys to go up and down. And so I was like, oh, man, I wonder if there's like an Alfred workflow for this. And it turns out indeed there is actually managed by the kind folks at Tuple themselves. So I was able to find that, install it; it's great. I have it now. I can use that.
So that was a nice little upgrade to my workflow. I can just type like TC space and then start typing out the person's name, and then hit enter, and it will start a call immediately. And it doesn't actually make me more productive, but it makes me happier. And some days, that's what matters.
STEPH: That's always so impressive to me when that happens where you're like, oh, I need a thing. And then you went through the saga that you just went through. And then the people who manage the application have already gotten there ahead of you, and they're like, don't worry, we've created this for you. That's one of those just beautiful moments of like, wow, y'all have really thought this through on a bunch of different levels and got there before me.
CHRIS: It's somewhat unsurprising in this case because it's a very developer-centric organization, and Ben's background being a thoughtbot developer and Alfred user, I'm almost certain. Although I've seen folks talking about Raycast, which is the new hotness on the quick launcher world. I started eons ago in Quicksilver, and then I moved to Alfred, I don't know, ten years ago. I don't know what time it is anymore.
But I've been in Alfred land for a while, but Raycast seems very cool. Just as an aside, I have not allowed myself... [laughs] this is another one of those like; I do not have permission to go explore this new tool yet because I don't think it will actually make me more productive, although it could make me happier. So...
STEPH: I haven't heard of that one, Raycast. I'm literally adding it to the show notes right now as a way so you can find The Duke later, and I can find Raycast later [chuckles] and take a look at it and check it out. Although I really haven't embraced the whole Alfred workflow. I've seen people really enjoy it and just rave about it and how wonderful it is. But I haven't really leaned into that part of the world; I don't know why. I haven't set any hard and fast rules for myself where I can't play around with these technologies, but I haven't taken the time to do it either.
CHRIS: You've also not found yourself writing thousands of lines of Vimscript because you thought that was a good idea. So you don't need as many guardrails it would seem. That's my guess.
STEPH: This is true.
CHRIS: Whereas I need to be intentional [laughs] with how I structure my interaction with my dev tools.
STEPH: Instead, I'm just porting over fixtures from one place to another. [laughs] That's the weird space that I'm living in instead. [laughs]
CHRIS: But you're getting paid for that. No one paid me for the Vimscript I wrote.
[laughter]
STEPH: That's fair. Speaking around process-y things, there's something that's been on my mind that Valeria, another thoughtboter, suggested around how we structure our meetings and the default timing that we have for meetings. So Thursdays are my team-focused day. And it's the day where I have a lot of one on ones. And I realized that I've scheduled them back to back, which is problematic because then I have zero break in between them, which I'm less concerned about that because then I can go for an hour or something and not have a break. And I'm not worried about that part.
But it does mean that if one of those discussions happens to go over just even for like two or three minutes, then it means that someone else is waiting for me in those two to three minutes. And that feels unacceptable to me. So Valeria brought up a really good idea where I think it's only with the Google Meet paid version. I could be wrong there. But I think with the paid version of it that then you can set the new default for how long a meeting is going to last.
So instead of having it default to 30 minutes, have it default to 25 minutes. So then, that way, you do have that five-minute buffer. So if you do go over just like two or three minutes with someone, you've still got like two minutes to then hop to the next call, and nobody's waiting for you. Or if you want those five minutes to then grab some water or something like that.
So we haven't implemented it just yet because then there's discussion around is this a new practice that we want everybody to move to? Because I mean, if just one person does it, it doesn't work. You really need everybody to buy into the concept of we're now defaulting to 25 versus 30-minute meetings. So I'll have to let you know how that goes. But I'm intrigued to try it out because I think that would be very helpful for me.
Although there's a part of me that then feels bad because it's like, well, if I have 30 minutes to chat with somebody, but now I'm reducing it to 25 minutes each time, I didn't love that I'm taking time away from our discussion. But that still feels like a better outcome than making somebody wait for three to five minutes if something else goes over. So have you ever run into something like that? How do you manage back-to-back meetings? Do you intentionally schedule a break in between or?
CHRIS: I do try to give myself some buffer time. I stack meetings but not so much so that they're just back to back. So I'll stack them like Wednesdays are a meeting-heavy day for me. That's intentional just to be like, all right, I know that my day is going to get chopped up. So let's just really lean into that, chop the heck out of Wednesday afternoons, and then the rest of the week can hopefully have slightly longer deep work-type sessions. And, yeah, in general, I try and have like a little gap in between them.
But often what I'll do for that is I'll stagger the start of the next meeting to be rather than on the hour or the half-hour, I start it on the 15th minute. And so then it's sort of I now have these little 15-minute gaps in my workflow, which is enough time to do one or two small things or to go get a drink or whatever it is or if things do run over. Like, again, I feel what you're saying of like, I don't necessarily want to constrain a meeting. Or I also don't necessarily want to go into the habit of often over-running.
I think it's good to be intentional. Start meetings on time, end meetings on time. If there's a great conversation that's happening, maybe there's another follow-up meeting that should happen or something like that. But for as nonsensical of a human as I believe myself to be, I am rather rigid about meetings. I try very hard to be on time. I try very hard to wrap them up on time to make sure I go to the next one. And so with that, the 15-minute staggering is what I've found works for me.
STEPH: Yeah, that makes sense. One-on-ones feels special to me because I wholeheartedly agree with being very diligent about like, hey, this is our meeting time. Let's do a time check. Someone says that at the end, and then that way, everybody can move on. But one on ones are, there's more open discussion space, and I hate cutting people off, especially because it might not be until the last 15 minutes that you really got into the meat of the conversation.
Or you really got somewhere that's a little bit more personal or things that you want to talk about. So if someone's like, "Yeah, let me tell you about my life goals," and you're like, "Oh, no, wait, sorry. We're out of time." That feels terrible and tragic to do. So I struggle with that part of it.
CHRIS: I will say actually, on that note, I'm now thinking through, but I believe this to be true. Everyone that reports to me I have a 45-minute one-on-one with, and then my CEO I set up the one-on-one. So I also made that one a 45-minute one-on-one. And that has worked out really well.
Typically, I try to structure it and reiterate this from time to time of, like, hey, this is your space, not mine. So let's have whatever conversation fits in here. And it's fine if we don't need to use the whole time, but I want to make sure that we have it and that we protect it. Because I often find, much like retro, I don't know; I think everything's fine. And then suddenly the conversation starts, and you're like, you know what? Actually, I'm really concerned now that you mention it. And you need that sort of empty space that the reality can then pop up into.
And so with one on one, I try and make sure that there is that space, but I'm fine with being like, we can cut this short. We can move on from one-on-one topics to more of status updates; let's talk about the work. But I want to make sure that we lead with is there anything deeper, any concerns, anything you want to talk through? And sort of having the space and time for that.
STEPH: I like that. And I also think it speaks more directly to the problem I'm having because I'm saying that we keep running over a couple of minutes, and so someone else is waiting. So rather than shorten it, which is where I'm already feeling some pain...although I still think that's a good idea to have a default of 25-minute meetings so then that way, there is a break versus the full 30. So if people want to have back-to-back meetings, they still have a little bit of time in between.
But for one on ones specifically, upping it to 45 minutes feels nice because then you've got that 15-minute buffer likely. I mean, maybe you schedule a meeting, but, I don't know, that's funky. But likely, you've got a 15-minute buffer until your next one. And then that's also an area that I feel comfortable in sharing with folks and saying, "Hey, I've booked this whole 45 minutes. But if we don't need the whole time, that's fine."
I'm comfortable saying, "Hey, we can end early, and you can get more of your time back to focus on some other areas." It's more the cutting someone off when they're talking because I have to hop to the next thing. I absolutely hate that feeling. So thanks, I think I'll give that a go. I think I'll try actually bumping it up to 45 minutes, presuming that other people like that strategy too, since they're opting in [laughs] to the 45 minutes structure. But that sounds like a nice solution.
CHRIS: Well yeah, happy to share it. Actually, one interesting thing that I'm realizing, having been a manager at thoughtbot and then now being a manager within Sagewell, the nature of the interactions are very different. With thoughtbot, I was often on other projects. I was not working with my team day to day in any real capacity. So it was once every two weeks, I would have this moment to reconnect with them. And there was some amount of just catching up. Ideally, not like status update, low-level sort of thing, but sort of just like hey, what have you been working on? What have you been struggling with? What have you been enjoying?
There was more like I needed bigger space, I would say for that, or it's not surprising to me that you're bumping into 30 minutes not being quite long enough. Whereas regularly, in the one on ones that I have now, we end up cutting them short or shifting out of true one-on-one mode into more general conversation and chatting about Raycast or other tools or whatever it is because we are working together daily. And we're pairing very regularly, and we're all on the same project and all sorts of in sync and know what's going on. And we're having retro together. We have plenty of places to have the conversation.
So the one-on-one, again, still, I keep the same cadence and the same time structure just because I want to make sure we have the space for any day that we really need it. But in general, we don't. Whereas when I was at thoughtbot, it was all the more necessary. And I think for folks listening, I could imagine if you're in a team lead position and you're working very closely with folks, then you may be on the one side of things, versus if you're a little bit more at a distance from the work that they're doing day to day. That's probably an interesting question to ask and think about as to how you want to structure it.
STEPH: Yeah, I think that's an excellent point. Because you're right; I don't see these individuals. We may not have really gotten to interact, except for our daily syncs outside of that. So then yeah, there's always like a good first 10 minutes of where we're just chatting about life and catching up on how things are going before then we dive into some other things. So I think that's a really good point. Cool, solving management problems on the mic. I dig it.
In slightly different news, I've joined a book club, which I'm excited about. This book club is about Ruby. It's specifically reading the book Ruby Science, which is a book that was written and published by thoughtbot. And it requires zero homework, which is my favorite type of book club. Because I have found I always want to be part of book clubs. I'm always interested in them, but then I'm not great at budgeting the time to make sure I read everything I'm supposed to read. And so then it comes time for folks to get together. And I'm like, well, I didn't do my homework, so I can't join it.
But for this one, it's being led by Joël, and the goal is that you don't have to do the homework. And they're just really short sections. So whoever's in charge of leading that particular session of the book club they're going to provide an overview of what's covered in whatever the reading material that we're supposed to read, whatever topic we're covering that day. They're going to provide an overview of it, an example of it, so then we can all talk about it together. So if you read it, that's wonderful. You're a bit ahead and could even join the meeting like five minutes late. Or, if you haven't read it, then you could join and then get that update. So I'm very excited about it.
And this was one of those books that I'd forgotten that thoughtbot had written, and it's one that I've never read. And it's public for anybody that's interested in it. So to cover a little bit of details about it, so it talks about code smells, ways to refactor code, and then also common patterns that you can use to solve some issues. So there's a lot of really just great content that's in it. And I'll be sure to include a link in the show notes for anyone else that's interested.
CHRIS: And again, to reiterate, this book is free at this point. Previously, in the past, it was available for purchase. But at one point a number of years ago, thoughtbot set all of the books free. And so now that along with a handful of other books like...what's Edward's DNS book? Domain Name Sanity, I believe, is Edward's book name that Edward Loveall wrote when he was not a thoughtboter, [laughs] and then later joined as a thoughtboter, and then we made the book free.
But on the specific topic of Ruby Science, that is a book that I will never forget. And the reason I will never forget it is that book was written by the one and only CTO Joe Ferris, who is an incredibly talented developer. And when I was interviewing with thoughtbot, I got down to the final day, which is a pairing session. You do a morning pairing session with one thoughtbot developer, and you do an afternoon pairing session with another thoughtbot developer.
So in the morning, I was working with someone on actually a patch to Rails which was pretty cool. I'd never really done that, so that was exciting. And that went fine with the exception that I kept turning on Caps Lock on their keyboard because I was used to Caps Lock being CTRL, and then Vim was going real weird for me. But otherwise, that went really well. But then, in the afternoon, I was paired with the one and only CTO Joe Ferris, who was writing the book Ruby Science at that time.
And the nature of the book is like, here's a code sample, and then here's that code sample improved, just a lot of sort of side-by-side comparisons of code. And I forget the exact way that this went, but I just remember being terrified because Joe would put some code up on the screen and be like, "What do you think?" And I was like, oh, is this the good code or the bad code? I feel like I should know. I do not know. I'm not sure. It worked out fine, I guess. I made it through. But I just remember being so terrified at that point. I was just like, oh no, this is how it ends for me. It's been a good run.
STEPH: [laughs]
CHRIS: I made it this far. I would have loved to work for this nice thoughtbot company, but here we are. But yeah, I made it through. [laughs]
STEPH: There are so many layers to that too where it's like, well if I say it's terrible, are you going to be offended? Like, how's this going to go for me if I speak my truths? Or what am I going to miss? Yeah, that seems very interesting (I kind of like it) but also a terrifying pairing session.
CHRIS: I think it went well because I think the code...I'd been following thoughtbot's work, and I knew who Joe was and had heard him on podcasts and things. And I kind of knew roughly where things were, and I was like, that code looks messy. And so I think I mostly got it right, but just the openness of the question of like, what do you think? I was like, oh God. [laughs] So yeah, that book will always be in my memories, is how I would describe it.
STEPH: Well, I'm glad it worked out so we could be here today recording a podcast together. [laughs]
CHRIS: Recording a podcast together. Now that I say all that, though, it's been a long time since I've read the book. So maybe I'll take a revisit. And definitely interested to hear more about your book club and how that goes.
But shifting ever so slightly (I don't have a lot to say on this topic), there's a new framework technology thing out there that has caught my attention. And this hasn't happened for a while, so it's kind of novel for me. So I tend to try and keep my eye on where is the sort of trend of web development going? And I found Inertia a while ago, and I've been very, very happy with that as sort of this is the default answer as to how I build websites.
To be clear, Inertia is still the answer as to how I build websites. I love Inertia. I love what it represents. But I'm seeing some stuff that's really interesting that is different. Specifically, Remix.run is the thing that I'm seeing. I mentioned it, I think, in the last episode talking about there was some stuff that they were doing with data loading and async versus synchronous, and do you wait on it or? They had built some really nice levers and trade-offs into the framework. And there's a really great talk that Ryan Florence, one of the creators of Remix.run, gave about that and showed what they were building.
I've been exploring it a little bit more in-depth now. And there is some really, really interesting stuff in Remix. In particular, it's a meta-framework, I think, is the nonsense phrase that we use to describe it. But it's built on top of React. That won't be true for forever. I think it's actually they would say it's more built on top of React Router. But it is very similar to Next.js for folks that have seen that. But it's got a little bit more thought around data loading. How do we change data? How do we revalidate data after?
There's a ton of stuff there. Having worked in many React client-side, API-heavy apps, there's so much pain: cache invalidation. How do you think about the cache? When do you fetch from the network? How do you avoid showing 19 different loading spinners on the page? And Remix as a framework has some really, I think, robust and well-thought-out answers to a lot of that. So I am super-duper intrigued by what they're doing over there. There's a particular video that I think shows off what Remix represents really well. It's Ryan Florence, that same individual, the creator of Remix, building just a newsletter signup page.
But he goes through like, let's start from the bare bones, simplest thing. It's just an input, and a form submits to the server. That's it. And so we're starting from web 2.0, long, long ago, sort of ideas, and then he gradually enhances it with animations and transitions and error states. And even at the end, goes through an accessibility audit using the screen reader to say, "Look, Remix helps you get really close because you're just using web fundamentals."
But then goes a couple of steps further and actually makes it work really, really well for a screen reader. And, yeah, overall, I'm just super impressed by the project, really, really intrigued by the work that they're doing. And frankly, I see a couple of different projects that are sort of in this space. So yeah, again, very early but excited.
STEPH: On their website...I'm checking it out as you're walking me through it, and on their website, they have "Say goodbye to Spinnageddon." And that's very cute. [laughs]
CHRIS: There's some fundamental stuff that I think we've just kind of as a web community, we made some trade-offs that I personally really don't like. And that idea of just spinners everywhere just sending down a ball of application logic and a giant JavaScript file turning it on on someone's computer. And then immediately, it has to fetch back to the server. There are just trade-offs there that are not great. I love that Remix is sort of flipping that around.
I will say, just to sort of couch the excitement that I'm expressing right now, that Remix exists in a certain place. It helps with building complex UIs. But it doesn't have anything in the data layer. So you have to bring your own data layer and figure out what that means. We have ActiveRecord within Rails, and it's deeply integrated. And so you would need to bring a Prisma or some other database connection or whatever it is. And it also doesn't have more sort of full-featured framework things. Like with Rails, it's very easy to get started with a background job system. Remix has no answer to that because they're like, no, no, this is what we're doing over here.
But similarly, security is probably the one that concerns me the most. There's an open conversation in their discussion portal about CSRF protection and a back and forth of whether or not Remix should have that out of the box or not. And there are trade-offs because there are different adapters that you can use for auth. And each would require their own CSRF mitigation. But to me, that is the sort of thing that I would want a framework to have.
Or I'd be interested in a framework that continues to build on top of Remix that adds in background jobs and databases and all that kind of stuff as a complete solution, something more akin to a Rails or a Laravel where it's like, here we go. This is everything. But again, having some of these more advanced concepts and patterns to build really, really delightful UIs without having to change out the fundamental way that you're building things.
STEPH: Interesting. Yeah, I think you've answered a couple of questions that I had about it. I am curious as to how it fits into your current tech stack. So you've mentioned that you're excited and that it's helpful. But given that you already have Rails, and Inertia, and Svelte, does it plug and play with the other libraries or the other frameworks that you have? Are you going to have to replace something to then take advantage of Remix? What does that roadmap look like?
CHRIS: Oh yeah, I don't expect to be using Remix anytime soon. I'm just keeping an eye on it. I think it would be a pretty fundamental shift because it ends up being the server layer. So it would replace Rails. It would replace the Inertia within the stack that I'm using. This is why as I started, I was like, Inertia is still my answer. Because Inertia integrates really well with Rails and allows me to do the sort of it's not progressive enhancement, but it's like, I want fancy UI, and I don't want to give up on Rails. And so, Inertia is a great answer for that. Remix does not quite fit in the same way. Remix will own all of the request-response lifecycle.
And so, if I were to use it, I would need to build out the rest of that myself. So I would need to figure out the data layer. I would need to figure out other things. I wouldn't be using Rails. I'm sure there's a way to shoehorn the technologies together, but I think it sort of architecturally would be misaligned. And so my sense is that folks out there are building...they're sort of piecing together parts of the stack to fill out the rest.
And Remix is a really fantastic controller, view, and routing layer. So routing, controller, view, I would say Remix has a really great answer to, but it doesn't have as much of the other stuff. Whereas in my case, Inertia and Rails come together and give me a great answer to the whole story.
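For a sense of what that division of labor looks like in practice, here's a minimal sketch (not Sagewell's actual code) of how Inertia slots into a Rails controller via the inertia_rails gem; the Event model and the component name are hypothetical:

```ruby
# app/controllers/events_controller.rb
# Rails still owns routing, the request-response lifecycle, and the data layer;
# Inertia only swaps template rendering for a client-side component.
class EventsController < ApplicationController
  def index
    # Hypothetical model; ActiveRecord handles persistence as usual.
    events = Event.order(created_at: :desc)

    # Render the client-side "Events/Index" component (Svelte, React, or Vue),
    # passing data down as props instead of rendering an ERB template.
    render inertia: "Events/Index", props: {
      events: events.as_json(only: [:id, :name, :starts_at])
    }
  end
end
```

In a Remix app, by contrast, the loader and action for this route would live in Remix itself, which is why adopting it would mean replacing Rails rather than plugging into it.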
STEPH: Got it. Okay, that's super helpful.
CHRIS: But yeah, again, I'm in very much the exploratory phase. I'm super intrigued by a lot of what I've seen of it and also just sort of the mindset, the ethos of the project as it were. That sounds fancy as I say it, but it's what I mean. I think they want to build from web fundamentals and then enhance the experience on top of that, and I think that's a really great way to go. It means that links will work. It means that routing and URLs will work by default.
It means that you won't have loading spinner Armageddon, and these are core fundamentals that I believe make for good websites and web applications. So super interested to see where they go with it. But again, for me, I'm still very much in the Rails Inertia camp. Certainly, I mean, I've built Sagewell on top of it, so I'm going to be hanging out with it for a while, but also, it would still be my answer if I were starting something new right now.
I'm just really intrigued by there's a new example out there in the world, this Remix thing that's pushing the envelope in a way that I think is really great. But with that, my now…what was that? My second or my third rave? Also called the positive rant, as we call it. But yeah, I think on that note, what do you think? Should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris is weathering through a slight lull, a holding period, where his team waits for new features to become available with some of the platforms they integrate with, and as they think out new facets of the platform they're building.
Steph has been thinking recently about working in isolation. It's a topic that Joël Quenneville pointed out to her and mentioned. Can engineers work in isolation and be successful?
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Always be singing.
STEPH: I can't remember if I've shared the story with you. But I had a beautiful little human moment with someone at airport security. Because when I travel with my mic, I always get stopped because there's the middle long, thin piece that looks like what you would screw on to a gun like for a silencer. And so [laughs] as he was going through, the person was looking at it, and then he called over a buddy. And then they called over another buddy, and there's like three TSA agents all looking at the X-ray screen. And finally, they're like, "Yeah, we need to flag it." So they moved it over.
And then he was digging through, and he pulled out the big metal piece. And I said, "It's for a microphone." And he's like, "Okay," and he kept looking, and then he finally found the microphone. And he lit up because I guess he wasn't really sure to believe me at first when I said it. But he lit up, and he was like, "Karaoke?" [laughs] I was like, "No, it's for podcasting."
CHRIS: But not 100% no because we do sing plenty on this show, so...
STEPH: I think that's what made me think of it. It was your singing. [laughs]
CHRIS: Yep. My wonderful, wonderful singing.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly Podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris? What's new in your world?
CHRIS: What's new in my world? We are in sort of a...what's the word? There's a bit of a lull right now, not like a big lull, but we had a bunch of clear work that came into the team, did a bunch of iterations, some testing, built some new features, et cetera. And now there's a small holding period basically where we wait for some new features to become available with some of the platforms that we integrate with and also as we think out some new facets of the platform that we're building.
So we've got this little bit of time here where we're not necessarily building out as many new novel features. But instead, as a dev team, we're taking this moment to be like, oh, cool, let's tie down. I want to make a sailing analogy here, but I don't know sailing. It's like tie down the somethings and batten the hatches, maybe. That sounds like a thing. [chuckles] But so we have a couple of projects going right now. We want to really accept the truth and lean into Sidekiq. So right now, we have a mix of ActiveJob and Sidekiq jobs. And they're confusing, and et cetera, et cetera.
So we want to kind of lean into that, upgrade dependencies, that sort of thing. We are, again, doing a little bit of work on the observability foundation of our system. So how do we know what's going on at runtime? Also, working on just some core features and functionality. We have done a little bit of an exploration into the event processing stuff, some of that that I've been talking about. It's actually been very interesting. So we're working with Customer.io as a platform, which is an omnichannel communication, behavior-based messaging sort of thing. So when a user does X, send them an email and then wait three days. And if they haven't responded, then do this other thing.
And I think I've said this in previous episodes; I'm so wildly impressed with that platform. They have done such a good job. And I know that good software doesn't happen in a vacuum. In fact, if we're being honest, a lot of the software out there is not very good. And not only do they do a good job, but it's across...there's a ton of functionality in Customer.io.
And it's interesting because we're finding ourselves leaning into it even more because it is such a solid platform and because it connects into our event system. Like, it's a segment destination, so all of our analytics events get piped into Customer.io, and then we can action on any of them. And the actions can be quite complicated.
And this is where we're getting into good idea, terrible idea space. And to be clear, this is still just an exploration. But we basically wanted a way to do more. There are a bunch of different actions that you can take so, like send an email, send an SMS, or there are a couple of other slightly fancier ones. You can trigger an event within the Customer.io system. You can actually do an arbitrary HTTP POST, PUT, PATCH, whatever, any web requests you want to make. So if you want to integrate with essentially anything else out there, you can do that. You can send some structured data over the wire.
And so we've now been like, okay, what if, and stay with me here, what if we use our analytic system and we send events whenever a user does something, and then that event eventually trickles down to Customer.io? Within that, we allow ourselves to respond to that event by emitting a different event within the system, within Customer.io. And then, via the webhook functionality, we fire that back to the Rails application. And then there we can do whatever we want.
And in a way, that sounds absurd because we're starting from our app, and then we're sending some events down, processing them in certain ways, sending it back to the app, and then maybe doing something. In particular, one of the things we want to do is richly formatted Slack alerts. And Customer.io has a Slack alert functionality, but they can't have any of the fancy stuff. They can't link to our customer in the admin dashboard. So we found that that functionality is particularly useful for our admin team. And so we're like, ah, this feels weird.
But if we were to do this loop out and back, then ideally, we get the power of Customer.io for non-technical users or non-engineering team users to configure workflows and to say, "When a user does this, I actually want to alert the admin team via Slack." And we want it to be rich and have buttons that you can click and all that kind of stuff. And although the thing that I just described seems complicated, is a word that I'll use for it, confusing at times, it isn't...like, I don't want to do all of that in the app.
I don't want the app to have to think about how do I wait three days? We technically can do that with Sidekiq, but it gets us in trouble and whatnot, whereas Customer.io that's a core concept for them. And so, again, very much exploration. This will probably be a future good idea, terrible idea segment. But that's been an interesting one to explore.
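As a rough sketch of the round trip Chris is describing (an illustration, not their actual implementation), the Rails side of that loop might look something like this; the route, the payload shape, and the SlackNotifier and admin_user_url helpers are all assumptions:

```ruby
# config/routes.rb (assumed route)
#   post "/webhooks/customer_io", to: "webhooks/customer_io#create"

# app/controllers/webhooks/customer_io_controller.rb
class Webhooks::CustomerIoController < ApplicationController
  # External webhook: there's no session, so skip Rails' CSRF token check.
  # In practice you'd also verify a shared signing secret here.
  skip_before_action :verify_authenticity_token

  def create
    # Assumed payload shape: Customer.io's webhook action lets you define the
    # JSON body yourself, so this mirrors whatever you configure in the workflow.
    user = User.find(params.require(:user_id))
    event_name = params.require(:event_name)

    # The part the built-in Slack action can't do: a richly formatted message
    # with a link back into the admin dashboard.
    SlackNotifier.post(
      text: "#{event_name} for #{user.email}",
      link: admin_user_url(user)
    )

    head :ok
  end
end
```

The appeal is that the "when X happens, wait three days, then..." decisioning stays in Customer.io, where the operations team can edit it, and the Rails app only has to answer a plain HTTP POST.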
STEPH: You have quite a talent: you preface something as a bad idea, and then you do a very good job of making it sound reasonable and good. [laughs] So it's interesting to be on that side of like, good idea, bad idea. It's like, I'm looking for the bad. And I have questions, but overall, [chuckles] you do a very good job of being very thoughtful and walking through why it makes sense or what are the benefits of it. So you answered some of my questions around why still send it to Customer.io versus just having it all in-house. So the fact that the admin team has access to it makes a lot of sense.
I want to clarify one point. So when you send it to Customer.io, Customer.io then needs to send a message back to your application. And then that's when you customize the Slack message. Do you need Customer.io to send that message, or could you just fire off an event to Customer.io to say, "Hey, capture this, but don't do anything with this. And then we're going to send the Slack message because we want to customize it."?
CHRIS: I think the key is that we want to leverage the fact that Customer.io is the platform that our operations team really is now becoming comfortable with and using for this behavioral automation workflow type logic. So that idea of when this event, you know, when this triggering event happens, if this condition is true, then respond in this way.
And so because Customer.io is the platform that A, is quite good at that and B, is where our admin team is now thinking about doing that, one thing that we might do let's say a user completes some action within the application. So they fill out a form to submit their interest in some new platform feature. Initially, what we might want to do there is alert ourselves to say, "Hey, this happened. Take some action." And then eventually, we may want to instead switch that over and send an email to the customer with the next steps that they need to do.
And the ability to gradually transition across that spectrum is really interesting to me, and again, Customer.io being the platform, sort of the hub for how we respond to these events. At the same time, I know that this feels like a generic message processing system that might be a Kafka queue somewhere else. And so I've got that in the back of my head of like, is this weird? I think it's a little weird. But it also, thus far as we're exploring it, is very approachable for the admin team, very familiar for them, and reasonably powerful.
And also, there's a drag and drop editor for the events and the payloads. And it knows for this event, here's the stuff that's available to you. And so the ability for our admin team to interact with that interface is really great. And we don't have to build it. We don't have to think about it. But I will say I've worked at so many different companies that have their ad hoc system that makes it easy to do generic X, Y, and Z. And it's bad, and it falls down. And it's impossible to know when anything happens.
And so, I've got a lot of concerns in the back of my head, which I will want to at least think through and understand the trade-offs that we're making if we pursue this path, but it is very interesting to me. So right now, a lot of this logic does live in the app. But it means that it requires a code change for anything that we want to do like this. We want to have a Slack alert whenever X happens. Now, the developers are in the loop for all of that. And really, it's the operations team that owns the decisioning on that.
And so if they can also self-serve and instrument the action, the alert, the follow-up, the whatever it is, if we can give them those primitives in a platform that they already understand, that sounds nice. I'm intrigued, is what I'll say. So anyway, while we're in this lull period, we are trying out some fun stuff like that and exploring those sorts of things.
STEPH: I like that perspective that you're putting on it, or at least the one that's standing out to me is the concept of ownership is like who gets to own these actions. But then beyond that, that's the part where I feel a little squirmy is, so we are using this third-party tool because it makes life easier. But then, at what point when we start building software around this third-party tool to then customize it back on our own side. Then if someone is in Customer.io, so if an admin user is in there and then they trigger an event, is there going to be confusion as to what's going to happen? And can they retry an event?
Because I'm realizing my initial suggestion where it was like, hey, notify Customer.io that this is there but then also manage sending the Slack message that would prevent them from being able to have that retry capability. And that may be very much worth preserving. So then it's understood that hey, if you want to manage this, we are giving you full access to manage this work. We may customize it, but this is still the interface in which you go through to have three tries or to manage that workflow or these actions that get sent to users.
CHRIS: Yeah. I think you've perfectly highlighted the why this might not be a great idea or at least the concerns to explore before adopting this more thoroughly. And even just the idea of adopting it more thoroughly, like, how tied into the system are we? How business-critical does this new external piece of software become? Because I've seen that to be really problematic where there are organizations that I've worked with that are like, "Oh God, we would love to move off of system X. But unfortunately, it's basically the one thing holding this business up." And I'm like, yeah, I get that. And that happens. So yeah, being really intentional with that.
And that's why we're very much in an exploration place. But we have a bunch of stuff that we've done that required engineering work. And we're now seeing like, actually, could we map this into this other tool? And can we build the set of primitives in that space that now this team can own that whole experience? And then critically, can they debug it? Will we know when something goes wrong, et cetera? Those are always parts. At this point, I don't think I can just imagine a happy path. And I hope this isn't true for the rest of my life.
But the work as a software developer, especially after having done a couple of rounds of it and as a consultant, I just imagine failure modes. It's all I do. I'll be like, okay, we just need to wire X up to Z, and then we need to fire off a request. And then, once we get the message back, then we can process them. I'm like, right. You just described 13 things that can go wrong. Now let's imagine each of the different failure states because that's all I'm going to do.
Who cares about the happy path? Those are easy. Those write themselves. It's all of the failure modes that I need to think about. And someday, when I retire, and I go to a log cabin in the woods, and I don't talk to people for a while, maybe I'll go back to a place of only happy paths. But that is not my truth right now.
STEPH: I can't tell you how many people in my personal life I have annoyed so much [laughs] because all I see are failure modes. And one, that's a delightful t-shirt. [laughs] I'd love to have that. And then yeah, I feel you because there are so many times where someone is...like, I'm with someone who's like a big idea person. And so they're just launching into what-ifs, and we did this. And I can't help it, and I have learned to help it.
But it has been a struggle with some strong feedback from family and friends to reel it in. Because then I will start to think through okay, well, what's the details? And I have some questions. What happens when this happens? And yeah, all I see are failure modes. [laughs] It is very true for me too, and not always...not so great. So I, too, shall get a log cabin one day and try to forget all of that.
CHRIS: I will say I painted that as a particularly glib version of myself. But some of what I'm doing right now, particularly joining an early-stage startup and taking the role of CTO, was very much to try and intentionally resist that. Because right now, I have to be really careful with how much of the potential edge cases and whatnot. I'm considering exactly how robust of a platform are we building? Very is the answer. But what about extremely? Because extremely is an option but extremely costs four times as much. Mostly in time being the critical element there.
And so part of the work that I'm doing now is just trying to push on those edges, push on those boundaries, find the places where we can move quickly, and still build a robust platform because frankly, we're building...Sagewell is a financial platform under the hood, and I can't be flippant with that. We as a team have to be really careful with the thing that we're building. But we also have to move quickly.
We have to be able to iterate. We have to be able to build something and try it out and see if it works. And then, if it doesn't, maybe shelve it and pull it out of the codebase. And it has been a real challenge, but it was the challenge that I wanted here. And so I've been enjoying that work, but it has been a stretch, a growth moment, let's call it.
STEPH: I don't know if you've shared that particular goal with me in transitioning to a CTO role, but I really, really like it. One, it's very aligned with who you are. You're very thoughtful, and you look for areas to push and ways to do that. And then I also struggle in those areas, and thoughtbot specifically and consulting has helped push me in directions, push me out of my comfort zones but still in a safe space where I have other people to talk to as I'm making those decisions and pushing past the comfort areas that I have.
But one of them is that I will initially think things have to be perfect or really planned. And I had a really nice conversation with Chad Pytel, who is one of the Founders of thoughtbot and also CEO and host of the Giant Robots Smashing Into Other Giant Robots Podcast. And we were chatting about a new offering that thoughtbot is bringing to the market. And it's one that I've been involved with. And I started getting really in the weeds of like, but we really have to plan out how this is going to look and all the actions that need to take place before we can really sell this type of engagement to a new client.
And as I was going through this list of worries, when I was done, he mentioned he's like, "All of those are valid and something to consider." He's like, "But we don't have any customers yet." So the first part is we feel that we are in a space where we have enough information to get started. And it's something that we've done before. And then, we'd like to see where customers align with us on this need because we're going to end up shaping this work in response to what their needs are. And so, we can't really begin that shaping until we understand more of what people are looking for.
I was like, oh yeah, that's such a nice point. It just reminded me in regard to pushing those boundaries of yes, we need planning upfront, and we look for failure modes. But then there's also an important aspect of then finding ways to keep moving forward and getting more feedback and then balancing those two.
CHRIS: Yeah, I think that's definitely right: as always, anchoring it to the customer. What is it that they need? How do we connect with them and hear from them? And ideally, keep those feedback loops as short as possible. That's the game, and everything else fits around that. But yeah, so we're trying some stuff. We'll see how it goes. I will certainly report back, depending on how it plays out. But that's a little bit of what's up in my world. What's up in your world?
STEPH: I have been thinking recently about working in isolation. It's a topic that Joël Quenneville, who's another thoughtboter and has been on the show a number of times, actually pointed out to me. And so, I wanted to bring that here and share it with you because I'd love to get some of your thoughts on this as well. But I've typically had the viewpoint that when developers are sent off to work on a large, nebulous task, it's a recipe for disaster, and almost everyone's going to lose in that scenario. And it tends to be a combination of isolation, very distant due dates, and loosely defined scope that leads to those really poor results.
However, as developers, it's not inconceivable for us to land in that position. And it's very similar to my current project, which I'm working on with Joël, where we were given a very fuzzy project with some really aggressive goals, and the engagement is going really well. So that led Joël and me to wonder why is this working? This is the thing that we said that people should never do, but it's actually going quite well for us.
So reflecting upon some of the things that are working well for us, even though we are in more of an isolated state than we would typically work, some of the things that I've been reflecting on or some of the strategies I should say that we've applied to this situation is number one, we did work hard to plug into an existing team. So when we joined, we joined more of an ad hoc volunteer team.
And in everybody's spare time, those individuals were then contributing to the CI process in terms of trying to speed things up and improve things for the rest of the team. But otherwise, there wasn't really a team. There wasn't much structure to it. So it felt like everybody was very much off in their own world doing their own thing, occasionally putting up some code changes for review. And then you had to gain a lot of context to understand what it was that they were doing.
So one of the things that I advocated for early on that I thought was more of just my personal preference but I think has actually worked well in regards to the success of the project as well is to plug into an existing team. So even if you are not working with that team on their day-to-day tasks, you want to have more people to interact with and more people to share your context with. So you are essentially reducing the isolation: you're no longer these two people who are off in a corner working on something, where nobody has any idea what you're doing, and only one person is getting a status update.
There is now a whole channel or team of people that have some insight as to what's going on. And they can also really unblock you for when you get stuck because then if you do have a question, but there's that one person who has been like your go-to person for this whole project, if they're out on vacation, or if they leave, or just something happens, you're suddenly blocked.
And you don't know who to go to because you've been part of this larger company, but you haven't interacted with anybody outside of that one person. So at least if you're plugged into another team, you've immediately got some friends or some other people to go to and say, "Hey, I'm not sure who can help me with this, but I have this problem." And then, from there, you can get more help.
CHRIS: This is super interesting. To start, I really like that you're framing this in terms of this is a thing that we often recommend against or see as an anti-pattern, and yet in this particular case, it's working. Let's look at that. Because I think the things that you're like, huh, that's interesting. That phrase "Huh, that's interesting" is very interesting. It often highlights like oh, something is behaving counter to how we would expect it to, so let's dig in and explore that. And so I love that that was the reaction and then sort of the conversation that spilled out of that.
I'm also not super surprised that the combination of you and Joël were able to find a way to make this successful because you are two of the most capable developers that I've worked with but also particularly excellent communicators and advocates for the work that you're doing and the way that one should do the work. So the idea that there's a situation that may not be the ideal mode of working and that you're able to take that and say, "What if we shift it just a little bit and make it a little bit more manageable and whatnot?" So unsurprised, frankly, that you found a way collectively to make this a little bit better.
And then I think yeah, it sounds like you're doing the things...so just like, we're in isolation, hmm, that doesn't seem great. Let's unisolate and connect to some people, and that just feels so true. I'm very interested to hear, though. I'm guessing there's more to this story or other things that you've done. Are there other tactics or ways that you've shifted this around?
STEPH: Yeah, there's a couple more. So this is one that (And thank you for the kind words.) this was one that I think Joël is really exceptional at. So Joël is really good at building diagrams and graphs and then sharing that with the team as sort of like we've spent a couple of days understanding this big, messy concept. Here's a nice condensed graph that shows how we went about understanding this. And then here's the big overall picture of what we've learned from this, which has been wonderful for so many reasons.
And every time that we share something with the team, one, it just helps build camaraderie, especially in remote days, it just builds camaraderie on hey, we're all online. And we're working. And here's the thing that I'm working through or struggling with or something that I learned. I often do that, especially when I get frustrated and something goes wrong. I love to share the I did this today. It went terribly. [laughs] Let me tell you about it, so you're aware of it in case it helps you.
And specifically, the diagrams are really nice because then other people can just see and appreciate it, or they can point something out that we didn't know. Or they'll see a different angle because they're more familiar with the system. So they can say, "Oh yeah, that totally makes sense," or "I had no idea that was happening." So that's been a really nice way to engage with the team.
And so, essentially, the little title for that strategy is just overshare. Just share all the things that you're doing and find ways to make it digestible for the team so then they can go along on this big, nebulous journey with you. And you can also put it in threads so that way, you're not flooding a channel, but then people can opt-in to that oversharing if they would like more insight into the work that you're doing.
CHRIS: Opt-in to that oversharing. [laughs]
STEPH: Exactly. I mean, it's not forced oversharing; it's just that it's here if people would like it. That was a really nice compliment that some other thoughtboters received from their client team: someone had mentioned that there's so much information that's getting shared from the thoughtboters that they had trouble keeping up. And they really liked that. They really appreciated that they could then go check out this channel or these threads and see exactly the type of work that was happening and the outcomes of it. And then they could just check it maybe beginning of the day, end of the day and get that knowledge dump.
Some of the other strategies that we've used are giving ourselves mini-goals to accomplish as part of the larger, more nebulous task. So as we have this very large goal in mind, it's like, where's the small piece? Where's an entry point? What's a task or a goal that we can define? And then we want to break that down into what questions do we need to ask? How can we start moving in this direction? And we want to find something that has an answer.
So each time that we start researching once we've gotten to that point...and this is hard. I feel like people may know that, but I should just say that this is hard to take something nebulous and then find the entry point and break down some goals. And that has been one of the wonderful parts of then having a buddy for this type of project because then we can bounce ideas off of each other.
And we can also help the other person not go too deep into an area. Because I have definitely had moments where I've been very passionate about like, "We need to do this," and Joël is just like, do we? And I'm like, "Yeah." And he's like, "Do we though?" And I'm like, "I guess not. I just really, really want to." [laughs] It's been very helpful to have a partner balance some of those feelings.
And once you can break down some of that amorphous problem into those smaller goals, then you can also create tickets, which is also a really nice way to then surface the work that you're doing. You can document how you're researching, document the question. And then once you have that question of what you're in search of, it's so nice because then once you find the answer, that's immediately a good moment to pause and reflect.
So I think in a recent episode, we were chatting about this where Joël and I were trying to understand why the tests weren't being balanced properly across each process that was available. And we found the answer, and we started immediately digging into fixing it or solutions. And then it took us a moment to go back and say, "Actually, this ticket is really just about understanding the problem, not fixing the problem." And so that was a nice; now that we understand the problem, let's go back high-level to define our next goal from this big, nebulous task because maybe fixing that balancing is the right thing to do, but maybe not, and we just need to reconsider.
So for that portion of breaking down a big, nebulous task and then identifying smaller tasks that you can achieve, time-boxing has been huge for us in terms of what's something that we can accomplish this week, or what's something we can accomplish today that will then move us forward? And then making sure that we are setting deadlines for ourselves. So normally, this is another area where it's like, huh, that's interesting. I'm not a big believer in deadlines. But I do think self-imposed deadlines are really helpful.
CHRIS: I'm intrigued to hear you say that you're not a big fan of deadlines because I assume we're actually more aligned on this. But deadlines that are arbitrary and also come with fixed scope and other immovable things, yes, those are the worst in the world. But deadlines that we set for ourselves, and then we use that as a mechanism to hone and refine the scope that we're going to get out the door by that deadline, I find those incredibly useful. And that sounds like that's the same sort of thing you have going on here is like by saying we're willing to expend this much to get a result, that defines the work going into it.
STEPH: Yeah, that's fair. Everything that you said is true, too; in regards to, I'm realizing I default that when I hear the word deadline, I'm so used to teams having deadlines that are defined by other individuals that are not part of the work. And as you said, the scope has already been defined, and it can't be changed. And it's all of the bad things that then go with it.
So when I think of deadlines, I immediately think of that type of deadline versus the more self-imposed, yes, we can revisit, yes, the team has bought in and understands why this is important. Those types of deadlines are very helpful. It's that first part that I default to that I think of immediately, and I need some reassurance that that is not the type of deadline that I'm looking at or being forced to meet.
I have a very similar feeling for estimates. Like, those both fall in the same category for me is; as soon as I hear estimation and deadline, I get nervous. And then I just need to understand the purpose of both and who is setting both of those and the communication around them. And then what does that failure mode look like, the one that we're always looking for? So yeah, deadlines and estimations fit into that. Initially, I'm very hesitant and cautious, but I think they're both very good tools.
CHRIS: Yeah, I feel like those are very closely related. And they're definitely tools that can be used for great good or for great evil. And so, ideally, we advocate for the great good usage. But more generally, I love, again, the sharing around the process and what's worked for you in this less typical or often somewhat problematic workflow. I will say, again, so I gave you the series of compliments earlier, and I stand by those compliments for you and Joël.
But I think also the sort of related aspect is that you two are both quite senior, very capable, very comfortable suggesting changes, suggesting workflows. So I think the potential dangers of isolation are still very much there. And the fact that the two of you have been able to find a way to work more effectively and perhaps change the terms of things just a little bit to make this effective is A, unsurprising but B, not something that I would expect of every team.
I think you've described a wonderful list of the specifics as to how you did that. And ideally, if folks that are perhaps a little earlier on in their career are sent out for a month with a wild project, and they're sent to do it in isolation, hopefully, they can borrow from that list. But again, I do think this is a thing that, from an organizational perspective, we should be very careful with when we're imposing this isolation on it because it takes two fantastic folks like you and Joël to break out of the shackles of it.
STEPH: The more we're talking about this, the more apparent it's also becoming that I started with this; how do you manage isolation? And my answer is you get out of it. [laughs] Get out of isolation as quickly as possible. Someone thought it was a good idea to put you there or a good idea to structure it that way. Or maybe they didn't mean it intentionally, but that's how things then shook out. So that's really what a lot of those strategies are about is, then how do I get myself out of this corner that you put me in? Because nobody put Stephanie in a corner.
So essentially, that's what all the strategies are: looking for ways to say, hey, I'm isolated, but I really don't want to be, and it's dangerous for me to be isolated in this way. Even as a more senior, capable developer, it's more likely that things could go wrong: miscommunications, misaligned expectations. So I need to find ways to then bring the work that I'm doing to make it more relevant to other people on the team. So then we can have more overlap, or at least I can share a lot of the work that's being done.
CHRIS: Yeah, absolutely. I think with that wonderful summary and, frankly, utterly fantastic movie reference, what do you think? Should we wrap up?
STEPH: Let's do it. Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Another toaster strudel debate?! Plus, the results are in for the most listened-to podcast in the RoR community! :: drum roll ::
Steph has a "Dear Gerrit" message to share. Chris has a follow-up on mobile app strategy.
The Bike Shed: 328: Terrible Simplicity
When To Fetch: Remixing React Router - Ryan Florence
Virtual Event - Save Time & Money with Discovery Sprints
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: thoughtbot's next virtual event "Save Time & Money with Discovery Sprints" is coming up on June 17th, from 2 - 3 PM Eastern. It's a discussion with team members from product management, design, and development. From a developer perspective, topics will include how to plan a product's architecture, both the MVP and future versions, how to lead tech spikes into integrations, and how to conduct build vs. buy reviews of third-party providers. Head to thoughtbot.com/events to register; the event is June 17th, 2 - 3 PM ET. Even if you can't make it, registering will get you on the list for the recording.
CHRIS: We're the second-best. We're the second-best. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: I'm very happy to report that I picked up a treat from the store recently. So while I was in Boston and we were hanging out in person, we talked about Pop-Tarts because that always comes up as a debate, as it should. And then also Toaster Strudels came up, so I now have a package of Toaster Strudels, and those are legit. Pop-Tart or Toaster Strudel, I am team Toaster Strudel, which I know you're going to ask me about icing and if I put it on there, so go ahead. I'm going to pause. [laughs]
CHRIS: It sounds like I don't even need to say anything. But yes, inquiring minds want to know.
STEPH: I think that's also my very defensive response because yes, I put icing on my Toaster Strudel.
CHRIS: How interesting. [laughs]
STEPH: But it feels like a whole different class of pastry. So I'm very defensive about my stance on Pop-Tarts with no icing but Strudel with icing.
CHRIS: A whole different class of pastry. Got it. Noted. Understood. So did you travel? Like, were these in your luggage that you flew back with?
STEPH: [laughs] Oh no. They would be all gooey and melty. No, we bought them when we got back to North Carolina. Oh, that'd be a pro move; just pack little individual Strudels as your airplane snack. Ooh, I might start doing that now. That sounds like a great airplane snack.
CHRIS: You got to be careful though if the icing, you know, if it's pressurized from ground level and then you get up there, and it explodes. And you gotta be careful. Or is it the reverse? It's lower pressure up in the plane. So it might explode.
STEPH: [laughs] Either way, it might explode.
CHRIS: Well, yeah. If you somehow buy a packet of icing that is sky icing that is at that pressure, and you bring it down, then...but if you take it up and down, I think it's fine. If you open it at the top, you might be in danger. If you open icing under the ocean, I think nothing's going to happen. So these are the ranges that we're playing with.
STEPH: I will be very careful sky icing and probably pack two so that way I have a backup just in case. So if one explodes, we'll be like, all right, now I know what I'm working with and be more prepared for the next one.
CHRIS: That's just smart.
STEPH: I try to make smart travel decisions, Toaster Strudels on the go. Aside from travel treats and sky icing, I have some news regarding Planet Argon, a Ruby on Rails consultancy, who just published this year's Ruby on Rails community survey results. And so they list a lot of fabulous different topics in there. And one of them includes a learning section that highlights the most listened-to podcasts in the Ruby on Rails community as well as blogs and some other resources. And Bike Shed is listed as the second most listened-to podcast in the Ruby on Rails community, so whoo, golf clap.
CHRIS: Fantastic.
STEPH: And in addition to that, the thoughtbot blog got a really nice shout-out. So the thoughtbot blog is in the number two spot for the most visited blogs in the community. In the first spot is Ruby Weekly, which is like, you know, okay, that feels fair, that feels good. So it's really exciting for the thoughtbot blog because a lot of people work really hard on curating and creating that content. So that's wonderful that so many people are enjoying it.
And then I should also highlight that for the podcast in first place is Remote Ruby, so congrats to Chris, Jason, and Andrew for grabbing that number one spot. And Brittany Martin, host of the Ruby on Rails Podcast, along with Brian Mariani, Jemma Issroff, and Nick Schwaderer, are in the number three spot. And some people say that Ruby is losing steam but look at all that content and all those highly ranked podcasts. I mean, we like Ruby so much we're spending time recording ourselves talking about it. So I say long live Ruby, long live Rails.
CHRIS: Yes. Long live Ruby indeed. And yeah, it's definitely an honor to be on the list and to be amongst such other wonderful shows. Certainly big fans of the work of those other podcasts. We even did a joint adventure with them at one point, and that was a really wonderful experience, so yeah, honored to be on the list alongside them. And to have folks out there in the world listening to our tech talk and nonsense always nice to hear.
STEPH: Yeah. You and I show up and say lots of silly things and technical things into the podcast. The true heroes are the ones that went and voted. So thank you to everybody who voted. That's greatly appreciated. It's really nice feedback. Because we get listener responses and questions, and those are wonderful because it lets us know that people are listening. But I have to say that having the survey results is also really nice. It lets us know people like the show.
Oh, but I did go back and look at some of the previous stats because then I was like, huh, so I'm paying attention. I looked at this year's, and I was like, I wonder what last year’s was or the year before that. And I think this survey comes out every two years because I didn't see one for 2021. But I did find the survey results for 2020, which we were in the number one spot for 2020, and Remote Ruby was in the second spot.
So I feel like now we've got a really nice, healthy podcasting war situation going on to see who can grab the first spot. We've got two years, everybody, to see who [laughs] grabs the number one spot. That's a lot of prep time for a competition.
CHRIS: Yeah, I feel like we should be like, I don't know, planning elaborate pranks on them or something like that now. Is that where this is at? It's something like that, I think.
STEPH: I think so. I think this is where you put like sky frosting inside someone's suitcase, and that's the type of prank that you play. [laughs]
CHRIS: The best of pranks.
STEPH: We'll definitely put together a little task force. And we'll start thinking of pranks that we all need to start playing on each other for the podcasting wars that we're entering for the next few years. But anywho, what's going on in your world?
CHRIS: Let's see, what's going on in my world? A fun thing happened recently. I had a chance to reflect back on some architectural choices that we've made in the Sagewell platform. And one of those specific choices is how we've approached building our native mobile apps. We made what some listeners may remember is an interesting set of choices. In particular, in Episode 328, which we'll include a link to in the show notes, I shared with you the approach that we're doing, which is basically like, Inertia is great, web user great. We like the web as a platform. What if we were to wrap it in a native shell and find this interesting and somewhat unique hybrid trade-off point?
And so, at that point, we were building it. We had most of it built out, and things were going quite well. I think we maybe had the iOS app in the store and the Android app approaching the store or something like that. At this point, both apps have been released to the store, so they are live. Production users are signing in. It's wonderful. But I had a moment in the past couple of weeks to reassess or look at that set of choices and evaluate it. And thankfully, I'm happy with the choices that we've made. So that's good.
But to get into the specifics, there were two things that happened that really, really framed the choice that we made, so one was we introduced a major new feature. We basically overhauled the first-run experience, the onboarding that users experience, and added a new, pretty fundamental facet to the platform. It's a bunch of new screens, and flows, and error states, and all of this complexity. And in the process, we iterated on it a bunch. Like, first, it looked like this, and then we changed the order of the screens and switched out the error messages, and et cetera, et cetera.
And I'll be honest, we never even thought about the mobile apps. It just wasn't even a consideration. And interestingly, as a final check before going fully live and releasing this out to the full production audience, we did spot-check it in the mobile apps, and it didn't work. But it didn't work for a very specific, boring, technical reason that we were able to resolve. It has to do with iframes and WebViews and embedded something, something. And we had to set a flag. Thankfully, it was solvable without a deploy of the native mobile apps. And otherwise, we never thought about the native apps.
Specifically, we were able to add this fundamental set of features to our platform. And they just worked in native mobile. And they were the same as they roughly are if you're on a mobile WebView or if you're on a desktop web, you know, slightly different in terms of form factor. But the functionality was all the same. And critically, the error states and the edge cases and the flow, there's so much to think about when you're adding a nontrivial feature to an app. And the fact that we didn't have to consider it really spoke to the choice that we made here.
And again, to name it, the choice that we made is we're basically just reusing the same WebViews, the same Rails controllers, and the same Svelte components under the hood, so essentially the same view layer as well. And we are wrapping that in a native iOS shell. It's a Swift application shell, and on Android, it's a Kotlin application shell. But under the hood, it's the same web stuff. And that was really great.
We just got these new features. And you know what? If we have to rip that whole set of functionality out, again, we won't need to deploy. We won't need to rethink it. Or, if we want to subtly tweak it, we can do that. If we want to think about feature flags or analytics, or error states or error reporting, all of this just naturally falls out of the approach that we took. And that was really wonderful.
STEPH: That's super nice. I also love this saga of like, you made a choice, and then you're coming back to revisit and share how it's going. So as someone who's never done this before, in regard to wrapping an application in the manner that you have and then publishing it and distributing it that way, what does that process look like? Is this one of those like you run a command, and literally, it's going to wrap the application and then make it hostable on the different mobile app stores? Or what's that? Am I oversimplifying the process? What does that look like?
CHRIS: I think there are a lot of platforms or frameworks I think would probably be the better word like Capacitor is something that comes to mind or Ionic or Expo. There are a handful of them that are a little more fully featured in what they provide. So you just point us at your React Views and whatnot, and we'll wrap that up, and it'll be great. But those are for, I may be overgeneralizing here, but my understanding is those are for more heavy client-side bundles that are talking to a common API. And so you're basically taking your same rich client-side application and bundling that up for reuse on the native app, the native app platforms.
And so I think those do have some release to the store sort of thing. In our case, we went a little bit further with that integration wrapper thing that we built. So that is a thing that we maintain. We have a Sagewell iOS repo and a Sagewell Android repo. There's a bunch of Swift and Kotlin code, respectively, in each of them, and we deploy to the stores manually. We're doing that whole process. But critically, the code that is in each of those repositories is just the bridge glue code that says, oh, when this Inertia navigation event happens, I'm going to push a WebView to the navigation stack. And that's what that is.
I'm going to render the tab bar of buttons at the bottom with the navigation elements that I get from the server. But it's very much server-driven UI, is the way that I would describe it. And it's wrapping WebViews versus actually having the whole client bundle wrapped up in the thing. It's unfortunately subtle to try and talk through on the radio, but yeah. [laughs]
STEPH: You're doing great; this is helping. So if there's a change that you want to make, you go to the Rails application, and you make that change. And then do you need to update anything on that iOS repo? It sounds like you don't, which then you don't have to push a new update to the store.
CHRIS: Correct. For the vast majority of things, we do not need to make any changes. It's very rare for us to deploy the iOS or the Android apps or, to put it a different way, to push new releases to the store. It happens when we want to add a new feature to the sort of bridge layer that we built, but increasingly, those are rare. And now it's basically like, yeah, we're just wrapping those WebViews, and it's going great. And again, to name it, it's a trade-off. It's an intentional trade-off that we've made.
We're never going to have the richest, most deep platform integration, smooth experience. We are making a small trade-off on that front. But given where we're at as an organization, given how early we are, how much iteration and change, we chose an architecture that optimizes for that change. And so again, like what you just said, yeah, I can...you know how it's really nice to be able to deploy six times a day on a web app, and that's a very straightforward thing to do? It is not so straightforward in the native mobile world. And so, we now have afforded ourselves the ability to do that.
But critically, and this is the fun part in my mind, we have the trade-offs under our control. So if we were just like, it's just a WebView, and that's it, and we put it in the stores, and we're done, that is too far of an extreme in my mind. I think the performance trade-offs, the experience trade-offs, it wouldn't feel like a native app like in a deep way, in a problematic way. And so as an example, we have a navigation bar at the top of our app, particularly on iOS, that is native iOS navigation. And we have a tab bar at the bottom, which is a native tab UI element. I forget actually what it's called, but it's those elements.
And we hide the web application navigation when we're in the mobile context. So we actually swap those out and say, like, let's actually promote these to formal native functionality. We also, within our UI on the web, have a persistent button in the top right corner of your screen that says, "Need help? Reach out to your retirement advocate," who is the person that you get to work with. You can send questions, et cetera, et cetera. It's this little help sidebar drawer thing that pops out. And we have that as a persistent HTML button in the top corner of the web frame.
But when we're on native, we push that up as a distinct element in the native UI section. And then again, the bridge that I'm talking about allows for bi-directional communication between the JavaScript side and the native side or the native side and the JavaScript side. And so it's those sorts of pieces that have now afforded us all of the freedom to tinker, and we don't need to re-release where we're like, oh, we want to add a new weird button that does a thing in the WebView when you click on a button outside the WebView. We now just have that built-in.
STEPH: Yeah, I really like the flexibility that you're describing. When you promoted those elements to be more native-friendly so, like the navigation or the footer or the little get help chat, is that something that then your team implemented in like the iOS or the Kotlin repo? Okay, I see you nodding, but other people can't see that, so...[laughs]
CHRIS: Yeah. I was going to also say the words, but yes, those are now implemented as native parts. So the thing that we built isn't purely agnostic decoupled. It is Sagewell-specific; a lot of it is low-level. Like, let's say we want to wrap an Inertia app in a native mobile wrapper. Like, 90% of the code in it is that, but then there are little bits that are like, and put a button up there. And that button is the Sagewell button. And so it's not entirely decoupled from us. But it mostly is this agnostic bridge to connect things together.
STEPH: Yeah, the way you're describing it sounds really nice in terms of you're able to get out the app quickly and have a mobile app quickly that works on both platforms, and then you're still able to deploy changes without having to push that. That was always my biggest mental, or emotional hurdle with the idea of mobile development was the concept of that you really had to batch everything together and then submit it for review and approval and then get it released.
And then you got to hope people then upgrade and get the newest version. And it just felt like such a process, not that I ever did much of it. This was all just even watching like the mobile team and all the work that they had to do. And I had sympathy pains for them. But the fact that this approach allows you to avoid a lot of that but still have some nice, customized, more native elements. Yeah, I'm basically just recapping everything you said because I like all of it.
CHRIS: Well, thank you, friend. Like I said, I've really enjoyed it, and similar to you, I'm addicted to the feedback loop of the web. It's beautiful. I can deploy ten times or however many I want. Anytime I want, I can push out a new version. And that ability to iterate, to test, to explore, to tweak, to not have to do as much formal testing upfront because I'm terrified that if a bug sneaks out, then, it'll take me two weeks to address it; it just is so, so freeing. And so to give that up moving into a native context.
Perhaps I'm fighting too hard to hold on to my dream of the ability to rapidly iterate. But I really do believe in that and especially for where we're at as an organization right now. But, and a critical but here, again, it's a trade-off like anything else. And recently, I happened to be out and about in the town, and I decided, oh, you know what? Let me open up the app. Let me see what it's like. And I wasn't on great internet. And so I open the app, and it loads because, you know, it's a native app, so it pops up.
But then the thing that actually happened is a loading spinner in the middle of the screen and sort of a gray nothing for a little while until the server request to fetch the necessary UI elements to render the login screen appeared. And that experience was not great. In particular, that experience is core to the experience of using the app every single time. Every time you use it, you're going to have a bad time because we're re-downloading that UI element. And there's caching, and there's things that could happen there to help with that. But fundamentally, that experience is going to be a pretty common one. It's the first thing that you experience when you're opening the app.
And so I noticed that and I chatted with the team, and I was like, hey, I feel like this is actually something that fixing this I think would really fundamentally move us along that spectrum of like, we've definitely made some trade-offs here. But overall, it feels snappy and like a native app. And so, we opted to prioritize work on a native login screen for both platforms. This also allows us to more deeply integrate. So particularly, we're going to get biometric logins like fingerprints or face scans, or whatever it is.
But critically, it's that experience of like, I open the Sagewell native app on my iOS phone, and then it loads immediately. And then I show it my face like we do these days, and then it opens up and shows me everything that I want to see inside of it. And it's that first-run experience that feels worth the extra effort and the constraints. Because now that it's native mobile, that means in order to change it, we have to do a deploy, not a deploy, release; that's what they call it in the native world. [laughs] You can tell I'm well-versed in this ecosystem.
But yeah, we're now choosing that trade-off. And what I really liked about this sort of set of things like the feature that we were able to just accidentally get for free on native because that's how this thing is built. And then likewise, the choice to opt into a fully native login screen like having that lever, having that control over I'm going to optimize for iteration generally, but where it's important, we want to optimize for performance and experience. And now we have this little slider that we can go back and forth.
And frankly, we could choose to screen by screen just slowly replace everything in the app with true native views backed by APIs. And we could Ship of Theseus style replace every element of the app with true native mobile things until none of the old bridge code exists. And our users, in theory, would never know. Having that flexibility is really nice given the trade-off and the choice that we've made.
STEPH: You said a word there that I missed. You said ship something style.
CHRIS: Ship of Theseus.
STEPH: What is that?
CHRIS: It's like an old biblical story, I want to say, but it's basically the idea of, like, you have the ship. And then some boards start to rot out, so you replace those boards. And then the mast breaks, you replace the mast. And slowly, you've replaced every element on the ship. Is it still the same ship at that point? And so it's sort of a philosophical question. So if we replace every single view in this app with a native view, is it still the same app? Philosophers will philosophize about it forever, but whatever. As long as we get to keep iterating and shipping software, then I'm happy.
STEPH: [laughs] Y’all philosophize. That's that word, right?
CHRIS: Yeah.
STEPH: And do your philosopher thing. We'll just keep building and shipping.
CHRIS: I don't know if I pronounced it right. It's like either Theseus or Theseus, and I'm sure I said the wrong one. And now that I've said the other, I'm sure both of them are wrong somehow. It's like a USB where there's up and down, and yet somehow it takes three tries. So anyway, I may have mispronounced it, and I may be misattributing it, but that's the idea I was going for.
STEPH: Well, given I wasn't even familiar with the word until just now, I'm going to give both pronunciations a thumbs up. I also really like how you decided that for the login screen, that's the area that you don't want people to wait because I agree if you're opening an application or opening...maybe it's the first time, maybe it's the 100th time. Who knows? But that feels important. Like, that needs to be snappy. I need to know it's responsive. And it builds trust from the minute that I clicked on that application. And if it takes a long time, I just immediately I'm like, what are y'all doing? Are y'all real? Do you know what you're doing over there?
So I like how you focused on that experience. But then once I log in, like if something is slow to log me in, I will make up excuses for the application all day where I'm like, well, you know, maybe it's my connection. It's fine. I can wait for the next screen to load. That feels more reasonable. And it doesn't undermine my trust nearly as much as when I first click on the app. So that feels like a really nice trade-off as well, or at least a nice area that you've improved while still having those other trade-offs and benefits that you mentioned.
CHRIS: To highlight it, you used a phrase there which I really liked. Like, it's building trust. If something's a little bit off in that first run experience every single time, then it kind of puts a question in the back of your head, maybe not even consciously. But you're just kind of looking at it, and you're like, what are you doing there? What are you up to, friend? Humans say to the apps they use on their phone. That's normal, right? When you talk...
But to name it, we've also done a round of performance work throughout the app. And so there are a couple of layers to it. But it was work that we had planned for a while, but we kept deferring. But now that we're seeing more usage of the native apps, the native apps experience the same surface area of performance stuff but all the more so because they may be on degraded network connections, et cetera.
And so this is another example where this whole thing kind of pays off. The performance work that we did affects everything. It affects the web. It's the same under the hood. It's let's reduce the network requests that we're making in the payloads that we're sending, particularly the network requests to upstream things, so like the banking partner that we're using and those APIs, like, collating all the data to then render the screen.
Because of Inertia, we only have a single sort of back and forth conversation via the API as opposed to I think it's pretty common to have like seven different APIs and four different spinners on the screen. We're not doing that, none of that on my watch. [chuckles] But we minimize the background calls to the other parties that we're integrating with. And then, we reduce the payload of data that we're sending on each request.
And each of those were like, we had to think about things and tweak and poke, but again it's uniform. So mobile web has that now, desktop web has that now. Android, iOS, they all just inherited it sort of that just happened one day without a deploy or release, without a release of either of the native mobile apps. We did deploy to the web to make that happen, but that's easy. I can do that a bunch of times a day.
One last thing I want to share as we're on this topic of trade-offs and levers, there was a really great conference talk that I watched recently, which was Ryan Florence of remix.run also React Router fame if you're familiar with him from that. But he was talking about the most recent version of Remix, which is their meta framework on top of React.
But they've done some really interesting stuff around processing data, fetching data, when and how to sequence that. And again, that thing that I talked about of nine different loading spinners on the screen, Remix is taking a very different approach but is targeting that same thing of like, that's not great for user experience. Cumulative layout shift being the actual number that you can monitor for this.
But in that talk, there are features that they've added to Remix as a framework where you can just decide, like, do we wait for this or do we not? Do we make sure we have all of the data, or do we say, you know what? Actually, this is going to be below the fold. So it's okay to defer loading this until after we send down the first payload. And then we'll kick in, and we'll do it from the client-side. But it's this wonderful feature of the framework that they're adding in where there's basically just a keyword that you can add to sort of toggle that behavior.
And again, it's this idea of like trade-offs. Are we okay with more layout shift, or are we okay with more waiting? Which is it that we're going to optimize for? And I really love that idea of putting that power very simply in the hands of the developers to make those trade-off decisions and optimize over time for what's important. So we'll share a link to that talk in the show notes as well.
But it was very much in the same space of like, how do I have the power to decide and to change my mind over time? That's what I want. But yeah, with that, I think that's enough of me updating on the mobile app. I'll continue to share as new things happen. But again, I'm at this point very happy with where we're at. So yeah, it's been fun. But yeah, what else is up in your world?
STEPH: I have a dear Gerrit message that I wrote earlier, so I want to share that with you. Gerrit is the system that we're using to manage code changes when we push them up; it's in a very similar competitive space to GitHub, GitLab, and Bitbucket. And so the team that I'm working with, we are using Gerrit. And Gerrit and I, you know, we get along for the most part. We've managed to have a working relationship. [chuckles] But this week, I wrote my dear Gerrit letter, and it's that I really miss being able to tell a story with my commit messages. That is the biggest pain that I'm feeling right now.
So for anyone that's less familiar or if you already are familiar with Gerrit, each change that Gerrit shows represents a single commit that's under review. And each change is identified by a Change-Id. So the basic concept of Gerrit is that you only have one commit per review. So if you were to translate that to GitHub terminology, every pull request is only going to have one commit, and so you really can't push up multiple.
And so, where that has been causing me the most pain is I miss being able to tell a story. So like even simple stories that are like, hey, I removed something that's not used. I love separating that type of stuff into its own commit just so then people can see that as they're going through review. Now, before I merge, I'm likely to squash, and that doesn't feel important that it needs to be its own commit. That's really just for the reviewer so they can follow along for the changes.
But the other one, I can slowly get over that one. Because essentially, the way I get around that is then when I do push up my code for review, is I then go through my change request, and then I just add comments. So I will highlight that line and say, "Hey, I'm removing this because it's not in use." And so, I found a workaround for that one.
But the one I haven't found a workaround for is that I don't push up my local work very often because I love having lots of local, tiny, green commits so that way I can know the progress that I'm at. I know where I'm headed. Also, I have a safe space to roll back to, but then that means that I may have five or six commits that I have locally, but I haven't pushed up somewhere. And that is bothering me more and more hour by hour the more I think about it that I can't push stuff up because it makes me nervous.
Because, I mean, usually, at least by the end of the day, I push everything up, so it's stored somewhere. And I don't have to worry about that work disappearing. Now I am working on a dev machine. So there is that aspect of it's technically...it's not even on my local machine. It is stored somewhere that I should still be able to access.
CHRIS: What's a dev machine? The way you're saying it, it sounds like it's a virtual machine, not like a laptop. But what's a dev machine?
STEPH: Good question. So the dev machine is a remote server or remote machine that then I am accessing, and then that's where I'm performing. That's where I'm writing all of my work. And then that's also kind of the benefit is everything is not local; it's controlled by the team. So then that also means that other teams, other individuals can help set up these environments for future developers.
So then you have that consistency across everyone's working with the same Rails version, or gems, or has access to the same tools. So in that sense, my work isn't just on my laptop because then that would really worry me because then I've got nowhere...it's not backed up anywhere. So at least it is somewhere it's being stored that then could be accessed by someone.
So actually, now, as I'm talking this through, that does help alleviate my concern about this a bit. [laughs] But I still miss it; I still miss being able to just push up my work and then have multiple commits. And I looked into it because I was like, well, maybe I'm misunderstanding something about Gerrit, and there's a way around this. And that's still always a chance. But from the research that I've done, it doesn't seem to be. And there are actually two very fiery takes that I saw that I have to share because they made me laugh.
When I was Googling the question of, like, "Can I push up multiple commits to one single Gerrit CR? Or is there just a way to, like, can I have this concept of like a branch and then I have many commits, but then I turn it into one CR?" Whatever the world would give me. What do they have? [laughs] I'm laughing just looking at this now. One of the responses was, have you tried squashing your commits into one commit? And I was like, [laughs] "Yeah, that's not what I had in mind, but sure."
And then the other one, this is the more fiery take. They were very defensive about Gerrit, and they wrote that "People who don't like Gerrit usually just hack shit together. They cut corners and love squashing commits or throwing away history. And those people hate Gerrit. Developers who care love it. It's definitely possible and easy to produce agile software." And I just...that made me laugh. I was like, cool, I'm a developer that cuts corners and loves squashing commits. [laughs]
CHRIS: So you don't care is what that take says.
STEPH: I'm a developer who does not care.
CHRIS: You know, Steph, I've worked with you for a while. And I've been looking for the opportunity to have this hard conversation with you. But I just wish you cared a little more about the software that you're writing, about the people that you're working with, about the commits that you're authoring. I just see it in every facet of your work. You just don't care. To be very clear for anyone listening at home, that is the deepest of sarcasm that I can make. Steph cares so very much. It's one of the things that I really enjoy about you.
STEPH: I mean, we had the episode about toxic traits. This would have been the perfect time to confront me about my lack of caring about software and the processes that we have. So winding down on that saga, it seems to be the answer is no, friend; I cannot push up multiple commits. Oh, I tried to hack it. I am someone that tries to hack shit together because I tried to get around it just to see what would happen. [laughs] Because the docs had suggested that each change is identified by a Change-Id. And I was like, hmm, so what if there were two commits that had the same Change-Id, would Gerrit treat those as patch sets?
Because right now, when you push up a change, you can see all the different patch sets, so that's nice. So that's a nice feature of Gerrit as you can see the history of, like, someone pushed up this change. They took in some feedback. They pushed up a new change. And so that history is there for each push that someone has provided. And I wondered maybe if they had the same Change-Id that then the patch sets would show the first commit and then the second commit. And so I manually altered the commits two of them to reference the same Change-Id.
And I have to say, Gerrit was on to me because they gave me a very nice error message that said, "Same Change-Id and multiple changes. Squash the commits with the same Change-Ids or ensure Change-Ids are unique for each commit." And I thought, dang, Gerrit, you saw me coming. [laughs] So that didn't work either. I'm still in a world of where I now wait. I wait until I'm ready for someone to review stuff, and I have to squash everything, and then I go comment on my CRs to help out reviewers.
CHRIS: I really like the emotional backdrop that you provided here where you're spending a minute; you're like, you know what? Maybe it's me. And there's the classic Seymour Skinner principle from The Simpsons. Am I out of touch? No, it's the children who are wrong. [laughs] And I liked that you took us on a whole tour of that. You're like, maybe it's me. I’ll maybe read up. Nope, nope. So yeah, that's rough.
There's a really interesting thing of tools constraining you. And then sometimes being like, I'm just going to yield control and back away and accept this thing that doesn't feel right to me. Like, Prettier does a bunch of stuff that I really don't like. It shapes code in a way, and I'm just like, no, that's not...nope, you know what? I've chosen to never care about this again. And there's so much utility in that choice. And so I've had that work out really well. Like with Prettier, that's a great example whereby yielding control over to this tool and just saying, you know what? Whatever you produce, that is our format; I don't care. And we're not going to talk about it, and that's that.
That's been really useful for myself and for the teams that I'm on to just all kind of adopt that mindset and be like, yeah, no, it may not be what I would choose but whatever. And then we have nice formatted code; it's great. It happens automatically, love it. But then there are those times where I'm like; I tried to do that because I've had success with that mindset of being like, I know my natural thing is to try and micromanage and control every little bit of this code.
But remember that time where it worked out really well for me to be like, I don't care, I'm just going to not care about this thing? And I try to not care about some stuff, which it sounds like that's what you're doing right here. [laughs] And you're like, I tried to not care, but I care. I care so much. And now you're in that [chuckles] complicated space. So I feel for you, Steph. I'm sorry you're in that complicated space of caring so much and not being able to turn that off [laughs] nor configure the software to do the thing you want.
STEPH: I appreciate it. I should also share that the team that I'm working with they also don't love this. Like, they don't love Gerrit. So when I shared in the Slack channel my dear Gerrit message, they're both like, "Yeah, we feel you. [laughs] Like, we're in the same spot," which was also helpful because I just wanted to validate like, this is the pain I'm feeling. Is someone else doing something clever or different that I just don't know about? And so that was very helpful for them to say, "Nope, we feel you. We're in the same spot. And this is just the state that we're in."
I think they have started transitioning some other repos over to GitLab and have several repos in GitLab, but this one is still currently using Gerrit. So they very much commiserate with some of the things that I'm feeling and understand. And this does feel like one of those areas where I do care deeply.
And frankly, this is one of those spaces that I do care about, but it's also like, I can work around it. There are some reasonable things that I can do, and it's fine as we just talked through. Like, the fact that my commits are not just locally on my machine already makes me feel better now that I've really processed that. So there are lower risks. It is more of just like a workflow. It's just, you know, it's crushing my work vibe.
CHRIS: Harshing your buzz.
STEPH: In the great words of Queen Elsa, I gotta let it go. This is the thing I'm letting go. So that's kind of what's going on in my world. What else is going on in your world?
CHRIS: Well, first and foremost, fantastic reference and segue. I really liked that. But yeah, let's see, [laughs] what else is going on in my world? We had an interesting thing happen last week. So we had an outage on the platform last week. And then we had an incident review today, so a formal sort of post-mortem incident review. There are a couple of different names that folks have given to these. But this is a practice that we want to build within our engineering culture is when stuff goes wrong, we want to make sure that we have meaningful conversations around to try to address the root causes.
Ideally, blameless is a word that gets used often in this context. And I've heard folks sort of take either side of that. Like, it's critical that it's blameless so that it doesn't feel like it's an attack. But also, like, I don't know, if one person did something, we should say that. So finding that gentle middle ground of having honest, real conversations but in a context of safety. Like, we're all going to make mistakes. We're all going to ship bugs; let's be clear about that. And so it's okay to sort of...anyway, that's about the process.
We had an outage. The specific outage was that we have introduced a new process. This is a Sidekiq process to work off a specific queue. So we wanted that to have discrete treatment. That had been running, and then it stopped running; we still don't know why. So we never got to the root-root cause. Well, we know what the mechanism was, which was the dyno count for that process was at zero. And so, eventually, we found a bunch of jobs backed up in the Sidekiq admin. We're like, that's weird.
And then, we went over to Heroku's configuration dashboard. And we saw, huh, that's weird. There are zero dynos processing this. That wasn't true yesterday. But unfortunately, Heroku doesn't log or have an audit trail around changes to those process counts. It's just not available. So that's unfortunate. And then the actual question of like, how did this happen? It probably had to be someone on the team. So there is like, someone did a thing. But that is almost immaterial because, again, people are going to do things, bugs will get shipped, et cetera.
So the conversation very quickly turned to observability and understanding. I think we've done a pretty good job of instrumenting error reporting and being quite responsive to that, making sure the signal-to-noise ratio is very actionable. So if we see a bug or a Sentry alert come through, we're able to triage that pretty quickly, act on it where it is a real bug, understand where it's a bit of noise in the system, that sort of thing. But in this case, there were no errors. There was no Sentry. There was nothing; there was the absence of something.
And so it was this really interesting case of that's where observability, I think, can really come in and help. So the idea of what can we do here? Well, we can monitor the count of jobs backed up in Sidekiq queue. That's one option. We could do some threshold alerting around the throughput of processed events coming from this other backend. There are a bunch of different ways, but it basically pushed us in the direction of doubling down and reinforcing the foundation of our observability within the platform.
So we're just kicking that mini-project off now, but it is something we're like, yeah, we feel like we could add some here. In particular, we recently added Datadog to the stack. So we now have Datadog to aggregate our logs and ideally do some metric analysis, those sort of things, build some dashboards, et cetera. I haven't explored Datadog much thus far. But my sense is they've got the whiz-bang things that we need here. But yeah, it was an interesting outage. That wasn't fun. The incident conversation was actually a good conversation as a team. And then the outcome of like, how do we double down on observability? I'm actually quite excited for.
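To make that concrete, here is a rough sketch of the kind of check that could surface this sort of silent failure. The queue name, the job itself, and the Datadog wiring below are illustrative assumptions, not the actual Sagewell setup:

    # Hypothetical recurring job reporting Sidekiq queue health to Datadog.
    require "sidekiq/api"
    require "datadog/statsd"

    class QueueHealthCheckJob
      include Sidekiq::Worker

      STATSD = Datadog::Statsd.new("localhost", 8125)

      def perform
        queue = Sidekiq::Queue.new("events")

        # Gauge how many jobs are waiting and how long the oldest one has waited.
        STATSD.gauge("sidekiq.events.size", queue.size)
        STATSD.gauge("sidekiq.events.latency_seconds", queue.latency)
      end
    end

An alert on either gauge flatlining at zero, or on latency climbing, is one way to catch the "absence of something" that error reporting alone won't see.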
STEPH: This is a fun moment for me because I have either joined teams that didn't have Datadog or have any of that sort of observability built into their system or that sort of dashboard that people go to. Or I've joined teams, and they already have it, and then nobody or people rarely look at it. And so I'm always intrigued between like what's that catalyst that then sparked a team to then go ahead and add this? And so I'm excited to hear you're in that moment of like, we need more observability. How do we go about this?
And as soon as you said Datadog, I was like, yeah, that sounds nice because then it sounds like a place that you can check on to make sure that everything is still running. But then there's still also that manual process where I'm presuming unless there's something else you have in mind. There's still that manual process of someone has to check the dashboard; someone then has to understand if there's no count, no squiggly lines, that's a bad thing and to raise a concern.
So I'm intrigued with my own initial reaction of, like, yeah, that sounds great. But now I'm also thinking about it still adds a lot of...the responsibility is still on a human to think of this thing and to go check it. Versus if there's something that gets sent to someone to alert you and say like, "Hey, this queue hasn't been processed in 48 hours. There may be a concern that actually feels nicer." It feels safer.
CHRIS: Oh yeah, definitely. I think observability is this category of tools and workflows and whatnot. But I think what you're describing of proactive alerting that's the ideal. And so it would be wonderful if I never had to look at any of these tools ever. And I just knew if I got, let's say, it's PagerDuty connected up whatever, and I got a push notification from PagerDuty saying, "Hey, go look at this thing." That's all I ever need to think about. It’s like, well, I haven't gotten a PagerDuty in a while, so everything must be fine, and having a deep trust in that.
Similar to like, if we have a great test suite and it's green, I feel confident deploying the sort of absence of an alert being the thing that I can trust. But right now, we're early enough in this journey that I think what we need to do is stand up a bunch of these different graphs and charts and metric analysis and aggregations and whatnot, and then start to squint at it for a while and be like, which of these would I be really concerned if it started to wibble?
And then you can configure the alerting around said wibble rate. And that's the dream. That's where we want to get to, but I think we've got to crawl, walk, run on this. So it'll be an adventure. This is very much the like; we're starting a thing. I'll tell you about it more when we've done it. But what you're describing is exactly what we want to get to.
STEPH: I love wibble rate. That's my new measurement I'm going to start using for everything. It's funny, as you're bringing this up, it's making me think about the past week that Joël Quenneville and I have had with our client work. Because a somewhat similar situation came up in regards where something happened, and something was broken. And it seemed it was hard to define exactly what moment caused that to break and what was going on.
But it had a big impact on the team because it essentially meant none of the builds were going through. And so that's a big situation when you've got 100-plus people that are pushing up code and expecting some of the build processes to run. But it was one of those that the more we dug into it, the more it seemed very rare that it would happen.
So, in this case, as a sort of a juxtaposition to your scenario, we actually took the opposite approach of where we're like; this is rare. But we did load up a lot of contexts. Actually, I was thinking back to the advice that you gave me in a previous episode where I was talking about at what point do you dig in versus try to stay at surface level? And this was one of those, like, we've spent a couple of days on getting context for this and understanding. So it felt really important and worthwhile to then invest a little bit more time to then document it.
But then we still went with the simplest approach of like, this is weird. It shouldn't happen again. We think we understand it but then let's add a little bit of documentation or wiki page around like, hey, if you do run into this, here are some steps that will fix everything. And then, if you need to use this, let somebody know because this is so odd it shouldn't happen. So we took that approach in this case where we didn't increase the observability. It was more like we provided a fire extinguisher very close to the location in case it happens. And so that way, it's there should the need arise, but we're hoping it just never gets used.
We're also in the process of changing how a lot of that logic works. So we didn't really want to optimize for observability into a system that is actively being changed because it should look very different in upcoming months. But overall, I love the conversations that you bring about observability, and I'm excited to hear about what wibble rates you decide to add to your Datadog dashboard.
CHRIS: There's a delicate art and science to the selection of the wibble rates. So I will certainly report back as we get into that work. But with that, shall we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph and Chris are recording together! Like, in the same room, physically together.
Chris talks about slowly evolving the architecture in an app they're working on and settling on directory structure. Steph's still working on migrating unit tests over to RSpec.
They answer a listener question: "As senior-level developers, how do you set goals to ensure that you keep growing?"
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Faking External Services In Tests With Adapters
Testing Third-Party Interactions
Jen Dary - On Future Goals
Charity Majors - The Engineer/Manager Pendulum
Charity Majors Bike Shed Episode
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What is new in my world? Actually, this episode feels different. There's something different about it. I can't quite put my finger on it. I think it may be that we're actually physically in the same room recording for the first time in two years and a little bit more, which is wild.
STEPH: I can't believe it's been that long. I feel like it wasn't that long ago that we were in The Bike Shed...oh, I said The Bike Shed studio. I'm being very biased. Our recording studio [laughs] is the more proper description for it. Yeah, two and a half years. And we tried to make this happen a couple of months ago when I was visiting Boston, and then it just didn't work out. But today, we made it.
CHRIS: Today, we made it. Here we are. So hopefully, the audio sounds great, and we get all that more richness in conversation because of the physical in-person manner. We're trying it out. It'll be fun. But let's see, to the normal tech talk and nonsense, what's new in my world? So we've been slowly evolving the architecture in the app that we're working on. And we settled on something that I kind of like, so I wanted to talk about it, directory structure, probably the most interesting topic in the world. I think there's some good stuff in here.
So we have the normal stuff. There are app models, app controllers, all those; those make sense. We have app jobs which right now, I would say, is in a state of flux. We're in the sad place where some things are ActiveJob jobs, and some things are Sidekiq workers. We have made the decision to consolidate everything onto Sidekiq workers, which is just strictly more powerful, so that's the direction we're going to go. But for right now, I'm not super happy with the state of app jobs, but whatever, we have that.
But the things that I like so we have app commands; I've talked about app commands before. Those are command objects. They use dry-rb do notation, and they allow us to sequence a bunch of things that all may fail, and we can process them all in a much more reasonable way. It's been really interesting exploring that, building on it, introducing it to new developers who haven't worked in that mode before. And everyone who's come into the project has both picked it up very quickly and enjoyed it, and found it to be a nice expressive mode. So app commands very happy with that.
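For listeners who haven't seen the pattern, a minimal sketch of a command object using dry-monads do notation might look something like this; the class, steps, and models are invented for illustration, not pulled from Chris's codebase:

    # Hypothetical command object; each yielded step can short-circuit with a Failure.
    require "dry/monads"

    class CreateSignup
      include Dry::Monads[:result, :do]

      def call(params)
        user    = yield create_user(params)
        account = yield create_account(user)
        yield send_welcome_email(user)

        Success(account)
      end

      private

      def create_user(params)
        user = User.new(params)
        user.save ? Success(user) : Failure([:invalid_user, user.errors])
      end

      def create_account(user)
        account = Account.create(owner: user)
        account.persisted? ? Success(account) : Failure(:account_not_created)
      end

      def send_welcome_email(user)
        WelcomeMailer.with(user: user).welcome.deliver_later
        Success(user)
      end
    end

The appeal is that the happy path reads top to bottom, and any Failure returned from a step stops the sequence and becomes the return value.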
App queries is another one that we have. We've talked about this before, query objects. I know we're a big fan. [laughs] I got a golf clap across the room here, which I could see live in person. It was amazing. I could feel the wind wafting across the room from the golf clap. [chuckles] But yeah, query objects, they're fantastic. They take a relation, they return a relation, but they allow us to build more complex queries outside of our models.
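And for query objects, the "takes a relation, returns a relation" shape can be as small as this; the model and attribute are made up for the example:

    # Hypothetical query object: accepts a relation, returns a narrowed relation.
    class RecentlyActiveUsersQuery
      def initialize(relation = User.all)
        @relation = relation
      end

      def call(since: 30.days.ago)
        @relation.where(last_active_at: since..)
      end
    end

    # Because it returns a relation, it still composes with other scopes:
    RecentlyActiveUsersQuery.new(User.all).call(since: 7.days.ago).order(:last_active_at)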
The new one, here we go. So this stuff would all normally fall into app services, which services don't mean anything. So we do not have an app services directory in our application. But the new one that we have is app clients. So these are all of our HTTP clients wrapping external third parties that we're interacting with. But with each of them, we've taken a particular structure, a particular approach. So for each of them, we're using the adapter pattern. There's a blog post on the Giant Robots blog that I can point to that sort of speaks to the adapter pattern that we're using here.
But basically, in production mode, there is an HTTP backend that actually makes the real requests and does all that stuff. And in test mode, there is a test backend for each of these clients that allows us to build up a pretty representative fake, and so we're faking it up before the HTTP layer. But we found that that's a good trade-off for us. And then we can say, like, if this fake backend gets a request to /users, then we can respond in whatever way that we want.
And overall, we found that pattern to be really fantastic. We've been very happy with it. So it's one more thing. All of them were just gathering in app models. And so it was only very recently that we said no, no, these deserve their own name. They are a pattern. We've repeated this pattern a bunch. We like this pattern. We want to even embrace this pattern more, so long live app clients.
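As a purely illustrative sketch of that shape (the client name, endpoint, and backend-swapping mechanism here are assumptions, not the Sagewell code), each client delegates to a backend that speaks HTTP in production and is an in-memory fake in tests:

    # Hypothetical app/clients example using the adapter pattern.
    require "net/http"
    require "json"

    class DirectoryClient
      class << self
        attr_writer :backend

        def backend
          @backend ||= HttpBackend.new
        end

        def user(id)
          backend.get("/users/#{id}")
        end
      end

      class HttpBackend
        def get(path)
          uri = URI.join("https://api.example.com", path)
          JSON.parse(Net::HTTP.get(uri))
        end
      end

      class TestBackend
        def initialize
          @responses = {}
        end

        def stub_get(path, body)
          @responses[path] = body
        end

        def get(path)
          @responses.fetch(path) { raise "no stub registered for #{path}" }
        end
      end
    end

    # In a test: DirectoryClient.backend = DirectoryClient::TestBackend.new
    # then stub_get("/users/1", "id" => 1, "name" => "Jane") before exercising the code.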
STEPH: I love it. I love app clients. It's been a while since I've been on a project that had that directory. But there was a greenfield project that I was working on. I think it might have been I was working with Boston.rb and working on giving them a new site or something like that and introduced app clients. And what you just said is perfect in terms of you've identified a pattern, and then you captured that and gave it its own directory to say, "Hey, this is our pattern. We've established it, and we really like it." That sounds awesome.
It's also really nice as someone who's new to a codebase; if I jump in and if I look at app clients, I can immediately see what are the third parties that we're working with? And that feels really nice. So yeah, that sounds great. I'm into it.
CHRIS: Yeah, I think it really was the question of like, is this a pattern we want to embrace and highlight within the codebase, or is this sort of a duplication but irrelevant like not really that important? And we decided no, this is a thing that matters. We currently have 17 of these clients, so 17 different third-party external things that we're integrating with. So for someone who doesn't really like service-oriented architecture, I do seem to have found myself in a place. But here we are, you know, we do what we have to with what we're given. But yes, 17 and growing our app clients.
STEPH: That is a lot. [laughs] My eyes widened a bit when you said 17. I'm curious because you highlighted that app services that's not really a thing. Like, it doesn't mean anything. It doesn't have the same meaning of the app queries directory or app commands or app clients where it's like, this is a pattern we've identified, and named, and want to propagate.
For app services, I agree; it's that junk drawer. But I guess in some ways...well, I'm going to say something, and then I'm going to decide how I feel about it. That feels useful because then, if you have something but you haven't established a pattern for it, you need a place for it to go. It still needs to live somewhere. And you don't necessarily want to put it in app models. So I'm curious, where do you put stuff that doesn't have an established pattern yet?
CHRIS: It's a good question. I think it's probably app models is our current answer. Like, these are things that model stuff. And I'm a big believer in the it doesn't need to be an application record-backed object to go on app models. But slowly, we've been taking stuff out. I think it'd be very common for what we talk about as query objects to just be methods in the respective application record. So the user record, as a great example, has all of these methods for doing any sort of query that you might want to do. And I'm a fan of extracting that out into this very specific place called app queries.
Commands are now another thing that I think very typically would fall into the app services place. Jobs, naming that, is something different. Clients we've got, and serializers is another one that we have at the top level, so those are four. We use Blueprinter within the app. And again, it's sort of weird. We don't really have an API. We're using Inertia. So we are still serializing to JSON across the boundary. And we found it was useful to encapsulate that. And so we have serializers as a directory, but they just do that. We do have policies. We're using Pundit for authorization, so that's another one that we have.
But yeah, I think the junk drawerness probably most goes to app models. But at this point, more and more, I feel like we have a place to put things. It's relatively clear should this be in a controller, or should this be in a query object, or should this be in a command? I think I'm finding a place of happiness that, frankly, I've been searching for for a long time. You could say my whole life I've been searching for this contented state of I think I know where stuff goes in the app, mostly, most of the time.
I'm just going to say this, and now that you've asked the excellent question of like, yeah, but no, where are you hiding some stuff? I'm going to open up models. Next week I’m going to be like, oh, I forgot about all of that nonsense. But the things that we have defined I'm very happy with.
STEPH: That feels really fair for app models. Because like you said, I agree that it doesn't need to be ActiveRecord-backed to go on app models. And so, if it needs to live somewhere, do you add a junk drawer, or do you just create app models and reuse that? And I think it makes a lot of sense to repurpose app models or to let things slide in there until you can extract them and let them live there until there's a pattern that you see.
CHRIS: We do. There's one more that I find hilarious, which is app lib, which my understanding...I remember at one point having one of those afternoons where I'm just like, I thought stuff works, but stuff doesn't seem to work. I thought lib was a directory in Rails apps. And it was like, oh no, now we autoload only under the app. So you should put lib under app. And I was just like, okay, whatever.
So we have app lib with very little in it. [laughs] But that isn't so much a junk drawer as it is stuff that's like, this doesn't feel specific to us. This goes somewhere else. This could be extracted from the app. But I just find it funny that we have an app lib. It just seems wrong.
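For what it's worth, one alternative to an app/lib directory is to leave the code in lib/ and opt it into autoloading explicitly; this is a hedged sketch of that common configuration (MyApp is a placeholder application name), not a recommendation from the hosts:

    # config/application.rb
    module MyApp
      class Application < Rails::Application
        # Autoload (and eager load) lib/ alongside everything under app/.
        config.autoload_paths << Rails.root.join("lib")
        config.eager_load_paths << Rails.root.join("lib")
      end
    end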
STEPH: That feels like one of those directories that I've just accepted. Like, it's everywhere. It's like in all the apps that I work in. And so I've become very accustomed to it, and I haven't given it the same thoughtfulness that I think you have. I'm just like, yeah, it's another place to look. It's another place to go find some stuff. And then if I'm adding to it, yeah, I don't think I've been as thoughtful about it. But that makes sense that it's kind of silly that we have it, and that becomes like the junk drawer. If you're not careful with it, that's where you stick things.
CHRIS: I appreciate you're describing my point of view as thoughtfulness. I feel like I may actually be burdened with historical knowledge here because I worked on Rails apps long, long ago when lib didn't go in-app, and now it does. And I'm like, wait a minute, but like, no, no, it's fine. These are the libraries within your app. I can tell that story. So, again, thank you for saying that I was being thoughtful. I think I was just being persnickety, and get off my lawn is probably where I was at.
STEPH: Oh, full persnicketiness. Ooh, that's tough to say. [laughs]
CHRIS: But yeah, I just wanted to share that little summary, particularly the app clients is an interesting one. And again, I'll share the adapter pattern blog post because I think it's worked really well for us. And it's allowed us to slowly build up a more robust test suite. And so now our feature specs do a very good job of simulating the reality of the world while also dealing with the fact that we have these 17 external situations that we have to interact with.
And so, how do you balance that VCR versus other things? We've talked about this a bunch of times on different episodes. But app clients has worked great with the adapter pattern, so once more, rounding out our organizational approach. But yeah, that's what's up in my world. What's up in your world?
STEPH: So I have a small update to give. But before I do, you just made me think of something in regards to that article that talks about the adapter pattern. And there's also another article that's by Joël Quenneville that's testing third-party interactions. And he made me reflect on a time where I was giving the RSpec course, and we were talking about different ways to test third-party interactions. And there are a couple of different ways that are mentioned in this article. There are stub methods on adapter, stub HTTP request, stub request to fake adapter, and stub HTTP request to fake service. All that sounds like a lot. But if you read through the article, then it gives an example of each one.
But I've found it really helpful that if you're in a space that you still don't feel great about testing third-party interactions and you're not sure which approach to take; if you work with one API and apply all four different strategies, it really helps cement how to work through that process and the different benefits of each approach and the trade-offs. And we did that during the RSpec course, and I found it really helpful just from the teacher perspective to go through each one. And there were some great questions and discussions that came out of it.
So I wanted to put that plug out there in the world that if you're struggling testing third-party interactions, we'll include a link to this article. But I think that's a really solid way to build a great foundation of, like, I know how to test a third-party app. Let me choose which strategy that I want to use.
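As a tiny taste of two of those strategies (with hypothetical client and endpoint names), one stubs a method on the adapter itself, and the other stubs the HTTP request underneath it with WebMock:

    # spec/clients/banking_client_spec.rb (hypothetical)
    require "rails_helper"
    require "webmock/rspec"

    RSpec.describe "fetching a balance" do
      it "stubs at the adapter level" do
        allow(BankingClient).to receive(:balance_for).with("acct_123").and_return(150_00)

        # ...exercise code that asks BankingClient for the balance...
      end

      it "stubs at the HTTP level instead" do
        stub_request(:get, "https://bank.example.com/accounts/acct_123")
          .to_return(status: 200, body: { balance_cents: 150_00 }.to_json)

        # ...exercise code that goes through the client's real HTTP path...
      end
    end

Each strategy trades realism for convenience differently, which is exactly why working one API through all four is such a useful exercise.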
Circling back to what's going on in my world, I am still working on migrating unit tests over to RSpec. It's a thing. It's part of the work that I do. [laughs] I can't say it's particularly enjoyable, but it will have a positive payoff. And along that journey, I've learned some things or rediscovered some things. One of them is read expectations very carefully.
So when I was migrating a test over to RSpec, I read it as where we expected a record to exist. The test was actually testing that a record did not exist. And so I probably spent an hour understanding, going through the code being like, why isn't this record getting created like I expect it to? And I finally went back, and I took a break, and I went back. And I was like, oh, crap, I read the expectation wrong. So read expectations very carefully.
The other one...this one's not newly learned, but it is reinforced. Mystery Guests are awful. So as I've been porting the behavior over to RSpec, the other tests are using fixtures, and I'm moving that over to use factory_bot instead. At first, I was trying to be minimal with the data that I was bringing over. That failed pretty spectacularly. So I've learned now that I have to go and copy everything that's in the fixtures and then move it over to factory_bot. And it's painful, but at least then I'm doing that thing we talked about before, where I'm trying to load as little context as possible for each test.
But then once I do have a green test, I'm going back through it, and I'm like, okay, we probably don't care about when you were created. We probably don't care when you were updated, because every field is set for every single record. So I'm going back and playing a game of: if I remove this line, does the test still pass? And if I remove that line, does the test still pass? So far, that has been painful, but it does have the benefit that I'm removing some of the setup. So Mystery Guests are very painful.
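As a rough sketch of where that process ends up, assuming a hypothetical User model with a hypothetical email_preferences attribute, a minimal factory_bot factory sets only what a valid record needs, and each test spells out the attributes it actually cares about, so no Mystery Guest hides in shared setup.

```ruby
FactoryBot.define do
  factory :user do
    # Only what's required for a valid record lives in the factory.
    sequence(:email) { |n| "user#{n}@example.com" }
    name { "Any Name" }
  end
end

RSpec.describe User, "#deactivate!" do
  it "clears the user's email preferences" do
    # The one attribute this test depends on is stated right here in the test.
    user = FactoryBot.create(:user, email_preferences: { newsletter: true })

    user.deactivate!

    expect(user.reload.email_preferences).to be_empty
  end
end
```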
I've also discovered that custom error messages can be tricky because I brought over some tests, and some of these, I'm realizing, are more user error than anything else. Anywho, when I brought one test over to RSpec, I added one of the custom failure messages that you can pass to an expectation. But I had written an invalid expectation in RSpec where I was expecting a record to exist, except I was using find_by instead of where. You can call the exist matcher on an ActiveRecord relation but not on the actual record that gets returned.
But the custom error message kept popping up and saying, "Oh, your record wasn't found," and I just kept getting that. So I was diving through the code again to understand why my record wasn't found. Once I removed that custom error message, I realized the actual problem was how I'd written the RSpec assertion, because RSpec then gave me a wonderful message that was like, hey, you're trying to call exist on this record, and you can't do that. Instead, you need to call it on a relation.
So I've also learned don't bring over custom error messages until you have a green test, and even then, consider if it's helpful because, frankly, the custom error message wasn't that helpful. It was very similar to what RSpec was going to tell us in general. So there was really no need to add that custom step to it.
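For anyone following along at home, here's a sketch of the difference being described, using a hypothetical RedemptionCode model; the model and attribute names are made up for illustration.

```ruby
RSpec.describe "redeeming a code" do
  it "creates a redemption code" do
    # ...setup that should have created a RedemptionCode...

    # The confusing version: find_by returns a single record (or nil), which
    # doesn't respond to exists?, so RSpec's exist matcher can't be applied.
    # expect(RedemptionCode.find_by(code: "SAVE10")).to exist

    # The intended version: where returns an ActiveRecord::Relation, which does
    # respond to exists?, and RSpec's default failure message is already clear
    # without layering a custom message on top.
    expect(RedemptionCode.where(code: "SAVE10")).to exist
  end
end
```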
The final bit that I've learned, or the hurdle that I've been facing, is that migrating test descriptions is hard unless they map over directly. RSpec has a context and then a description that goes with each test. Test::Unit has method names instead. So it may be something like test_redemption_codes, and then it runs through the code.
Well, as I'm trying to migrate that knowledge over to RSpec, it's not clear to me what we're testing. Okay, we're testing redemption codes. What about them? Should they pass? Should they fail? Should they change? What are they redeeming? There's very little context. So for a lot of my tests, I'm copying that method name over so I know which test I'm focused on and bringing across.
And then in the description, it's like, Steph needs help adding a test description, and then I'm pushing that up and then going to the team for help. So they can help me look through to understand, like, what is it that this test is doing? What's important about this domain? What sort of terminology should I include? And that has been working, but I didn't see that coming as part of this whole migrating stuff over. I really thought this might be a little bit more of a copypasta job. And I have learned some trickery is afoot. And it's been more complicated than I thought it was going to be.
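A hypothetical before-and-after gives a feel for why this doesn't map over cleanly; the RedemptionCode model and the specific behavior named in the RSpec version are invented for the example.

```ruby
# Test::Unit style: the method name is the only description we get.
class RedemptionCodeTest < ActiveSupport::TestCase
  def test_redemption_codes
    # ...
  end
end

# RSpec style: the describe/context/it strings have to say which behavior
# matters, and that's exactly the knowledge that doesn't come across
# automatically from a bare method name.
RSpec.describe RedemptionCode do
  context "when the code has already been used" do
    it "does not apply the discount a second time" do
      # ...
    end
  end
end
```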
CHRIS: Well, at a minimum, I can say thank you for sharing all of your hard-learned lessons throughout this process. This does sound arduous, but hopefully, at the end of it, there will be a lot of value and a cleaned-up test suite and all of those sorts of things. But yeah, it's been an adventure you've been on. So on behalf of the people who you are sharing all of these things with, thank you.
STEPH: Well, thank you. Yeah, I'm hoping this is very niche knowledge and that there aren't many people in the world doing this exact work; it just happens to be what this team needs. So yeah, it's been an adventure. I've certainly learned some things from it, and I still have more to go. So not there yet, but I'm also excited for when we can actually delete this portion of the build process.
And then also, I think, get rid of fixtures because I didn't think about that from the beginning either. But now that I'm realizing that's how those tests are working, I suspect we'll be able to delete those. And that'll be really nice because now we also have another single source of truth in factory_bot as to how valid records are being built.
Mid-Roll Ad:
Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild, and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow and make things downright miserable. In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
Pivoting just a bit, there's a listener question that I'm really excited for us to dive into. And this listener question comes from Joël Quenneville. Hey, Joël. All right, so Joël writes in, "As senior-level developers, how do you set goals to ensure that you keep growing? How do you know what are high-value areas for you to improve? How do you stay sharp? Do you just keep adding new languages to your tool belt? Do you pull back and try to dig into more theoretical concepts? Do you feel like you have enough tech skills and pivot to other things like communication or management skills?
At the start of a dev career, there's an overwhelming list of things that it feels like you need to know all at once. Eventually, there comes a point where you no longer feel like you're drowning under the list of things that you need to learn. You're at least moderately competent in all the core concepts. So what's next?" This is a big, fun, scary question. I really like this question. Thank you, Joël, for sending it in because I think there's so much here that can be discussed.
I can kick us off with a few thoughts. One of the things that resonates with me from this question is how Joël describes reaching senior status, and how getting there can feel like working through a backlog of features. So as a developer, I want to understand this particular framework, or as a developer, I want to be able to write clear and fast tests, or as a developer, I want to contribute to an open-source project. But now that that backlog is empty, you're wondering what's next on your roadmap, which is where I think that sort of big, fun scariness comes into play.
So the first idea is to take a moment and embrace that success. You have probably worked really hard to get where you're at in your career. And there's nothing wrong with taking a pause and enjoying the view and just being appreciative of the fact that you are able to get your work done quickly or that you feel very confident in the work that you're doing. Growth is often very important to our careers, but I also think it's important to recognize when you've achieved certain growth and then, if you want to, just enjoy that and pause. And you're not constantly pushing yourself to the next level. I think that is a totally reasonable and healthy thing to do.
The second thing that comes to mind is that you're in Choose Your Own Adventure mode now, so you get to decide. I would encourage folks, once you've reached this stage, to reflect on where you're at and consider: what is your dream? What are your aspirations? Maybe they're related to tech; maybe they're not. Consider where it is that you want to go next, and then what concrete steps will help you achieve those goals.
So there's a really great article by Jen Dary, who's a career coach and runs Plucky Manager Training, that describes this process. I'll be sure to include a link to that blog post in the show notes. She has a couple of great questions that help you identify, like, what are my goals? One of those questions is, "If I could do anything and money wasn't an object, I'd spend my time doing dot dot dot." And that doesn't necessarily mean sitting on a beach with your toes in the sand all day. I mean, it could, but then that probably just means you need a vacation. So take the vacation.
And then, once you start to get bored, where does your mind start to wander? What are the things that you want to do? Where are you interested in spending your time? And then, once you have an idea of how you'd like to spend your time, you can consider what actions you could take next that will point you in that direction.
There's also the benefit that by this point, you probably have an idea of the type of things that you like to do and where you like to spend your time. And so you can figure out which areas of expertise you want to invest in. Do you like more greenfield projects? Do you like architecture discussions? Do you like giving talks? Do you like teaching? Or maybe you're interested in management. I think there's also a more concrete approach that you can take.
You can just talk to the managers on your team and say, "Hey, what big, hard problems are you looking to solve?" And then you can get some inspiration from them and see if their problems align with your interests. Maybe it's not even your own team, but you can talk to other companies and see what problems other industries are trying to solve. That might be an area that then spurs some curiosity or interest.
And then, where do you feel underutilized? So with your current day-to-day, are there areas where you feel that you wish you had more responsibilities or more opportunities, but you feel like you don't have access to those opportunities? Maybe that's an area to explore as well.
This feels like a wonderful coaching question in terms of: you have done it; congratulations. You've reached a really great spot in your career, and so now you're figuring out that big next step. This is going to be highly customized to each person, figuring out what it is that's going to help you feel fulfilled over the next five years, ten years, however long you want to project out. Those are some of my thoughts. How about you? What do you think?
CHRIS: Well, first, those are some great thoughts. I appreciate that I get to follow them now. It's going to be a hard act to follow. But yeah, I think Joël has asked a fantastic question. And coming from Joël, I know how intentional and thoughtful a learner, sharer, and teacher he is. So it's all the more framed in that for me, knowing Joël personally.
I think, to start, the kernel of the question is "as senior developers," or "senior-level developers" is the way Joël phrased it, but it treats that as sort of a discrete moment in time, which I think there's maybe even something to unpack there. And I think we've probably talked about this in previous episodes, but like, what does that even mean?
And I think part of the story here is going from reactive where it's like, I don't know how anything works. I know a little bit. I can code some. And every day, I'm presented with new problems that I just don't understand. And I'm trying to build up that base of knowledge. Slowly, you know, you start, and it's like 95% of the time you feel like that. And slowly, the dial switches over, and maybe it's only 25% of the time you feel like that.
Somewhere along that spectrum is the line of senior developer. I don't actually know where it is, but it's somewhere in there. And so I think it's that space where you can move from reactive learning things as necessitated by the work that's coming at you to I want to proactively choose the things that I want to be learning to try and expand the stuff that I know, and the ways that I can think about the work without being in direct response to a piece of work coming at me.
So with that in mind, what do you do with this proactive space? And I think the way Joël frames the question, again, to what I know of Joël, he's such an intentional person. And I wouldn't be surprised if Joël is very purposeful and thinks about this and has approached it as a specific thing that he's doing. I have certainly been in more of “I'll figure it out when I get there.” I'll explore.
Or actually, probably the most pointed thing that I did was I joined a consultancy. And that was a very purposeful choice early on in my career because I'm like, I think I know a little bit. I don't think I know a lot. I would like to know a lot. That seems fun. So what do I think is the best way to do that? My guess, and it turned out to be very much true, is if I join a consultancy, I'm going to see a bunch of different projects, different types of technologies, organizations, communication structure, stuff that works, stuff that doesn't work.
And to be honest, I actually thought I would try out the consultancy thing for a little while, like a year or two, and then go on to my next adventure. Spoiler alert: I stayed for seven years. It was one of the best periods of my professional life. And I found it to be a much deeper well than I expected it to be. But for anyone that's looking for, like, how can I structure my career in a way that will just automatically provide the sort of novelty and space to grow? I would highly recommend a consultancy like thoughtbot. I wonder if they're hiring.
STEPH: Well, yes, we are hiring. That was a perfect plug that I wasn't expecting for that to come. But yes, thoughtbot is totally hiring. We'll include a link in the show notes to all the jobs. [laughs]
CHRIS: Sounds fantastic. But very sincerely, that was the best choice that I could have made and was a way to flip the situation around such that I don't have to be thinking about what I want to be learning. The learning will come to me. But even within that, I still tried to be intentional from time to time. And I would say, again, I don't have a holistic theory of how to improve. I just have some stuff that's kind of worked out well. One thing is focusing on fundamentals wherever I can, or a different way to put it is giving myself permission to spend a little bit more time whenever my work brushes up against what I would consider fundamentals.
So things that are in that space are like SQL. Every time I'm working on something, I'm like, ah, I could use like a CROSS JOIN here, but I don't know what that is. Maybe I'll spend an extra 30 minutes Googling around and trying to figure out what a CROSS JOIN is. Is that a thing? Is a CROSS JOIN a real thing? I may be making it up. [laughs] A window function, I know that those are real. Maybe I'll learn what a window function is. I think a CROSS JOIN is a real thing. A LEFT OUTER JOIN that's a cool thing.
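For the record, CROSS JOIN, window functions, and LEFT OUTER JOIN are all real SQL features. Here's a small sketch of the latter two expressed through ActiveRecord, using hypothetical Customer and Order models; the model names and the recency ranking are made up for illustration.

```ruby
# LEFT OUTER JOIN: list customers alongside their orders, keeping customers
# who have never placed an order.
Customer.left_outer_joins(:orders)

# Window function: rank each customer's orders by recency without collapsing
# rows the way a GROUP BY would.
Order.select(<<~SQL)
  orders.*,
  ROW_NUMBER() OVER (
    PARTITION BY orders.customer_id
    ORDER BY orders.created_at DESC
  ) AS recency_rank
SQL

# And CROSS JOIN is real too: every row of one table paired with every row
# of another, which is occasionally exactly what you want.
```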
And so, each time I've had that, expanding my SQL knowledge has continually felt like a good investment. Or fundamentals of HTTP, that's another one that has really served me well. With Ruby being the primary language that I program in, deeply understanding the language, its fundamentals, and its semantics is another place that has been a good investment. But by contrast, I would say I probably haven't gone as deep on the frameworks that I work with. Rails is maybe a little bit different just because, like many people, I came to Ruby through Rails, and I've learned a lot of Rails.
But like in JavaScript, I've worked with many different JavaScript frameworks. And I have been a little more intentional with how much time I invest into furthering my skills in them because I've seen them change and evolve enough times. And if anything, I'm trying to look for ones that are like, what if it's less about the framework and more about JavaScript and web fundamentals underneath? Thus, I've found myself in Svelte land. But I think it's that choice of trying to anchor to fundamentals wherever possible.
And then I would say the other thing that's been really beneficial for me is what can I do that's wildly outside the stuff that I already know? And so probably the most pointed example I have of this is learning Elm. So I previously spent most of my career working in Ruby and JavaScript, so primarily object-oriented languages without a strong type system. And then, I was able to go over and experience this whole different paradigm way of working, way of structuring programs, feedback loop. There was so much about it that was really, really interesting.
And even though I don't get to work in Elm, frankly, as much as I would want at this point or really at all, it informs everything that I do moving forward. And I think that falls out of the fact that it was so different than what I was doing. So if I were to do that again, probably the next type of language I would learn is Lisp because those are like, well, that's a whole other category of thing that I've heard about. People say some fun stuff about them, but I don't really know. So it's that fundamentals and weird stuff is how I would describe it. And by weird, I mean outside of the core base of knowledge that I have.
STEPH: I love that categorization, and I'll stick with it, fundamentals and weird stuff, to stretch and grow and find some other areas. I also really like your framing, the reactive versus proactive. I think that's a really nice way to put it because so much of your career is you are just learning what your company needs you to learn, or you're learning what you need to keep progressing and to feel more competent with the types of features or the work that you're handling.
And I think that's why Jen Dary's blog post resonates with me so much because it's probably...up until now, a lot of someone's career, maybe not Joël's particularly, but I know probably for my career, a lot of it has been reactive in terms of what are the things that I need to learn? And so then once you reach that point of like, okay, I feel competent and reasonably good at all the things that I needed to know, where do I want to go next?
And rather than focus on necessarily the plans that are laid out in front of me, I can then go wide and think about what are some of the bigger things that I want to tackle? What are the things that are meaningful to me? Because then I can now push forward to this bigger goal versus achieving a particular salary band or title or things like that. But I can focus on something else that I really want to contribute to.
And there do seem to be two common paths. So once you reach that level, either you typically go into management, or you become that more like principal and then onward and upward, whatever is after principal. I don't even know what's after that, [laughs] but the titles that come after principal. So there's management, and then I've seen the other very strong contributors, so Aaron Patterson comes to mind. And I feel like those people then typically will migrate to places where they get to contribute to a language or to a framework.
And I think it comes down to the idea of impact because both of those provide a greater impact. So if you go into management, you can influence and affect a team of individuals, and you can increase the value created by that team. Then you've likely exceeded the value that you would have created as your own individual contributor.
Or, if you contribute to a language or a framework, then your technical decisions impact a larger community. So I think that would be another good thing to reflect on is what type of impact am I looking for? What type of communities do I want to have a positive impact on? And that may spur some inspiration around where you want to go next, the things that you want to focus on.
CHRIS: Yeah, I think one of the things worth picking up on that Joël mentioned in his question is the idea that there is the individual contributor path, but then there's also the management path, which is typically assumed to be the progression. And I think, for one, it's worth naming that the individual contributor path, and the idea of going to principal dev and those sorts of outcomes, is a fantastic path in and of itself.
I think often it's like, well, you know, you go along for a certain amount of time, and then you become a manager. It's like, those are actually distinctly different things. And people have different levels of interest and aptitude in them, and I think recognizing that is critically important. And so, not expecting that management just comes after individual contributor is an important thing.
The other thing I'll say is Charity Majors, who we had on the podcast a bit back, has a wonderful blog post about the pendulum swing called The Engineer/Manager Pendulum. And so in it, she talks about folks that have taken an exploration over into manager land and then come back to the individual contributor path or vice versa sort of being able to move between them, treating them as two potentially parallel career tracks but ones that we can move between.
And her assertion is that often folks that are particularly strong have spent some time in both camps because then you gain this empathy, this understanding of what's the whole picture? How are we doing all of this? How do we think about communication, et cetera? So, again, to name it, like, I think it's totally fine to stay on one of those tracks to really know which of those tracks speaks to you or to even move between them a little bit and to explore it and to find out what's true.
So we'll include a link to that in the show notes. And we'll also include a link to the previous episode a while back when we had Charity on. But yeah, those I think are some critical thoughts as well because those are different areas that we can grow as developers and expand on our impact within the team. And so, we want to make sure we have those options on the table as well.
STEPH: Absolutely. I love where teams will support individuals to feel comfortable shifting between experiences like that because it does make you a more well-rounded contributor to that product team, not just as an engineer, but then you will also understand what everybody else is working on and be able to have more meaningful conversations with them about the company goals and then the type of work that's being done. So yeah, I love it.
If you're in a place that you can maybe fail a little bit, hopefully not in a too painful way, but you can take a risk and say, "Hey, I want to try something and see if I like it," then I think that's wonderful. And take the risk and see how it goes. And just know that you have an exit strategy should you decide that you don't like that work or that type of work isn't for you. But at least now you know a little bit more about yourself.
Overall, I want to respond directly to something that Joël highlighted around how do you know what are high-value areas for you to improve? And I think there are two definitions there because you can either let the people around you and your team define that high value for you, and maybe that really resonates with you, and it's something that you enjoy. And so you can go to your manager and say, "Hey, what are some high-value areas where I can make an impact for the team?"
Or it could be very personal. And what are the high-value areas for you? Maybe there's a particular industry that you want to work on. Maybe you want to hit the public speaking circuit. And so you define more intrinsically what are those high-value areas for you? And I think that's a good place to start collecting feedback and start looking at what's high value for you personally and then what's high value to the company and see if there's any overlap there.
With that, I think we've covered a good variety of things to explore and then highlighted some of the different ways that you and I have also considered this question. I think it's a fabulous question. Also, I think it's one, even if you're not at that senior level, to ask this question. Like, go ahead and start asking it early and often and revisit your answers and see how they change. I think that would be a really powerful habit to establish early in your career and then could help guide you along, and then you can reflect on some of your earlier choices. So, Joël, thank you so much for that question.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph is joined by a very special guest and fellow thoughtbotter, Rob Whittaker.
Rob shares how he became the Software Development Director for Launchpad II, thoughtbot's Europe, Middle East, and Africa team. They also dive into what it's like to be a Development Director, the differences between mentoring and coaching, working with GitHub Codespaces, and strategies for boosting your creativity and problem solving capabilities.
thoughtbot is hiring!
ngrok
Time Off Book
Rob's Codespace Setup
Rob Whittaker on Twitter
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And today, I'm joined by a very special guest and fellow thoughtboter, Rob Whittaker.
Rob has been in the software business for the past 15 years and spent the last five and a half years at thoughtbot. Rob is the Director of Software Development for our Europe, Middle East, and Africa team and, in his spare time, likes to hunt down delicious beers and coffee. Rob, welcome to The Bike Shed. It's so lovely to have you on the show today.
ROB: Thank you for having me. It's a pleasure to be here. Yeah, thank you for that lovely introduction and my far too complicated job title. It sounds more serious than it actually is.
STEPH: Well, you do have a fancy job title, yeah, Director of Software Development. [laughs]
ROB: Yeah, it's the added on bit where it's Europe, Middle East, and Africa where I feel like there's about 20 of us maximum. But that sounds more grandiose than it actually is.
STEPH: Yeah, that's something that Chris and I haven't dug into too much on previous episodes are all the different teams that we have at thoughtbot. So the shorter way of saying that is Launchpad II, but not everybody knows that. But I'm going to circle back to that because I would love to talk a bit more about that specific team and the dynamic. But before we do that, I'm realizing I'm not familiar with your origin story as to how you came to thoughtbot and then how you became this very fancy grand title of Director of Software Development for Europe, Middle East, and Africa team.
ROB: Yeah, there's a bit of history about thoughtbot London as well that kind of ties into this. So before thoughtbot Launchpad II, it was thoughtbot London before we went remote. And initially, we had the plan of setting up a new studio in London to help expand thoughtbot outside of the Americas, but that plan fell through. But he knew some people from another agency called New Bamboo, and so we merged with or acquired that agency, and that agency then became the thoughtbot London team. I'm actually the first hire or...not the first hire, that's not true, the first development hire for the thoughtbot London team that would then become launchpad II.
I was at the Bath Ruby Conference six years ago, I guess. And there was just an advert up on the hiring board that Nick Charlton, who's a Senior Developer and Development Team Lead at Launchpad II now, had put up. And I saw it, and I was talking to somebody who was my mentor at the time that I'd worked with at a previous job at onthebeach.co.uk, a guy called Matt Valentine-House who now works at Shopify who, actually, fun fact, his face appears at the top of Ruby Weekly this week. If you open up this week's Ruby Weekly, you can see Matt Valentine-House, who said to me, "Yeah, apply for it, why not? You see what happens." And I was like, "Okay," and just kind of took the leap.
So I thought, thoughtbot, why would thoughtbot want me? Which is something I think a lot of people think when they want to join thoughtbot. They think, well, I can't do that. But I would implore people to apply. And so, from there, I never really wanted to move to London. I'd always lived in the North West of the UK. I made that leap to London because I wanted to work at thoughtbot. And then, gradually, over time, the London team expanded, and we needed to split out the management roles, and the development director role came up.
And I've always enjoyed the coaching side of software development. It seems that you gain more experience as you help people with less experience, and I've always enjoyed coaching. And that was a big part of the role for me. So I was fortunate enough to be allowed to do it. And then, from there, things have grown. Yeah, so it's been a really interesting journey as a development director.
The London studio went through a pretty tough time at one point where, not long after I became development director, two-thirds of the team decided to hand their notice in within the space of two weeks, unbeknownst to each other. So, all of a sudden, we didn't have a very big team. We didn't have very many prospects, and so it was a tough time. It's really nice to look back on the last three years and go, okay, we came through that. We're now one of the stronger teams at thoughtbot.
And somebody actually asked me in an interview the other day, somebody we actually hired, not just based on this question, but he said, "What is your proudest moment of working at thoughtbot?" And I was like, that's one of the best questions I've heard from a candidate. And I said, "Hmm, that's interesting." It's not anything development-related; it's that I can now look back on this team and say this is the team that I have grown in my image. And all these people, apart from Nick, who was the person who put the advert up at Bath Ruby...
I've hired all these people, and so the buck stops with me really because if anybody isn't able to perform, then it's kind of my fault because these are the people that I want to grow into being the team and see be a successful product design team or product development team, which brings us to modern-day I guess. So yeah, that was a long origin story. That's pretty much my whole thoughtbot biography. And I apologize.
STEPH: That was perfect. I thoroughly enjoyed hearing it. And yeah, that's an awesome question: what's your proudest moment as part of a team? That can yield so many insights. I love that question. And I love your answer as well, in terms of: this is the team, we've pulled through a hard time, and we've built everybody up to the point they're at now, which leads in perfectly to my next question.
So, being the Software Development Director, that's one of those titles I feel like a lot of companies have, but they can be very different from company to company. Would you mind walking us through a bit of the day-to-day life of being a development director?
ROB: Yeah, sure. It's one of those things where I think this is something that I'm not sure if it's unique to thoughtbot, but you end up taking on a lot of hats at thoughtbot. So I know you're a team lead. So you have to balance your responsibilities as an individual contributor, which is a term I don't like, but I haven't got a better way to say it yet, and your development team lead roles. And I have similar sort of responsibilities where I have to do my individual contributor work. I have to do my director work. I'm also on our DEI Council. So I have to add that work in too, and make sure it's balanced out.
So the start of my day is very much about prioritizing things. I know you and Chris, a few episodes ago, had quite lengthy discussions about productivity systems and what tools Chris wants to use. And I'm a big fan of Things, and I've been using it for maybe ten years, if not more, that I've now got my system down that I'm able to prioritize things in the way that I can pick up the right task at the right time. So a big part of my day-to-day is figuring out what is the most important thing to work on? So I have my client work, and then it's about supporting the team from that point.
And a big part of my idea of what a manager is, is that my job isn't to tell you what to do; my job is to find out what you want to do and point you toward a place where you can find the answer, or give you some guidance about where to find it. And I feel like I'm doing a bad job as a manager if I have to act as a middle person. Because if somebody comes to me and says, "Oh, I want to do this thing," and I say, "Well, I'll talk to that person for you," and then come back, I have failed.
And my job is to say, "Oh, you should talk to that person about this." And to some extent, it's about being lazy. I don't want to be doing too much stuff because I have other things to do. But I want to make sure that those people have the right frameworks and guidelines in place so that I can point them in the right direction.
STEPH: I think the fancy term for that is just delegating. [laughs]
ROB: Yes, thank you. [laughs]
STEPH: But I like lazy. [laughter] I like that one as well. I love that framing of a manager where you're not telling someone what to do; instead, your job is helping that person figure out what they want to do and then supporting them. I've been chatting with Chris recently and some others because I've been reading the book Resilient Management by Lara Hogan. And it's really helped me cement the difference between mentorship, coaching, and sponsorship.
And I realized that I'm already falling a lot into the coaching and sponsorship because mentorship can be wonderful, but it is more directive of like, this is what I've done. And this is what has worked for me, and you should do this too. Versus the coaching and sponsorship, I think aligns far more perfectly with what you described as management, where it is my job to figure out what brings you joy, what brings you energy, and then how to help you progress to your next goals and your next steps in your career.
ROB: Yeah, I think Lara Hogan is a great resource, like her blog posts and books. I haven't read Resilient Management, but I know that the team leads on my team have been on her training courses, and they say how great they are. And there's also a blog post of hers about managing in tough times. It has a much better title than that. But it's about how we can be good managers in such uncertain times, when there are a lot of things going on around the world right now that we all have to deal with, and about helping people deal with those situations.
Because at the end of the day, work isn't the most important thing; the most important thing is living. It's something I say to my team, especially when they're not feeling well: the most important thing is that you get better, and thoughtbot is still going to be here. The most important thing is how you live your life and how you look after yourself, and everything else is secondary.
STEPH: Absolutely. Well, and everybody needs something different from work too. Some people may be in a state where they really need more stability and predictability from their work. And some people may be in a space where everything else outside of work is very stable and calm, and then they want work to bring the challenge and the volatility and the variety to life.
So I remind myself very often that not everybody wants the same thing from work and to figure out what it is that someone wants from work. And then your seasons change. You may be in a season of where you want stability, or then you may be in a season of like, I'm ready to grow and push and take some risks. So helping someone identify which season of work they're in.
ROB: Yeah, I 100% agree. What people can't see is me nodding vigorously on the other side of this call. It's very much about understanding because everybody is different. And that's what we want from a good team; it's understanding everybody's different approach to things. And so sometimes people want the distraction of work because they don't want the time off to think about other things. They want to be able to sit and concentrate on something. And it's understanding different people.
STEPH: Yeah, that's a great point. I'm curious; you mentioned that as part of being development director, you are also, in addition to managing the team and being part of DEI then, there's also your day-to-day client work. I think you've started a new client recently. Could you tell me more about that?
ROB: Yeah, until recently, I'd been working with one client for two and a half years, which is a very long time to be working with a single client at thoughtbot. And it came to the point where I was ready for a new challenge, and the project was stable enough for me to move on. So I've been working for a company in the UK. They allow customers to buy and sell cars, not between customers, like companies such as Auto Trader, but between customers and dealers, back and forth. And primarily, they focused on buying cars. They've launched a product in the UK where people can sell their cars as well because they found that 70% of people who are buying cars also want to sell their cars.
And from there, they're now looking to expand into Germany and Spain, so we are helping them do that. It's an interesting project, not necessarily from a technical point of view, though I might come back to that, but definitely from a cultural point of view. The product at the moment allows you to put in a license plate or a registration plate for a car. There's then a service in the UK that will let you pull up the make and model and the service history of that car. But you can't do that in Germany because it's against the privacy laws to look something up from a registration plate.
And so it's interesting, these different cultural aspects that you have to take into account when expanding into other countries that you aren't from and have less knowledge about. I'm also aware that credit cards aren't a big thing in Germany either, so you have to think about how people pay for things in different countries. And the previous company I was working for is based in the Middle East, so we had to take into account how we would do right-to-left design in a mobile app, which is really interesting from a Western point of view, where you get so used to swiping through an experience from left to right.
But then it's not just the screen that's right to left. The journey moves from right to left. So you have to get used to the transitions of the screen going the other way and not thinking of that as going backwards. It's one of the best things about working in this region is that we get to deal with so many different cultures and how they expect to use applications. It's really satisfying.
STEPH: That's fascinating. Yeah, I haven't gotten to work on a project like that that has those types of considerations. I think the most relatable experience I have is more working in healthcare because that's one of those areas that I'm certainly not proficient. I've become more proficient because of the type of projects that I've worked on.
But I'm curious, for expanding into other regions and cultures, do those teams typically have an expert on their team that then helps guide the development process? Or, as you mentioned, the process of buying a car could be very different in some of the legal aspects that you're up against. Is there someone that you can turn to that's then helping mentor or be aware of that process?
ROB: Yes, the current client has a team based in Germany, people who are from Germany, advising us on different cultural aspects and legislative things. They're doing a lot of data analysis for us because we need a new service we can use for looking up car details, since there's a service you give certain information to in order to get information about the car back. So yeah, we do have that team there. But that's not always the case because every client is different.
The company that we're working for in the Middle East didn't have a team. They had two developers who were helping us. But we have to figure things out just from their cultural background to ask them questions about things and allow them to advise us, but nobody who was really a specialist. But that's an interesting thing as well, not just the cultural aspects of the customers but the cultural aspects of the company that you work for.
We definitely found that the company in the Middle East was more hierarchical. And so that's another challenge that you have to work with, because we tend to work in quite a flat way; on thoughtbot projects, we default to not having a point person. Everybody is there to answer the questions. But some teams or clients want that point person, and so we adapt and change to allow for that and work in that way. But it is interesting to work with different companies, as well as working as an agency.
STEPH: Yeah, you bring up a really good point of something that I don't reflect on very often, but it's something that I really appreciate about our thoughtbot culture is that we do try to strive for a very flat hierarchy. But also in working with clients, we purposely will avoid like, if there are two or more thoughtboters on a project, we don't want one person that is then the primary contact between the client and the thoughtbot team. The goal is that everybody shows up. Everybody is part of the process; everybody is part of meetings.
And we do have an advisor for projects, but otherwise, we work very hard to make sure that there's not just one person that's then responsible for communication. We want everybody to have opportunities to be part of meetings, to lead meetings, to take on initiatives versus having that one person. That is something that I really appreciate that we do.
ROB: Yeah. And it's more noticeable when you go to places where that isn't the norm, and you appreciate it more. And I think a big part of that is how much we are trusted. And we trust people to trust us, I guess.
STEPH: Yeah. And I think it fits in nicely with circling back to the management conversation is that when people have access to those opportunities, that makes my job so much easier as a team lead where then there are more opportunities to sponsor someone or to coach someone as to how they can then be the person that then takes on a project or if they want to lead a particular meeting, or if they want to help a team introduce retrospectives into their process. So it gives more opportunities for me to then coach someone into expanding their skill set in those ways.
ROB: Yeah, that's interesting to think about, allowing yourself to coach other people in that role. Because as we gain more experience and become senior developers, we naturally fall into that role of taking the lead on projects, even when we're not asked to. But then, when you gain other responsibilities in the management track, so you as a team lead and me as a team lead and a development director, it could be better for you to not take that role and allow somebody else to come into that role so you can coach them. That's been playing on my mind the last couple of days.
Josh Clayton, who's the Managing Director for one of our teams in the Americas, raised it on our pull request in our handbook where we were talking about team leads having a dedicated day to concentrate on team lead things. It's one of those things where somebody says something, and it's like, oh yeah, that really clicks. Maybe that's why we have been having certain struggles on projects where we need to rearrange things and learn from that and so we can be better on projects in the future. So that's something that really resonated with me, and it's flying around in the back of my mind at the moment.
STEPH: Yeah, that really resonates with me because while the predominant part of being a team lead at thoughtbot is having one-on-ones with folks, I find that when I have more time, a lot of the work also falls outside of that one-on-one where it's following up on conversations around hey, this person mentioned they're really interested in growing their skill. How can I help them? How can I help find opportunities?
Or I know that they're currently stretching their skill set right now. If I have some extra time, then I can check in with them. I can pair with them. I can see how things are going. So I find that while the one-on-ones are the staple thing that happens every two weeks, there's a lot of other behind-the-scenes work that's going on as well to make sure that that person is growing and feeling really fulfilled by their work.
ROB: I know we've spoken a lot about the product side and the client side of working on the new project that I'm working on. There are some interesting technical sides to it as well. The client has found that they have had some issues with Haskell and running on M1 Macs. And so, they've decided to take the leap and use GitHub Codespaces as their primary development environment, which has been interesting. I had heard about it but only in the background. I hadn't read anything about it or hadn't had any direct conversations. I just heard that there was a thing. So it's been quite interesting to play with that.
It's interesting the way the client is using it as well because they're using a Dockerized environment effectively inside Docker by using Codespaces. So you start the Codespace, which very basically is a Docker instance somewhere on GitHub's infrastructure. It's built very much for Visual Studio Code, and so you can just directly attach your Visual Studio Code session to the Codespace and go from there, but I'm a Vim user. I've started to feel like a bit of an old guard or a curmudgeon recently where I've been like; maybe I need to use Visual Studio Code. Maybe I should just unlearn my Vim key bindings and learn the Visual Studio ones.
And people say, "Oh, you could just use The Vim key bindings in VS Code." I'm like, that's cheating. I spent the time to learn the key bindings for Vim. I will take the time to learn the key bindings for Visual Studio Code and use it for the way it's intended. So it's been interesting to understand how Codespaces works, not necessarily in the way, it's intended. So you can still SSH into a Codespace session, but then you lose all the lovely setup stuff that you might have on your local machine.
So I did spend half a day porting my dotfiles, which are based off thoughtbot's dotfiles, into something that Codespaces can use, and I made it publicly available. If you go to github.com/purinkle/codespace, you can see what I use to set up my Codespace environment. And once that's set up, it becomes a bit easier because then you have all the things that you're used to running locally. It's very much early days for how the client is using it, so they're really open to saying, okay, let's find out what's not working and figure out how to get it up and running properly.
So one of the things we do find is that Codespaces do time out after a while. And then you might lose things; even if I've created a tmux session, that tmux session disappears, and I have to go in and create it again. I'm not sure what the timeouts are; I haven't had time to look into them yet. But that's definitely the main pain point at the moment of using it as a development environment.
It's been interesting. It's been kicking around in the back of my head like the difference between developing locally and deploying locally. And it's something that I wanted to talk to people at thoughtbot and outside of thoughtbot as well to understand that more. Because I don't think you need everything running to develop locally, but you might need it to deploy locally. It's interesting to me to understand how different companies work on their products from that point of view.
STEPH: Yeah, I'm selfishly excited that you are using Codespaces for a client project because I have kept an eye on it, and I'm very intrigued by it. But I also haven't used it for a project. And it sounds really neat. I'm curious, have you found that it has helped them with onboarding or if you need to switch from working on one application to another? Have you found that it has helped them with some of those? I'm guessing that's the problem that they're optimizing to solve is how do we help people run everything quickly without having to set it up locally?
ROB: It's an interesting question because I don't have the comparison of trying to set up the environment as it was before. It was smoother, though. The main thing was access tokens: once you've set up your SSH keys and your GitHub tokens, it's just a case of running a script and letting it run. So yes, from that point of view, I can imagine that if I tried to set up their previous environment, it'd be a lot more challenging, because they were using Vagrant and running things that way, which I know from experience would not be fun.
And I know that my Mac fans would just be spinning all the time. It would be like an aeroplane was trying to take off. So I'm thankful for that, that I don't have that experience anymore that my machine is going to slow down all the time. We've had on a previous client who had a Dockerized environment, but you have to have it all running on your machine. There are pros and cons to everything with these things. And it's like you said, what is the problem they're trying to solve with introducing this setup?
STEPH: Yeah, I can't decide if this is a good thing or a bad thing. But I'm also intrigued by the idea that if a team is using Codespaces, then that means everybody is using VS Code. You can still customize it, so you can still have your own preferences, but it does set a standard where everybody is using the same editor. There's a lot of cross-collaboration in terms of, if you do run into an issue, you can help each other out.
Versus when I join other teams, everybody's using their preferred editor, and then there you may have a day where someone's like, "Oh, I'm really stuck because my particular editor is suddenly having a problem and can't connect." And then you have less people that are able to help them if they're not using that same editor. And I can't decide if I like that or if I hate it [laughs] in terms of taking away people's ability to pick and choose their editor. But then the gains of everybody is using the same thing which is nice and would be really great for pairing too.
ROB: Yeah, that's an interesting point. I was talking to...I have a management coach. He's a PHP developer, and I'm a Rails developer. And we were talking about the homogenization of things nowadays. And is that good, or is that bad to use with stuff like RuboCop that lints everything, so it's exactly the same? Does that stifle creativity? But then, at the same time, the thing I like about Codespaces is I think we're biased coming at it from the point of view of Rails developers.
And if you look at how you can use Codespaces in the browser directly from GitHub, that's quite interesting because now you're lowering the barrier to entry to get started and saying you don't need to have an editor. You don't have to set up everything. You can just do it from your browser. A few years ago, I used to volunteer or coach at an organization called codebar. They help people who are less represented in the tech community get represented in the tech community. And we would see a lot of people coming for sessions using...I forgot what it's called. What was it called? Cloud 66 or something.
There was some remote development environment that people would come and say, "Oh, I've been using this," because they didn't know how to set up the necessary infrastructure to just get a Rails server going or things like that or didn't know how to set up Sublime or Atom or editor of choice. And it's really interesting if you remove your bias of 15 years of professional software development and go okay, if I were starting today, what would the environment look like, and how would I get started?
I'm lucky enough that I've grown up with the web and seen how web development has changed and been able to gain more knowledge as it's appeared. I don't envy anybody who has to come into the industry now and suddenly have to drink from this firehose of all these different frameworks, all these different technologies. Yeah, I started off by just right-clicking and viewing source on HTML files back in 1998 or something ridiculous like that. And CSS didn't even exist or wasn't used. And so it's a much different world than 24 years ago.
STEPH: That is something that Chris and I have mentioned on previous episodes where people are coming into software development, and as much as we love Vim and it sounds like you love Vim, our advice is don't start with Vim. Don't start there. You've got so much to learn. Start with something like VS Code that's going to help you out.
And you make such a great point in regards to this lowering the barrier to entry. Because I have been part of a number of classes where you have people coming in with Macs or with a Windows machine, and then you're trying to get everybody set up. You want them to use the same browser for testing. And we spend like a whole class just getting everybody on the same page and making sure their machines are working or then troubleshooting if something's not.
But if they can just go to GitHub and then they can run things seamlessly there, that's a total game-changer in terms of how I would teach a class, and it would just be far easier. So I hadn't even considered the benefits that would have for teachers or just for onboarding teams as well. But yeah, specifically for leading a class, I think that is a huge benefit.
GitHub did some pretty cool stuff around when they were launching that as well because I went back and watched some of their GitHub Universe sessions that they had where they were talking about Codespaces. And one of the things that they did that I really appreciated was how they went about launching Codespaces. So initially, it was how fast can this be? Or what's our proof of concept? And I think when they were building this, they found it took about 45 minutes if they wanted to spin up an application and then provide you a development environment. And they're like, okay, cool, like, we can do this, but it's 45 minutes, and that's not going to work.
And so then their next iteration, they got it down to 25 minutes, and then they got it down to 5 minutes. And now they've got it to the point that it's instantaneous because they're building stuff in the background overnight. And so then that way, when you click on it, it's just all ready for you. But I loved that cycle, that process that they went through of can we even do this? And then let's see, slowly, incrementally, how fast can we get it?
And then, to get feedback, instead of transitioning their own internal teams to it right away, they created this more public club. I think they called it The Computer Club, something like that. And they're like, hey, if you want to be part of Codespaces or try out this new feature that we have, delete all the source and the things that you need locally, and then just commit to using Codespaces. And then, if you are stuck or if you have trouble, then your job is to let us know so then we can iterate, and we can fix it. I really liked that approach that they took to launching this product and then getting feedback from everyone and then improving upon it.
ROB: Yeah, that sounds like an Agile developer's dream where you just put something out there that's the bare bones, and you're given license to learn from that experience and how people are actually using that tool. That's something we've actually tried to do on the client project at the moment is adding all the...now that there's a different flow in Germany, there are different questions we need to ask. And so that could be quite a complex thing to put into place.
So what we said is what we're going to do is just put in the different screens, and all you have is one option to click. So you click that option, you go to the next one, go to the next one, go to the next one. Then we have something that the customer can click on and play with and understand, and then we can iterate on top of that. But it also allows us to identify areas of risk because you can go, oh, where does this information come from? But now we need to get this from a third-party service.
So that's the riskiest thing we've got to work on here, where this other thing is just a hard-coded list of three-door or five-door cars. And so that's an easier problem to solve. So allowing yourself to put something that could be quite complex like GitHub Codespaces and go okay, we're going to put something out there. It takes 45 minutes to run up. But we're telling you it takes 45 minutes to run it. We're not happy with it, but we want to learn how you're using it so that we can then improve it but improve it in the right direction.
Because it might be that we get it to 20 minutes to start up, but you need it in half a second. That's a ridiculous example. Or it might be that you need to be able to use RubyMine with it instead of VS Code, and that's where the market isn't. That's the thing that you can't learn in isolation that you have to put something out there for people to use and play with.
STEPH: There's one other cool feature I want to highlight that I realized that they offer as well. So in the past, I've used a tool called ngrok, which you can use to make your localhost public so other people can access it. You can literally demo what you're working on locally, and someone else can access it. And I think that it's very cool. It's come in handy a number of times. And my understanding is that Codespaces has that feature where they can make your localhost accessible. So your work in progress you can then share with someone, and I love that.
ROB: Oh, that's really interesting. I didn't know you could do that. I know you could forward ports from your local machine to that. But I didn't know you could share it externally. That'd be really cool. I can see how that can be really helpful in demos and pairing. And it makes sense because it's not running on your computer. It's running on some remote architecture somewhere. That's interesting.
STEPH: Well, that's the dream I've been sold from what I've been reading about GitHub Codespaces. So if I'm telling lies, you let me know [laughs] as you're working further in it than I am. But yeah, that was one of the features that I read, and I was like, yeah, that's great because I love ngrok for that purpose. And it would be really cool if that's already built into Codespaces as well.
ROB: ngrok is really interesting with things like trying to get third-party services to work. So from, the previous client, they wanted an Alexa Skill. And so, if you're trying to work with an Alexa Skill, you have to sign in from Amazon's architecture onto your local machine. You have to use ngrok as the tool there. So I wonder if that could potentially solve a problem where if there are three developers trying to develop on this if you could point to one Codespace that you're all working on rather than...
Because the problem we had was if me or Fritz or Rakesh was working on this, we'd have to go and then change the settings on the Amazon Alexa Skill to point to a different machine. Whereas I wonder if Codespaces allows you to have this entry point, you could point to like thoughtbot.codespace.github.com or something like that that would then allow you to share that instance. That's something interesting that I think about now. I wonder if you could share Codespace instances amongst each other. I don't know.
STEPH: Yeah, I'm intrigued too. That sounds like it'd be really helpful. So circling back just a bit to where we were talking about wearing different hats in terms of working on client work, and then also working on the team, and then also potentially some sales work as well, I'm curious, how do you balance that transition? How do you balance solving hard problems in a codebase and then also transition to solving hard problems in the management space? How do you make all of that fit cohesively in your day or your week?
ROB: The main thing that somebody said to me recently is that you can only do so much in a day, and it's about the order that you approach those things. And just be content with the fact that you're not going to get everything done. But you have to make sure that you work on things in the right order and just take your time and then work through them. I read a really good book recently that was recommended to me by my coach called Time Off. And it's all about finding your rest ethic, which sounds a bit abstract and a bit weird. But all it is it's about understanding that you can't be working 100% all the time. It's not possible.
As developers, sometimes we can forget that we're creative people, and creativity comes from a part of your brain that works subconsciously. So it's important for you to take breaks throughout the day and kind of go okay; I use the Pomodoro Technique. So I have an app that runs, and every 25 minutes, I just take a little break. I don't use it in the way that it's supposed to be used. I just use it to give me a trigger to have a break every 25 minutes. And so in that time, I'll just step away from my computer. I'll walk to the kitchen, grab a glass of water.
I usually have a magazine or a book next to my table. So I have a magazine here at the moment. I'll just read a page of that just to kind of rest my eyes, so they focus at a different level but also just to get my brain thinking about something else. And it seems counterproductive that like, oh, you're stepping out of what you were doing. But then I find like, oh, I suddenly have a little refresher to like, oh, I need to get back into what I was doing. I know where I've got to go. That thing that I was thinking about now makes a little bit more sense. And even if it's a bigger break, give yourself the license to go for a walk and just kind of clear your head.
And a big thing about going for a walk is not to concentrate on completing the task of walking but to concentrate on the walk itself and taking in the things that are happening around you. And let your mind just kind of...you'll sometimes notice that oh, I can hear a bird. But that bird's been chirping for five minutes, and you didn't notice because your mind's kind of going. And if you concentrate on, I just want to complete this walk, that's what I'm out here to do, then you lose that ability to let your mind reset. That's a big thing that I'm working on personally to concentrate on the doing rather than the getting done.
And it ties into the craft of being a software developer because if you concentrate on the actual writing of the code and the best practices that we all believe in, you end up with something better that you don't then have to revisit at a later time. Where if you just try and get something done, you're just going to end up having to come back to it or have to revisit in some other way. I've actually got a blog post coming out soon about notifications on phones. I'm a big believer that your phone belongs to you and that if your work wants you to have work notifications on your phone, then they could buy you a phone just for that purpose.
The only thing where I kind of draw the line is I have notifications for meetings on my phone because I can't think of another way to get those things to ping up at me. And I understand that there are jobs where you do need to have those sorts of notifications, especially things like where you're on call; it's a big thing. But when it comes to things where a manager wants to get a hold of you straight away, from a trust point of view, that's where I think things fall down. And you're questioning, like, okay, why does this person need to get hold of me at 7:00, 8:00, 9:00, 10:00 o'clock at night? And should I be available?
We bill by the day at thoughtbot. And so when I find, not when I find but when I talk to people, and they say, "Oh, I was still working at 7:30, 8:00 o'clock," I will say, "Why? You're devaluing your own time at that point because we're not billing any extra for that time. So you're making your craft and your skill...you're cheapening it." And I want them to relish the skills and competencies that they have. That's a big thing for me. We're very lucky at thoughtbot that we can draw a boundary at the end of the day and go, okay, that's it. There's no expectation for me. It is much more difficult at product companies.
But yeah, I think it's something that as an industry, and it's a bigger thing as a society, especially with younger people coming into the industry who have never worked in an office and may never work in an office, that idea of where is the cutoff? For so much of the pandemic, the people I would get concerned about the most are the people whose beds I could see behind them because I'm thinking to myself, you spend at least 16 hours a day in that same room.
And that's going to become the norm for people. And if people don't have those rest periods and those breaks and aren't given the opportunities to do that by their managers, then it's not going to end well. And happy people and fulfilled people do the best jobs from a business point of view. But that's never the way I approach it, but that's what I say to people.
STEPH: I think that's one of the biggest mistakes that I made early on in my career, and even now, I still have to coach myself through it. It's like you said, we are creative people, and that's people in software in general, not just developers; it's a creative craft. And I wouldn't step away to take breaks. I just thought if I pushed hard enough, I would figure it out, and then I could get done with my work because I was so focused on getting it done versus the doing, as you'd highlighted earlier.
I haven't really thought about it in that particular light of focusing on this is the thing that I'm working on. And yes, I do want to get it done, but let's also focus on the doing portion of it. And so I wouldn't step away for walks. I wouldn't step away for breaks. And that is something that I have learned the hard way that when I actually gave myself that time to breathe, if I gave myself a moment to relax, then I would come back refreshed and then ready to tackle whatever challenge was in front of me.
And same for keeping a magazine that's near my desk; I have found that if I keep a book or something that I enjoy...because, at some point, my brain is going to look for some rest, like, it happens. That's when we flip open Twitter or Instagram or emails or something because our brain is looking for something easy and maybe a little bit of like brain candy, something to give us a little hit.
And I have found that if I keep something else more intentional by my desk, something that I want to read or that I'm enjoying, then I find that when I am seeking something short that I can look at, I feel more relaxed and fulfilled from that versus if I go to Twitter, and then I see a bunch of stuff I don't like, and then I go back to work. [laughs] And it has the opposite effect of what I actually wanted to do with my downtime.
I love the sound of this book. We'll be sure to include a link in the show notes because it sounds like a really good book to read. And I've also worked on improving the setup with my phone and notifications, where I have compartmentalized all the work-related apps into one folder, and then I keep it on the third screen of my phone.
So if I want to see something that's work-related, it's very intentional of like, I have to scroll past all of the stuff that matters to me outside of work and then get to that work section and then click in that folder to then see like, okay, this is where I have Slack, and Gmail, and Basecamp, and all the other things that I might need for work. And I have found that has really helped me because I do still have the notifications on my phone, but at least putting it on its own screen further away from the home screen has been really helpful.
ROB: Do you find that you still get distracted by that, though, when you're in the flow of doing something else?
STEPH: I don't with my phone. I am a person who ignores my phone really well. I don't know if that's a good thing or a bad thing, [laughs] but it is a truth of who I am where I'm pretty good at ignoring my phone.
ROB: That's a good skill to have. If there's any phone in the room and a notification goes off, my head swivels, and I pivot, and I'm like, oh, yeah, some dopamine hit over there that I can get from looking at somebody else's notification.
STEPH: I have noticed that in the other people that I'm around. Yeah, it's that sound that just triggers people like, oh, I got to look. And even if you know it's not your phone like you heard someone else's phone ding, it still makes you check your phone even though probably there's a part of your brain that recognizes like, that wasn't mine, but I'm still going to check anyways. And I have worked hard to fight that where even if I hear my phone go off, I'm like, okay, cool, I'll get to it. I'll check it when I need to. And I'm that person that whenever apps always ask me, "Can we send you notifications?" I'm like, no, you may not send me notifications. [laughs]
Something else you said that I haven't thought about until just now is the idea that there are some people who have never worked in an office or may never work in an office because we are leaning into more remote jobs. And that is fascinating to me to think about that someone won't have had that experience. But you make such a good point that we need to start thinking about these boundaries now and how we manage our remote work and our home life because this is, going forward, going to be the new norm for a number of people. So how do we go ahead and start putting good practices in place for those future workers?
ROB: One of the things, as we've hired people from a remote point of view who've only worked with thoughtbot remotely, is the idea of visibility. And I don't mean the visibility of I want to see when somebody's working but maybe the invisibility of people. Because you can't see when people are taking breaks, you assume that everybody is working all the time, and so then you don't take those breaks. And so this is something we saw with people who we hired in the first six months of being remote. And they were burning out because they didn't realize that other people were taking breaks. Because they didn't know about the cultural norms of how we worked at thoughtbot.
But people who had worked in the studio would know that people would get up and have breaks. People would get up and go get a coffee from a coffee shop and then have a walk around. They didn't know that that was the culture because they bring the culture from other places with them. But then it's much harder to get people to understand your way of working and how we think that we should approach things when you are sat in isolation in a room with a screen. And that's something that we've had to say to people to break that down.
And even things that we took for granted when we worked in a studio where somebody would get up and ask somebody if they could pair with them even if they weren't on the same project. Somebody might have more Elm knowledge or React Native knowledge, or Elixir knowledge. And you'd get up and say, "Hey, can I borrow some of your time just to go over this thing, to pair?" And everybody would say, "Yeah, yeah, I can find some time. If not now, we can do it later." And recently, we've had people saying, "Oh, is it okay if we pair across projects? Is it okay if we pair with other people?" It's like, "Yeah, pair."
One of the big things we say is that we have this vast amount of knowledge across thoughtbot, across the world that we can tap into and that you can use. And that's just one example of how do you get those core things that you take for granted and help people understand them? Because you don't know what people don't know. And it's all about that implied knowledge. So that's something that we learned. And we try and say to people and instill in them about yeah, take breaks. You can pair with people.
There are people who bring in culture from other places with them. But then, to go back to where you started, how do you start with people who have no culture with them or have the culture of coming from maybe from school, or university, or from a different industry? How do you help those people add to your culture but also learn from your culture at the same time? Big people problems.
STEPH: Have you found any helpful strategies to normalize that take a break culture?
ROB: One thing we tried, but it doesn't last very long because people are lazy, is putting it in Slack saying, "I'm going for a break." And you can do that, but it's so artificial. After a week or two weeks, people just stopped doing it. It was through conversation. We have a regular retrospective as the Launchpad II team where we talk about what is working, what isn't working. And we have such a trusting environment where people will say things along the lines of this isn't working for me, or I feel like I'm burning out. Then we will talk to each other about it and figure out where it comes from.
And it's a good point to raise that I don't think we have explicitly addressed it. But it is something that we will address. I'm not going to say could address; we will address it. I will talk to our latest hire, Dorian, who I have a one-on-one with next week, and to kind of talk to him about it. And we should maybe try and codify that in our handbook somewhere so everybody can learn from it, at least start a strategy and a conversation. Because I don't think it is something that we do talk about. It's the problem of being siloed and being remote and time zones as well.
A lot of stuff that Launchpad I knows Launchpad II doesn't necessarily know because we only have three, maybe, hours if people are based on the East Coast where we overlap. I have meetings with Geronda, who's our DEI Program Manager, and she lives in Seattle. And so sometimes I'll talk to her at 5:00 o'clock, and it's 7:00 o'clock in the morning for her. And you have different energy levels. But yeah, so we spend time to try and figure out how we work together.
STEPH: Yeah, I like that idea of highlighting that we take breaks somewhere that's part of your expectations as part of your role. Like, this is an expectation of your role; you're going to take breaks. You're going to step away for lunch. You're going to stick to a certain set of hours in terms of having like an eight-hour workday with a healthy lunch break in there. I think that's a really good idea.
On the Boost team, I have found that people have adopted the habit of not always but typically sharing of, like, "Hey, I'm stepping away for a coffee break," or "I'm having lunch. Maybe like a late lunch, but I'm taking it," Or "I am stepping away for a walk." You often see later in the afternoon where there are a number of people that are then saying, "Hey, I'm going for a walk."
And I feel that definitely helps me when I see it every day to reinforce like, yes; I should do this too because I already admitted I'm bad at this. So it helps reinforce it for me when I see other people saying that as well. But then I can see that that takes time to build that into a team's culture or to find easy ways to share that. So just putting it upfront in like a role expectation also feels like a really good place to then highlight and then to reinforce it as then people are setting that example.
ROB: One thing that Nick Charlton tried to introduce was a Strava group. There's a thoughtbot Strava group. So you can see if people are members of it that they've been walking and things like that. It was quite an interesting way to automate it. I think it fell off a cliff. But it was something that we did try to how can we make the visibility of this a little bit easier? But yeah, the best thing I've seen is, like you say, having that notification in Slack or somewhere where you can see that other people are stepping away from their keyboards.
STEPH: Well, as you mentioned, solving people problems is totally easy, you know. It's a totally trivial task although I'm sure we could spend too many hours talking about it. All right, so I do have one more very important question for you, Rob. And this goes back to a debate that Chris and I are having, and I'd love to get you to weigh in on it. So there are Pop-Tarts, these things called Pop-Tarts in the world. And I don't know if you're a fan, but if you were given the option to eat a Pop-Tart with frosting or a Pop-Tart without frosting, which one do you think you would choose?
ROB: That's an interesting question. Is there a specific flavor? Because I think that the Strawberry Pop-Tart I would have with frosting but maybe the chocolate one I have without. I know there are all sorts of exotic flavors of Pop-Tarts. But I think I would edge towards with frosting as a default. That's my undiplomatic answer.
STEPH: I like that nuanced answer. I also like how you refer to the flavors as exotic. I think that was very kind of you [laughs] given the, like, melon crush or other wild flavors that they have. Awesome. All right. Well, I think that's a perfect note for us to wrap up.
Rob, thank you so much for coming on the show and for bringing up all of these wonderful ideas and topics and sharing your experience with Codespaces. For folks that are interested in following your work or interested in getting in touch with you, where's the best place for them to do that?
ROB: Yeah, thank you so much for having me. It's been fantastic to have a chat. If people do want to find me, the best place would be on Twitter. So my handle on Twitter is @purinkle which I understand is hard for people to maybe understand via a podcast, but we'll put a link in the show notes so people can find me more easily.
And that's probably also a good time to say that I am actually trying to find a development team lead to join our Launchpad II team. So we are looking for somebody who lives in Europe, Middle East, or Africa to join our team as a developer and manager of two to three people. There's more information on the thoughtbot website, and I do tweet about it very, very often. So feel free to reach out to me if that's of any interest to you.
STEPH: Awesome. We'll be sure to include a link to that in the show notes as well. On that note, shall we wrap up?
ROB: Yeah, let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph has a baby update and thoughts on movies, plus a question for Chris related to migrating Test Unit tests to RSpec.
Chris watched a video from Google I/O where Chrome devs talked about a new feature called Page Transitions. He's also been working with a tool called Customer.io, an omnichannel communication whiz-bang adventure!
Page transitions Overview
Using yield_self for composable ActiveRecord relations
A Case for Query Objects in Rails
Customer.io
Turning the database inside-out with Apache Samza | Confluent
Datomic
About CRDTs • Conflict-free Replicated Data Types
Apache Kafka
Resilient Management | A book for new managers in tech
Mixpanel: Product Analytics for Mobile, Web, & More
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Golden roads are golden. Okay, everybody's got golden roads. You have golden roads, yes? That is what we're --
STEPH: Oh, I have golden roads, yes.
[laughter]
CHRIS: You might should inform that you've got golden roads, yeah.
STEPH: Yeah, I don't know if I say might should as much but might could. I have been called out for that one a lot; I might could do that.
CHRIS: [laughs]
STEPH: That one just feels more natural to me than normal. Anytime someone calls it out, I'm like, yeah, what about it?
[laughter]
CHRIS: Do you want to fight?
STEPH: Yeah, are we going to fight?
CHRIS: I might could fight you.
STEPH: I might could. I might should.
[laughter]
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. I have a couple of fun updates. I have a baby Viccari update, so little baby weighs about two pounds now, two pounds. I'm 25 weeks along. So not that I actually know the exact weight, I'm using all those apps that estimate based on how far along you are, so around two pounds, which is novel. Oh, and then the other thing I'm excited to tell you about...I'm not sure how I should feel that I just got more excited about this other thing. I'm very excited about baby Viccari.
But the other thing is there's a new Jurassic Park movie coming out, and I'm very excited. I think it's June 10th is when it comes out. And given how much we have sung that theme song to each other and make references to what a clever girl, I needed to share that with you. Maybe you already know, maybe you're already in the loop, but if you don't, it's coming.
CHRIS: Yeah, the internet likes to yell things like that. Have you watched all of the most recent ones? There are like two, and I think this will be the third in the revisiting or whatever, the Jurassic World version or something like that. But have you watched the others?
STEPH: I haven't seen all of them. So I've, of course, seen the first one. I saw the one that Chris Pratt was in, and now he's in the latest one. But I think I've missed...maybe there's like two in the middle there. I have not watched those.
CHRIS: There are three in the original trilogy, and then there are three now in the new trilogy, which now it's ending, and they got everybody.
STEPH: Oh, I'm behind.
CHRIS: They got people from the first one, and they got the people from the second trilogy. They just got everybody, and that's exciting. You know, it's that thing where they tap into nostalgia, and they take advantage of us via it. But I'm fine. I'm here for it.
STEPH: I'm here for it, especially for Jurassic Park. But then there's also a new Top Gun movie coming out, which, I'll be honest, I'm totally going to watch. But I really didn't remember the first one. I don't know that I've really ever watched the first Top Gun. So Tim, my partner, and I watched that recently, and it's such a bad movie. I'm going to say it; [laughs] it's a bad movie.
CHRIS: I mean, I don't want to disagree, but the volleyball scene, come on, come on, the volleyball scene.
[laughter]
STEPH: I mean, I totally had a good time watching the movie. But the one part that I finally kept complaining about is because every time they showed the lead female character, I can't think of her character name or the actress's name, but they kept playing that song, Take My Breath Away. And I was like, can we just get past the song? [laughs] Because if you go back and watch that movie, I swear they play it like six different times. It was a lot. It was too much. So I moved it into bad movie category but bad movie totally worth watching, whatever category that is.
CHRIS: Now I kind of want to revisit it. I feel like the drinking game writes itself. But at a minimum, anytime Take My Breath Away plays, yeah. Well, all right, good to know. [laughs]
STEPH: Well, if you do that, let me know how many shots or beers you drink because I think it will be a fair amount. I think it will be more than five.
CHRIS: Yeah, it involves a delicate calibration to get that right. I don't think it's the sort of thing you just freehand. It writes itself but also, you want someone who's tried it before you so that you're not like, oh no, it's every time they show a jet. That was too many. You can't drink that much while watching this movie.
STEPH: Yeah, that would be death by Top Gun.
CHRIS: But not the normal way, the different, indirect death by Top Gun.
STEPH: I don't know what the normal way is. [laughs]
CHRIS: Like getting shot down by a Top Gun pilot.
[laughter]
STEPH: Yeah, that makes sense. [laughs]
CHRIS: You know, the dogfighting in the plane.
STEPH: The actual, yeah, going to war away. Just sitting on your couch and you drink too much poison away, yeah, that one. All right, that got weird. Moving on, [laughs] there's a new Jurassic Park movie. We're going to land on that note.
And in the more technical world, I've got a couple of things on my mind. One of them is I have a question for you. I'm very excited to run this by you because I could use a friend in helping me decide what to do. So I am still on that journey where I am migrating Test::Unit test over to RSpec.
And as I'm going through, it's going pretty well, but it's a little complicated because some of the Test::Unit tests have different setup than, say, the RSpec tests do. They might run different scripts beforehand where they're loading data. That's perhaps a different topic, but that's happening. And so that has changed a few things.
But then overall, I've just been really just porting everything over, like, hey, if it exists in the Test::Unit, let's just bring it to RSpec, and then I'm going to change these asserts to expects and really not make any changes from there. But as I'm doing that, I'm seeing areas that I want to improve and things that I want to clear up, even if it's just extracting a variable name.
Or, as I'm moving some of these over in Test::Unit, it's not clear to me exactly what the test is about. Like, it looks more like a method name in the way that the test is being described, but the actual behavior isn't clear to me as if I were writing this in RSpec, I think it would have more of a clear description. Maybe that's not specific to the actual testing framework. That might just be how these tests are set up.
But I'm at that point where I'm questioning should I keep going in terms of where I am just copying everything over from Test::Unit and then moving it over to RSpec? Because ultimately, that is the goal, to migrate over. Or should I also include some time to then go back and clean up and try to add some clarity, maybe extract some variable names, see if I can reduce some lets, maybe even reduce some of the test helpers that I'm bringing over?
How much cleanup should be involved, zero, lots? I don't know. I don't know what that...[laughs] I'm sure there's a middle ground in there somewhere. But I'm having trouble discerning for myself what's the right amount because this feels like one of those areas where if I don't do any cleanup, I'm not coming back to it, like, that's just the truth. So it's either now, or I have no idea when and maybe never.
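For readers following along, the kind of mechanical port Steph is describing looks roughly like the sketch below; the Order model and test names are made up for illustration and are not from the client codebase.

```ruby
# Before: a Test::Unit-style test (hypothetical Order model).
require "test/unit"

class OrderTest < Test::Unit::TestCase
  def test_total
    order = Order.new(quantity: 2, unit_price: 5)
    assert_equal 10, order.total
  end
end

# After: the same test ported to RSpec, changing asserts to expects
# and as little else as possible.
RSpec.describe Order do
  it "calculates the total" do
    order = Order.new(quantity: 2, unit_price: 5)
    expect(order.total).to eq(10)
  end
end
```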
CHRIS: I'll be honest, the first thing that came to mind in this most recent time that you mentioned this is, did we consider just deleting these tests entirely? Is that on...like, there are very few of them, right? Like, are they even providing enough value? So that was question one, which let me pause to see what your thoughts there were. [chuckles]
STEPH: I don't know if we specifically talked about that on the mic, but yes, that has been considered. And the team that owns those tests has said, "No, please don't delete them. We do get value from them." So we can port them over to RSpec, but we don't have time to port them over to RSpec. So we just need to keep letting them go on. But yet, not porting them conflicts with my goal of then trying to speed up CI. And so it'd be nice to collapse these Test::Unit tests over to RSpec because then that would bring our CI build down by several meaningful minutes. And also, it would reduce some of the complexity in the CI setup.
CHRIS: Gotcha. Okay, so now, having set that aside, I always ask the first question of like, can you just put Derek Prior’s phone number on the webpage and call it an app? Is that the MVP of this app? No? Okay, all right, we have to build more. But yeah, I think to answer it and in a general way of trying to answer a broader set of questions here...
I think this falls into a category of like if you find yourself having to move around some code, if that code is just comfortably running and the main thing you need to do is just to get it ported over to RSpec, I would probably do as little other work as possible. With the one consideration that if you find yourself needing to deeply load up the context of these tests like actually understand them in order to do the porting, then I would probably take advantage of that context because it's hard to get your head into a given piece of code, test or otherwise.
And so if you're in there and you're like, well, now that I'm here, I can definitely see that we could rearrange some stuff and just definitively make it better, if you get to that place, I would consider it. But if this ends up being mostly a pretty rote transformation like you said, asserts become expects, and lets get switched around, you know, that sort of stuff, if it's a very mechanical process of getting done, I would probably say very minimal.
But again, if there is that, like, you know what? I had to understand the test in order to port them anyway, so while I'm here, let me take advantage of that, that's probably the thing that I would consider. But if not that, then I would say even though it's messy and whatnot and your inclination would be to clean it, I would say leave it roughly as is. That's my guess or how I would approach it.
STEPH: Yeah, I love that. I love how you pointed out, like, did you build up the context? Because you're right, in a lot of these test cases, I'm not, or I'm trying really hard to not build up context. I'm trying very hard to just move them over and, if I have to, mainly to find test descriptions. That's the main area I'm struggling to...how can I more explicitly state what this test does so the next person reading this will have more comprehension than I do? But otherwise, I'm trying hard to not have any real context around it.
And that's such a good point because that's often...when someone else is in the middle of something, and they're deciding whether to include that cleanup or refactor or improvement, one of my suggestions is like, hey, we've got the context now. Let's go with it. But if you've built up very little context, then that's not a really good catalyst or reason to then dig in deeper and apply that cleanup. That's super helpful. Thank you. That will help reinforce what I'm going to do, which is exactly let's migrate RSpec and not worry about cleanup, which feels terrible; I'm just going to say that into the world. But it also feels like the right thing to do.
CHRIS: Well, I'm happy to have helped. And I share the like, and it feels terrible. I want to do the right thing, but sometimes you got to pick a battle sort of thing.
STEPH: Cool. Well, that's a huge help to me. What's going on in your world?
CHRIS: What's going on in my world? I watched a great video the other day from the Google I/O. I think it's an event; I'm not actually sure, conference or something like that. But it was some Google Chrome developers talking about a new feature that's coming to the platform called Page Transitions. And I've kept an eye on this for a while, but it seems like it's more real. Like, I think they put out an RFC or an initial sort of set of ideas a while back. And the web community was like, "Oh, that's not going to work out so well."
So they went back to the drawing board, revisited. I've actually implemented in Chrome Canary a version of the API. And then, in the video that I watched, which we'll include a show notes link to, they demoed the functionality of the Page Transitions API and showed what you can do. And it's super cool. It allows for the sort of animations that you see in a lot of native mobile apps where you're looking at a ListView, you click on one of the items, and it grows to fill the whole screen. And now you're on the detail screen for that item that you were looking at.
But there was this very continuous animated transition that allows you to keep context in your head and all of those sorts of nice things. And this just really helps to bridge that gap between, like, the web often lags behind the native mobile platforms in terms of the experiences that we can build. So it was really interesting to see what they've been able to pull off. The demo is a pretty short video, but it shows a couple of different variations of what you can build with it. And I was like, yeah, these look like cool native app transitions, really nifty.
One thing that's very interesting is the actual implementation of this. So it's like you have one version of the page, and then you transition to a new version of the page, and in doing so, you want to animate between them. And the way that they do it is they have the first version of the page. They take a screenshot of it like the browser engine takes a screenshot of it. And then they put that picture on top of the actual browser page. Then they do the same thing with the next version of the page that they're going to transition to. And then they crossfade, like, change the opacity and size and whatnot between the two different images, and then you're there.
And in the back of my mind, I'm like, I'm sorry, what now? You did which? I'm like, is this the genius solution that actually makes this work and is performant? And I wonder if there are trade-offs. Like, do you lose interactivity between those because you've got some images that are just on the screen? And what is that like? But as they were going through it, I was just like, wait, I'm sorry, you did what? This is either the best idea I've ever heard, or I'm not so sure about this.
STEPH: That's fascinating. You had me with the first part in terms of they take a screenshot of the page that you're leaving. I'm like, yeah, that's a great idea. And then talking about taking a picture of the other page because then you have to load it but not show it to the user that it's loaded. And then take a picture of it, and then show them the picture of the loaded page. But then actually, like you said, then crossfade and then bring in the real functionality. I am...what am I?
[laughter]
CHRIS: What am I actually?
STEPH: [laughs] What am I? I'm shocked. I'm surprised that that is so performant. Because yeah, I also wouldn't have thought of that, or I would have immediately have thought like, there's no way that's going to be performant enough. But that's fascinating.
CHRIS: For me, performance seems more manageable, but it's the like, what are you trading off for that? Because that sounds like a hack. That sounds like the sort of thing I would recommend if we need to get an MVP out next week. And I'm like, what if we just tried this? Listen, it's got some trade-offs. So I'm really interested to see are those trade-offs present? Because it's the browser engine. It's, you know, the low-level platform that's actually managing this. And there are some nice hooks that allow you to control it. And at a CSS level, you can manage it and use keyframe animations to control the transition more directly.
There's a JavaScript API to instrument the sequencing of things. And so it's giving you the right primitives and the right hooks. And the fact that the implementation happens to use pictures or screenshots, to use a slightly different word, it's like, okey dokey, that's what we're doing. Sounds fun. So I'm super interested because the functionality is deeply, deeply interesting to me.
Svelte actually has a version of this, the crossfade utility, but you have to still really think about how do you sequence between the two pages and how do you do the connective tissue there? And then Svelte will manage it for you if you do all the right stuff. But the wiring up is somewhat complicated. So having this in the browser engine is really interesting to me. But yeah, pictures.
STEPH: This is one of those ideas where I can't decide if this was someone who is very new to the team and new to the idea and was like, "Have we considered screenshots? Have we considered pictures?" Or if this is like the uber senior person on the team that was like, "Yeah, this will totally work with screenshots." I can't decide where in that range this idea falls, which I think makes me love it even more. Because it's very straightforward of like, hey, what if we just tried this? And it's working, so cool, cool, cool.
CHRIS: There's a fantastic meme that's been making the rounds where it's a bell curve, and it's like, early in your career, middle of your career, late in your career. And so early in your career, you're like, everything in one file, all lines of code that's just where they go. And then in the middle of your career, you're like, no, no, no, we need different concerns, and files, and organizational structures.
And then end of your career...and this was coming up in reference to the TypeScript team seems to have just thrown everything into one file. And it's the thing that they've migrated to over time. And so they have this many, many line file that is basically the TypeScript engine all in one file. And so it was a joke of like, they definitely know what they're doing with programming. They're not just starting last week sort of thing. And so it's this funny arc that certain things can go through.
So I think that's an excellent summary there [laughs] of like, I think it was folks who have thought about this really hard. But I kind of hope it was someone who was just like, "I'm new here. But have we thought about pictures? What about pictures?" I also am a little worried that I just deeply misunderstood [laughs] the representation but glossed over it in the video, and I'm like, that sounds interesting. So hopefully, I'm not just wildly off base here. [laughs] But nonetheless, the functionality looks very interesting.
STEPH: That would be a hilarious tweet. You know, I've been waiting for that moment where I've said something that I understood into the mic and someone on Twitter just being like, well, good try, but... [laughs]
CHRIS: We had a couple of minutes where we tried to figure out what the opposite of ranting was, and we came up with pranting and made up a word instead of going with praising or raving. No, that's what it is, raving. [laughs]
STEPH: No, raving. I will never forget now, raving. [laughs]
CHRIS: So, I mean, we've done this before.
STEPH: That's true. Although they were nice, I don't think they tweeted. I think they sent in an email. They were like, "Hey, friends."
[laughter]
CHRIS: Actually, we got a handful of emails on that.
[laughter]
STEPH: Did you know the English language?
CHRIS: Thank you, kind Bikeshed audience, for not shaming us in public. I mean, feel free if you feel like it. [laughs]
But one other thing that came up in this video, though, is the speaker was describing how single-page apps are very common, and you want to have animated transitions and this and that. And I was like, single-page app, okay, fine. I don't like the terminology but whatever. I would like us to call it the client-side app or client-side routing or something else. But the fact that it's a single page is just a technical consideration; no user would call it that. Users are like, I go to the web app. I like that it has URLs. Those seem different to me. Anyway, this is my hill. I'm going to die on it.
But then the speaker in the video, in contrast to single-page app referenced multi-page app, and I was like, oh, come on, come on. I get it. Like, yes, there are just balls of JavaScript that you can download on the internet and have a dynamic graphics editor. But I think almost all good things on the web should have URLs, and that's what I would call the multiple pages. But again, that's just me griping about some stuff. And to name it, I don't think I'm just griping for griping sake.
Like, again, I like to think about things from the user perspective, and the URL being so important. And having worked with plenty of apps that are implemented in JavaScript and don't take the URL or the idea that we can have different routable resources seriously and everything is just one URL, that's a failure mode in my mind. We missed an opportunity here. So I think I'm saying a useful thing here and not just complaining on the internet. But with that, I will stop complaining on the internet and send it back over to you. What else is new in your world, Steph?
STEPH: I do remember the first time that you griped about it, and you were griping about URLs. And there was a part of me that was like, what is he talking about? [laughter] And then over time, I was like, oh, I get it now as I started actually working more in that world. But it took me a little bit to really appreciate that gripe and where you're coming from. And I agree; I think you're coming from a reasonable place, not that I'm biased at all as your co-host, but you know.
CHRIS: I really like the honest summary that you're giving of, like, honestly, the first time you said this, I let you go for a while, but I did not know what you were talking about. [laughs] And I was like, okay, good data point. I'm going to store that one away and think about it a bunch. But that's fine. I'm glad you're now hanging out with me still.
[laughter]
STEPH: Don't do that. Don't think about it a bunch. [laughs] Let's see, oh, something else that's going on in my world. I had a really fun pairing session with another thoughtboter where we were digging into query objects and essentially extracting some logic out of an ActiveRecord model and then giving that behavior its own space and elevated namespace in a query object. And one of the questions or one of the things that came up that we needed to incorporate was optional filters.
So say if you are searching for a pizza place that's nearby and you provide a city, but you don't provide what's the optional zip code, then we want to make sure that we don't apply the zip code in the where clause because then you would return all the pizza places that have a nil zip code, and that's just not what you want. So we need to respect the fact that not all the filters need to be applied. And there are a couple of ways to go about it. And it was a fun journey to see the different ways that we could structure it.
So one of the really good starting points is captured in a blog post by Derek Prior, which we'll include a link to in the show notes, and it's using yield_self for composable ActiveRecord relations. But essentially, it starts out with an example where it shows that you're assigning a variable to the result of an if statement. So it's like, hey, if the zip code is present, then let's filter by zip code; if not, then just give us back the original relation. And then you can just keep building on it from there.
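As a rough illustration of that starting point and the yield_self chaining that Derek builds toward, something like the following works; the PizzaPlace model and column names are made up for this sketch, not taken from either blog post or the client project.

```ruby
# Hypothetical query object that only applies the zip code filter when one is given.
class PizzaPlacesQuery
  def initialize(relation = PizzaPlace.all)
    @relation = relation
  end

  def call(city:, zip_code: nil)
    relation = @relation.where(city: city)

    # The "assign the result of an if statement" starting point:
    relation = if zip_code.present?
                 relation.where(zip_code: zip_code)
               else
                 relation
               end

    relation
  end

  # The same idea chained with yield_self (aliased as `then` since Ruby 2.6):
  def call_with_then(city:, zip_code: nil)
    @relation
      .where(city: city)
      .then { |rel| zip_code.present? ? rel.where(zip_code: zip_code) : rel }
  end
end
```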
And then there's a really nice implementation that Derek built on that then uses yield_self to pass the relation through, which then provides a really nice readability for as you are then stepping through each filter and which one should and shouldn't be applied. And now there's another blog post, and this one's written by Thiago Silva, A Case for Query Objects in Rails. And this one highlighted an approach that I haven't used before. And I initially had some mixed feelings about it.
But this approach uses the extending method, which is a method that's on ActiveRecord query methods. And it's used to extend the scope with additional methods. You can either do this by providing the name of a module or by providing a block. It's only going to apply to that instance or to that specific scope when you're using it. So it's not going to be like you're running an include or something like that where all instances are going to now have access to these methods.
So by using that method, extending, then you can create a module that says, "Hey, I want to create this by zip code filter that will then check if we have a zip code, let's apply it; if not, return the relation." And it also creates a really pretty chaining experience of like, here's my original class name. Let's extend with these specific scopes, and then we can say by zip code, by pizza topping, whatever else it is that we're looking to filter by.
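A minimal sketch of that extending approach, using the same made-up model and filter names rather than the actual code from the blog post:

```ruby
# Filters defined in a module and mixed into one relation via `extending`;
# each method returns the relation unchanged when its value is blank.
module PizzaPlaceFilters
  def by_city(city)
    city.present? ? where(city: city) : self
  end

  def by_zip_code(zip_code)
    zip_code.present? ? where(zip_code: zip_code) : self
  end
end

# Only this chain gains the filter methods; other PizzaPlace relations are unaffected.
PizzaPlace.all
  .extending(PizzaPlaceFilters)
  .by_city(params[:city])
  .by_zip_code(params[:zip_code])
```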
And I was initially...I saw the extending, and it made me nervous because I was like, oh, what all does this apply to? And is it going to impact anything outside of this class? But the more I've looked at it, the more I really like it. So I think you've seen this blog post too. And I'm curious, what are your thoughts about this?
CHRIS: I did see this blog post come through. I follow that thoughtbot blog real close because it turns out some of the best writing on the internet is on there. But I saw this...also, as an aside, I like that we've got two Derek Prior references in one episode. Let's see if we can go for three before the end. But one thing that did stand out to me in it is I have historically avoided scopes using scope like ActiveRecord macro thing. It's a class method, but like, it's magic. It does magic.
And a while ago, class methods and scopes became roughly equivalent, not exactly equivalent, but close enough. And for me, I want to use methods because I know stuff about methods. I know about default arguments. And I know about all of these different subtleties because they're just methods at the end of the day, whereas scopes are special; they have certain behavior. And I've never really known of the behavior beyond the fact that they get implemented in a different way. And so I was never really sold on them. And they're different enough from methods, and I know methods well. So I'm like, let's use the normal stuff where we can.
The one thing that's really interesting, though, is the returning nil that was mentioned in this blog post. If you return nil in a scope, it will handle that for you. Whereas all of my query objects have a like, well, if this thing applies, then scope dot or relation dot where blah, blah, blah, else return relation unchanged. And the fact that that natively exists within scope is interesting enough to make me reconsider my stance on scopes versus class methods. I think I'm still doing class method. But it is an interesting consideration that I was unaware of before.
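As a quick aside, the nil behavior Chris mentions looks roughly like this with a scope (hypothetical names again); when the scope body returns nil, ActiveRecord falls back to the unmodified relation instead of raising.

```ruby
class PizzaPlace < ApplicationRecord
  # When zip_code is blank the block returns nil, and ActiveRecord
  # treats the scope as a no-op, behaving like `all`.
  scope :by_zip_code, ->(zip_code) { where(zip_code: zip_code) if zip_code.present? }
end

PizzaPlace.by_zip_code(nil)     # behaves like PizzaPlace.all
PizzaPlace.by_zip_code("02101") # filtered relation
```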
STEPH: Yeah, it's an interesting point. I hadn't really considered as much whether I'm defining a class-level method versus a scope in this particular case. And I also didn't realize that scopes handle that nil case for you. That was one of the other things that I learned by reading through this blog post. I was like, oh, that is a nicety. Like, that is something that I get for free. So I agree. I think this is one of those things that I like enough that I'd really like to try it out more and then see how it goes and start to incorporate it into my process.
Because this feels like one of those common areas of where I get to it, and I'm like, how do I do this again? And yield_self was just complicated enough in terms of then using the fancy method method to then be able to call the method that I want that I was like, I don't remember how to do this. I had to look it up each time. But including this module with extending and then being able to use scopes that way feels like something that would be intuitive for me that then I could just pick up and run with each time.
CHRIS: If it helps, you can use then instead of yield_self because we did upgrade our Ruby a while back to have that change. But I don't think that actually solves the thing that you're describing. I'd have liked the ampersand method and then simple method name magic incantation that is part of the thing that Derek wrote up. I do use it when I write query objects, but I have to think about it or look it up each time and be like, how do I do that? All right, that's how I do that.
STEPH: Yeah, that's one of the things that I really appreciate is how often folks will go back and update blog posts, or they will add an addition to them to say, "Hey, there's something new that came out that then is still relevant to this topic." So then you can read through it and see the latest and the greatest. It's a really nice touch to a number of our blog posts. But yeah, that's what was on my mind regarding query objects. What else is going on in your world?
CHRIS: I have this growing feeling that I don't quite know what to do with. I think I've talked about it across some of our conversations in the world of observability. But broadly, I'm starting to like...I feel like my brain has shifted, and I now see the world slightly differently, and I can't go back. But I also don't know how to stick the landing and complete this transition in my brain. So it's basically everything's an event stream; this feels true. That's life. The arrow of time goes in one direction as far as I understand it. And I'm now starting to see it manifest in the code that we're writing.
Like, we have code to log things, and we have places where we want to log more intentionally. Then occasionally, we send stuff off to Sentry. And Sentry tells us when there are errors, that's great. But in a lot of places, we have both. Like, we will warn about something happening, and we'll send that to the logs because we want to have that in the logs, which is basically the whole history of what's happened. But we also have it in Sentry, but Sentry's version is just this expanded version that has a bunch more data about the user, and things, and the browser that they were in. But they're two variations on the same event.
And then similarly, analytics is this, like, third leg of well, this thing happened, and we want to know about it in the context. And what's been really interesting is we're working with a tool called Customer.io, which is an omnichannel communication whiz-bang adventure. For us, it does email, SMS, and push notifications. And it's integrated into our segment pipeline, so events flow in, events and users essentially. So we have those two different primitives within it. And then within it, we can say like, when a user does X, then send them an email with this copy.
As an aside, Customer.io is a fantastic platform. I'm super-duper impressed. We went through a search for a tool like it, and we ended up on a lot of sales demos with folks that were like, hey, so yeah, starting point is $25,000 per year. And, you know, we can talk about it, but it's only going to go up from there when we talk about it, just to be clear. And it's a year minimum contract, and you're going to love it. And we're like, you do have impressive platforms, but okay, that's a bunch. And then, we found Customer.io, and it's month-by-month pricing. And it had a surprisingly complete feature set.
So overall, I've been super impressed with Customer.io and everything that they've afforded. But now that I'm seeing it, I kind of want to move everything into that world where like, Customer.io allows non-engineer team members to interact with that event stream and then make things happen. And that's what we're doing all the time. But I'm at that point where I think I see the thing that I want, but I have no idea how to get there. And it might not even be tractable either.
There's the wonderful Turning the Database Inside Out talk, which describes how everything is an event stream. And what if we actually were to structure things that way and do materialized views on top of it? And the actual UI that you're looking at is a materialized view on top of the database, which is a materialized view on top of that event stream.
So I'm mostly in this, like, I want to figure this out. I want to start to unify all this stuff. And analytics pipes to one tool that gets a version of this event stream, and our logs are just another, and our error system is another variation on it. But they're all sort of sampling from that one event stream. But I have no idea how to do that.
And then when you have a database, then you're like, well, that's also just a static representation of a point in time, which is the opposite of an event stream. So what do you do there? So there are folks out there that are doing good thinking on this. So I'm going to keep my ear to the ground and try and see what's everybody thinking on this front? But I can't shake the feeling that there's something here that I'm missing that I want to stitch together.
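As a toy illustration of that "materialized view on top of an event stream" idea (not anything from the app being described), current state can be rebuilt by folding over the log; the event types and payloads below are made up:

    Event = Struct.new(:type, :payload)

    events = [
      Event.new(:user_signed_up,   { id: 1, email: "a@example.com" }),
      Event.new(:user_deactivated, { id: 1 }),
      Event.new(:user_signed_up,   { id: 2, email: "b@example.com" }),
    ]

    # The "users" view is derived entirely from the log; replaying the log
    # from the beginning always rebuilds the same state.
    users = events.each_with_object({}) do |event, state|
      case event.type
      when :user_signed_up
        state[event.payload[:id]] = { email: event.payload[:email], active: true }
      when :user_deactivated
        state[event.payload[:id]][:active] = false
      end
    end
    # => {1=>{:email=>"a@example.com", :active=>false},
    #     2=>{:email=>"b@example.com", :active=>true}}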
STEPH: I'm intrigued on how to take this further because everything you're saying resonates in terms of having these event streams that you're working with. But yet, I can't mentally replace that with the existing model that I have in my mind of where there are still certain ideas of records or things that exist in the world.
And I want to encapsulate the data and store that in the database. And maybe I look it up based on when it happens; maybe I don't. Maybe I'm looking it up by something completely different. So yeah, I'm also intrigued by your thoughts, but I'm also not sure where to take it. Who are some of the folks that are doing some of the thinking in this area that you're interested in, or where might you look next?
CHRIS: There's the Kafka world of we have an event log, and then we're processing on top of that, and we're building stream processing engines as the core. They seem to be closest to the Turning the Database Inside Out talk that I was thinking of or that I mentioned earlier. There's also the idea of CRDTs, which are Conflict-free Replicated Data Types, which are really interesting. I see them used particularly in real-time applications.
So it's this other tool, but they are basically event logs. And then you can communicate them well and have two different people working on something collaboratively. And these event logs then have a natural way to come together and produce a common version of the document on either end. That's at least my loose understanding of it, but it seems like a variation on this theme. So I've been looking at that a little bit.
But again, I can't see how to map that to like, but I know how to make a Rails app with a Postgres database. And I think I'm reasonably capable at it, or at least I've been able to produce things that are useful to humans using it. And so it feels like there is this pretty large gap. Because what makes sense in my head is if you follow this all the way, it fundamentally re-architects everything. And so that's A, scary, and B, I have no idea how to get there, but I am intrigued. Like, I feel like there's something there.
There's also Datomic, the other thing that comes to mind, which is a database engine in the Clojure world that stores the versions of things over time; that idea of the user is active. It’s like, well, yeah, but when were they not? That's an event. That transition is an event that Postgres does not maintain at this point. And so, all I know is that the user is active. Maybe I store a timestamp because I'm thinking proactively about this.
But Datomic is like no, no, fundamentally, as a primitive in this database; that's how we organize and think about stuff. And I know I've talked about Datomic on here before. So I've circled around these ideas before. And I'm pretty sure I'm just going to spend a couple of minutes circling and then stop because I have no idea how to connect the dots. [laughs] But I want to figure this out.
STEPH: I have not worked with Kafka. But one of the main benefits I understand with Kafka is that by storing everything as a stream, you're never going to lose like a message. So if you are sending a message to another system and then that message gets lost in transit, you don't actually know if it got acknowledged or what happened with it, and replaying is really hard. Where do you pick up again?
Whereas using something like Kafka, you know exactly what you sent last, and then you're not going to have that uncertainty as to what messages went through and which ones didn't. And then the ability to replay is so important. I'm curious, as you continue to explore this, do you have a particular pain point in mind that you'd like to see improve? Or is it more just like, this seems like a really cool, novel idea; how can I incorporate more of this into my world?
CHRIS: I think it's the latter. But I think the thing that I keep feeling is we keep going back and re-instrumenting versions of this. We're adding more logging or more analytics events over the wire or other things. But then, as I send these analytics events over the wire, we have Mixpanel downstream as an analytics visualization and workflow tool or Customer.io. At this point, those applications, I think, have a richer understanding of our users than our core Rails app. And something about that feels wrong to me.
We're also streaming everything that goes through Segment to S3 because I had a realization about this a while back. I'm like, that event stream is very interesting. I don't want to lose it. I'm going to put it somewhere that I get to keep it. So even if we move off of either Mixpanel or Customer.io or any of those other platforms, we still have our data. That's our data, and we're going to hold on to it.
But interestingly, Customer.io, when it sends a message, will push an event back into Segment. So it's like doubly connected to Segment, which is managing this sort of event bus of data. And so Mixpanel then gets an even richer set there, and the Rails app is like, I'm cool. I'm still hanging out, and I'm doing stuff; it's fine. But the fact that the Rails app is fundamentally less aware of the things that have happened is really interesting to me. And I am not running into issues with it, but I do feel odd about it.
STEPH: That touched on a theme that is interesting to me, the idea that I hadn't really considered it in those terms. But yeah, our application provides the tool in which people can interact with. But then we outsource the behavior analysis of our users and understanding what that flow is and what they're going through. I hadn't really thought about those concrete terms and where someone else owns the behavior of our users, but yet we own all the interaction points. And then we really need both to then make decisions about features and things that we're building next.
But that also feels like building a whole new product, that behavior analysis portion of it, so it's interesting. My consulting brain is going wild at the moment between like, yeah, it would be great to own that. But then, on the other hand, if there's this other service that has already built that product and they're doing it super well, then let's keep letting them manage that portion of our business until we really need to bring it in-house. Because then we need to incorporate it more into our application itself so then we can surface things to the user.
That's the part where then I get really interested, or that's the pain point that I could see is if we wanted more of that behavior analysis, that then we want to surface that in the app, then always having to go to a third-party would start to feel tedious or could feel more brittle.
CHRIS: Yeah, I'm definitely 100% on not rebuilding Mixpanel in our app and being okay with the fact that we're sending that. Again, the thing that I did to make myself feel better about this is stream the data to S3 so that I have a version of it. And if we want to rebuild the data warehouse down the road to build some sort of machine learning data pipeline thing, we've got some raw data to work with. But I'm noticing lots of places where we're transforming a side effect, a behavior that we have in the system, into dispatching an event.
And so right now, we have a bunch of stuff that we pipe over to Slack to inform our admin team, hey, this thing happened. You should probably intervene. But I'm looking at that, and we're doing it directly because we can control the message in Slack a little bit better. But I had this thought in the back of my mind; it's like, could we just send that as an event, and then some downstream tool can configure messages and alerts into Slack? Because then the admin team could actually instrument this themselves. And they could be like; we are no longer interested in this event. Users seem fine on that front. But we do care about this new event.
And all we need to do as the engineering team is properly instrument all of that event stream tapping. Every event just needs to get piped over. And then lots of powerful tools downstream from that that can allow different consumers of that data to do things, and broadly, that dispatch events, consume them on the other side, do fun stuff. That's the story. That's the dream. But I don't know; I can't connect all the dots. It's probably going to take me a couple of weeks to connect all these dots, or maybe years, or maybe my entire career, something like that. But, I don't know, I'm going to keep trying.
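A hedged sketch of that "dispatch an event, let downstream tools decide" shape, assuming the analytics-ruby (Segment) gem; the event name, properties, and helper method are hypothetical rather than taken from the codebase being discussed:

    require "segment/analytics"

    ANALYTICS = Segment::Analytics.new(write_key: ENV.fetch("SEGMENT_WRITE_KEY"))

    # Instead of posting a hand-built message straight to Slack from
    # application code, emit an event and let a downstream destination
    # (Customer.io, a Slack destination in Segment, etc.) decide whether
    # and how to alert on it.
    def notify_admins_of_failed_transfer(user, transfer)
      ANALYTICS.track(
        user_id: user.id.to_s,
        event: "Transfer Failed", # hypothetical event name
        properties: {
          transfer_id: transfer.id,
          amount_cents: transfer.amount_cents
        }
      )
    end

With that shape, the admin team could add or retire alerts in the downstream tool without another engineering change.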
STEPH: This feels like a fun startup narrative, though, where you start by building the thing that people can interact with. As more people start to interact with it, how do we start giving more of our team the ability to then manage the product that then all of these users are interacting with? And then that's the part that you start optimizing for. So there are always different interesting bits when you talk about the different stages of Sagewell, and like, what's the thing you're optimizing for? And I'm sure it's still heavily users. But now there's also this addition of we are also optimizing for our team to now manage the product.
CHRIS: Yes, you're 100%. You're spot on there. We have definitely joked internally about spinning out a small company to build this analytics alerting tool [laughs] but obviously not going to do that because that's a distraction. And it is interesting, like, we want to build for the users the best thing that we can and where the admin team fits within that. To me, they're very much customers of engineering. Our job is to build the thing for the users but also, to be honest, we have to build a thing for the admins to support the users and exactly where that falls.
Like, you and I have talked a handful of times about maybe the admin isn't as polished in design as other things. But it's definitely tested because that's a critical part of how this application works. Maybe not directly for a user but one step removed for a user, so it matters. Absolutely we're writing tests to cover that behavior. And so yeah, those trade-offs are always interesting to me and exploring that space. But 100%: our admin team are core customers of the work that we're doing in engineering. And we try and stay very close and very friendly with them.
STEPH: Yeah, I really appreciate how you're framing that. And I very much agree and believe with you that our admin users are incredibly important.
CHRIS: Well, thank you. Yeah, we're trying over here. But yeah, I think I can wrap up that segment of me rambling about ideas that are half-formed in my mind but hopefully are directionally important. Anyway, what else is up with you?
STEPH: So, not that long ago, I asked you a question around how the heck to manage themes that I have going on. So we've talked about lots of fun productivity things around managing to-dos, and emails, and all that stuff. And my latest one is thinking about, like, I have a theme that I want to focus on, maybe it's this week, maybe it's for a couple of months. And how do I capture that and surface it to myself and see that I'm making progress on that? And I don't have an answer to that.
But I do have a theme that I wanted to share. And the one that I'm currently focused on is building up management skills and team lead skills. That is something that I'm focused on at the moment and partially because I was inspired to read the book Resilient Management written by Lara Hogan. And so I think that is what has really set the idea. But as I picked up the book, I was like, this is a really great book, and I'd really like to share some of this. And then so that grew into like, well, let's just go ahead and make this a theme where I'm learning this, and I'm sharing this with everyone else.
So along that note, I figured I would share that here. So we use Basecamp at thoughtbot. And so, I've been sharing some Basecamp posts around what I'm learning in each chapter. But to bring some of that knowledge here as well, some of the cool stuff that I have learned so far...and this is just still very early on in the book. There are a couple of different topics that Lara covers in the first chapter, and one of them is humans’ core needs at work. And then there's also the concept of meeting your team, some really good questions that you can ask during your first one-on-one to get to know the person that then you're going to be managing.
The part that really resonated with me and something that I would like to then coach myself to try is helping the team get to know you. So as a manager, not only are you going out of your way to really get to know that person, but how are you then helping them get to know you as well? Because then that's really going to help set that relationship in regards of they know what kind of things that you're optimizing for. Maybe you're optimizing for a deadline, or for business goals, or maybe it's for transparency, or maybe it would be helpful to communicate to someone that you're managing to say, "Hey, I'm trying some new management techniques. Let me know how this goes." [chuckles]
So there's a healthier relationship of not only are you learning them, but they're also learning you. So some of the questions that Lara includes as examples of something that you can share with your team is what do you optimize for in your role? So is it that you're optimizing for specific financial goals or building up teammates? Or maybe it's collaboration, so you're really looking for opportunities for people to pair together.
What do you want your teammates to lean on you for? I really liked that question. Like, what are some of the areas that bring you joy or something that you feel really skilled in that then you want people to come to you for? Because that's something that before I was a manager...but it's just as you are growing as a developer, that's such a great question of like, what do you want to be known for? What do you want to be that thing that when people think of, they're like, oh, you should go see Chris about this, or you should go see Steph about this?
And two other good questions include what are your work styles and preferences? And what management skills are you currently working on learning or improving? So I really like this concept of how can I share more of myself? And the great thing about this book that I'm learning too is while it is geared towards people that are managers, I think it's so wonderful for people who are non-managers or aspiring managers to read this as well because then it can help you manage whoever's managing you. So then that way, you can have some upward management.
So we had recent conversations around when you are new to a team and getting used to a manager, or maybe if you're a junior, you have to take a lot of self-advocacy into your role to make sure things are going well. And I think this book does a really good job for people that are looking to not only manage others but also manage themselves and manage upward. So that's some of the journeys from the first chapter. I'll keep you posted on the other chapters as I'm learning more. And yeah, if anybody hasn't read this book or if you're interested, I highly recommend it. I'll make sure to include a link in the show notes.
CHRIS: That was just the first chapter?
STEPH: Yeah, that was just the first chapter.
CHRIS: My goodness.
STEPH: And I shortened it drastically. [laughs]
CHRIS: Okay. All right, off to the races. But I think the summary that you gave there, particularly these are true when you're managing folks but also to manage yourself and to manage up, like, this is relevant to everyone in some capacity in some shape or form. And so that feels very true.
STEPH: I will include one more fun aspect from the book, and that's circling back to the humans' core needs at work. And she references Paloma Medina, a coach and trainer who came up with this acronym. The acronym is BICEPS, and it stands for belonging, improvement, choice, equality, predictability, and significance. And then she details how each of those are important to us in our work and how when one of those feels threatened, then that can lead to some problems at work or just even in our personal life.
But the fun example that she gave was not when there's a huge restructuring of the organization and things like that are going on as being the most concerning in terms of how many of these needs are going to be threatened or become vulnerable. But changing where someone sits at work can actually hit all of these, and it can threaten each of these needs. And it made me think, oh, cool, plus-one for being remote because we can sit wherever we want. [laughs]
But that was a really fun example of how someone's needs at work, I mean, just moving their desk, which resonates, too, because I've heard that from other people. Some of the friends that I have that work in more of a People Ops role talk about when they had to shift people around how that caused so much grief. And they were just shocked that it caused so much grief. And this explains why that can be such a big deal. So that was a fun example to read through.
CHRIS: I'm now having flashbacks to times where I was like, oh, I love my spot in the office. I love the people I'm sitting with. And then there was that day, and I had to move. Yeah, no, those were days. This is true.
STEPH: It triggered all the core BICEPS, all the things that you need at work. It threatened all of them. Or it could have improved them; who knows?
CHRIS: There were definitely those as well, yeah. Although I think it's harder to know that it's going to be great on the way in, so it's mostly negative. I think it has that weird bias because you're like, this was a thing, I knew it. I at least understood it. And then you're in a new space, and you're like, I don't know, is this going to be terrible or great? I mean, hopefully, it's only great because you work with great people, and it's a great office. [laughs] But, like, the unknown, you're moving into the unknown, and so I think it has an inherent at least questioning bias to it.
STEPH: Agreed. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris switched from Trello over to Linear for product management and talks about prioritizing backlogs.
Steph shares and discusses a tweet from Curtis Einsmann that super resonated with the work she's doing right now: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one."
This episode is brought to you by BuildPulse. Start your 14-day free trial of BuildPulse today.
Linear
Curtis Einsmann Tweet
Louie Bacaj Tweet
Become a Sponsor of The Bike Shed!
Transcript:
AD: Flaky tests take the joy out of programming. You push up some code, wait for the tests to run, and the build fails because of a test that has nothing to do with your change. So you click rebuild and you wait. Again. And you hope you're lucky enough to get a passing build this time.
Flaky tests slow everyone down, break your flow and make things downright miserable.
In a perfect world, tests would only break if there's a legitimate problem that would impact production. They'd fail immediately and consistently, not intermittently. But the world's not perfect, and flaky tests will happen, and you don't have time to fix them all today. So how do you know where to start?
BuildPulse automatically detects and tracks your team's flaky tests. Better still, it pinpoints the ones that are disrupting your team the most. With this list of top offenders, you'll know exactly where to focus your effort for maximum impact on making your builds more stable. In fact, the team at Codecademy was able to identify their flakiest tests with BuildPulse in just a few days. By focusing on those tests first, they reduced their flaky builds by more than 68% in less than a month!
And you can do the same because BuildPulse integrates with the tools you're already using. It supports all the major CI systems, including CircleCI, GitHub Actions, Jenkins, and others. And it analyzes test results for all popular test frameworks and programming languages, like RSpec, Jest, Go, pytest, PHPUnit, and more.
So stop letting flaky tests slow you down. Start your 14-day free trial of BuildPulse today. To learn more, visit buildpulse.io/bikeshed. That's buildpulse.io/bikeshed.
CHRIS: Good morning, and welcome to Tech Talk with Steph and Chris. Today at the top of the hour, it's tech traffic hits.
STEPH: Ooh, tech traffic. [laughs] I like that statement.
CHRIS: Yeah. The Git lanes are clogged up with...I don't know. I got nothing.
STEPH: [laughs]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Actually, I have a specific new thing that I can share, which is, as of the past week, I would say, we switched from Trello over to Linear for product management. It's been great. It was a super straightforward transfer. They actually had an importer. We lost some of the comment threads on the Trello cards. But that was easy enough to like each Linear ticket has a link back to Trello. So it's easy enough to keep the continuity.
But yeah, we're in a whole new world, a system actually built for maintaining a product backlog, and, man, it shows. Trello was a bunch of lists and cards and stuff that you could link between, which was cool. But Linear is just much more purpose-built and already very, very nice. And we're very happy with the switch.
STEPH: I feel like you came in real casual with that news, but that is big news, that you did a switch.
[laughter]
CHRIS: How are you going to bury the lead like that? You switched project management...[laughter] I actually didn't think it was...I'm excited about it but low-key excited, which is weird because I do like productivity and task management software. So you would think I would be really excited about this. But I've also tried enough of them historically to know that that's never going to be the thing that actually makes or breaks your team's productivity. It can make things worse, but it can't make you great. That's the thing that I believe. And so it's a wonderful piece of software. I'm very excited about it but --
STEPH: Ooh, I like that. It can make you worse, but it doesn't make you great. That's so true, yeah, where it causes pain. Well, and it does make sense. You've been complaining a bit about the whole login with Trello and how that's been frustrating. But I haven't even heard of Linear. That's just...that's, I mean, you're just doing what you do where you bring that new-new. I haven't heard of Linear before.
CHRIS: I try to live on the cutting edge. Actually, I deeply try to not live on the cutting edge at this point in my life. That early adopter wave, no, no, no, that's not for me anymore. But I've known a few folks who've moved to Linear. And everyone that I've spoken to who has moved to it has been like, "Yeah, it's been great." I've not heard anything negative. And I've heard or experienced negative things about every other product management tool out there. And so, it seemed like an easy thing.
And it was a low-cost enough switch in terms of opportunity costs or the like, it took the effort of someone on our team moving those cards over and setting up the new system and training, but it was relatively straightforward. And yeah, we're super happy with it. And it feels different now. I feel like I can see the work in a different way which is interesting.
I think we had brought in a Chrome extension for Trello. I think it's like Hello Epics or something like that that allows...it abuses the card linking functionality in Trello and repurposes that feature as an epic management thing. But it's quarter-baked is how I would describe it, or it's clearly built on top of existing things that were not intended to be used exactly in that way. So it does a great job. Hello Epics does a great job of trying to make something like parent-child list management stuff happen in Trello. But it's always going to feel like an afterthought, a secondary feature, something that's bolted on.
Whereas in Linear, it's like, no, no, we absolutely have the idea of projects, of course, and you can see burndown charts and things. And the thing that I do want to be careful about is not leaning too much into management. Linear has the idea of cycles or sprints, as many other folks think of them, or iterations or whatever you want to call them. But we've largely not been working in that mode. We've just continued to work through the next up list; that's it. The next up list should be prioritized and well defined at the top and roughly in priority order. So just pick up the next card and work on it. And we just do that every single day.
And now we're in a piece of software that has the idea of cycles, and I'm like, oh, this is vaguely interesting. Do we want to do that? Oh, but if you're going to do that, you probably do some estimation, right? And I was like, oh no, now we're into a place that's...okay, I have feelings. I got to decide how to approach that. And so, I am intrigued. And I wonder if we could just say like ten cards, that's how many come into a cycle, and that's it. And we use the loosest heuristics possible to define how we structure a cycle so that we don't fall into the trap of, oh, what's our roadmap going to look like six months from now? JK, what's anything going to look like six months from now? That's not a knowable fact.
STEPH: I was just thinking where you said that you're moving into sprints or cycles, and then there's that push, well, now you got to estimate. And I just thought, do you? Do you have to estimate? [laughs]
CHRIS: We need a burndown chart through 2024, and it must be meticulously accurate down to the hour.
STEPH: I think meticulously wrong is how that goes. [laughs]
CHRIS: Which is the best kind of wrong. If you're going to be wrong, be meticulous about it.
STEPH: Be thorough about it. [laughs] Yeah, the team that I'm on right now, we have our bi-weekly planning, and we go through the board, and we pull stuff in. But there's never a discussion about estimation. And I hadn't really appreciated that until just now. How we don't think about how long is this going to take? We just talked about, well, what's in-flight? And how much work do people still have going on? And then here's the list of things we can pull in. But there's always a list that you can go back to.
Like, it's very...we pull in the minimum and knowing that if we run out of work, there's another place to go where there's stuff that's organized. And I just love that cadence, that idea of like, let's not try to make guesses about the future; let's just have it lined up and ready for us to go when we're ready to pull it in. Although I know, that's also coming from a very developer's perspective, and there are product managers who are trying to communicate as to when features are going to get out into the world. So I get that there's a balance, but I still have strong feelings and hesitations around estimating work.
CHRIS: Well, I feel like there is a balance there. And so many things in history are like, well, this is an overcorrection against that, and that's an overcorrection against this. And the idea that we can estimate our work that far out into the future that's just obviously false to me based on every project I've ever worked on that has tried to do it. And it has always failed without question.
But critically, there is the necessity to sync up work and like, oh, marketing needs to plan the launch of this feature, and it's a critical one. What's it going to look like? When's it going to be ready? You know, we're trying to go for an event, so it's not just, you know, we developers never estimate anything past the immediate moment; like, that's not acceptable. We got to find a middle ground here. But where that middle ground is, is interesting. And so, just operating in the sort of we do work as it comes up is the easiest thing because no one's lying about anything at that point.
But sometimes you got to make some guesses and make some estimations. And then it gets into the murky area of I believe with 75% confidence that in three weeks, we will have this feature ready. But to be clear, I said with 75% confidence that means one-quarter of the time; we will not be there at that date. What does that mean? What does that failure mode look like? Let's talk about that. And can you have honest, open, transparent, useful conversations there? That's the space that it becomes more subtle if you need to do that.
STEPH: You're reminding me of a conversation that I had with someone where they shared with me some very aggressive team goals. And it was a very friendly conversation. And they're like, "How do you feel about aggressive goals?" And I was like, "Well, it depends. How do you feel about aggressive failure?" Because then once I know where you stand there, then we can talk about aggressive goals. Now, if we're being aggressive, but then we fail to achieve that, and it's one of those, okay, we didn't meet the goal that we'd expected, but everything is fine, and it's not a big deal, then I am okay. Sure, let's shoot for the stars.
But if it's one of those, we are communicating these goals to the outside world, and it's going to become incredibly important that we meet these goals, and if we don't, then things are going to go on fire, people are going to be in trouble, and it's just going to be awful, then let's not set aggressive goals. Let's not box ourselves into a space where we are setting ourselves up to fail or feel pain in a meaningful way. I agree that estimations are important, especially in terms of you need to collaborate with other departments, and then also just to provide some sense of where the product is headed and when things may be released.
I think estimations then just become problematic when they do become definite, and they're based on so many unknowns, and then when I don't know is not an answer. So if someone asked, "What's your estimate for this?" And the very honest real answer is I don't know, like, we haven't done this type of work before, or these are all the unknowns, and then someone's like, "Well, let's just put an estimation of like two weeks on it," and they just sort of try to force-fit it into being what they want, then that's where it starts to just feel incredibly problematic.
CHRIS: Yeah, estimation is a very murky area that we could spend entire episodes talking about, and in fact, I think we have a handful of times. So with that, Linear has been great. We're going to see just how much or how little estimation we actually want to do. But it's been a very nice addition to the toolset. And I'll let you know as we deepen our usage of it what the experience is like, but that's the main thing that's new in my world. What's new in your world?
STEPH: Well, before we bounce over to my world, you said something that has intrigued me that has also made me start reflecting on some of the ways that I like to work. And you'd mentioned that you have this prioritized backlog that people are pulling tickets from. And I don't know exactly if there's a planning session or how that looks, but I have recognized that when I am working with a team, and we don't have any planning session, if everybody is just pulling from this backlog, that's being prioritized by someone on the team, that I find that a bit overwhelming.
Because the types of work being done can vary so drastically that I find I'm less able to help my colleagues or my teammates because I don't have the context for what they're working on. It surprises me. I'm like, oh, I didn't even know we're working on that feature, or I don't have the context for what's the problem that we're trying to solve here. And it makes it just a lot harder to review and then have conversations with them. And I get overwhelmed in that environment.
And I've recognized that about myself based on previous projects that were more similar to that versus if I'm on a project where the team does get together every so often, even if it's high level to be like, hey, here's the theme of the tickets that we're working on, or here's just some of the stuff, then I feel much more prepared for the work that is coming in and to be able to context switch and review. And yeah, so I've kind of learned that about myself. I'm curious, are you similar, or how does that work for you?
CHRIS: I'm definitely similar. And I think probably the team is closer to what you're describing. So we do have a planning session every week, just a quick 30-minute scan through the backlog, look at the things that are coming up and also the larger themes. Previously, Epics and Trello now projects and Linear. But talking about what are the bigger pieces of work that we're moving on, and then what are the individual tickets associated with that that we'll be expecting to work on in the next week? And just making sure that everyone has broad clarity around what that feature set is.
Also, we're a very small team at this point. Still, we're four people total, but one of the developers is on a break for a couple of weeks this summer. And so there are really only three of us that are driving on the code. And so, with three of us working on the projects, we try very intentionally to have significant overlap between the various...like, we don't want any one person to own any portion of things at this point. And so we're doing a good amount of pairing to cross-pollinate and make sure everyone's...not everyone's aware of everything, but at least one other person is sufficiently aware of everything between the three of us. And I think that's been working well.
I don't think we have any major gaps, save for the way that we're doing our mobile architecture that's largely managed by one of the developers on the team and a contractor that we're working with to help do a lot of the implementation. That's a known we chose to silo that thing. We've accepted the cost of that for now. And architecturally, the rest of us are aware of it, but we're not like in the Swift code writing anything because I don't know how to write Swift at this point. I'd love to learn it. I hear good things about the language.
So yeah, I think conceptually very similar to what you're describing of still want to have people be able to review. Like, I don't want to put up a PR and people be like, I don't know, that looks like code, I guess. I'm not sure what it does. Like, I want it to be very...I want us all to be roughly on the same page, and thus far, we are.
As the team grows, that will become trickier to maintain because there are just inherently probably more things that are moving, more different feature areas and surface area that we're tackling in any given week, or there are different ways to approach that. I know you've talked about having a limited number of themes for a given cycle, so that's an idea that can pop up. But that's something that we'll figure out as we get further. I think I'm happy with where we're at right now, so yeah, that's the story there.
STEPH: Okay, cool. Yeah, all of that resonates with me, and thinking about it a little more deeply in this moment, I'm realizing I think something you said helped me put this together where when I'm reviewing someone's change, I'm not necessarily just looking to see does your code work? I'm going to trust you that your code works. I may have thoughts about design and other things, but I really want to understand more what's the change to the product that we're making? What's the goal that we're looking to achieve? How are we measuring this?
And so if I don't have that context, that's what contributes to that feeling of like, hard context switching of not just context switching, but now I have to level myself up to then understand the problem that's being solved by this. Versus had I known some of the themes going into that particular cycle or sprint, I would have felt far more prepared for that review session versus having to then dig through all the data and/or tickets or talk to someone.
Well, switching back to what's going on in my world, I have a particular tweet that I want to share, and it's one that Joël Quenneville brought to my attention. And it just resonates so much with all the type of work that I'm doing right now. So I'm going to read the tweet, and then we'll link to it in the show notes as well. But it's from Curtis Einsmann, and Curtis wrote: "In software engineering, rabbit holes are inevitable. You will research libraries and not use them. You'll write code just to delete it. This isn't a waste; sometimes, you need to go down a few wrong paths to get to the right one."
And that describes all the work that I'm doing right now. It's a lot of exploratory, a lot of data-driven work, and finding ways that we can reduce the time that it takes to run RSpec on CI. And it also ties in nicely to one of the things that I think we talked about last week, where we discovered that a number of files have a high runtime variance. And I've really dug into the data there to understand, okay, is it always specific files that have this high runtime variance? Are there any obvious contributors to what's causing this? Are we making real network calls that then could sometimes take a long time to return? And the result is there's nothing obvious.
They're giant files. The number of SQL commands that are being run for each file varies drastically. They're all high, but it's still very different. There's no single fact about these files that has really been like, yes, this is what's causing these files to have such a runtime variance. And so while I've been in the data, I'm documenting it, and I'm making a list and putting it all together in a ticket so at least it's there to look at later. But I'm going to move on. It's one of those I would love to know what's causing this. I would love to address it because it would put us in an ideal state for how we're distributing tests, which would have a significant impact on our runtime.
But it also feels a little bit like chasing my tail because I'm worried, like with some of the other experiments that we've done in the past where we've addressed tentpoles, that as soon as you address the issue for one or two files, then other files start having the same problem. And you're just going to continue to chase and chase, and I don't want to be in that. So upfront, this was one of those; hey, let's look at the data. If there's something obvious, let's address it; if not, move on. So I'm at that point today where I'm wrapping up all of that data, and then I'm going to move on, move on to the next thing.
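For a sense of what that kind of spelunking can look like, here is a rough sketch, not the actual tooling used on the project, that sums per-file runtimes from RSpec's JSON formatter output across several CI runs and ranks files by how much their total time swings; it assumes each run was saved to a file named like rspec_run_1.json and that the formatter's per-example entries include file_path and run_time:

    require "json"

    # Assumes each CI run saved its results with something like:
    #   rspec --format json --out rspec_run_1.json
    runs = Dir["rspec_run_*.json"].map { |path| JSON.parse(File.read(path)) }

    per_file_runtimes = Hash.new { |hash, key| hash[key] = [] }

    runs.each do |run|
      run.fetch("examples")
         .group_by { |example| example.fetch("file_path") }
         .each do |file, examples|
           per_file_runtimes[file] << examples.sum { |example| example.fetch("run_time") }
         end
    end

    # Report the ten files with the largest min-to-max spread across runs.
    per_file_runtimes
      .map { |file, times| [file, times.max - times.min] }
      .sort_by { |_, spread| -spread }
      .first(10)
      .each { |file, spread| puts format("%6.1fs spread  %s", spread, file) }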
CHRIS: There's deep truth in that tweet that you shared at the start of this segment. The idea like if we knew the work that we had to do at the front, yeah, we would just do that, but often, we don't. And so, being able to not treat it as a failure when something doesn't work out is, I think, so critical. I think to expand on the idea just a tiny bit, the idea of the scientific method, failure is totally an option and is part of science.
I remember watching MythBusters, and Adam Savage is just kind of like, "Failure is always an option," and highlighting that as part of it. Like, it's an outcome. You've learned something. You have a new data point. You can take that and then carry it forward with you. But it's rough in the moment. And so, I do think that this is a worthwhile thing to meditate on. And it's something that I've had to revisit a handful of times in my career of just like, man, I feel like I've just been spinning my tires all week. I'm like, we know what we want to get done, but just each approach I take isn't working for one reason or another.
And then, finally, you get to the end. And then you've got this paragraph-long summary of all the things that didn't work in your PR and one-line change sort of thing. And those are painful, but they're part of the game. Like, that is unavoidable. I have not found a way to just know how to do the work upfront always. I would love that. I would sign up for whatever seminar was selling that. I wouldn't. I would know that that seminar is a lie, actually. But broadly, I'm intrigued by the idea if someone were selling that, I'd be like, well, I mean, pitch me on it. Tell me why I should believe you; I don't, just to be clear. But yeah.
STEPH: This project has really helped me embrace always setting a goal or a question upfront about what I'm wanting to achieve or what I'm looking to answer because a number of times while Joël and I have been spelunking through data...And then so originally, with the saga, we started out with why doesn’t our math match reality? We understand that if these tests are distributed perfectly across the CPUs, then that should cut the runtime in half. But yet, we weren't seeing that even though we had addressed the tentpoles.
So we dug into understanding why. And the answer is because they're not perfectly distributed, and it's because of the runtime variance. And that was a critical moment to look back and say, "Did we achieve the goal?" Yes, we identified the problem. But once you see a problem, it's just so easy to dig in and keep going. It's like, well, now I want to know what's causing these files to have a runtime variance.
But it's one of those we achieved our goal. We acknowledged upfront that we wanted to at least understand why. Let's make a second decision, do we keep going? And I'm at that point where, frankly, I probably dug in a little more than I should because I'm stubborn. But I'm recognizing that now's the time to back away and then go back and move on to the next high-priority item, which is converting for funsies; I'll share.
The next thing is converting Test::Unit tests over to RSpec because we have, I think, around 25 tests that are written in Test::Unit. And we want to move them over to RSpec because that particular step in the build process takes a good three to four minutes. And part of that is just booting up Rails and then running the tests very fast. And we're underutilizing the machine that's running them because it's only 25 tests, but there are 86 CPUs to run them.
So we'd really like to combine those 25 tests with the rest of the RSpec suite and drop that step. So then that should add minimal time to the overall build but then should take us down at least a couple of minutes. And then also makes it easier to manage and help folks so that way, there's one consistent testing framework that's in use versus having to manage some of these older tests.
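As a rough before-and-after of the kind of mechanical conversion being described (the model, method, and assertion here are hypothetical, not from the project):

    # Before: Test::Unit style
    require "test/unit"

    class InvoiceTest < Test::Unit::TestCase
      def test_total_includes_tax
        invoice = Invoice.new(subtotal_cents: 1000, tax_rate: 0.1)
        assert_equal 1100, invoice.total_cents
      end
    end

    # After: the equivalent RSpec example, so it can run with the rest of
    # the suite and the separate Test::Unit build step can be dropped.
    RSpec.describe Invoice do
      describe "#total_cents" do
        it "includes tax" do
          invoice = Invoice.new(subtotal_cents: 1000, tax_rate: 0.1)
          expect(invoice.total_cents).to eq(1100)
        end
      end
    end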
CHRIS: It's funny; I think it was just two episodes back where we talked about why RSpec, and I think both of us were just like, well yeah. But I mean, if there are tests in another framework, like, it's fine, you just leave them, with the exception that if, like, 2% of our tests are in Test::Unit, and everything else is in RSpec, yeah, maybe that conversion effort seems totally worth it.
But again, I think as you're describing that, what I'm hearing is just like the scientific method, just being somewhat structured in the approach to what's the hypothesis? And what's the procedure we're going to use to determine if that hypothesis is true or false? And then what do we do? And then what are the results? And then you just do that on loop. But being structured, not just sort of exploring. Sometimes you have to be in exploratory mode. But I definitely find that that tiny bit of rigor of just like, wait, okay, before I actually do anything, what do I think is going on here? What's my guess?
And then, okay, if that guess were true, what would I be able to observe in the world? Okay, here we go. And just that tiny bit of structure is so...it sometimes feels highly formal to go into that mode and be like, no, no, no, let me take a step back. Let me write down my thoughts. I'm going to have a little checklist and do the thing. But I've never regretted doing it. I will say that. I have deeply regretted not doing it. I feel like I should make a list of things that fit that structure.
I've never regretted committing in Git ever. That's been great. I've always been able to unwind it, but I've never been able to not unwind it or the opposite. I've regretted not committing. I have not regretted committing. I have regretted not writing out my hypothesis or approach. I have not regretted doing it. And so, yeah, this feels like it falls firmly in that category of like, it's worth just a tiny bit of structure. There's a reason it is the scientific method.
STEPH: Yeah, I agree. I've not regretted documenting upfront what it is I look to achieve and how I think I'm going to answer the question. That has been immensely helpful. Because then I also forget, like, two weeks ago, I'll be like, wasn't there a question around why this was happening, and I need to understand? And all of that was so context-heavy that as soon as I'm out of the thick of it, I completely forget it. So if I care about it deeply or if I want to be able to revisit it, then I need to document it for myself.
It's given me a lot of empathy for people who do more scientific research around, oh my gosh, like, you have to document everything you do and then still be able to prove it five years from now or however long. I'm just throwing out numbers. And it needs to be organized enough that someone, if they do question your research that, then you have it there. My research documents would not withstand scrutiny at this point because they are still just more personal notes. But yes, it's given me a lot of empathy and respect for people who do run very serious research, experiments, and trials, and then have to be able to prove it to the world.
Pivoting just a bit, there's a particular topic that resonated with both you and I; in fact, it's a particular tweet. And, Louie, I do apologize if I mispronounce your last name, but Louie Bacaj. And we'll include a link in the show notes to the tweet, but Louie shared, "I managed multiple engineering teams before quitting tech. Now that I quit, I can speak freely. Here are 12 things your manager may not be telling you, but I know for a fact will help you."
So there are a number of interesting discussions and comments that are in this thread. The one thing in particular that really caught my attention is number 10, and it's "Advocate for junior developers." So they said that their friend reminded them that just because someone doesn't have 10-plus years of experience does not mean that they won't be good. Without junior engineers on the team, no one will grow. Help others grow; you'll grow too.
And that's the part that I love so much is that without junior engineers on the team, no one will grow because that was very thought-provoking for me. It's something that I find that I agree with deeply, but I hadn't really considered why I agree with that so much. So I'm excited to dive into that topic with you. And then, as a second topic to go along with that is, can juniors start with a remote team? I think that's one of the other questions when you and I were chatting about this. And I'm intrigued to hear your thoughts.
CHRIS: A bunch of stuff there. Starting with the tweet, I love elements of this. Some of it feels like it's intentionally somewhat provocative. So like, without junior engineers on the team, no one will grow. That feels maybe a little bit past the bar because I think we can technically grow, and we can build things and whatnot. But I think what feels deeply true to me is truly help others grow; you'll grow too. The act of mentoring, of guiding, of training, of helping someone on their journey will inherently help you grow and, I think, change the way that you think about the work.
I think the beginner mind, the earlier in the career folks coming into a codebase, they will see things fundamentally differently in a really useful way. It's possible that along your career, you've just internalized things. You've been like, yeah, no, that was weird. But then I smashed my head against it for a while, and now I understand this thing. And it just makes sense to me. But it's like, that thing actually doesn't make sense. You have warped your mind to match the thing, not, quote, unquote, "come to understand it." This is sounding too judgmental to people who've been in the industry for a while, but I've found this in myself.
Or I can just take for granted things that took a long time to adapt my head to, and if anything, maybe I should have pushed back a little more. And so, I find that junior engineers can be a really fantastic lens for the complexity of a project. Like, the world is truly a complex place, and that's just true. But our job as software engineers is to tame that complexity and manage it. And so, I love the mindset that can come or the conversations that can come out of that.
And it's much like test-driven development is a pressure on the complexity of your code, having junior engineers join the team and needing to explain how all of the different features work, and what the overall architecture is, and the message passing under this and that, it's a really useful conversation to have. And so that "Help others grow; you'll grow too" feels deeply, deeply true to me.
STEPH: Yeah, I couldn't agree more in regards to how juniors really help our team and especially, as you mentioned, with complexity and having those conversations. Some of the other things that have come to mind for me as well around the importance of having junior developers on your team...and maybe it's not specifically they're junior developers but that you just have a variety of experience on your team. It's going to help you lean into a culture of learning because you have people that are at different stages of their career.
And so you want an environment where people can learn together, that they can fail together, and they can be public about it. And having people that are at different stages of their career will lead, well, at least ideally, it'll lead to more pair programming. It's going to lead to more productive code reviews because then people can ask more questions around why did you choose this, or why are you doing that? Versus if everybody is at the same level, then they may just intuitively have reasons that they think someone did something.
But it takes someone that's a bit new to say, "Hey, why did you choose this?" or to bring up some other ideas or ways that they could pursue it. They may bring in new ideas for, like, why has the team always done something this way? Let's think about new ways that we could do this. Or maybe this is really unfriendly, the way that we're doing this, not just for junior people but for people that are new to the team.
And then there's typically less knowledge siloing because then you're going to want to pair the newer folks with the more experienced folks. So that way, you don't have this more senior developer who's then off in a corner working by themselves. And it's going to improve your communication skills. There's just...I realized I'm just rambling because I feel like there are so many benefits that go along with having a variety of people on your team, especially in terms of experience. And that just leads to such a better learning environment and, ultimately, better software and better products.
And yet, I find that so many companies won't embrace people that are newer to software. They always want the senior developers. They want the 10x-er or whatever those are. They want the people that have many, many years of experience. And there's so much value that comes from mentoring the next group of developers. And it's incredibly frustrating to me that one, companies often aren't open to it. But honestly, more than that, as long as you're upfront and honest about like, hey, this is the team that we need right now to build what we're looking to build, I can get past that; I can understand that. But please don't then mislead people and say that you're a junior-friendly team, and then not be prepared.
I feel like some teams will go so far as to say, "Yes, we are junior-friendly," and they may even tweak their interview process to where it is a bit more junior-friendly. But then, by the time that person joins the team, they're really not prepared. They don't have an onboarding plan. They don't have a mentorship plan. And then they fail that person because that person has worked hard to get there. And they've worked hard to bring that person onto the team, but then they don't have a plan from there.
And I've seen it too many times. And it just frustrates me so much because it puts that junior person in such a vulnerable state where they really have to be an incredible self-advocate to then overcome those hurdles from a lack of preparation on that company's part. Okay, I think that's my vent. I'm sure I could vent about this a lot more, but I will cut it off there. That's the heart of it.
CHRIS: I do think, like, with anything else, it's something that we have to be intentional about. And so what you're saying of like, yeah, we're a junior-friendly company, but then you're just hiring folks, trying to find folks that may work at a slightly lower pay grade, and that's what that means to you. So like, no, no, that's not what this is. This needs to be something different. We need to have a structure and an organization that can support folks at different points in their career.
But it's interesting to me to think about the sort of why of it. And the earlier part of this conversation, we talked about some of the benefits that can come organizationally from it, and I do sincerely believe in that. But I also believe that it is fundamentally one of the best ways to find really talented people early on in their career and be in a position to hire them where maybe later on in their career, that just wouldn't happen naturally. And I've seen this play out in a number of organizations.
I went to Northeastern University for my college, and Northeastern is famous for the co-op program. Northeastern sounds really fancy. Now I learned that they have like a 7% acceptance rate for college applications right now, which is wildly low. When I went to Northeastern, it was not so fancy. So just in case anyone's hearing that and they're like, "Oh, Northeastern, wow." I'm not that fancy. [laughs]
But they did have the co-op then, and they still have it now. And the co-op really is a differentiating thing. You do three six-month rotations. And it is this fundamental differentiator in terms of when you're graduating. Particularly, I was in mechanical engineering. I came out, and I actually knew what that meant in the world. And I'd used Outlook, and I knew what a water cooler was and how to talk near one because that's a critical thing to learn in the world. And really transformative experience for me.
But also, a thing that I observed was many of my friends ended up working at companies that they had co-oped for. I'm one of those people. I would say more than 50% of my friends ended up with a position at a company that they had done a co-op rotation with. And it really worked out fantastically. That organization and the individual got to try things out and get some experience with each other. And then, I ended up staying at that company for a number of years, and it was a wonderful experience. But I don't know that I would have ended up there otherwise. That's not necessarily the way that would have played out.
And similarly like, thoughtbot has the apprenticeship. And I have seen so many wonderful developers start at that very early point in their career. And there was this wonderful structure around them joining the thoughtbot team, intentional, structured, supported. And then those folks went on to be some of the most talented developers that I've ever worked with at a wonderfully talented organization. And so the story of like, you should do this, organizations. This is a thing that you should invest in for yourself, not just for the individual, like, for both. Everybody wins in this case, in my mind.
I will say, though, in terms of transparency, I currently manage a team of three developers. And we hired very intentionally for senior folks this early on in where we're at. And that was an intentional choice because I do believe that if you're going to be hiring more junior developers, that needs to be something that you do very intentionally, that you have a support structure in place, that you're able to invest the time in where they're at and make sure we have sort of...
I think a larger team makes more sense to bring juniors into broadly. That's the thing that I'm saying out loud that I'm like, I should push on that a little bit. Is that true? Do I really believe that? But I think so, my actions obviously point to it. But it is an interesting trade-off space of how do you think about that? My hope is that as we grow as an organization, that we would then very intentionally start hiring folks in a more junior, mid-level to junior and be very intentional about how we support them, bring them into the organization, et cetera. I do believe it is a win-win situation for everyone when done with intention and with focus.
STEPH: That's such an interesting bit that you just said because I very much appreciate when companies recognize do we have the bandwidth to support someone that's more junior? Because at thoughtbot, we go through periods where we don't have our apprenticeship that's open because we recognize we're not in a place that we can support someone. And we don't want to bring someone in unless we can help them be successful. I very much admire that and appreciate that about companies when they can perform that self-assessment.
I am so intrigued. You'd mentioned being a smaller team. So you more intentionally hire senior developers. And I think that also makes sense because then you need to build up who's going to be in that mentorship pool? Because then people could leave, people could take vacations, and so then you need to have that support system in place. But yeah, I don't know what that then perfect balance is. It's like, okay, so then as soon as you have like five people available to mentor or interested in mentorship, it's like, then do you start bringing in the conversation of like, let's bring in someone that we can help build up and help them be successful and join our team? And I don't know what that magical number is.
I do think it's important for teams to reflect to say, "Can we take on someone that's junior?" All the benefits of having someone that's junior. And then just being very honest and then having a plan for once that junior person does arrive. What does their career path look like while they've joined that team, and who's going to be that person that's going to help them level up? So not only make that choice upfront of yes, we are bringing someone on but let's also think about like the first six months of their work here at the company and what that's going to look like.
It feels like an important step that a lot of companies fail to do. And I think that's why there are so many articles that then are like, hey, if you're a junior dev, here's all the things that you should do to be the best junior dev. That's fabulous. And we're constantly shoring up junior devs to be like, hey, here's all the things that you need to be great at. But we don't have as many conversations around: hey, here's all the things that your manager or the rest of your team should be great at to then support you equally as you are also doing your best to meet them. Like, they need to meet you halfway.
And I'm not completely unsympathetic to the plight; I understand. It's often where I've seen with teams the more senior developers that have very strong mentorship communication skills are then also the teammates that get pulled into all the meetings and all the different projects, so then they are less available to be that mentor. And then that's how this often fails. So I don't think anybody is going into this intentionally, but yet, it's what happens for when someone is new and joining a team, and it hasn't been determined the next six months what that person's onboarding and career path looks like.
Circling back just a bit, there's the question around, can juniors start with a remote team? I can go first. And I'm going to say unequivocally yes. There's no reason a junior can't start with a remote team. Because all the things that I feel strongly about come down to how is your team going to plan for this person? And how are they going to support this person? And all the benefits that you get from being in an office with a team, I think those do exist.
And frankly, for someone like myself, it can be easier to establish a bond with someone that you get to see each day, get to see in person. You can walk up to their desk and can say, "Hey, I've got a question for you." But I think all those benefits just need to be transferred into a remote-friendly way. So I think it does ratchet up how intentional you have to be with your team and then onboarding a junior developer. But I absolutely think it's doable, and we should do it.
CHRIS: You went with unequivocally yes as your answer. I'm going to go with a qualified maybe as my answer. I want this to be true, and I think it can be true. But I think it takes all the more intentionality than even what we've been describing. To shift the question around a little bit, what does remote work mean? It doesn't just mean we're doing the work, but we're separate. I think remote work inherently is at its best when we also are largely async first. And so that means more structured writing.
The nature of the conversation tends to be more well-formed in each interaction. So it's like I read a big document, and then I pass it over to you. And at your leisure, you respond to it with a bunch of notes, and then it comes back to me. And I think that mode of interaction, while absolutely wonderful and something that I love, I think it fits really well when you're a little bit further on in your career when you understand things a little bit better. And I think the dance of conversation is more useful earlier on and so forth.
And so, for someone who's newer to a team, I think having the ability to ask a quick question over and over is really useful to someone who's early on in their career. And remote, again, I think it's at its best when it's async. And those two are sort of at odds. And so it's that mild tension that gives me pause of like, something that I think that makes remote work great I do think is at least a hurdle that you would have to get over in supporting someone who's a little bit newer. Because I want to be deeply present for someone who's newer to their journey so that they can ask a lot of questions so that I am available to be interrupted regularly.
I loved at thoughtbot sitting next to someone and being their mentor and being like, yeah, anytime you want, just tap on my desk. If I got my headphones on, that doesn't mean I'm ignoring you; it means I just need to make the sounds go away for a minute because that's the only way my brain will work. But feel free to just tap on my desk or whatever and grab my attention for a moment. And I'm available for that. That's an intentional choice. That's breaking up my continuity of the day, but we're choosing that for a reason.
I think that's just a little harder to do in a remote context and all the more so if we're saying, hey, we're going to try this async thing where we write structured documents, and we communicate in these larger, more well-formed exchanges back and forth. But I do believe it can be done. I think it should be done. I just think it's all the harder for all of those reasons.
STEPH: I agree that definitely makes it harder. But I'm going to push a little bit and say that when you mentioned being deeply present, I think we can be deeply present with someone and be remote. We can reduce the async requirements. So if you are someone that is more senior or more accustomed to the team, you can fall back to more of those async ways to communicate.
But if someone is new, and needs more mentorship, then let's just set up time where we're going to literally hang out for a couple of hours each day or whatever pairing environment works best for them because pairing can also be exhausting. But hey, we're going to have a check-in each day; maybe we close out each day and touchpoint. And feel free to still message me and interrupt me. Like, you're going to just heighten your availability, even though it is remote. And be aware, like, hey, this person could message me at more times, and I'm okay with that. I have opted into this form of communication.
So I think we just take that mindset of, hey, there's this person next to me, and I'm their mentor to like, hey, they're not next to me, but I'm still their mentor, and I'm still here for them. So I agree that it's harder. I think it falls on us and the team and the mentors to change ourselves versus saying to juniors, "Hey, sorry, it's remote. That's not going to work for you." It totally works for them. It's us, the mentors, that need to figure out how to make it work.
I will say being on that mentor side that then not being able to see someone so if they are next to me, I can pick up on body language and facial expressions, and I can tell when somebody's stuck. And I can see that they're frustrated, or I can see that now's a good time for me to just be like, "Hey, how's it going? What are you working on? Or do you need help with something?" And I don't have that insight when I'm away. So there are real challenges like that that I don't know how to address.
I have gone the obnoxious route [laughs] where I just message people, and I'm like, "Hey, how's it going? How's it going? How's it going?" And I try not to do that too much. But I haven't found a better way to manage that other than to constantly check in because I do have less feedback from that person that I'm working with unless they are just incredibly open about sharing when they're stuck. But typically, when you're newer to a team or newer to a career, you're going to be less willing to share when you're stuck.
But yeah, there are some real challenges, but I still think it's something for us to figure out. Because otherwise, if we cut off access for remote teams to junior folks, I mean, that's where we're headed. There are so many companies and jobs that are headed remote that not being junior friendly and being remote in my mind is just not an option. It's something that we need to figure out. And it's hard, but we need to figure it out.
CHRIS: Yeah, 100% on we need to figure that out and that that's on us as the people managing and structuring and bringing folks into teams. I think my stance would be like, let's just be clear that this is hard. It takes effort to make sure that we've provided a structure in which someone newer to a team can be successful. It takes all the more effort to do so in a remote context, I think. And it's that recognition that I think is critical.
Because if we go into this with the wrong mindset, it's like, oh yeah, it's great. We got this new person on the team. And yeah, they should be ready to go in like two weeks, right? It's like, no, no, this is a different thing. We need to be very clear about it. This is going to require that we have someone who is able to work with them and support them in this. And that means that that person's output will likely be a little bit reduced for the period of time that we're talking about. But we're playing a long game here. Let's make sure we're clear on that. This is intentional.
And let's be clear, the world of hiring and software right now it's not like super easy. There aren't way more software developers than there are jobs; at least, that's been my experience. So this is something absolutely worth investing in for just core business reasons and also good for people. So hey, it's a win-win. Let's do it. Let's figure it out. But also, let's be clear that it's going to be a little tricky along the way. So, you know, let's be intentional about that. But yeah, obviously do it, got to do it.
STEPH: Wait, so I feel like we might have circled back to unequivocally yes. [laughs] Have we gotten there, or are you still on the fence?
CHRIS: I was unequivocally yes from the beginning, but I couched it in, but...yeah, I said other things. You're right. I have now come around; let's say to unequivocally yes.
STEPH: [laughs] Cool. I don't want to feel like I'm forcing you to agree with me. [laughs] But I mean, we just so rarely disagree. So we've either got to identify this as something that we disagree on, which would be one of those rare occasions like beer and Pop-Tarts.
CHRIS: A watershed moment. Beer and Pop-Tarts.
STEPH: Yeah, those are the only two so far.
[laughter]
CHRIS: Not together also. I just want to go on record beer and Pop-Tarts; I don't think would be...anyway.
STEPH: Ooh, I don't know. It could work. It could work.
CHRIS: Well, there's another thing we disagree on.
STEPH: I would not turn it down. If I was eating a Pop-Tart, and you're like, "Hey, you want a beer?" I'd be like, "Sure," vice versa. I'm drinking a beer. "Hey, you want a Pop-Tart?" "Totally."
CHRIS: Okay. Well yeah, if I'm making bad decisions, I'm obviously going to chain them together, but that doesn't mean that they're a good decision. It's just a chain of bad decisions.
STEPH: I feel like one true thing I know about you is that when you make a decision, you're going to lean into it. So like, this is why you are all about if you're going to have a Pop-Tart, you're going to have the highest sugary junk content Pop-Tart possible. So it makes sense to me.
CHRIS: It's the Mountain Dew theorem, yeah.
STEPH: I didn't know this had a theorem. The Mountain Dew theorem?
CHRIS: No, that's just my name for it. Well, yeah, if I'm going to drink soda, I'm going to drink Mountain Dew, the nonsense nuclear option of soda. So yeah, I guess you're describing me, although as you say it back to me, I suddenly feel very, like, oh God, is this who I am as a person? [laughs] And I'm not going to say you're wrong. I'm just going to spend a little while thinking about some stuff.
STEPH: I mean, you embrace it. I think that's lovely. You know what you want. It's like, all right, let's do this. Let's go all in.
CHRIS: Thank you for finding a wonderfully positive way to frame it here at the end. But I think on that note, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
We've got a tricycle anniversary! 🥳 Will it be ruined by a cockroach?
Steph shares an update regarding some of the progress and discoveries that she's helped make with a client in regards to speeding up CI.
Chris is finally getting a little bit more back into the code at work and finds himself riding another time management struggle bus. P.S.: Who even names these apps?!?!
Children of Time
Maker's Schedule, Manager's Schedule
The Backwards Brain Bicycle - Smarter Every Day 133
Clockwise - Time Management For Teams
One month on Analog
Getting Things Done
Bullet Journal
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: I have officially started recording. You are on the mic, friend.
CHRIS: This is on the mic. Oh goodness.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, normally, I would say, "What's new in your world?" But this week, this day, in fact, is a very special day. Actually, technically, it's tomorrow. But did you happen to know that you have achieved your tricycle anniversary here on The Bike Shed?
STEPH: No. [laughs]
CHRIS: Three years. Three years ago tomorrow, Episode 196: I Can Be Wrong on the Internet, wonderful title for it, was released. That was the first episode where you were formally a co-host. You'd come on a few times before that, but that was three years ago.
STEPH: That's incredible. Man, you totally got me. [laughs] You were switching it up for our intro, and our intro is very formalized. As you've said before, it is per your contract that’s how we do our intro. [laughs]
CHRIS: Yes.
STEPH: That is incredible. Three years. Wow. You know, I had thought about not the particular anniversary, but I was chatting with the Boost team earlier because I'm always encouraging people like, "Hey, write a blog post. What you just said sounds incredible. That would be wonderful as a blog post." And so I felt the need to convey like, I'm terrible at writing blog posts. I have written a grand total of two. I have a third one that's in draft state and has been that way for a long time, at least a month, I believe.
And so, I am not great about writing and publishing blog posts. But I was like, but I could podcast. And so I looked up, and I was like, I know I've done around over 100; I think around 140 episodes. And so I was like, that makes me feel better. Those who can't write podcast. [laughs]
CHRIS: I'm with you on that front, that I can just keep editing a blog post for forever. I actually do have some stats that I gathered for this as well. So like you said, you're close to 140 episodes. Let's assume an average of 40 minutes per. That gets us to around 5,000 minutes of audio or, said differently, that's like 87 hours or 3.6 days.
STEPH: Whoa.
CHRIS: I know, right?
STEPH: The hours really hit home for me. [laughs]
CHRIS: Not the days? I like the days one.
STEPH: No. The hours one, I don't know, the hours one resonates with me. That is something. That's very cool.
CHRIS: 87 hours, yeah. Hopefully, in that time, I think we've said some useful things.
STEPH: I was going to say as someone who started out as your co-host, and I was really certain I had nothing to say, I have 87 hours that documents otherwise. [chuckles]
CHRIS: Indeed. And having been on the other side of the mic from you for most, if not the vast majority of those, you have wonderful things to say. It has been such an absolute pleasure getting to share the mic with you and talk about tech, and nonsense, and life, and all of the things. So yeah, thanks for coming on this adventure.
STEPH: Well, that's very sweet. Thank you so much. And I appreciate you. And it has been amazing. It's been so fun to be your co-host. So I'm really glad that you convinced me to come on this adventure with you. Speaking of adventures, I have a very silly one that I'm going to start us out with. I want to tell you about Henry. You don't know Henry, so I'm going to introduce you to who Henry is.
So we have moved into our new place in North Carolina, and I went to our bathroom to brush my teeth and get ready for bed. And when I turned on the water, out popped this giant cockroach that just came scurrying across the sink, and I panicked. And so I went flying out of the bathroom and hopped up on the bed because I'm an adult, and that's what you do when you encounter a cockroach. And I was like, okay, it's just a cockroach, calm down, which is funny. Like, cockroaches and spiders, I can't do. Snakes and mice totally cool with; I can handle them. But cockroaches and spiders are my fear.
So then I was like, well, maybe if I name him or if I named them, this cockroach, then that will help, and I will be less scared of them. So I have named this cockroach Henry. And so now, when I go into the bathroom, I will often tell Henry like, "Hey, I need you to vacate. I'm coming in." And I reached over…it was a day or two later I reached over to get some soap, and then out pops Henry.
Turns out naming them didn't help. I immediately ran out of the bathroom because [laughs] they are just so fast. Something moving that quickly just scares me. So I now have this ongoing battle with Henry. I think Henry is going to win, and I'm going to end up having to use the guest bathroom until Henry decides to vacate the home.
CHRIS: Oh, Henry. Well, Henry wins this game. But I like that you tried, though, giving it a name. I will say I've actually had an experience very similar to this, and it worked in my case. So there's a really fantastic book series that I've read called The Children of Time I believe is the first one. There's the Children of Time and then The Children of Ruin. And then there's a third one coming out. It's by a fantastic author, Adrian Tchaikovsky. The first book, The Children of Time, was recommended to me by another former thoughtboter, Greg Fisher.
It's just such a unique book. It's about spiders; just it's super-duper about spiders. Whenever I tell anyone this, they're like, "I don't like spiders." I'm like, "Trust me; I'm not a spider guy. That's not who I am." And yet reading this book, these spiders they've got personality. It's just fantastic. This author has such a unique voice and really does such a fantastic job of bringing to life a different type of intelligence, a different sort of point of view on the world and the universe, spiders specifically. And I found myself thinking about spiders differently.
In the book, there are a handful of names for spiders. They're not specific characters in the book. It's almost like the names get reused for different representative spiders, which I realize I'm doing a terrible job of describing it. Everyone should read this book. It's utterly fantastic. But I found myself I would see a spider out in the world, and I'd be like, "Oh, that's Fabian. Yeah, no, Fabian is our friend," literally, this happened.
There was a large spider that was living out on our deck, and I was like, oh, no, no, no, we have to protect Fabian. This is Fabian's spot now. Like, he's just chilling. He's killing some bugs that we don't want. Like, Fabian is our friend. And so this totally worked for me with that book. But I had to go to that level; just giving it a name would not have been enough. They needed to have a backstory, and a history, and all of that. But yeah, again, cannot recommend these books enough.
STEPH: I really appreciate that I'm not alone in this approach and that it resonated with you as well. Okay. All right. So next, I will work on a backstory for Henry, and then maybe this will help. I also just need Henry to slow down. Henry is just…he's too fast. And so when he comes charging out of these hidden places, it scares me. [laughs] So I also need Henry to just be a little slower or a lot slower or to just go find another home. That would be great too.
CHRIS: Yeah, I'll be honest; I haven't read a book about sentient cockroaches. I feel like, I don't know, maybe I could come around. But I was surprised by what happened with spiders, so...
STEPH: Just the idea of that book, ugh. Okay, I'm going to move on. I'm going to move on. [laughs]
CHRIS: It's so good. You have to read it. Here's the thing, like that feeling, what if that feeling got to go away and you could think about spiders differently? Because here's the thing, spiders are friends. Spiders are on our team.
STEPH: Spiders I can handle. I'm into the idea of that book, but when you said a book about sentient cockroaches, that one just ugh. [laughs]
CHRIS: Gotcha. Okay. All right. Okay. Sure.
STEPH: So I appreciate you humoring my Henry story. But now, for my own sake, I'm going to move away from the topic of cockroaches. [laughs] That's what people came to listen to, right? So a story about roaches named Henry and spiders named Fabian.
On a more technical note, I have an update that I'm very excited to share in regards to some of the progress and discoveries that Joël and I have made with our current client, where we're looking to speed up CI and are interested in adding some more machines to then also speed up how quickly the tests run.
And along that journey, we have also talked about tentpoles. In terms of how we're splitting up and distributing that work right now, it's based per file. So for a tentpole, if you have a file that takes 10 minutes but all of your other files take 2 minutes to complete, then 10 minutes is the fastest that you're ever going to achieve because you have this one file that's holding everything else up.
So we have manually addressed a lot of those files by splitting them out. So just literally taking like this 10-minute file and splitting it into three or four files, whatever we need to bring it in line with the other files. And that had some really positive results theoretically.
So looking at the math, if we start with our current state, so we have 86 processors; this is on one machine. So on one machine, we have 86 processors with the presence of tentpoles, so we haven't manually split any files. It takes around 14 minutes for all of the RSpec tests to run. So that's around 560 minutes total of work that is then being distributed across these 86 processors.
In theory, if we addressed all of those tentpoles and we could bring all the files down where they only take six and a half minutes to complete so everything is equal in that regard, we should be able to have the RSpec tests complete in around eight and a half minutes. So essentially, the math behind that is we have 560 minutes' worth of work. We divide that by 86 CPUs. We added two minutes because there's going to be a little bit of like boot-up time and just maybe some other noise that's there. So then that brings us to around eight and a half minutes. The problem is we haven't seen that. We haven't gotten that low when we actually run the RSpec test suite.
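For reference, the estimate described above works out like this. A minimal sketch of the back-of-the-envelope math, using only the numbers quoted in the conversation (the two-minute term is the boot-up and noise fudge factor mentioned):

```ruby
# Best-case wall-clock time if the 560 minutes of spec work were spread
# perfectly evenly across the 86 processors, plus a couple of minutes of
# per-run boot-up noise.
total_work_minutes = 560
processors         = 86
overhead_minutes   = 2

estimate = (total_work_minutes / processors.to_f) + overhead_minutes
puts estimate.round(1) # => 8.5
```

That 8.5-minute figure is the theoretical floor they were comparing against, which is why the observed 11-minute runs prompted the investigation that follows.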
So we're seeing around more like 11 minutes. So it currently takes around 14 minutes. We've manually split files, and we're seeing more consistently that things are taking around 11 to 11 and a half minutes. And so we really paused because our next step is we want to automate how we address these tentpole files where we don't want to have to manually split files. That is just too brittle. It's very easy for someone to add to that file, and then suddenly it spikes back up, and it becomes the file that's holding up everything else.
It's also really hard to split these files. I'm not going to go there, but this is one of my other gripes with shared examples. These files can be difficult to split because you don't exactly know what's getting run, and then trying to chunk them into different files is tedious and not an easy task. So we really don't want people to have to do that either, or to introduce metrics. We don't really want to do that, although we could add something like, "Hey, you have now introduced some tests to this file, and it now exceeds the threshold that we're looking for in terms of how we want to make sure there aren't any tentpoles in the test suite."
So to avoid a lot of those concerns, ideas, then we automate. So there are some tools that we could use. parallel_tests is the tool that we're using right now, but that's per file. There's another gem that I'd mentioned before called parallel_split_test that we have taken a look at that will then chunk them at smaller levels. I'm still not exactly sure how because even though I declared that I'm looking into that gem, we've gotten distracted with some other work. But there's parallel_split_test. There are also some other approaches. There's RSpec Queue. There's Knapsack Pro that will then split tests more at the individual example level, so then you don't have a bottleneck as to how long the file is. We don't care anymore.
So we wanted to start looking into how we could automate using those tools. But now that we're not seeing the payoff from when we've done the manual splitting, we've now backed up to say, "Okay, before we invest in this automation, we really want to see this pay off first." So that's where we're at right now. And the mantra that has been in my mind as we're going through this is verify twice, automate once.
And we've realized that we're just not there yet when it comes to automating. So to help us verify, before we go into automation mode, we are tracking the data in terms of we see how the tests are actually getting distributed across all the CPUs. We can see which tests were run for each CPU, and then we can see across which CPU how much work each process was given.
And we're seeing that some of the processes are given more work, like maybe one process is given like nine minutes of work, but another one's only given like five minutes of work. But then, if we look at the individual tests that are being distributed, there's room. It's like, okay, well, why didn't parallel_tests then distribute some of these groups of tests over to this other CPU that only was given four minutes of work?
And we see there's an opportunity because not all of those tests...it's like, well, maybe it's too big. They couldn't bring it over to the other CPU because then that would have pushed it to 10 minutes or something like that. But that's not the case because some of the test files are short enough where it's maybe like 30 seconds or maybe a minute. So it's like, it seems like a really good candidate that it could have gotten shifted to a CPU that has less work.
The thing that we have discovered is we're analyzing two different things. So we are looking at the expectation. So we are providing to parallel_tests like, hey, in a previous run, this is how you split the test, or here's the runtime for every single file, so use this data to assign work to every CPU. Based on that data, then some of the CPUs weren't given more work because we expected that CPU to take longer to complete. But then, in reality, it took less time. So we're seeing that there is a variation in terms of how long a test file takes to run.
So based on historical data, let's say we have like a controller test, and based on historical data, it took four minutes to run. But then, in actuality, maybe it took suddenly eight minutes to run, or maybe it took like two minutes to run. So we have this level of variation in how long a test actually takes to run that we can't achieve a perfect distribution because it's fluctuating so much.
And that's why we're seeing in these graphs that we're creating around why one CPU has a lot more work than another CPU is because yes, based on historical data, this should have been distributed perfectly, but the actual runtime then of those tests has changed enough. That's why it looks like our distribution algorithm isn't working.
And one of the ways we confirmed this is Joël took the parallel RSpec runtime log that is being generated from running that test (So that's the historical data that we're providing that shows how long each test takes to run.) and ran that through the parallel_tests algorithm, and it was perfect. Parallel_tests did a beautiful job of assigning those to all of the CPU. So we realized it had to be some fluctuation in the actual runtime that's then causing this concern.
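To make the distribution step itself concrete, here is a minimal, hypothetical sketch of the kind of greedy, runtime-based grouping these tools do. This is an illustration of the general idea, not parallel_tests' actual code: each file goes to whichever worker currently has the least expected runtime, so the grouping is only ever as good as the historical numbers you feed it.

```ruby
# Greedily assign spec files to workers using *historical* runtimes.
# If actual runtimes swing widely, the real per-worker wall clock will look
# unbalanced even though this assignment was "perfect" on paper.
def distribute(historical_runtimes, worker_count)
  workers = Array.new(worker_count) { { files: [], expected_minutes: 0.0 } }

  # Biggest files first tends to pack the most evenly.
  historical_runtimes.sort_by { |_file, minutes| -minutes }.each do |file, minutes|
    target = workers.min_by { |worker| worker[:expected_minutes] }
    target[:files] << file
    target[:expected_minutes] += minutes
  end

  workers
end

# Made-up file names and timings, purely for illustration.
historical = {
  "spec/a_spec.rb" => 9.0, "spec/b_spec.rb" => 6.0,
  "spec/c_spec.rb" => 5.0, "spec/d_spec.rb" => 3.0
}

distribute(historical, 2).each_with_index do |worker, index|
  puts "worker #{index}: #{worker[:files].join(', ')} (~#{worker[:expected_minutes]} min expected)"
end
```

Feeding the recorded runtime log back through a splitter like this is essentially the verification step described above: the assignment looks balanced, so the imbalance has to come from the actual runtimes drifting away from the historical ones.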
So I'm going to share some stats; a couple of these are a little scary stats where, for example, we have a test that could run anywhere between 6.4 seconds but has taken as long as 60 seconds to run. So when you have that level of variation in terms of how long the test takes to run, you can't begin to distribute the work perfectly because it's going to differ each time.
And so now we're looking to understand what's causing that variation. Maybe it's real network requests that are going out that are then causing that fluctuation in timing. Maybe it's database contention in terms of some of those files are just creating a lot of records, and then sometimes it runs quickly, and sometimes it goes more slowly if there are a lot of other tests that are also running and hitting the database at that same time. Who knows?
We have lots of interesting options to explore, but I know I've shared a lot. Hopefully, that painted a helpful picture as to some of the discoveries that we have made around how we're distributing this work. But yeah, I'm going to pause there.
CHRIS: Oof. Yeah, that's an adventure. Partly, I think you're just making a great sales pitch for Knapsack because my understanding is this is exactly what their point of view and approach is; like, don't try and think your way out of this problem, just try and distribute the work on demand. Just in time distribution sort of thing of like, you got a bunch of workers, and each worker pulls off the queue.
There's probably some intelligence to how to sort the queue like maybe you lead with feature specs or the things that have the highest variance put them earliest in the queue such that at the end, you're getting a very deterministic distribution such that you get nice and even performance across them.
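As an illustration of that queue-based idea (a sketch of the general approach only, not Knapsack Pro's actual API), the key difference is that files aren't pre-assigned at all: they sit in one shared queue, roughly slowest first, and each worker pulls the next one the moment it frees up.

```ruby
# One shared queue of spec files; each worker pulls the next file as soon as
# it finishes its current one, so a surprisingly slow file can't wreck a
# pre-computed grouping. File names and timings here are made up.
expected_runtimes = {
  "spec/features/checkout_spec.rb"   => 60,
  "spec/requests/orders_spec.rb"     => 20,
  "spec/models/user_spec.rb"         => 12,
  "spec/helpers/date_helper_spec.rb" => 3
}

queue = Queue.new
expected_runtimes.sort_by { |_file, seconds| -seconds }.each { |file, _| queue << file }

workers = 2.times.map do |index|
  Thread.new do
    begin
      while (file = queue.pop(true)) # non-blocking pop; raises ThreadError when empty
        puts "worker #{index} running #{file}"
        # a real runner would shell out here, e.g. system("rspec #{file}")
      end
    rescue ThreadError
      # queue drained; this worker is done
    end
  end
end
workers.each(&:join)
```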
But nonetheless, Knapsack Pro seems like a very good thing. I'm intrigued by it for our usage as well. But I'm super intrigued by something that can vary by 10X; 6 seconds versus 60 seconds is just so interesting. Like, are these feature specs, first of all? Because that's the only thing that could make sense. If this is a model spec and this is happening, I'm like, wow, what? A feature spec, maybe. But even then, it's still surprising.
STEPH: So I haven't dug into the specifics of that file. There are some other files that also have some scary stats. So that is next on my list to figure out how that variation is taking place. I don't think it's a feature spec just from glancing at the title of the file. But I'm also intrigued.
CHRIS: Huh. Like, feature specs are the one standout that could make sense to me because, inherently, there is waiting. There are wait characteristics in feature specs. And so you can stack up enough of those waits for each of the sort of like find, assert, look for, fill in, et cetera. Each of those can have a couple of seconds attached to it. But other stuff is...even for a feature spec, it's surprising to take that long. Anything else is super surprising.
A couple of things stood out to me otherwise on what you said; the verify twice, automate once I love that. The general sort of the follow-up, if nothing else, of did that thing that we thought might make a change actually do it? Is it impacting in the way that we expected? Are our assumptions being validated?
There's a wonderful blog post that I read a while back that just sort of named an idea that has stuck in my head; it was called Calling Your Shot with TDD. I think I'm misrepresenting the title. But it was something in that space where TDD as an approach is a way to be like, I think I have a mental model of how the system works, and I'm now going to write a test that constrains to that behavior, and then we go from there.
And TDD as a practice represents a certain sort of specific example of this. But what you're describing of like, well, we made this change. Did it produce the outcome? Or it's so easy to do this with performance optimizations of, oh yeah, an N+1 query. Let's fix that, and then the page will be better. And it's like, is it actually, though? In some cases, in most cases, fixing an N+1 will improve performance. But will it improve performance in a useful amount? Or is it the limiting factor in terms of the performance fixes?
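One concrete way to "call the shot" on an N+1 fix is to assert on the query count itself rather than trusting that the change worked. A hedged sketch using ActiveSupport::Notifications; the Post and author names are hypothetical and just stand in for any association being eager-loaded:

```ruby
# Count the SQL queries issued inside a block so a test (or console check)
# can verify an N+1 is actually gone, rather than assuming the fix worked.
def query_count(&block)
  count = 0
  counter = ->(_name, _start, _finish, _id, payload) do
    count += 1 unless payload[:name] == "SCHEMA"
  end
  ActiveSupport::Notifications.subscribed(counter, "sql.active_record", &block)
  count
end

# Hypothetical example: loading posts with their authors should issue a small,
# constant number of queries (e.g., 2), not one extra query per post.
queries = query_count { Post.includes(:author).limit(10).map { |post| post.author.name } }
puts queries
```

Whether that fix is also the limiting factor for page performance is a separate measurement, which is the larger point being made here.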
And so having that habit and that muscle of having the hypothesis, making the change, and then verifying that you get the expected result is so, so critical. And just sort of going through that loop is so important. It's interesting to hear that you're like, we had some ideas. And then we tried some stuff, and we're not super-duper seeing what we expected is...yeah, you're doing the work there if nothing else.
STEPH: Yeah, we have found on this project that it's really important, as you said, to call your shot in terms of like, what's the math telling us in terms of what we think is achievable versus what are the actual results that we're seeing? And then verifying it. And I feel a lot better because before, it was just like, we don't know why this isn't paying off. We thought that splitting up these files would totally pay off, and we would drop from 14 minutes to suddenly eight and a half minutes, and it was going to be beautiful. And we didn't get there. Why is that?
And so this journey has helped me feel a lot better in regards to we have more of an understanding as to why, and it's because we validated that parallel_tests is doing a great job in terms of distributing the work across each of the processes. But because there's that variance in the test files, the algorithm can't begin to then manage that. I mean, if your data kind of lies to you in terms of what you think is going to run versus what actually runs, then we're always going to have this poor distribution of work.
And so then the next step is, do we actually dig into understanding more of like do we want to look at those files specifically and address that concern? Or do we want to go ahead and take that leap of faith over to Knapsack Pro? Because that's what paused us from moving on to something bigger to then automate how we distribute work.
Maybe since we now understand why we didn't see the payoff, we feel more comfortable taking that leap of faith that we will still see the payoff once we hand this off to a different process. And we're actually breaking it up per test or per example versus per test file. We may see less variation. Maybe that's not true. Maybe there will actually be some examples that still have a high variation, but at least it won't be grouped as an entire file, so it may get a little better. There's also the fun idea...I'm going to categorize this under the good idea, bad idea area.
CHRIS: I think I go good idea, terrible idea, just to be clear [laughs] but.
STEPH: I like it. All right. Yeah, good idea, terrible idea. So for this, one of the ideas that came up was, what if we try to finagle some of this just to still see some of the reward? Because there's part of us that we just want to see it pay off. Like, we've put a lot of effort into this. We want to see something get faster.
And it's like, well, if we know there's some variance with these files, what if, for those files, instead of parallel_tests ever thinking that this test could take as little as six seconds, we just say, "Nope, we know this file is always going to take somewhere more in the middle," so as to then improve our distribution. And that way, there's not such a high variance where maybe the historical data will show that it took 10 seconds, but then the next run it takes more like a full minute.
And in this case, we're just like, no, you always take a full minute, or you always take 30 seconds. That might even out some of the distribution. That was one of the fun ideas that came up in terms of how could we help improve the distribution? I don't think we'll actually do that because that seems like unnecessary work and then has to be managed and justified and documented somewhere. And there are concerns that go with that. But it was still fun to brainstorm of, like, we have this thing. How do we want to do a better job of distributing between the actual work, tricking it, and then moving to another service?
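If they did try that "trick it" route, the sketch is small: clamp the recorded runtime for known high-variance files before the data gets handed to whatever does the splitting. A minimal, hypothetical illustration, assuming the historical runtimes are available as a hash of file name to seconds (the file names and pinned values are made up):

```ruby
# Pin known high-variance files to a fixed "expected" runtime so the splitter
# stops trusting an optimistic historical number for them.
PINNED_RUNTIMES = {
  "spec/services/flaky_network_spec.rb" => 30.0 # observed anywhere from ~6s to ~60s
}.freeze

def adjusted_runtimes(historical)
  # Keep every recorded runtime, but override the pinned files.
  historical.merge(PINNED_RUNTIMES)
end

historical = {
  "spec/services/flaky_network_spec.rb" => 6.4,
  "spec/models/user_spec.rb"            => 12.0
}

adjusted_runtimes(historical)
# => { "spec/services/flaky_network_spec.rb" => 30.0, "spec/models/user_spec.rb" => 12.0 }
```

As noted in the conversation, this is the option they leaned away from, since the pinned values would have to be justified, documented, and kept up to date by hand.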
CHRIS: I vote I'm team tricking it just to be clear but nah, probably not. Looping back, there is one other thing that you said in there that really stands out to me is there's a moment as you're looking through all this data, like, A, again to highlight the important work of measuring and actually validating that your hypothesis and procedure and everything's doing what you expected.
But then there was that moment of huh, that's weird, which is that is like a quote. That is an idea. That is a feeling; huh, that's weird. And whether or not you pursue it is such an interesting space to be in because a lot of the time, the work that we're doing has little bits of huh, that's weird. And do you pursue it, or do you not? Do you ignore it? And you're like, oh, that bug report is interesting. That could be a gigantic problem in our system, or it could just be a noisy network connection. I don't know.
As I think about the spectrum between folks very early in their career and senior developer as we try and think about what that means, this is one of those spaces that becomes really interesting to me. How much stuff...like, how deep is your understanding of all of the different things that you're working on such that most of the time, you have a mental model, and then the system behaves roughly in accordance with your mental model?
I think very early on in your career; I remember when I was starting out, I was like, I don't know how anything works. This is fun. I type some stuff. It works some of the time, and this is great. And slowly, over time, my knowledge and experience grew such that most of the stuff that I was doing fits within my mental model.
And then every once in a while, there's those huh, that's weird moments, and do you pursue them or do you not? Do you put them on a list to follow up on later? Just, like, that judgment point becomes a really interesting variation of what you know and don't know. And so, just the way you described that was interesting to me because it reminded me of that sort of conceptual space.
STEPH: I like that a lot because, yeah, that's constantly the space that we're in. And one of the things that I find so interesting about these kinds of Rails rescue or working with legacy code, I mean, I'm sure they apply to newer projects too. But there's a goal that you have in mind. And when do you recognize that you need to shift the goal, or you need to stick even more diligently to that goal?
And we're in that space of where we're constantly reassessing; this is our goal to speed up CI. Do we need to break for a week or two to then improve some of these tests concerns that we see? Or is this one of those times where we need to ignore that and acknowledge that it's a thing and be glad that we know about it but then really stick to our goal of speeding up CI and then moving forward with pulling in a service like RSpec Queue or Knapsack? One of those. And I agree; I think that's incredibly interesting.
And I like having those conversations of how do you decide what's the next best goal or a path to pursue? In my case, I often use timeboxing as my way to get around it where it's one of those okay; we have this idea. So like, I would like to timebox a couple of hours to look at those files to see.
Because then I can collect more data to be like, how obvious is this as to why we have this fluctuation and how long it takes this test file to run? Is it because we're making a real network call? Is it that obvious? Or is it something that's a bit more murky, and it's going to take a lot more time for us to then triage?
That is typically the tool that I will use if I'm still not sure between two decision points: I'm like, okay, well, let me timebox and collect a few more data points as if I'm pursuing this direction and then come back in a couple of hours to then reconsider which path I want to go down. I will keep you up to date as to which way we pursue, and yeah, I'll let you know how it goes. But that's most of what's going on in my world. What's new in your world?
CHRIS: What's new in my world? This week I've been getting a little bit more back into the code. We've just got a bunch of work to do. And so I've been trying to move from more of the defining-the-work, thinking-about-it mode into actually writing some code, as it were. And it has been harder than I expected, and I've been surprised by it. And specifically, what I mean by that is I have spent a lot of my time over the past handful of weeks more in the conversation, planning, negotiating, management-y type space. So we have a lot of different integrations that we're trying to work on.
So many of the things that I'm doing are having meetings with the external companies and talking through those integrations and what does that look like? And what features do we need? And what's the contract going to look like? Yeah, interesting, fun little things, maybe not my favorite stuff in the world, but that's fine. It needs to happen. It's critical work. But at this point, I'm now ready to move back in and actually start writing some code.
It was odd to me that I was struggling more than I expected to get back into the code. And then it kind of clicked back in, and I was like, oh no, wait, the work was too nebulous, and I couldn't find an angle of attack. Where do I start with this big, amorphous...like, build the integration for system X. I was like, I don't know how to do that. And I was kind of like, you know, like a kid rolling around the peas on their plate and not actually eating any of them sort of thing. I was just like, well, or I could...
It was weird. It was almost like in a dream where your legs don't quite work. I'm using a lot of analogies today, but that's fine. You think you can run. You're certain you knew how to run at one point in your life, but your legs just won't work. It was a little bit of that. And then I sort of snapped back in, and I remembered. I was like, oh, just break it apart into little pieces, write a checklist, sort that checklist by the things that make sense to you, start with the first one, and then work through it. And it came back. But there was this very odd moment where I didn't know how to do the work anymore. And it was like, that was scary. Yeah, it was weird.
STEPH: I find it so heartwarming that someone who is as skilled as you are and experienced as you are that you still can have those moments of like, there's this really big task in front of me, and I'm not entirely sure how to do this. And then I love that you fell back to sort of that what's my systems approach and to like, how do I solve big, murky problems? And that is to start with creating a list of some of the things that I know to do to move this forward and then organizing those. I love that story.
CHRIS: It's funny you mentioned timeboxing a minute ago, and I was like, yeah, right. That's another of the tactics that I use. And there's this whole toolset that exists, but it exists largely for me in the implementation side of the work that I do. And the other side, the conversations, and the planning and all of that, they really feel like these very distinct spaces. There's the Maker Versus Manager Schedule article by Paul Graham.
And really, it's interesting to me to experience how I'm moving across what feels like a wider gap. Like, I've bounced from the frontend to the backend a lot, or from I'll do some product management planning sort of stuff, and then I'll get back into the work. And somehow, that has all felt more cohesive and consistent.
And yet with the nature of this work, I'm finding that going back and forth between the two different sides...and even calling it two sides is also probably reductive. There are probably like nine different ways in which this thing gets sliced up. But as I'm bouncing between the different facets of my work, it's been trickier. And so it was useful to just recognize that and to recognize the fact that I was able to click back in.
There's a really fantastic video on YouTube. It's by...I believe the title of the channel is Smarter Every Day. It's this guy Destin who talks about different ideas and mechanical things and whatnot. But at one point, he built a bike that was backwards. And specifically, the way that it was backwards is your handlebars; when you turn your handlebars to the right, your front wheel turns to the right, when you turn to the left, your wheel goes to the left. So it's a very direct connection.
He made a bike that was reversed such that when he turned the handlebars to the right, the front wheel would turn to the left. And obviously, he could not ride this bike. This is an impossible bike to ride for a person who has learned to ride a bike under the normal circumstances. But he battled, and he fought, and eventually, he tricked his brain into learning how to ride this new bike.
But then he couldn't ride a regular bike. And there's actually a video of it. He describes the experience of trying to ride a regular bike and how it became this other foreign thing. And yet, at one point, his brain just clicked back in, and suddenly, he knew how to ride a bike again. And the people that were watching him thought he was pretending or something; really fantastic video.
And it speaks to this sort of thing. There are modes of thinking and ways that you're operating, and it sort of felt like that. If you watch this video and you go to the end where he's in Amsterdam, and he has to try and ride a regular bike, and it clicks back in, that's what work kind of felt like for me this week. So I'm going to stop stacking analogies on top of each other, but that's sort of where I'm at right now. In a way, it's been fun.
STEPH: Well, to be fair, your analogy of pushing the peas around on the plate, I could just see it. [laughs] I think that was a really good analogy for me that really resonated. I loved that one. I haven't seen that video. That makes a lot of sense to me. I think it does. I don't know. If someone was like, "Can you ride a bike with backwards handlebars?" I probably would have been like, "Sure," and then totally failed.
I can't recall if we've talked about this before, but in everything that you're sharing, it made me think about the context switching in regards to how my schedule has changed where before, I have my one on ones, and then I also have client work. And then I was interstitialing a lot of those one on ones with client work. But now I have three days that are dedicated to client work, and I have one day that is dedicated to the Boost team, and then I have Friday for investment days. And that has been huge for me.
I didn't realize how exhausting it was for me when I was switching context so much because then there's also some prep work and a little after work that goes with each one on one. And then just knowing that I had it and I had to make sure I budgeted time for that each day in addition to the client work. But then once I shifted to like, I have a day to just focus on this particular...like, my brain can click in to like, this is the mode I'm in. I am totally focused on my team and being a good team lead and having one on ones versus I am totally focused on I can just work on code and work on some of those gnarly problems.
That has been a really big shift for me and something that I just can't unsee now to realize how stressful it was before and how I feel like I wasn't doing as good of a job. But now, I feel really good at the end of each day that I was in a particular mode, and I was more productive because I was focused on that mode.
CHRIS: Yeah, absolutely. The phrase "click in" there, I love that as a mental-physical sort of representation of the thing. A friend of ours and former guest on the show, Matt Sumner, has recommended a piece of software to me a few times, which is called Clockwise. And Clockwise does an interesting thing that I feel conflicted about. I've not yet pursued it because I think there's sort of a...well, anyway, the thing that it does is it rearranges schedules to try and push meetings together such that you have larger gaps of heads-down, deep-work-focused time. I love that idea. I absolutely love it.
But I think it's really interesting to like; I believe very strongly as a manager in not rescheduling one on ones with my teammates. I want to make sure that that time is protected, that to them, it's very clear that we have this time. This is the space that we've carved out to have these sorts of conversations. I'm more okay with them switching it on me. But I think it's very important for me to not change that out on them or to not reschedule at the last minute.
And so I'm sensitive to just juggling around the meetings. But I love the idea of this thing of let's just try and squash all the meetings together. I'm happy to just have like three hours straight. I'm in that mode. That's the I'm thinking about people and process and all of that kind of stuff. And then I break, I have lunch, and I come back in the afternoon, and the afternoon is entirely clear. And that's heads down working time or vice versa.
I'm actually more of I would like to have my mornings entirely clear, and that's where I do my heads down thinking work, precious brain, all that kind of stuff. And then in the afternoon, I'd prefer all my meetings. I don't necessarily want the world to entirely go around my preferences for meetings, but if it happened, it'd be fine. But Clockwise is a really interesting sort of technical solution to this problem that I've yet to pursue, but I'm intrigued by.
STEPH: I'm intrigued by this too because I did this just today where I was going through, and I was updating my schedule. And I do this on my own. I am my own AI in this case where I'm thinking through, like, okay, I want to stack meetings together so that I don't have these awkward like 15-30 minute breaks, and then I still have more of a big chunk of focus time. And so, I am manually doing that for my schedule. And I would be intrigued to see what software would recommend...they could show me a pattern that I hadn't considered that works better for me than the version that I have.
The flip side is I've also learned to just be really good with, like, I have 10 minutes. Well, let's look at my to-do list, and what can I push along for 10 minutes? That is the other thing that having a tight schedule has helped me get better at is where even if I only have 10 minutes, before, I might have been like, oh, that's not enough time to do anything. Totally a lie. Ten minutes is great. Ten minutes you can totally take a look at something and then make a comment, or read it, or just have a little more context and nudge it along. I love the nudge it along approach versus the I have to sit down and get it done approach.
On this particular theme of context switching and productivity, I have a question for you. I was debating as to whether I was going to share it or not because I feel like it's still hand-wavy enough. I'm not sure I'm going to do a great job asking this question, but I'm going to go for it. I am looking for a way to manage not just the things that I have to do each day but some higher goals. So I really like the idea of themes. So I love when a week has a theme, a month has a theme. These are the things that I'm focused on.
So then, when I do have like these 10-15 minutes or this focus time, I know there's a particular theme that I'm pursuing, maybe it's more technical related, maybe it's more mentorship, or it's something I'm interested in pursuing. But I know it's going to take a couple of iterations to work on. And I haven't found a really good way to capture those themes.
Right now, if I have something like, say, I know I'm meeting with someone on Friday and we have a goal that we're going to collect some examples of this topic, what I'm currently doing is setting a daily reminder on my calendar each morning to be like, hey, just so this is simmering in the back of your mind, don't forget about it. Collect some examples about this topic. So it's one of those where, if I happen to see something, I want to be able to grab it and remind myself, like, hey, you're looking for this.
But it's been okay. I haven't loved it. And so I'm just in that space of where I'm trying to find a way of how do I capture the theme that I'm working on for a week or for a month but still keep that in line with my to-do items and my calendar and still ideally keep it all together? I don't want to have so many disparate places I have to go look to understand all the things that I'm focused on. Do you have any thoughts? Do you have a system for how you manage or even think about things in that space?
CHRIS: Oof. You've opened Pandora's Box here.
STEPH: [laughs]
CHRIS: I have some thoughts. What you asked, I think, is an incredibly deep question or one that there's no singular answer to this sort of thing. And the answer is specific to the person, and it's an evolving thing. Like for me, I have explored this space, personal productivity, and how to think about the bigger goals and all of that. I've explored it a lot. And it's evolved, and each phase of my life has a slightly different answer to how I think about this.
Also, to be clear, sometimes I say stuff, and it sounds like I know what I'm talking about or I've thought about this. And I'm like; I got it; I do not have it. This is a I do not have this one on lock. I'm constantly trying to solve this problem. I think the first thing that I'd go to is Getting Things Done book on personal productivity. It's the most sort of impactful or foundational in my thinking about how to look at this. And in particular, it has some ideas around the different levels at which we think about our work. There's like the day-to-day actions, and there are projects and areas.
And it's a little bit formal, frankly, in my opinion. But it does introduce the idea of the weekly review. And that structure is one of the things that has been true for me throughout all of the different variations of tools, and approaches, and productivity whatnots. The weekly review is a really useful time to sort of take a step back and think about things at a slightly higher level to make sure that you're staying connected to bigger goals and whatnot.
The other thing that comes to mind as you say this is Dave Rupert, another person who has been a guest on the show, has written a couple of times about his analog productivity system. So there's this...I think Ugmonk is the name of the cards, if I'm remembering correctly. But they're little index cards, basically, and you basically rewrite them every day. And it's this very manual, almost meditative, but very focused practice. It's vaguely similar to bullet journaling, which is another approach.
But each of them has this structured way in which you look at the work that you're doing. And I think there's a good opportunity in both of those systems, either the analog productivity thing that Dave Rupert does or bullet journaling, to be like, this is where my goals go, and so these are the goals for the week, and they're written at the top of the page, and then everything else goes underneath that.
But you always have top of mind and very visible the goals that you're going towards. And so it's things like that that have been my answer to this of like, I need to find somewhere within the system that I'm working in to have the overarching goals be present and accounted for. That's tricky. And analog actually seems to be a really great way to do it, just like pen and paper is a great solution.
So even if you're using a system like Todoist, then maybe your daily structure is written on a piece of paper such that at the top of it, you write those things that are true for the week, or they're on your iPhone, desktop, or whatever, that's not a thing. They're in like a widget on your iPhone screen or on your computer desktop or something like that. But you keep them top of mind. You find some way to do that such that you're constantly anchored to the things that you say or the big rocks that you want to fill the sand around.
STEPH: I really like that book, Getting Things Done. I read it a long time ago, so that would be fun to revisit and see if I get any new bits of knowledge that are helpful for me. I like the idea of that more manual task of writing things down. I have found that to be very helpful for me because I am someone that I really like to have as little screen time as possible.
So if I can have my to-do list away from my screen, that's really nice. But then I also just recognize that there's a nicety to having it stored in an app. So then that way, it is shared across devices, and I can see it at any point. And it's stored somewhere, and I don't have to try to reread my messy handwriting. There are benefits.
But I think you highlighted the thing that I'm looking for, which is in Todoist or something similar. Right now, I have these discrete action items, and I would love if at the top...I've done this with Trello boards before where a team is working on a particular experiment or trying something new that for that iteration, we make a ticket and then we will label it something, some bright, pretty color, and then just keep it at the top of to-do and then each day we walk the board. And it's a friendly reminder of, like, hey, this is our theme for this iteration. Here's a friendly reminder.
I would love something that's like that where it's like, hey, this is your theme for this week, or this is your theme for the month; here's a friendly reminder. And I think I'm going to see if there's a way I can do that with Todoist to keep things on the same space even though I don't think it's really built to support something like that. But I'm going to check it out.
And there is a boards feature in Todoist that I haven't leveraged. So maybe if I, instead of doing the ordered list view if I do the boards view, then I could do what I just said with Trello in terms where I have a card that stays there for the week and reminds me of a goal that I'm working on or a theme that I'm working on. Cool. That was helpful, thank you. But yeah, I've been in that space of trying to figure out how to capture goals. So I appreciate you sharing those ideas with me.
CHRIS: Oh, I'm always happy to talk about this. If anything, I've been trying to be somewhat reserved so that every episode of the show isn't me talking about my continuous search for a new productivity tool. I'm still in Things. I'm not super happy about it. I keep looking at TickTick. I want Todoist to work, but it doesn't. OmniFocus calls to me.
I have a note on my phone with the list of features that I want. And I keep telling myself over and over, you're not allowed to write your own software for this. And thus far, I have successfully avoided once again writing my own productivity and list management software. [laughs] But I don't know how long I can hold out.
STEPH: As you list the names for the different apps you're using, like, what was the first one?
CHRIS: Things.
STEPH: Things. Thank you.
CHRIS: OmniFocus, TickTick. What else? There's plenty more that I've looked at, [laughs] Todoist. Yeah.
STEPH: Yeah. As you're listing all of those, that reminds me that I have decided that I think people who name apps and startups are also the same people that name baby items because I've joined a mother's group. So another thoughtboter, Elaina, runs a really wonderful group of where moms get together once a month, and we just chat about all the mama things. And they were helping supply some recommendations. They were like, "Do you know what stuff you need to buy?" And I'm like, "No. Please, please tell me. What am I going to need?"
But they're having this conversation around like, "Oh, you've got to get the Björn bouncer, and the SNOO, and the Cuzzle Wuzzle, and the Bippity Bop..." all these things. I'm like, "I'm going to need y'all to use different terms because I have no idea what you're talking about." [laughs] And that is also, I think...yeah, that also goes with people who are naming things like TickTick and Things, naming those apps.
CHRIS: I'm also going to need you to spell them because many of these are not phonetic, or, more broadly, the English language isn't phonetic; the BabyBjörn, you know, that sort of thing. "To-do" but spelled T-E-D-E-A-U-X, I think, is one of them. It's like, come on, what are we doing here? [laughs] So yes, it's complicated out there.
STEPH: All I can think of is anytime someone's like, "Come on," all I can think of is that Peter Griffin clip where he's like, "Come on," and he's trying to get people to agree. I feel like that's some of the...that's my reaction when I read some of these [laughs] or some of those names where it's like, you're just trying to trip me up. But yeah, startups and naming baby items.
CHRIS: That's what this podcast is about.
STEPH: That's what it's all about, and cockroaches and spiders. [laughs] And I'm going to stop myself. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris came up with a mnemonic device: Fn-Delete – for when he really wants to delete something and is also thinking about password complexity requirements, which leads to an exciting discussion around security theater.
Steph talks about the upcoming RailsConf and the not-in-person option for virtual attendees. She also gives a shoutout to the Ruby Weekly newsletter for being awesome.
NIST Password Standards
3 ActiveRecord Mistakes That Slow Down Rails Apps: Count, Where and Present
Difference between count, length and size in an association with ActiveRecord
Ruby Weekly
RailsConf 2022
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris, happy Friday. You know, each time I do that, I can't resist the urge to say happy Friday, but then I realize people aren't listening on a Friday. So happy day to anyone that's listening. What's new in your world, friend?
CHRIS: I'm going to be honest; you threw me for a loop there. [laughs] I think it was the most recent episode where we talked about my very specific...[laughs] it's a lovely Friday, that's true. There's sun and clouds. Those are true things. But yeah, what's new in my world? [laughs] I can do this. I can focus. I got this.
Actually, I have one thing. So this is going to be, I'm going to say vaguely selfish, but I have this thing that I've been trying to commit into my brain for a long time, and I just can't get it to stick. So today, I came up with like a mnemonic device for it. And I'm going to share it on The Bike Shed because maybe it'll be useful for other people. And then hopefully, in quote, unquote, "teaching it," I will deeply learn it.
So the thing that happens in my world is occasionally, I want to delete a URL from Chrome's autocomplete. To be more specific, because it's easier for people to run away with that idea, it's The Weather Channel. I do not like weather.com. I try to type weather often, and I just want Google to show me the little, very quick pop-up thing there. I don't want any ads. I don't want to deal with that.
But somehow, often, weather.com ends up in my results. I somehow accidentally click on it. It just gets auto-populated, and then that's the first thing that happens whenever I type weather into the Omnibox in Chrome. And I get unhappy, and I deal with it for a while, then eventually I'm like, you know what? I'm deleting it. I'm getting it out of there. And then I try and remember whatever magical key combination it is that allows you to delete an entry from the drop-down list there. And I know it's a weird combination of like, Command-Shift-Alt-Delete, Backspace, something.
And every single time, it's the same. I'm like, I know it's weird, but let me try this one. How about that one? How about that one? I feel like I try every possible combination. It's like when you try and plug in a USB drive, and you're like, well, it's this way. No, it's the other way. Well, there are only two options, and I've already tried two things. How can I not have gotten it yet? But I got it now.
Okay, so on a Mac specifically, the key sequence is Shift-Function-Delete. So the way I'm going to remember this is Function is abbreviated on the keyboard as Fn. So that can be like I'm swearing, like, I'm very angry about this. And then Shift is the way to uppercase something like you're shouting. So I just really need to Fn-Delete this. So that's how I'm going to remember it. Now I've shared it with everyone else, and hopefully, some other folks can get utility out of that. But really, I hope that I remember it now that I've tried to boil it down to a memorable thing.
STEPH: [laughs] It's definitely memorable. I'm now going to remember just that I need to Fn-Delete this. And I'm not going to remember what it all is tied to. [laughs]
CHRIS: That is the power of a mnemonic device. Yeah.
STEPH: Like, I know this is useful in some way, but I can't remember what it is. But yeah, that's wonderful. I love it. That's something that I haven't had to do in a long time, and I hadn't thought about. I need to do that more. Because you're right, especially changing projects or things like that, there are just some URLs that I don't need cached anymore; I don't want auto-completed. So yeah, okay. I just need to Fn-Delete it. I'll remember it. Here we go. I'm speaking this into the universe, so it'll be true.
CHRIS: Just Fn-Delete it.
STEPH: Your bit about the USB and always getting it wrong, you get it 50-50 [laughs] by getting it wrong, resonates so deeply with me and my capability with directions where I am just terrible whether I have to go right or left. My inner compass is going to get it wrong. And I've even tried to trick myself where I'm like, okay, I know I'm always wrong. So what if I do the opposite of what Stephanie would do? And it's still somehow wrong. [laughs]
CHRIS: Somehow, your brain compensates and is like, oh, I know that we're going to do that. So let's...yeah, it's amazing the way these things happen.
STEPH: Yep. I don't understand it. I've tried to trick the software, but I haven't figured out the right way. I should probably just learn and get better at directions. But here we are. Here we are.
CHRIS: You just loosely referred to the software, but I think you're referring to the Steph software when you say that.
STEPH: Yes. Oh yeah, Steph software totally. You got it. [laughs]
CHRIS: Gotcha. Cool. Glad that I checked in on that because that's great. But shifting gears to something a little bit deeper in the technical space, this past week, we've been thinking about passwords within our organization at Sagewell. And we're trying to decide what we want to do. We had an initial card that came through and actually got most of the way to implemented to dial up our password strictness requirements. And as I saw that come through, I was like, oh, wait, actually, I would love to talk about this.
And so the work that was coming through, the PR that had been opened, was a pretty traditional set of let's introduce some requirements on our passwords for complexity, so let's make it longer. We were going from six, which I think was the default that Devise shipped with, and increasing that to, I think it was, eight. And then let's say that it needs a number, and a special character, and an uppercase letter, or something like that.
I've recently read the NIST rules, so the National Institute of Standards and Technology, I think, is what they are. But they're the ones who define a set of rules around this or guidelines. But I think they are...I don't know if they are laws or what at this point. But they tell you, "This is what you should and shouldn't do." And I know that the password complexity stuff is on the don't do that list these days. So I was like, this is interesting, and then I wanted to follow through.
Interestingly, right now, I've got the Trello boards up for The Bike Shed right now. But as a result, I can't look at the linked Trello card that is on the workboards because they're in different accounts. And Trello really has made my life more difficult than I wanted. But I'm going to pull this up elsewhere. So let's see.
So NIST stuff, just to talk through that, we can include a link in the show notes to a nice summary. But what are the NIST password requirements? Eight character minimum, that's great. Change passwords only if there is evidence of a compromise. Screen new passwords against a list of known compromised passwords. That's a really interesting one. Skip password hints, limit the number of failed authentication attempts. These all sound great to me.
The maximum password length should be at least 64 characters, so don't constrain how much someone can put in. If they want to have a very long password, let them go for it. Don't have any sort of required rotation. Allow copy and pasting or functionality that allows for password managers.
And allow the use of all printable ASCII characters as well as all Unicode characters, including emojis. And that one really caught my attention. I was like, that sounds fun. I wish I could look at all the passwords in our database. I obviously can't because they're salted and encrypted, and hashed, and all those sorts of things where I'm like, I wonder if anybody's using emojis. I'm pretty sure we would just support it. But I'm kind of intrigued.
STEPH: You said something in that list that caught my attention, and I just want to see if I heard it correctly. So you said only offer change password if compromised? Does that mean I can't just change my password if I want to?
CHRIS: Sorry. Yeah, I think the phrasing here might be a little bit odd. So it's essentially a different way to phrase this requirement is don't require rotation of passwords every six or whatever months. Forgotten password that's still a reasonable thing to have in your application, probably a necessity in most applications. But don't auto-rotate passwords, so don't say, "Your password has expired after six months."
STEPH: Got it. Okay, cool. That makes sense. Then the emojis, oh no, it's like, I mean, I use a password manager now, and thanks to several years ago where he shamed me into using one. Thank you. That was great. [laughs]
CHRIS: I hope it was friendly shame, but yeah.
STEPH: Yes, it was friendly; kind shame if that sounds like a weird sentence to say. But yes, it was a very positive change. And I can't go back now that I have a password manager in my life. Because yeah, now I'm thinking like, if I had emojis, I'd be like, oh great, now I have to think about how I was feeling at the time that then I introduced a new password. Was I happy? Was I angry? Is it a poop emoji? Is unicorn? What is it? [laughs] So that feels complicated and novel.
You also mentioned on that list that going for more complexity in terms of you have to have uppercase; you have to have a particular symbol, things like that are not on the recommended list. And I didn't know that. I'm so accustomed to that being requirements for passwords and the idea of how we create something that is secure and less easy to guess or to essentially hack. So I'm curious about that one if you know any more details about it as to why that's not the standard anymore.
CHRIS: Yeah, I think I have some ideas around it. My understanding is mostly that introducing the password complexity requirements while intended to prevent people from using very common things like names or their user name or things like that, it's like, no, no, no, you can't because we've now constrained the system in that way. It tends in practice to lead to people having a variety of passwords that they forget all the time, and then they're using the forgotten password flow more often.
And it basically, for human and behavior reasons, increases the threat surface area because it means that they're not able to use...say someone has a password scheme in mind where it's like, well, my passwords are, you know, it's this common base, and then some number of things specific to the site. It's like, oh no, no, we require three special characters, so it's like they can't do their thing. And now they have to write it down on a Post-it Note because they're not going to remember it otherwise. Or there are a variety of ways in which those complexity requirements lead to behavior that's actually less useful.
STEPH: Okay, so it's the Post-it Note threat vector that we have to be worried about. [laughs]
CHRIS: Which is a very real threat factor.
STEPH: I believe it. [laughs] Yes, I know people that keep lists of passwords on paper near their desk. [laughs] This is a thing.
CHRIS: Yep, yep, yep. The other thing that's interesting is, as you think about it, password complexity requirements technically reduce the overall combinatoric space that the passwords can exist in. Because imagine that you're a password hacker, and you're like, I have no idea what this password is. All I have is an encrypted hashed salted value, and I'm trying to crack it. And so you know the algorithm, you know how many passes, you know potentially the salt because often that is available. I think it has to be available now that I think about that out loud.
But so you've got all these pieces, and you're like, I don't know, now it's time to guess. So what's a good guess of a password? And so if you know the minimum number of characters is eight and the maximum is 12, because that actually happens on a lot of systems, that's actually not a huge combinatoric space. And then if you say, oh, and it has to have a number, and it has to have an uppercase letter, and it has to have a special character, you're just reducing the number of possible options in that space.
And so, although this is more like a mathematical thing, but in my mind, I'm like, yeah, wait, that actually makes things less secure because now there are fewer passwords to check because they don't meet the complexity requirements. So you don't even have to try them if you're trying to brute-force crack a password.
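As a rough back-of-the-envelope illustration of that shrinking space (the character-set sizes here are assumptions, and this only counts the mathematical effect, not the human one), inclusion-exclusion shows that requiring at least one uppercase letter, one digit, and one special character cuts the space of 8-character passwords roughly in half:

```ruby
# Back-of-the-envelope: how much does a complexity rule shrink the space
# of 8-character passwords drawn from ~95 printable ASCII characters?
# (Illustrative only; assumes 26 lowercase, 26 uppercase, 10 digits, 33 specials.)
LOWER, UPPER, DIGITS, SPECIALS = 26, 26, 10, 33
ALL = LOWER + UPPER + DIGITS + SPECIALS # 95
LEN = 8

total = ALL**LEN

# Inclusion-exclusion: subtract passwords missing a required class,
# add back those missing two classes, subtract those missing all three.
no_upper   = (ALL - UPPER)**LEN
no_digit   = (ALL - DIGITS)**LEN
no_special = (ALL - SPECIALS)**LEN
no_upper_digit   = (ALL - UPPER - DIGITS)**LEN
no_upper_special = (ALL - UPPER - SPECIALS)**LEN
no_digit_special = (ALL - DIGITS - SPECIALS)**LEN
lower_only       = LOWER**LEN

meets_complexity = total -
  (no_upper + no_digit + no_special) +
  (no_upper_digit + no_upper_special + no_digit_special) -
  lower_only

puts "all 8-char passwords:        #{total}"
puts "passwords meeting the rule:  #{meets_complexity}"
puts "fraction of space remaining: #{(meets_complexity.to_f / total).round(2)}"
```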
STEPH: Yeah, you make a really good point that I hadn't really thought about because I've definitely seen those sites that, yeah, constrain you in terms of like, has to have a minimum, has to have a maximum, and I hadn't really considered the fact that they are constraining it and then reducing the values that it could be. I am curious, though, because then it doesn't feel right to have no limit in terms of, like, you don't want people then just spamming your sign up and then putting something awful in there that has a ridiculous length. So do you have any thoughts on that and providing some sort of length requirement or length maximum?
CHRIS: Yeah, I think the idea is don't prevent someone who wants to put in a long passphrase, like, let them do that. But the NIST guidelines specifically say the maximum should be at least 64 characters. Devise out of the box is 128, I believe. I don't think we tweaked that, and that's what we're at right now. So you can write an old-style tweet and that can be your password if that's what you want to do. But there is an upper limit to that. So there is a reasonable upper limit, but it should be very permissive to anyone who's like, I want to crank it up.
STEPH: Cool. Cool. Yeah, I just wanted to validate that; yeah, having an upper bound is still important.
CHRIS: Yeah, definitely. Important...it's more for implementation and our database having a reasonable size and those sorts of things. Although, at the end of the day, the thing that we store is the encrypted password. So I don't know if bcrypt would run slower on a giant body of text versus a couple of characters; that might be the impact. So it would be speed as opposed to storage space because you always end up with a hash of the same fixed length, as far as I understand it.
But yeah, it's interesting little trade-offs like that where the complexity requirements do a good job of forcing people to not use very obvious things like password. Password does not fit nearly any complexity requirements. But we're going to try and deal with that in a different way. We don't want to try and prevent you from using password by saying you must use an uppercase letter and a special character and things that make real passwords harder as well. But it is an interesting trade-off because, technically, you're making the crackability easier. So it gets into the human and the technical and the interplay between them.
Thinking about it somewhat differently as well, there's all this stuff about you should salt your passwords, then you should hash them. You should run them through a good password hashing algorithm. So we're using bcrypt right now because I believe that's the default that Devise ships with. I've heard good things about Argon2; I think is the name of the new cool kid on the block in terms of password hashing. That whole world is very interesting to me, but at the end of the day, we can just go with Devise's defaults, and I'll feel pretty good about that and have a reasonable cost factor. Those all seem like smart things.
But then, as we start to think about the complexity requirements and especially as we start to interact with an audience like Sagewell's demographics where we're working with seniors who are perhaps less tech native, less familiar, we want to reduce the complexity there in terms of them thinking of and remembering their passwords. And so, rather than having those complexity requirements, which I think can do a good job but still make stuff harder, and how do you communicate the failure modes, et cetera, et cetera, we're switching it.
And the things that we're introducing are: we've increased the minimum length, so we're up to eight characters now, which is NIST's low-end recommendation, so it's between 8 and 128 characters. We're capturing anytime a forgotten password reset attempt happens and the outcome of it. So we're storing those now in the database, and we're showing them to the admins.
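As a rough sketch of the length piece of that in a Devise app (illustrative, not necessarily the exact production config):

```ruby
# config/initializers/devise.rb
Devise.setup do |config|
  # NIST-style bounds: at least 8 characters, with a generous upper limit
  # so long passphrases and password managers aren't constrained.
  config.password_length = 8..128
end
```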
So our admin team can see if password reset attempts have happened and if they were successful. That feels like good information to keep around. Technically, we could get it from the logs, but that's deeply hidden away and only really accessible to the developers. So we're now surfacing that information because it feels like a particularly pertinent thing for us.
We've introduced Rack::Attack. So we're throttling those attempts, and if someone tries to just brute force through that credential stuffing, as the terminology goes, we will lock them out so either based on IP address or the account that they're trying to log into. We also have Devise's lockable module enabled. So if someone tries to log in a bunch of times and fails, their account will go into a locked state, and then an admin can unlock it. But it gives us a little more control there. So a bunch of those are already in place.
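A minimal sketch of what that throttling and lockout combination can look like; the paths, limits, and periods here are assumptions for illustration:

```ruby
# config/initializers/rack_attack.rb
class Rack::Attack
  # Throttle sign-in attempts by IP: at most 5 POSTs to the login path per minute.
  throttle("logins/ip", limit: 5, period: 60) do |req|
    req.ip if req.post? && req.path == "/users/sign_in"
  end

  # Also throttle by the targeted email, to slow credential stuffing that
  # rotates through many IPs against a single account.
  throttle("logins/email", limit: 5, period: 60) do |req|
    if req.post? && req.path == "/users/sign_in"
      req.params.dig("user", "email").to_s.downcase.presence
    end
  end
end

# config/initializers/devise.rb (the User model also needs :lockable in its devise call)
Devise.setup do |config|
  config.lock_strategy    = :failed_attempts
  config.maximum_attempts = 10
  config.unlock_strategy  = :none # locked accounts get unlocked by an admin
end
```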
The new one, this is the one that I'm most excited about, is we're going to introduce Have I Been Pwned? And so, they have an API. We can hit it. It's a really interesting model as to how do we ask if a password has been compromised without giving them the password? And it turns out there's this fun sort of cryptographic handshake thing that happens. K-anonymity is apparently the mechanism or the underpinning technology or idea.
Anyway, it's super cool; I'm excited to build it. It's going to be fun. But the idea there is rather than saying, "Don't use a password that might not be secure," it's, "Hey, we actually definitively know that your password has been cracked and is available in plaintext on the internet, so we're not going to let you use that one."
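The k-anonymity exchange works roughly like this: hash the password locally with SHA-1, send only the first five hex characters of the digest to the Pwned Passwords range endpoint, and compare the returned suffixes locally. A sketch, with a made-up class name:

```ruby
require "digest"
require "net/http"

# Checks a candidate password against the Have I Been Pwned "range" API
# using k-anonymity: only the first 5 characters of the SHA-1 hash ever
# leave the machine; matching suffixes are compared locally.
class PwnedPasswordCheck
  API = "https://api.pwnedpasswords.com/range/"

  def self.compromised?(password)
    sha1 = Digest::SHA1.hexdigest(password).upcase
    prefix, suffix = sha1[0, 5], sha1[5..]

    response = Net::HTTP.get(URI(API + prefix))

    # Each response line looks like "SUFFIX:COUNT"; a match means the
    # password has appeared in a known breach at least COUNT times.
    response.each_line.any? { |line| line.split(":").first == suffix }
  end
end

# PwnedPasswordCheck.compromised?("password") # => true, almost certainly
```

In practice a gem that wraps this same flow is the more likely choice, but the underlying exchange is just this: a five-character hash prefix goes out, a list of suffixes comes back, and the match happens on your side.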
STEPH: And that's part of the signup flow as to where you would catch that?
CHRIS: So we're going to introduce on both signup and sign-in because a password can be compromised after a user signs up for our system. So we want to have it at any point. Obviously, we do not keep their plaintext password, so we can't do this retroactively. We can only do it at the point in time that they are either signing up or signing in because that's when we do have access to the password. We otherwise throw it away and keep only the hashed value. But we'll probably introduce it at both points.
And the interesting thing is communicating this failure mode is really tricky. Like, "Hey, your password is cracked, not like here, not on our site, no, we're fine. Well, you should probably change your password. So here's what it means, there's actually this database that's called Have I Been Pwned? Don't worry; it's good, though. It's P-W-N-E-D. But that's fine." That's too many words to put on a page. I can't even say it here in a podcast.
And so what we're likely to do initially is instrument it such that our admin team will get a notification and can see that a user's password has been compromised. At that point, we will reach out to them and then, using the magic of human conversation, try and actually communicate that and help them understand the ramifications, what they should do, et cetera. Longer-term, we may find a way to build up an FAQ page that describes it and then say, "Feel free to reach out if you have questions." But we want to start with the higher touch approach, so that's where we're at.
STEPH: I love it. I love that you dove into how to explain this to people as well because I was just thinking, like, this is complicated, and you're going to freak people out in panic. But you want them to take action but not panic. Well, I don't know, maybe they should panic a little bit. [laughs]
CHRIS: They should panic just the right amount.
STEPH: Right.[laughs] So I like the starting with the more manual process of reaching out to people because then you can find out more, like, how did people react to this? What kind of questions did they ask? And then collect that data and then turn that into an FAQ page. Just, well done.
CHRIS: We haven't quite done it yet. But I am very happy with the collection of ideas that we've come to here. We have a security firm that we're working with as well. And so I had my weekly meeting with them, and I was like, "Oh yeah, we also thought about passwords a bunch, and here's what we came up with." And I was very happy that they were like, "Yeah, that sounds like a good set." I was like, "Cool. All right, I feel good." I'm very happy that we're getting to do this.
And there's an interesting sort of interplay between security theater and real security. And security theater, just to explain the phrase if anyone's unfamiliar with it, is things that look like security, so, you know, big green lock up in the top-left corner of the URL bar. That actually doesn't mean anything historically or now. But it really looks like it's very secure, right? Or password complexity requirements make you think, oh, this must be a very secure site. But for reasons, that actually doesn't necessarily prove that at all.
And so we tried to find the balance of what are the things that obviously demonstrate our considerations around security to the user? At the end of the day, what are the things that actually will help protect our users? That's what I really care about. But occasionally, you got to play the security theater game. Every other financial institution on the internet kind of looks and feels a certain way in how they deal with passwords.
And so will a user look at our seemingly laxer requirements or laxer approach to passwords and judge us for that and consider us less secure despite the fact that behind the scenes look at all the fun stuff we're doing for you? But it's an interesting question and interesting trade-off that we're going to have to spend time with. We may end up with the complexity requirements despite the fact that I would really rather we didn't. But it may be the sort of thing that there is not a good way to communicate the thought and decision-making process that led us to where we're at and the other things that we're doing.
And so we're like, fine, we just got to put them in and try and do a great job and make that as usable of an experience as possible because usability is, I think, one of the things that suffers there. You didn't do one of the things on the list, or like, it's green for each of the ones that you did, but it's red for the one that you didn't. And your password and your password confirmation don't match, and you can't paste...it's very easy to make this wildly complex for users.
STEPH: Security theater is a phrase that I don't think I've used, but the way you're describing it, I really like. And I have a solution for you: underneath the password where you have "We don't partake in security theater, and we don't have all the other fancy requirements that you may have seen floating around the internet and here's why," and then just drop a link to the episode. And, you know, people can come here and listen. It'll totally be great. It won't annoy anyone at all. [laughs]
CHRIS: And it'll start, and they'll hear me yelling about Fn-Delete that weather.com URL.
[laughter]
STEPH: Okay, maybe fast forward then to the part about --
CHRIS: Drop them to the timestamp. That makes sense. Yep. Yep.
STEPH: Mm-hmm. Mm-hmm. [laughs]
CHRIS: I like it. I think that's what we should do, yeah. Most features on the app should have a link to a Bike Shed episode. That feels true.
STEPH: Excellent Easter egg. I'm into it. But yeah, I like all the thoughtfulness that y'all have put into this because I haven't had to think about passwords in this level of detail. And then also, yeah, switching over to when things start to change and start to move away, you're right; there's still that we need to help people then become comfortable with this new way and let them know that this is just as secure if not more secure. But then there's already been that standard that has been set for your expectations, and then how do you help people along that path? So yeah, seems like y'all have a lot of really great thoughtfulness going into it.
CHRIS: Well, thank you. Yeah, it's frankly been a lot of fun. I really like thinking in this space. It's a fun sort of almost hobby that happens to align very well with my profession sort of thing. Actually, oh, I have one other idea that we're not going to do, but this is something that I've had in the back of my mind for a long time.
So when we use bcrypt or Devise uses bcrypt under the hood, one of the things that it configures is the cost factor, which I believe is just the number of times that the password plus the salts and whatnot is run through the bcrypt algorithm. The idea there is you want it to be computationally difficult, and so by doing it multiple times, you increase that difficulty.
But what I'd love is, instead of thinking of it in terms of an arbitrary cost factor, which I think is 12, like, I don't know what 12 means, I want to know, in terms of dollars and cents, how much it would cost to crack a password. Because, in theory, you can distribute this across any number of EC2 instances that you spin up. Cracking a password is a very map-reducible type of problem.
So let's assume that you can infinitely scale up compute on-demand; how much would it cost in dollars to break this password? And I feel like there's an answer. Like, I want that number to be like a million dollars. But as EC2 costs go down over time, I want to hold that line. I want to be like, a million dollars is the line that we want to have. And so, as EC2 prices go down, we need to increase our bcrypt cost factor over time to adjust for that and maintain the million dollar per password cracking sort of high bar. That's the dream.
Swapping out the cost factor is actually really difficult. I've looked into it, and you have to like double encrypt and do weird stuff. So for a bunch of reasons, I haven't done this, but I just like that idea. Let's pin this to $1 value. And then, from there, decisions naturally flow out of it. But it's so much more of a real thing. A million dollars, I know what that means; 12, I don't know what 12 means.
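For a bit more concreteness on what the cost factor means: it is the log2 of the number of rounds bcrypt runs, so each increment roughly doubles the hashing time, which is the knob you would turn up as compute gets cheaper. A small benchmark sketch (timings will vary by machine):

```ruby
require "bcrypt"
require "benchmark"

# Each +1 to the cost factor roughly doubles the work (2^cost rounds),
# which is what you'd ratchet upward over time as hardware gets cheaper.
[10, 12, 14].each do |cost|
  time = Benchmark.realtime do
    BCrypt::Password.create("correct horse battery staple", cost: cost)
  end
  puts "cost #{cost}: #{(time * 1000).round} ms per hash"
end
```

Devise exposes this as config.stretches in its initializer; existing hashes keep whatever cost they were created with until the password is next changed, which is part of why swapping the cost factor out is awkward.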
STEPH: A million-dollar password, I like it. I feel like --
CHRIS: We named the episode.
STEPH: I was going to say that's a perfect title, A Million-Dollar Password. [laughs]
CHRIS: A Million-Dollar Password. But with that wonderful episode naming cap there, I think I'm done rambling about passwords. What's up in your world, Steph?
STEPH: One of the things that I've been chatting with folks lately is RailsConf is coming up; it's May 17 through the 19th. And it's been sort of like that casual conversation of like, "Hey, are you going? Are you going? Who's going? It's going to be great." And as people have asked like, "Are you going?" And I'm always like, "No, I'm not going." But then I popped on to the RailsConf website today because I was just curious. I wanted to see the schedule and the talks that are being given.
And I keep forgetting that there's the in-person version, but there's also the home edition. And I was like, oh, I could go, I could do this. [laughs] And I just forget that that is something that is just more common now for conferences where you can attend them virtually, and that is just really neat. So I started looking a little more closely at the talks. And I'm really excited because we have a number of thoughtboters that are giving a talk at RailsConf this year.
So there's a talk being given by Fernando Perales that's called Open the Gate a Little: Strategies to Protect and Share Data. There's also a talk being given by Joël Quenneville: Your Test Suite is Making Too Many Database Calls. I'm very excited; just that one is near and dear to my heart, given the current client experiences that I'm having. And then there's another one from someone who just joined thoughtbot, Christopher "Aji" Slater, Your TDD Treasure Map.
So we'll be sure to include a link to those for anyone that's curious. But it's a stellar lineup. I mean, I'm always impressed with RailsConf talks. But this one, in particular, has me very excited. Do you have any plans for RailsConf? Do you typically wait for them to come out later and then watch them, or what's your MO?
CHRIS: Historically, I've tended to watch the conference recordings after the fact. I went one year. I actually met Christopher "Aji" Slater at that very RailsConf that I went to, and I believe Joël Quenneville was speaking at that one. So lots of everything old is new again. But yeah, I think I'll probably catch it after the fact in this case.
I'd love to go back in person at some point because I really do like the in-person thing. I'm thrilled that there is the remote option as well. But for me personally, the hallway track and hanging out and meeting folks is a very exciting part. So that's probably the mode that I would go with in the future. But I think, for now, I'm probably just going to watch some talks as they come out.
STEPH: Yeah, that's typically what I've done in the past, too, is I kind of wait for things to come out, and then I go through and make a list of the ones that I want to watch, and then, you know, I can make popcorn at home. It's delightful. I can just get cozy and have an evening of RailsConf talks. That's what normal people do on Friday nights, right? That's totally normal. [laughs]
CHRIS: I mean, yeah, maybe not the popcorn part.
STEPH: No popcorn?
CHRIS: But not that I'm opposed to popcorn just —-
STEPH: Brussels sprouts? What do you need? [laughs]
CHRIS: Yeah, Brussels sprouts, that's what it is. Just sitting there eating handfuls of Brussels sprouts watching Ruby conference talks.
STEPH: [laughs]
CHRIS: I do love Brussels sprouts, just to throw it out there. I don't want it to be out in the ether that I don't like them. I got an air fryer, and so I can air fry Brussels sprouts. And they're delicious. I mean, I like them regardless. But that is a really fantastic way to cook them at home. So I'm a big fan.
STEPH: All right, I'm moving you into the category of fancy friends, fancy friends with an air fryer.
CHRIS: I wasn't already in your category of fancy friends?
STEPH: [laughs] I didn't think you'd take it that way. I'm sorry to break it to you.
[laughter]
CHRIS: I'm actually a little hurt that I'm now in the category of fancy friends. It makes a lot of sense that I wasn't there before. So I'll just deal with...yeah, it's fine. I'm fine.
STEPH: It's a weird rubric that I'm running over here. Pivoting away quickly, so I don't have to explain the categorization for fancy friends, I saw something in the Ruby Weekly Newsletter that had just come out. And it's one of those that I see surface every so often, and I feel like it's a nice reminder because I know it's something that even I tend to forget. And so I thought it'd be fun just to resurface it here. And then, we can also provide a link to the wonderful blog post that's written by Benito Serna.
And it's the difference between count, length, and size in an association with ActiveRecord. So for folks that would love a refresher: count, that's a method that's always going to perform a SQL count query. So even if the collection has already been loaded, calling count is always going to execute a database query. So this is the one that's just like, watch out, avoid it. You're always going to hit your database when you use this one.
And then next is length. Length loads the whole collection into memory and then returns the number of items in that collection. If the collection has already been loaded, then it's not going to issue a database call. It's just going to delegate to the Ruby length method and let you know how many records are in that collection. So that one is a little bit better because that way, if it's already loaded, at least you're not going to make a database call.
And then next is the size method, which is the one that's more highly recommended, because this one has a nice safety net built in: first, it's going to check whether we need to perform a database call, that is, whether the records have been loaded or not. So if the collection has not been loaded, so we haven't executed a database query and stored the result, then size is going to perform a database query. Specifically, it's using that SQL count under the hood. And if the collection has been loaded, then a database call is not issued, and it's going to use the Ruby length method to return the number of records.
So it just helps you prevent unnecessary database calls. And it's the reason that that one is recommended over using count, which is going to always issue a call. And then also to avoid length where you can because it's going to load the whole collection into memory, and we want to avoid that. So it was a nice refresher. I'll be sure to include a link in the show notes.
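A quick sketch of the three methods side by side (the models here are hypothetical):

```ruby
user = User.first

user.posts.count   # always runs SELECT COUNT(*), even if the posts are already loaded
user.posts.length  # loads the whole collection into memory, then returns its length
user.posts.size    # COUNT(*) if the collection isn't loaded; delegates to length if it is

# The safety net in action:
user.posts.load    # force the association to load once
user.posts.size    # no extra query; the records are already in memory
user.posts.count   # still fires another COUNT(*) query
```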
But yeah, I find that I myself often forget about the difference between count and size. So if I'm just in the console and I just want to know something, I still reach for count. It is still my default. But then, if I'm writing production code, I'll be more deliberate about which one I'm using.
CHRIS: I feel like this is one of those that I've struggled to lock into my head, but as you're describing it right now, I think I've got, again, another mnemonic device that we can lock on to. So I know that SQL uses the keyword count, so count that's SQL definitely. Length I know that because I use that on other stuff. And so it's size that is different and therefore special. That all seems good. Cool, locking that in my brain along with Fn-Delete. I have two things that are now firmly locked in.
So you were just mentioning being in the console and working with this. And one of the things that I've noticed a lot with folks that are newer to ActiveRecord and the idea of relations and the fact that they're lazy, is that that concept is very hard to grasp when working in a console because at the console, they don't seem lazy.
The minute you type out user.where some clause, and the minute you type that and hit enter in the console, Ruby is going to do its normal thing, which is like, okay, cool, I want to...I forget what it is that IRB or any of the REPLs are going to do, but it's either inspect or to_s or something like that. But it's looking for a representation that it can display in the console. And ActiveRecord relations will typically say like, "Oh, cool, you need the records now because you want to show it like an array because that's what inspect is doing under the hood."
So at the console, it looks like ActiveRecord is eager and will evaluate the query the minute you type it, but that's not true. And this is a critical thing that if you can think about it in that way and the fact that ActiveRecord relations are lazy and then take advantage of it, you can chain queries, you can build them up, you can break that apart. You can compose them together. There's really magical stuff that falls out of that.
But it's interesting because sort of like a Heisenberg where the minute you go to look at it in the REPL, it's like, oh, it is not lazy; it is eager. It evaluates it the minute I type the query. But that's not true; that's actually the REPL tricking you. I will often just throw a semicolon at the end of it because I'm like, I don't want to see all that noise. Just give me the relation. I want the relation, not the results of executing that query. So if you tack a semicolon at the end of the line, that tells Ruby not to print the thing, and then you're good to go from there.
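A sketch of that console behavior (model and column names are hypothetical):

```ruby
# In a Rails console: the trailing semicolon suppresses the REPL's inspect call,
# so the relation is built but no query runs yet.
users = User.where(active: true);

# Relations compose lazily; still no query has been executed.
recent = users.where("created_at > ?", 1.week.ago).order(:created_at);

# The query finally runs when the records are actually needed:
recent.to_a
```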
STEPH: That's a great pro-tip. Yeah, I've forgotten about the semicolon. And I haven't been using that in my workflow as much. So I'm so glad you mentioned that. Yeah, I'm sure that's part of the thing that's added to my confusion around this, too, or something that has just taken me a while to lock it in as to which approach I want to use for when I'm querying data or for when I need to get a particular count, or length, or size. And by using all three, I'm just confusing myself more. So I should really just stick to using size.
There's also a fabulous article by Nate Berkopec that's titled Three ActiveRecord Mistakes That Slow Down Rails Apps. And he does a fabulous job of also talking about the differences of when to use size and then some of the benefits of when you might use count. The short version is that you can use count if you truly don't care about using any of those records. Like, you're not going to do anything with them. You don't need to load them, like; you truly just want to get a count. Then sure, because then you're issuing a database query, but then you're not going to then, in a view, very soon issue another database query to collect those records again. So he has some really great examples, and I'll be sure to include a link to his article as well.
Speaking of Ruby tidbits and kind of how this particular article about count, length, and size came across my view earlier today, Ruby Weekly is a wonderful newsletter. And I feel like I don't know if I've given them a shout-out. They do a wonderful job. So if you haven't yet checked out Ruby Weekly, I highly recommend it.
There are just always really great, interesting articles either about stuff that's a little bit more like cutting edge or things that are being released with newer versions, or they might be just really helpful tips around something that someone learned, like the difference between count, length, and size, and I really enjoy it. So I'll also be sure to include a link in the show notes for anyone that wants to check that out.
They also do something that I really appreciate where when you go to their website, you have the option to subscribe, but I am terrible about subscribing to stuff. So you can still click and check out the latest issue, which I really appreciate because then, that way, I don't feel obligated to subscribe, but I can still see the content.
CHRIS: Oh yeah. Ruby Weekly is fantastic. In fact, I think Peter Cooper is the person behind it, or Cooperpress as the company goes. And there is a whole slew of newsletters that they produce. So there's JavaScript Weekly, there's Ruby Weekly, there's Node Weekly, Golang Weekly, React Status, Postgres Weekly. There's a whole bunch of them. They're all equally fantastic, the same level of curation and intentional content and all those wonderful things. So I'm a big fan. I'm subscribed to a handful of them.
And just because I can't go an episode without mentioning inbox zero, if you are the sort of person that likes to defend the pristine nature of your email inbox, I highly recommend Feedbin and their ability to set up a special email address that you can use to then turn it into an RSS feed because that's magical. Actually, these ones might already have an RSS feed under the hood. But yeah, RSS is still alive. It's still out there. I love it. It's great. And that ends my thoughts on that matter.
STEPH: I have what I feel is a developer confession. I don't think I really appreciate RSS feeds. I know they're out there in the ether, and people love them. And I just have no emotion, no opinion attached to them. So one day, I think I need to enjoy the enrichment that is RSS feeds, or maybe I'll hate it. Who knows? I'm reserving judgment. Either way, I don't think I will. [laughs] But I don't want to box future Stephanie in.
CHRIS: Gotta maintain that freedom.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph has a question for Chris: When you have no idea how you're going to implement a feature, how do you write your first test?
Chris has thoughts about hybrid teams (remote/in-person) and masked inputs.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Preemptive Pluralization is (Probably) Not Evil
iMask
Mitch Hedberg - Escalator Joke
This episode is brought to you by Studio 3T. Try Studio 3T's full suite of features for 30 days, no payment details needed.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: I am recording in a new room because we're in Pennsylvania, and so I'm recording at this little vanity desk which is something. [laughs] But there's a mirror right in front of me, so I feel very vain because it's just like, [laughs] I'm just looking at myself while I'm recording with you. It's something.
CHRIS: [laughs] That is something.
STEPH: [laughs] So, you know.
CHRIS: Fun times.
STEPH: Pro podcast tip, you know, just stare at yourself while you chat, while you record.
CHRIS: I mean, if that works for you, you know, plenty of people in the gym have the mirrors up, so podcasting is like exercising in a way, and I think it makes sense.
STEPH: I appreciate the generosity. [laughs]
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. So I have a funny/emotional story that [laughs] I'm going to share with you first because I feel like it kind of encapsulates how life is going at the moment. So we've officially moved from South Carolina to North Carolina. I feel like I've been talking about that for several episodes now. But this is it: we have finally vacated all of our stuff out of South Carolina house and relocated to North Carolina. And once we got to North Carolina, we immediately had to then leave town for a couple of days.
And normally, Utah, our dog, stays with an individual in South Carolina, someone that we found, trust, and love. And he has a great time, and I just know he's happy. But we didn't have that this time. So I had to find just a boarding facility that had really high reviews that I felt like I could trust him with. I didn't even have time to take him for a day to test it out. It was one of those like, I got to show up and just drop him off and hope this goes well, so I did.
And everything looks wonderful. Like, the facility is very clean. I had a list of things to look for to make sure it was a good place. But it's the first time leaving him somewhere where he's going to spend significant time in a kennel that has indoor-outdoor access. And as I walked away from him, I started to cry. And I just thought, oh no, this is embarrassing. I'm that dog mom who's going to start crying in this boarding facility as she's leaving her dog for the first time. So I put on my shades, and I managed to make it through the checkout process.
But then I went to my truck and just sat there and cried for 15 minutes and called my husband and was like, "I'm doing the right thing, right? Like, tell me this is okay because I'm having a moment." And I finally got through that moment. But then I even called you because you and I were scheduled to chat. And I was like, I am not in a place that I can chat right now. I think I told you when you answered the phone. I was like, "Everything is fine, but I sound like the world's ending, or I sound like a mess." [laughs]
And yeah, so I had like two hours of where I just couldn't stop crying. I partially blame pregnancy hormones. I'm going to go with that as my escape rope for now. So I feel like that's been life lately. Life's been a little overwhelming, and that felt like the cherry on top. And that was the moment that I broke. Update: he's doing great. I've gotten pictures of Utah. He's having a wonderful time at camp, it seems. [laughs] It was just me, his mom, who is having trouble.
CHRIS: Well, you know, reasonable to worry, and life's dialed up to 11 and all of that. But yeah, I will say even though you lead the conversation with everything's fine, your tone of voice did not imply that everything was fine. So when I eventually came to understand what we were talking about, I hope I was kind in the moment. But I was like, oh, okay, this is fine. We're fine. I'm so sorry you're feeling terrible right now.
STEPH: [laughs]
CHRIS: But okay, we're fine. For me, there was a palpable moment of like, okay, my stress is now back down a little bit. But I'm glad that things are going well and that Utah is having a fun vacation.
STEPH: Yep, he seems to be doing fine. I've calmed down. You know, as you said, life's been dialed up lately. On a less emotional note and something that's a little bit more technical, I had a really great conversation with another thoughtboter where we were talking about testing. And the idea of when you learn testing, it's often very focused on like, you have this object, and it has a method. And so, you're going to write a unit test for this particular method. And it's very isolated, very specific as to the thing that you're looking to test.
Versus in reality, when you pick up tickets, you don't have that scope, and like, it is so broad. You have to figure out what feature you're implementing, figure out how to test it. And it feels like this mismatch between how a lot of people learn to test and learn TDD versus then how we actually practice it in the wild.
And so we had a phone conversation around when you are presented with a ticket like that, and you have no idea how you're going to implement a feature, how do you get started with testing, and when do you write your first test? Do you TDD? Do you BDD? Or do you PDD? That last one I made up, it stands for Panic-driven development. But it's what's your approach to how do you actually then get to the point where you can write a test? And I have a couple of thoughts. But I'm really curious, how does that flow work for you? What have you learned throughout the years to then help yourself write that first test? Or where do you start?
CHRIS: Well, this is an interesting question. I like this one. I think it varies. And I think there's a lot of dogma around TDD as a practice. And I think it is super useful to break that apart and hear different individual stories of it. I know there are plenty of folks who are like, TDD is just not a thing and whatnot, and I'm certainly not in that camp. But I also don't TDD 100% of the time because sometimes I'm not super clear on what I'm doing, or I'm in more of an exploratory phase.
That said, I think there's a...I want to answer the question somewhat indirectly, which is I know how to test most of the code that I work on now as a web developer in a Rails application because I've done most of the things a bunch of times. And the specifics may be different, but the like, to integrate with this external system, and I have to build an API client or whatever, I know how to do that.
And there is a public API of some class that I will be exercising against and so I can write tests against that. Or I know that the user is going to click a button, and then something needs to happen. And so I can write that test, and it fails, and then it starts to push me towards the implementation. There are also times where it's actually quite hard to get the test to lead you in the right direction, and you have to know what hop to make, and so sometimes I just do that.
But yeah, rolling back a little bit, I think there is a certain amount of experience that is necessary. And I think one of the critical things that I want to share with folks that are potentially newer to testing overall is that it is actually quite hard. You have to understand your system and how you're going to approach it, you know, one step removed, or it's like a game of chess where you're thinking a couple of moves ahead. You have to understand it in a deeper way.
And so, if testing is difficult, that might just be totally reasonable at this point. And as you come to see the patterns within a Rails application or whatever type of application you're working on over and over, it becomes easier to test. But if testing is hard, that may not mean...like, how do I phrase this? There's like an impostor syndrome story in here of like, if you're struggling with testing, it may not be that something is fundamentally broken. You just may need a couple more chances to see that sort of thing play out.
And so, for me, in most cases, I tend to know where to start or when not to. Like, I feel fine not testing when I don't test most of the time. I will eventually get things under test coverage such that I feel confident in that. And whenever I have one of those moments, I will stop and look at it and say, "Why didn't I know how to test this from the front, like, from the start?"
But it's rare at this point for things to be truly exploratory. There's always some outer layer that I can wrap around. But like, I know X needs to happen when Y occurs. So how do I instrument the system in that way? But yeah, those are some thoughts. What are your thoughts? Does what I said sound reasonable here?
STEPH: Yeah, I really like how you highlighted that pausing for reflection. That was something that I didn't initially think of, but I really liked that, to then go back to be like, okay, revisiting myself a couple of days or however earlier when I first started this. Now I can see where I've ended up. How could I have made that connection sooner as to where I was versus the tests I ended up with? Or perhaps recognizing that I couldn't have gotten there sooner, that I needed that journey to help me get there. So I really like the idea of pausing for reflection because then it helps cement any of those learnings that you have made during that time.
Also, the other part where you mentioned the user clicks a button, and something happens, that's where I immediately went with this. I also liked that you highlighted that TDD has that bit of dogma, and I don't always TDD. I do what I can, and it helps me. But it has to be a tool versus something that I just do 100% of the time. But with more of that BDD approach or that very high-level user-level integration test of where if I need to pull data from an API and then show it to the user, okay, I know I can at least start with a high-level test of I want the user to then see some data on a page.
And that will lead me down some path of errors. It might help me implement a route and a controller and then a show action, so it will at least help me get started. Or even if it doesn't give me helpful enough errors, it at least serves as my guideline of like, this is my North Star. This is where I'm headed. So then, if I need to revisit, okay, what's the thing that I'm focused on at the moment? I can go back and be like, okay, I'm focused on achieving this. What's the next smallest step I can take to get there?
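As a rough sketch of the kind of high-level, outside-in test Steph is describing, assuming an RSpec/Capybara setup; every name here (ForecastClient, forecast_path, the value on the page) is hypothetical, just to show the shape of that first failing spec:

```ruby
# spec/features/view_forecast_spec.rb
require "rails_helper"

RSpec.describe "Viewing the forecast", type: :feature do
  it "shows the user data pulled from the external API" do
    # Stub the (hypothetical) API client so the spec stays fast and deterministic.
    allow(ForecastClient).to receive(:fetch).and_return(temperature: 72)

    visit forecast_path

    expect(page).to have_content("72")
  end
end
```

Run against an empty app, a spec like this fails on the missing route, then the missing controller action, then the missing content, which is exactly the "North Star" guidance described above.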
The other thing that I've learned over time is I've given myself the chance to be messy because I got so excited about the idea of unit testing and writing small, fast tests that I would often try to start with small objects and then work my way backwards into like, okay, I have this one object that does this thing and one object that like...let's use a concrete example. So one object that knows how to communicate with an API and one object that knows how to then parse and format the data I want and then something else that's then going to present that data to the user.
But I found when I started with small objects, I would get a little lost, and I wasn't always great at bringing them together. So I've taken the opposite approach of where if I'm really not sure where I'm headed and I'm in that more exploratory phase or even just that first initial pass at a feature, I will just start messy. So if I am pulling data from an API and need to show it to a user on a screen, I'll just dump it in the controller if I need to. I'll put it all there together.
And then once I actually have something that is parsing, or I have something appearing on the page, then I will start to say, "Okay, now that I can see what I need and I can see the pieces that I've written, how can I then start to extract this into smaller objects?" And now, I can start writing unit tests for that data. So that is something that has helped me: just start high, keep it high, be messy until you start to see some of the smaller objects that you can pull out.
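A rough illustration of that messy-first-then-extract flow, with made-up names and a made-up endpoint rather than anything from the episode, might evolve like this:

```ruby
require "net/http"
require "json"

# First pass: do everything inline in the controller, just to get something
# rendering on the page.
class ForecastsController < ApplicationController
  def show
    response = Net::HTTP.get(URI("https://api.example.com/forecast"))
    @temperature = JSON.parse(response).fetch("temperature")
  end
end

# Later, once the shape is visible, extract an object that can be unit tested
# on its own, and let the controller shrink back down.
class ForecastFetcher
  def self.call
    response = Net::HTTP.get(URI("https://api.example.com/forecast"))
    JSON.parse(response).fetch("temperature")
  end
end

class ForecastsController < ApplicationController
  def show
    @temperature = ForecastFetcher.call
  end
end
```

The high-level feature spec stays green across both versions, which is what makes the messy first pass safe to clean up.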
CHRIS: Yeah, I think there's something that you were just saying there that clicked for me of we didn't start with the why of TDD. And I don't think we've talked about why we believe in TDD in a while. So this feels like a thing we're saying. It's not good just because it's good, or we don't believe it's good just because that's what we say. For me, it is because it anchors us outside of the code; it starts us thinking about it from the user perspective or some outer layer.
So even if you're unit testing some deeply nested class within your application, there's still an outer layer. There's still a user of that class. And so, thinking about the public API, I think is really useful. And then the further out you get, the better that is, and I believe strongly in thinking from the outside in on these sort of things.
And then the other thing you said of allowing for refactoring. And if we have tests, then it's so much easier to sort of...I totally 100% agree with like; I start messy. I start very messy. I wanted to pretend that I was going to be like, oh, I'm so...Steph, I can't believe this. But no, of course, I start messy.
Why would you start trying to do the hard thing first? No, get something that works. But then having the test coverage around that makes it so much easier to go through those sequential refactoring steps. Versus if you have to write the code correctly upfront and then add test coverage around that, it sort of inverts that whole thing.
And so, although it may take a little bit longer to write the tests upfront, I do exactly what you're describing of like, I write the tests that tell some truth about the system and constrain the system to do that thing. And then I can have a messy implementation that I can iteratively refactor over and over, and I can extract things from. And then, I can tell a more concise testing story about those. And so it really is both the higher-level perspective I think is super useful and then the ability to refactor under that test coverage is also very useful. And it makes my job easier because I can start messy. I love starting messy. It's so much better.
STEPH: Yeah, and I think former me had the idea that for me to do TDD properly meant that I had these small, encapsulated objects that I wrote unit tests for. And yes, that is the goal. I do want that, but that doesn't mean I have to start there. That is something that then I can work my way towards.
That also falls in line with the adage from Sandi Metz that the wrong abstraction is more costly than no abstraction. And so I'd rather start with no abstractions and then start to consider, okay, how can I actually move this out into smaller objects and then test it from there?
There's also something that I heard that I haven't done as often, but I really liked the idea; it feels very freeing, is that when you do get started and if you write your first test, if you write a test and it helps you make some progress but then you come back to it later and you're like, you know, the test doesn't really add value, or it's not helping me anymore, just thank it and delete it and move on. Just because you wrote it doesn't mean it needs to stay.
So if it provided some benefit to you and helped you through that journey of adding the feature, then that's wonderful. But don't be timid about deleting it or changing it so that it does serve you because otherwise, it's just going to be this toxic test that gets merged into the main branch, and it's going to be untrustworthy. Or maybe it's fussy and hard to please, or it's just really not the supportive test that you're looking for. And so then you can turn it into more of a supportive test and make it fit your goals instead of just clinging to every test that we've written.
CHRIS: I like the framing of tests as scaffolding to help you build up the structure. But then, at the end, some of the scaffolding gets ripped away and thrown out. And I do think, again, testing ends up in this weird place. The dogmatic thing that we were talking about earlier feels very true. And I've noticed, particularly on larger teams, folks being very hesitant to delete tests like, that feels like sacrilege. Of course, you can't delete tests; the tests are how we know it's true, which is true, but you can interrogate that. You can see like, how true is it?
And every test has a cost and maintenance burden, runtime, et cetera. You probably know well, Steph, about having test suites that take a bunch of time to run and then maybe wanting to spend a little bit of time trying to reduce that overall time. And so there's always going to be a trade-off there.
Actually, someone reminded me of an anecdote recently. I joined a project, and most of the test suite or all of the test suite was commented out because it was flaky or intermittent. And I was like, "Oh, I'm going to delete that." And people were like, "You're what?" I'm like; it's commented out. We're not using it. Let's tell the truth. Git will have it. We can go back and get it. But let's tell the truth with what we're like...this commented-out test suite is almost worse in my mind than having nothing there. The nothing feels painful, right? Let's experience that.
Whereas the commented out stuff is like, well, we have a test suite; it's just commented out. It's like, no, you don't have a test suite at all. That's not what's going on here. But there were other thoughtboters on the project that poked a good amount of fun at me when they were like, "The first thing you did on this project was delete the test suite?" As I was like, "Yeah, I don't know, I was feeling spicy that day or something."
But I think the test suite needs to serve the work that we're doing in the same way that everything else does. And so occasionally, yeah, deleting tests is absolutely the right thing and then probably add back some more.
STEPH: It's funny how that reaction exists. And I've done it before myself where like, if you see commented out code and you put up a PR to remove it, I feel like most people are going to be like, yeah, yeah, that's great. Let's get rid of this. It's clearly not in use. It's commented out. But then removing a skipped test then has people like, "Well, but that test looks like it could be valuable, and we're going to fix it."
And it's like...all I can go back to is that silly example of like, you've got your skinny jeans, one day I'm going to fit into those skinny jeans. And so one day, I'm going to fix this test, and it's going to serve the purpose. And it's going to be the me I want to be. [laughs] And it is funny how we do that. With code, we're like, sure, we can get rid of it. But with tests, we feel this clinginess to them where we want to hold on to it and make it pass. And I think that sometimes has to do with the descriptions.
There are test descriptions commented out that I've seen are like, user can log in, or if given a user without permission, they can't access. And it's like, oh, that sounds important. I'm now nervous to delete you versus fix you, but you're still not actually running and providing value. And so then I have to negotiate with myself as to where do we actually go from here? But I do love the idea of deleting tests that are skipped because we should just let them go. We either have to dedicate time to fix them or let them go and make that hard decision.
CHRIS: The critical idea that future me will have more time, that future me will be calm and will work through all the other bugs (future discounting, as far as I understand it, is the formal term for this), yeah, it's never true. I've only gotten busier over time, just broadly speaking.
And that seems to be a truism in software projects as well. It's like, oh, we just have to write a bunch of features, and then it'll be calm. I don't even think I'd want that. But future me will not have more time. And so choosing the things that we do invest in versus not is tricky, but the idea of that future me will have a lot of time or future us probably not true.
STEPH: Well, I think the story that I just shared at the beginning of our chat highlights that future me won't always be calm. [laughs] So let's work with what I've got. Let's not bank on that. Future Stephanie might be very emotional about dropping her dog off at boarding for a couple of days. [laughs] Future me might be very emotional about fixing this test. All right, well, thanks for going on that journey with me. That's really helpful. I knew you'd have some great insights there.
Mid-roll Ad:
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: What's going on in my world? Last week we had our first ever Sagewell all-hands get-together in person. Many of us have met in person before, but not everyone. And so this was a combination celebration for our seed fundraising round, which had happened actually sometime right at the end of last year. But due to COVID in the world and complexity, it was difficult to get everybody together. So that finally happened. And then we sort of grafted on to that celebration, that party that we were having. Like, let's just extend a day in either direction and do some in-person working and all of that. And that was really great.
I'm trying to find that ideal middle ground between we are a remote team, but there is definitely value in occasionally being in person, particularly getting to know people but also just having some higher bandwidth conversations, planning, things like that. They just feel different in person. And so, how do we balance that? And how do we be most productive and all that?
But it was really great to meet the team more so than I had on the internet and get to spend some time in person and do some whiteboarding. I drew on a whiteboard with a team. We were all looking at the same whiteboard. We're in the same room. And I drew on a whiteboard some entity relationship diagrams. It was awesome. [laughs] It was super fun. It was one of those cases where we had built an assumption deeply into our codebase, and suddenly instead of having one of a thing, we may now have multiple of a thing.
There's a wonderful blog post by Shawn Wang called Preemptive Pluralization which I think is based on an episode of Ben Orenstein's podcast, The Art of Product, where Ben basically framed the idea of like, I've never regretted pluralizing something earlier. A user has one account; they have multiple accounts. They just happen to have one at this time, et cetera. So we're in one of those.
And it was a great thing to be able to be in a room and whiteboard. I knew at the time when I did it way back when that I was making the wrong decision. But I didn't know exactly how and the shape. And so now we have to do that fun refactoring so glad that we have a giant test suite that will help us with said refactoring. But yeah, so that was really great to be able to do in person.
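For a concrete picture of the pluralization idea, here is a tiny sketch with hypothetical User/Account models (not the actual schema being discussed):

```ruby
# Even if every user has exactly one account today, modeling the association
# as has_many means "a user now has multiple accounts" becomes a data change
# rather than a schema-and-codebase refactor.
class User < ApplicationRecord
  has_many :accounts

  # Convenience for the common case while each user still has just one.
  def primary_account
    accounts.first
  end
end

class Account < ApplicationRecord
  belongs_to :user
end
```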
STEPH: I think there can be so much value in getting together and getting to see your team and, like you said, have those high-level conversations and then just also getting to hang out. So it's really nice to hear that reinforced since you experienced that same positivity from that experience. Do you think that's something that y'all will have going forward? Do you think you're going to try to get together like once a year, once a quarter? Maybe it hasn't even been talked about. But I'm hearing that it was great and that maybe there will be some repeats.
CHRIS: Yes, yeah. I think I'm inclined to quarterly at a minimum and maybe even slightly more than that. Some of us are centered around Boston, and so it's a little bit easier for us to pop in and work at a WeWork, that sort of thing. But I think broadly, getting the team together and having that be intentional. And personally, I'm inclined to that being more social time than productive time because I think that's the thing that is most useful in person is building relationship and rapport and understanding folks better.
I remember so pointedly when thoughtbot would have the annual Summer Summit, and leading up to that; there was a certain amount of conversation. But there were also location-specific rooms, and a lot of the conversation happened like in the Boston channel or whatnot. And then, without fail, every year after the Summer Summit, suddenly, there was a spike in cross-team chatter. Like, the Ruby room now had a bunch of people from San Francisco talking to Boston, talking to New York, et cetera. And it was just this incredibly clear...I think we could actually, like, I think at one point someone plotted the data, and there's just this stepwise jump that would happen every time.
And so that sort of connecting folks is really what I believe in there. And the more we're leaning into the remote thing, then the more I think this is important. So I think quarterly is probably the lowest end that I would think of, but it might be more. And it's also a question of like, what shape does this take? Is it just us going and hanging out somewhere? Or are we productively trying to get together with a whiteboard? I think we'll figure that out as we go on. But it's definitely something that I'm glad we've done now, set the precedent for, and we'll hopefully do more of moving on.
STEPH: Yeah, I always really love the thoughtbot Summits. In fact, we have one coming up. It's coming up in May, and this one's taking place in the UK. But there have been some interesting conversations around Summit because before, it was the idea that everybody traveled. But typically, they were in Boston, so for me, it was particularly easy because it was already where I lived. So then showing up for Summit was no biggie.
But with this one happening in the UK and COVID and travel still being a concern, there have been more conversations around, okay, this is awesome. People who want to get together can. There are these events going on. But there are people who don't want to travel, don't feel up to travel. They have family obligations that then make it very difficult for them to leave one partner at home with the kids. And I myself am in that space where I thought really hard about whether I was going to travel or not. And I've decided not to just for personal reasons.
But then it brings up the question of okay, well, if we have a number of people that are going to be in person together, then what about the people who are remote? And the idea of running something that's hybrid is not something that we've really figured out. But those that are remote, we're going to get together and figure out what we want to do and maybe what's our version of our remote summit since we're not going to be traveling.
But I feel like that's definitely a direction that needs to be considered as teams are getting in person because if you do have people that can't make it, how can you still bring them in so it's an inclusive event but respect to the fact that they can't necessarily travel? I don't know if that's a concern that every team needs to have, but it's one that I've been thinking about with our team. And then I know others at thoughtbot we've been considering just because we do have such a disparate team. And we want to make everybody comfortable and feel included.
CHRIS: Yeah, as with everything in this world, there's always complexities and subtlety. Thankfully, for our first get-together, we were able to get everyone into the same space. But I do wonder, especially as the team grows, even just scheduling, the logistics of it become really complicated.
So then does the engineering team have get-togethers that are slightly different, and then there's like once yearly a big get-together of the whole team? Or how do you manage that and dealing with family situations and all that? It is very much a complicated thing that thankfully was very straightforward for us this first time, but I fully expect that we'll have to be all the more intentional with it moving forward. And, you know, that's just the game.
But switching gears ever so slightly, we did have a fun thing that we've worked on a little bit over the past few weeks. We've finally landed it in the app. But we were swapping out our masked input library that we were using, so this is for someone entering their birthday, or a phone number, or social security number, or dates. I guess I already said dates. Passwords I think we also use here. But we have a bunch of different inputs in the app that behave specially.
And my goodness, is this one of those things that falls into the category of, oh yeah, I assume this is a solved problem, right? We just have a library out there that does it. And each library is like, oh no, all of the other libraries are bad. I will come along, and I will write the one library to solve all of the problems, and then we'll be good. And it is just such a surprisingly complicated space. It feels like it should be more straightforward.
And as I think about it, it's not; it's dealing with imperative interactions between a user and this input. And you need to transform it from what happens when you hit the delete key? What do you want to happen? What's the most discoverable for every user? How do we make sure they're accessible? But my goodness, was it complicated. I think we're happy with where we landed, but it was an adventure.
STEPH: I'll be honest, that's something that I haven't given as much thought to. But I guess that's also I just haven't worked with that lately in terms of a particular library that then masks those inputs. So I'm curious, which library were using before, and then which one did you switch to?
CHRIS: That's a critical piece of information that I have left off here. So for the previous one, we were using one called svelte-input-mask, which, again, part of the fun here is you want to have bindings into whatever framework that you're using. So svelte-input-mask is what we were using before. We have now moved on to using iMask, which is not like the thing you wear on your face, but it is the letter I so like igloo, Mike, et cetera, I-M-A-S-K, iMask.
And so that is a lower-level library. There are bindings to other things. But for TypeScript and other reasons, we ended up implementing our own bindings in Svelte, which was actually relatively straightforward. Again, big fan of Svelte; it's a wonderful little framework. But that is what we're using now, and it is excellent. It's got a lot of features. We ended up using it in a slightly more simple version or implementation. It's got a lot of bells and whistles and configurations. We went up the middle with it. But yeah, we're on iMask, which also led to a very entertaining moment where it was interacting with our test suite in an interesting way.
And so, one of the developers on the team searched for Capybara iMask. [laughs] And I forget exactly how it happened, but if you Google search that, for some reason, the internet thinks an iMask is a thing that goes over your mouth. And so it's a Capybara, like the animal, facemask. It's very confusing, but this got dropped into our Slack at one point, someone being like, "I searched for Capybara iMask, and it got weird, everybody." So yeah, that was a fun, little side quest that we got to go on.
STEPH: [laughs] I just Googled it as you told me to, and it's adorable. Yeah, it's a face mask, and it has a little capybara cartoon on the front of it. Yeah, there are many of these. [laughs]
CHRIS: When I think of an iMask, though, it's the thing that you put over your eyes to block the light if you want to sleep. But they're like, an iMask like, a mask that still keeps her eyes outside of it. I don't understand the internet. It's a weird place.
STEPH: I think that was just Google saying Capybara iMask. Nope, don't know I, so let's put together Capybara mask, and that's what you got back. [laughs]
CHRIS: I guess, yeah. It's just a Capybara mask. And I'm projecting the ‘I’ because I phonetically heard that for a while. Anyway, yes. But yeah, masked inputs so complicated.
STEPH: This is adorable. I feel like there should be swag for when people move. Like when people find things like this, this is the type of thing that then I stash and then wait for their anniversary at the company, and then I send it to them to remind them of this time that we had together. [laughs]
There was also a moment where you said, ‘I.’ You were explaining I as in the letter I, not E-Y-E for eye mask. And you said igloo, and my brain definitely short-circuited for a minute to be like, did he just say igloo? Why did he say igloo? And it took me a minute to, oh, he's helping phonetically say that this is for the letter I.
CHRIS: Yep. The NATO phonetic alphabet that if you don't explain that that's what you're doing, now I'm just naming random other objects in the world. Sorry.
STEPH: [laughs]
CHRIS: And that's why I cut myself off halfway through. I'm like, now you're just naming stuff. This isn't helping.
STEPH: [laughs]
CHRIS: Yes, the letter I, the letter M. [laughs]
STEPH: All of that was a delightful journey for me, and I was curious. I'm glad you brought the test because I was curious if y'all are testing if things are getting obscured, but it sounds like y'all are, which is what helped give you confidence as you were switching over to the new library.
CHRIS: Yeah, although to name it, we're not testing at a terribly low level. This is a great example of where I believe in feature specs. Like, within our Capybara feature spec, we are saying, and then as a user, I type in this value into the input. And critically, although this input needs to have special formatting and presentational behavior, it should functionally be identical. And so it was a very good litmus test of does this just work?
And then, actually, our feature specs ended up in a race condition, which is just an annoying situation where Capybara moves so quickly that it doesn't quite represent a real user. But as we were having that conversation, I was like, wait a minute; I know that users are slower than a computer. But is this actually an edge case that's real that we need to think about? And I think we did end up slightly changing our implementation. So our feature specs did, in a way, highlight that.
But mostly, our feature specs did not need to change to adapt to and then fill in the formatted input. It was just fill in the input with the value. And that did not change at all, but it did put a tiny bit of pressure on our implementation to say, oh, there is a weird, tiny, little race condition here. Let's fix that. And so we did race conditions, no fun at all.
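A minimal sketch of the kind of feature spec being described here, assuming RSpec plus Capybara; the path, label, values, and flash message are all hypothetical, not the app's actual code:

```ruby
require "rails_helper"

RSpec.describe "Entering a date of birth", type: :feature do
  it "accepts the value and round-trips it to the server" do
    visit new_profile_path

    # The same fill_in works whether or not the input is masked on the
    # front end, which is why the spec could stay unchanged across the
    # library swap.
    fill_in "Date of birth", with: "01/31/1990"
    click_button "Save"

    expect(page).to have_content("Profile saved")
  end
end
```

With a JavaScript-enabled driver and a masked input, a fill_in like this can occasionally race the masking code, which is the kind of flake Chris mentions, but the shape of the spec itself doesn't change.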
STEPH: Interesting. Okay, so y'all aren't actually testing. Like, there's no test that says, "Hey, that when someone types into this field, that then there should be this different UI that's present because then we are obscuring the text that they're putting into this field." It was, as you mentioned, we're just testing that we changed over libraries, and everything still works. So then do you just go through that manual test of, then you go to staging, and then you test it that way?
CHRIS: Yeah, that's a great question, yes, although as you say it, it's interesting. I guess there's a failure mode here, which is that our test suite does not enforce that the formatting/masking behavior is happening. But it does test that the value goes through this input, gets submitted to the server, turns into the right type of value in the back end, all of that. And so I guess this is an example of how I think about testing, like, that's the critical bit, and then it's a nicety. It's an enhancement that we have this masking behavior.
But if that broke, as long as the actual flow of data is still working, that can't break in a way that a user can't use. It sort of reminds me of the Mitch Hedberg joke: an escalator can never break; it can only become stairs. And so I'm in that mindset here where a masked input that you have proper feature spec coverage around can never quote, unquote, "break." It can just become a plain text input.
STEPH: I love how much that resonates with me. And I now know that when I'm writing tests, I'm going to think back to Mitch Hedberg and be like, oh, but is it broken-broken, or is it just now stairs? Because that's often how I will think of feature specs and how low level I will get with them. And this is on that boundary of like, yes, it's important that we want to obscure that data that someone's typing in, but it's not broken if it's not obscured.
So there's that balance of I don't really want to test it. Someone will alert us. Like if that breaks, someone will alert us, and it's not the end of the world. It's just unfortunate. But if they can't sign in or they can't actually submit the form, that's a big problem. So yes, I love this comparison now of is it actually broken, or is it just stairs? [laughs] As a guideline for, how much should we test at this feature level or test in general? What should we care about?
CHRIS: I feel like this is a deep truth that I believed for a long time. And I think I probably, somewhere in the back of my head, connected it to this joke. But I feel really good that I formally made that connection now because I feel like it helps me categorize this whole thing. "Sorry for the convenience," as the joke goes. And so yeah, that's where we're at.
STEPH: For anyone that's not familiar with the comedian Mitch Hedberg, we'll be sure to include a link to that particular joke because it's delightful. And now it's connected to tech, which makes it just even more delightful.
CHRIS: I only understand anything by analogy, especially humorous analogy. So this is just critical to my progression as a developer and technologist.
STEPH: Yeah, I've learned over the years that there are two ways that I retain knowledge: it either caused me pain, or it made me laugh. Otherwise, it's mundane, and it gets filtered out. Laughter is, of course, my favorite. I mean, pain sticks with me as well. But if it's something that made me laugh, I just know I'm far more likely to retain it, and it's going to stick with me.
Mid-Roll AD:
And now a quick break to hear from today's sponsor, Studio 3T.
When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines, Studio 3T makes it easy.
And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase.
You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free.
CHRIS: On that wonderful framing there, I think we should wrap up. What do you think?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Chris got a bike. Specifically, he bought a bike to use in a triathlon he signed up to participate in. Now he needs to name the bike, and speaking of naming things, a more technical topic that he talks about is the Crispy Brussels Snack Hour.
Steph talks about Rescue Rails projects and increasing developer acceleration.
They answer a listener question asking, "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec?"
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Translate frustrations into professional corporate-speak
Learn Hotwire by Building a Forum
parallel_tests
parallel_split_test
This episode is brought to you by Studio 3T. Try Studio 3T's full suite of features for 30 days, no payment details needed.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Oh, but I recently learned that Robert Downey Jr. in the Marvel movies he's snacking a lot, maybe not Iron Man, but something...oh no, he's stacking a lot. And I'd read that he was snacking a lot on set, and so they just built it in to where like, sure, you can snack as your character while you're doing stuff.
CHRIS: [laughs]
STEPH: And I think that's so cool because I find that I am eating every time I show up to record with you. So I would like the same special star treatment as Robert Downey Jr., [laughs] and I just get to eat during each Bike Shed. [laughs]
CHRIS: All right. [chuckles] My understanding is also that he was wildly the highest paid of all the actors, so I think that should also come along with this.
STEPH: [laughs]
CHRIS: Yeah, there's a lot that we can sort of layer on here, but it makes sense to me, and I'm fully on board.
STEPH: You're an excellent agent. Thank you for fighting for my higher pay.
[laughter]
CHRIS: You are welcome.
STEPH: What a good co-host you are.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. One of these days, I'm going to say, "I'm Chris Toomey," and then I'm just going to see how you roll with it, although now I'm ruining it, I should have just gone for it. [laughs]
CHRIS: Nothing can prepare me for this despite the fact that you're telling me in this moment. In that future moment when you do it, I will still be completely knocked out of whack. Just like for anyone out there listening, the thing that Steph would normally have said instead of what [laughs] she just said was, "What's new in your world?"
STEPH: [laughs]
CHRIS: And I contractually require that that is the only way she starts this question to me because I get completely lost. She's like, "How are you doing?" I just overthink it, and I get lost, and then we end up in a place like this where I'm just rambling.
STEPH: Every podcast contract you have from here on out must begin with hey, Chris, what's new in your world? [laughs] I will still get to that question. I just also had to tell you my future joke. I'm going to play that. Hopefully, you'll forget, and one day I will resurface.
CHRIS: I can pretty much promise you that I'm going to forget it.
[laughter]
STEPH: Excellent. Well, to make sure I stick within the Chris Toomey contract guidelines, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Now I just want to spend a lot of time putting together my rider. There can be no brown M&M's in the bowl. No eye contact, please. And I can only be addressed with this one question which is, to be clear, very not true, Steph. And I always record with a video because we actually like to have human faces attached to things. Anyway, I'm going to tighten this all up. When we get to the technical segment of my world, I'm going to tell you about Crispy Brussels Snack Hour, so just throwing that out there as an idea.
But before we do that, I'm going to share a fun little thing which is I bought a bike, which is exciting. It's not that exciting. People have bikes. This is exciting for me. But the associated thing that is more exciting/a little terrifying is I'm going to try and run a triathlon. I'm going to try and run, swim, and bike a triathlon as they go, specifically a sprint triathlon for anyone out there that's listening and thinking, oh wow, that sounds like a thing. The sprint is the shortest of the distances, so that's what I'm going to go for. But yeah, that's a thing that I'm thinking about in my world now.
STEPH: I know next to nothing about triathlons. So what is a sprint in terms of like, what is the shortest? What does that mean?
CHRIS: I think there actually maybe even shorter distances but of the common, there's sprint, Olympic. I want to say half Ironman, and then Ironman are the sequence. And an Ironman, as far as I understand it, I think it's a full marathon. It's like a century bike ride or something like that. It's an astronomical amount of everything.
Whereas the sprint triathlon being the shortest, I think it's a 3.6-mile run, so a little over a 5K run, a 10-mile bike ride, and a quarter-mile swim, I want to say, something like that. But they're each scaled down to the rough equivalent of a 5K but in each of the different events. So you swim, and then you bike, and then you run.
And so I'm going to try that, or at least I'm going to try to try. It's in September, and now is not September. So I have a lot of time between now and then to do some swimming, which I haven't done...like, I've swum but not in a serious way, not in an intentional way. So I got to figure out if I still know how to swim, probably get better at biking, and do a little bit of running, and it's going to be great. It's going to be a lot of fun. I'm super excited about it. Only a little terrified.
STEPH: I think this is where as your co-host coach, which you have not asked me to be, where I would say something about there is no try, to mimic Yoda. [laughs]
CHRIS: Yep, yep. Yep. Do or do not. Sprint or sprint not. There is no trying. Oh, were you making a try pun there?
STEPH: I didn't go that far, but you just brought it home. I see where you're going. [laughs]
CHRIS: This is pretty much what I do professionally is I just take words, and I roll them around until I find something else to do with them. So glad that we got there together.
STEPH: Well, I'm really excited to hear about this. I don't know anyone that's trained for a triathlon. I think that's true. Yeah, I don't think I know anyone that's trained for a triathlon. So I'm curious to hear about how that goes because that sounds intense, friend.
CHRIS: I think so. None of the individual segments sound that bad but stitching them all together, and I think the transitions are some of the tricky parts there. So yeah, it'll be fun. It's something I find...I used to never run; that was the thing. Like, deeply true in my head was that I'm not a runner. This is just a true fact about me. And then I ran a 5K one year for...it was like a holiday 5K fun run with friends. And every bit of the training leading up to it was awful. I did Couch to 5K. I hated it.
My story in my head of I'm not a runner was proven with every single training run. Man, did I hate it. And then something magical happened on the day that I actually ran the race, and it was fun. And I was out there, and there was the energy of being in this group of people. But it was competitive and not competitive in this really interesting way. And then it ended, and we were just hanging out in a parking lot, and they gave us beer. And I was like, well, this is actually delightful. Maybe I actually like this thing.
And so I've run a bunch of different races. And I've found that having a race to train for, and by train, I just mean some structured attempt at running, has been really enjoyable and useful for me. So yeah, this is just ratcheting that up a tiny bit. I've done a couple of half marathons; that's the high watermark so far. It's a good distance. But I don't know that a full marathon makes sense; that's a real commitment. And I'm looking to move laterally rather than just keep getting more complex in my running. So we're trying the shortest possible triathlon that I know of.
STEPH: I am such a believer that exercise should be fun, so I love that. Like, I'm not a runner, but then you get around people, and it's exciting. And then there's that motivation, and then there's a fun ending with beers that totally jives with me. Because sure, I can go to the gym; I can lift weights, I can make myself exercise. There's some fun to it.
But I strongly prefer anything that's more of like a sport or group exercise; that's just so much more fun. Well, super cool. Well, I'm excited. I would ask you all the details about your bike, but I know nothing. Do you want to share details about your bike? There may be other people that are interested.
CHRIS: Oh yeah, my bike. I went to the bike store, and I said, "Could I have a bike, please?" And then they toured me around and showed me all the fancy...they were like, "This is our most modest entry-level bike." And then they kept walking around and showing me fancier bikes. And I was like, "Can we go back to that first one? That one seemed great."
STEPH: [laughs]
CHRIS: Because it got all of the checkboxes I was looking for, which is basically it's a bike. So actually, the specifics on it are it's a hybrid bike, so like a mix between road, and I don't even know the other road bikes I know of, and maybe it's trail. But I don't think it's meant for going on the trail. But for me, it'll be fine for what I'm trying to do as far as I understand it.
It's technically a fitness hybrid, which I was like, oh, fancy. It's a fitness bike; look at me go. But it was basically just like, I would like a bike. General-purpose hybrid seems like the thing that makes sense. So I got a hybrid bike. And that's where I'm at. Oh, and I got a helmet because that seems like a smart move.
STEPH: Nice. Yeah, the bike I own is also one of those hybrids where it's like…because when I moved to Boston...and lots of people have the road bikes, but their tires are just so skinny; it made me nervous. And so I saw one of the hybrid bikes, and I was like, that one. That looks a little more steady and secure, so I went with that one even though it's heavier. Do you have a name for your bike? Are you going to think of a name for your bike?
CHRIS: I didn't, and I wasn't planning on it. But now that you've incepted me with this idea that I have to name my bike, of course, I have to name my bike. I'm going to need a couple of weeks to figure it out, though. We're going to have to get to know each other. And you know, something will become true in the universe for me to answer that question. But as of so far, no, I do not have a name for the bike.
STEPH: Cool. I'll check back in. Yeah, it takes time to find that name. I feel you.
CHRIS: [laughs] Yeah, don't make up a name. I have to find what's already true and then just say it out loud. Speaking of naming things and perhaps doing so in a frivolous way, as I mentioned earlier, the more technical topic that I want to talk about, oddly, is called Crispy Brussels Snack Hour. [laughs] So, within our dev team, we have started to collect together different things that don't quite belong on the product board, or at least they're a little more confusing. They're much more technical.
In a lot of cases, they are...our form handling is a little rough. And it's the sort of thing that comes up a lot in pull requests where we'll say, "I feel like this could be improved." And we're like, "Yeah, but not in this pull request." And so then it's what do you do with that? Do you put a tech debt card in the product board? You and I have talked about tech debt cards plenty of times, and it's a murky topic.
But we're trying within the team to make space and a way and a little bit of process around how do we think about these sorts of things? What are the pain points as a developer working on the system? So to be clear, this isn't "there is a bug," because bugs we should just fix; that's my strong feeling, or we should prioritize them relative to the rest of the work. But this is a lower level. This is as a developer; I'm specifically feeling this sort of pain.
And so we decided we should have a Trello board for it. And they were like, "Oh, what should we name the Trello board?" [laughs] And I decided in this moment I was like, "You know, if we're being honest, I've named everything very boring, very straight up the middle. We don't even have that many things to name. So we have zero frivolous names within our team. I think this is our opportunity. We should go with a frivolous name. Anybody have any ideas?"
And someone had worked on a team previously where maybe it was a microservice or something like that was called crispy Brussels, like, crispy Brussels sprouts but just crispy Brussels. And so I was like, "Sure, something like that. That sounds great." And then they ended up naming it that which was funny, and fun, and playful in and of itself.
But then we were like, "Oh, we should have a time to get together and discuss this." So we're now exploring how regularly we're going to do it. But we were like, let's have a meeting that is the dev team getting together to review that board. And we were like, "What do we call the meeting?" And so we went around a little bit, but we ended with the Crispy Brussels Snack Hour.
STEPH: That's delightful. I love the idea of onboarding new people, and they just see on their calendar it's Crispy Brussels Snack Hour, come on down. [laughs]
CHRIS: It's also got an emoji Brussel sprout and an emoji TV on either side of the words Crispy Brussels Snack Hour. So it's really just a fantastic little bit of frivolity in our calendars.
STEPH: [laughs] That's delightful. How's that going? I don't think we've tried something like that explicitly in terms of, like you said, there are discussions we want to have, but they're not in the sprint. They're not tech debt cards that we want to create because, like you said, we've had conversations. So yeah, I'm curious how that's working for you.
CHRIS: Well, so we've only had the one so far; it went quite well. We had a handful of different discussions. We were able to relatively prioritize this type of work within that. But one of the other things that we did was we had a conversation about this process, about this meeting, and the board. And whatnot.
So we identified a couple of rules of the road or how we want to approach this that I think will hopefully be useful in trying to constrain this work because it's very easy to just like; nothing's ever perfect. And so this could very easily be a dumping ground for half-formed ideas that sound good but aren't necessarily worth the continued effort, that sort of thing.
So the agenda for the meeting as described right now is async between meetings. Any of us can add new cards, ideally stated as problems and not solutions. So our form handling could use improvement. And then in the card, you can maybe make a suggestion of I think we could use this library or something like that. But rather than saying use this library or move to this library, we frame in terms of the problem, not necessarily the solution.
And then, at the start of the meeting, any individual can champion a card so they can say, "Here's the thing that I really want everyone to know about that I've been feeling a lot of pain on." So it's a way for individuals who have added things to this to add a little bit more detail. Then using Trello as voting functionality, we each get a couple of votes, and we get to sprinkle them across different cards, and then using that now allows us collectively to prioritize based on those votes. And so the things that get voted up to the top we talk about; we prioritize some amount of work coming into the sprint.
If it's actually going to turn into work, then it'll go onto the product board because ideally, it's moved from problem space to more of solution space even if the solution, the work to be done is do a spike on XYZ library or approach to form handling or whatever it is. But so ideally, it then moves on to the other board.
The other thing that I felt was important is it's very easy for this to be a dumping ground for ideas. So my suggestion is at the end of the meeting, we sort by date, and we prune the oldest things. So it's like, if it's still hanging around and we haven't done it yet, and it's not getting voted up, then yes, we might feel some pain but not enough. It's not earning its place on this board. So that's my hope is we're weeding the Brussel sprouts garden that we have at the end of the meeting.
That's roughly what we have now. We really only had the one, so that idea of pruning will probably come in later on. And it may be that this doesn't work out at all, and this ends up being tech debt cards that get stale and don't capture the truth. But I'm hopeful because there's definitely...there's a conversation to be had here. It's just whether or not we can make sure that conversation is useful and capturing the right amount of context and at the right points in time and all of that.
STEPH: Yeah, I like it. I like the whole process you outlined. You know what it made me think of? It sounds like a technical retro, not that retros can't be technical; we bring up technical stuff all the time. But this one sounds like there was more technical discussion that was still looking for space to bring up. So the way that you mentioned that people add their thoughts, that it can be done async, and then you vote up, and then as things get stale, you remove them and focus on the things that the team voted for, that's really cool. I've never thought of having just a technical-specific retro.
CHRIS: Yeah, definitely informed by retro. But again, just that slight honing the specific focus of this is just the dev team chatting about deeply dev-y things and making a little bit of space for that. I think the difficulty will be does this encourage us to work on this stuff too much? And that's the counterbalance that we have to have because this work can be critically important.
But it can also be a distraction from features that we got to ship or bugs that are in the platform or other things like that. So that balancing act is something that I'm keeping in mind, but thus far, the way we structured it, I'm hopeful. And I'm interested in exploring it more, so we'll see where we get to. And I'll certainly report back as we refine the Crispy Brussels Snack Hour over time.
STEPH: I feel like the opposite is true as well, where you have these types of concerns and things that you want to bring up. And even if they're on the board, once you get to sprint planning, there's a lot of context and conversation there that maybe the whole team doesn't have. It doesn't feel like the right moment to dive into this because you're trying to plan a new sprint.
So then that stuff gets bumped down to the bottom or just never really discussed, or it gets archived. So I feel like the opposite is totally true, too, where you have this stuff, but then it never gets talked about because sprint planning is not the right place. So yeah, I'm really intrigued to see how that balance works out for y'all as well.
CHRIS: Yeah, I think it's an exciting time, and we'll see where it goes. But like I said, I'm hopeful on it. But yeah, bikes, triathlons, and crispy Brussels, that's my world.
Mid-roll Ad:
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: I have a couple of fun things that I want to share and then something that's a little more in the techie space. The first one is there's a delightful Twitter thread that caught my attention recently that I just want to share; totally not tech-related. But this person shared a thread talking about how they help everyone on their team who's older than they are, making sure that the slang that they're using is correct in its context. And so they provided some funny examples.
And then, in return, they also will translate this person's frustrations into professional corporate-speak, and it's such a good thread. So if you need a good laugh, I will make sure to include a link in the show notes. The slang is really funny, but it's actually the translation of frustrations into professional corporate-speak that that's the part that resonated with me. That was really good. [laughs]
CHRIS: You shared this with me outside of this conversation, and I've read through them. Listeners out there, do not sleep on this. I highly suggest reading through this thread because it is fantastic.
STEPH: The other thing that I saw is Andrea Fomera, who is a Rails developer and creates a fair amount of content...I haven't been through some of that content, but I know there's content around Rails. And specifically, there is a newer course called Learn Hotwire by Building a Forum. And she has made this totally free, and I just think that is so cool.
And she shared that on Twitter, so I'll be sure to include a link to that in the show notes because Hotwire is something I haven't used yet. And so I saw this free course, and I think it would be fun to dabble and go through the course. And I know there are some other people at thoughtbot that have used it and seem really happy with it or interested in using it as well. Is that something that you've used?
CHRIS: I have not. I skipped over Hotwire in my adventures. I'd found Inertia and was quite happy with that. And then, in that way that, I sometimes limit the amount of things that I'm allowed to explore on the internet in hopes of actually getting some work done; I have not spent much time.
But enough folks that I deeply respect are very excited about Hotwire that it remains in the, like, I would love to have an afternoon just to poke around with that. So I may take a look at this, although I don't know, I'm probably still in my moratorium. I'm not allowed to look at new frameworks for a little while. But I hear great things.
STEPH: That's fair. That's also what I've heard. I've heard great things. So yeah, I just figured I would share that in case anybody else is interested in looking for a course that they could take and also dabble at Hotwire.
The other thing that's on my mind is more the type of projects that I'm really getting a lot of joy from. Because I've known about myself for a while that greenfield projects are nifty, but they're not my thing. They're not the thing that brings me a lot of joy. It's just kind of nice. You got your own space, and you're building from the ground up, cool, cool, cool.
But this one, I found that the projects that I’m really starting to gravitate towards are what I've heard someone else call Rails Rescue projects. So those are the projects where they have been around for a while, or they've just been built in a way that the data modeling structure makes it really hard to implement new features. Maybe there's a lack of test coverage that makes it really risky to ship new work or to make changes. There are lots of bug reports and errors that the team is fighting with.
So then that type of work comes down to where you're trying to either increase stability for the application and for users and/or you're looking to increase developer acceleration. And I really, really liked those projects. That's the type of project that I've been a part of for...I think my last couple of clients have been in that way. I don't know that they would describe it that way, that it's a Rails Rescue project.
But if I can see that opportunity where I see there's a stability issue or developers are feeling a lot of pain in one area, then that's the portion of the application, the portion of the team that I'm going to gravitate towards. Or like the current work that I'm doing where we're really focused on testing and making some improvements there or reducing that pain that the team is feeling around how long CI takes to run or the flakiness because then you're having to re-verify your CI runs.
I like that work. It's a bit slow and frustrating, so it does seem to require a patient person. You also have to have lots of metrics that are guiding you because you can have a lot of assumptions around I'm going to make this improvement, but it's going to take effort to get there. And it'd be great if I can validate that effort upfront. So I feel like a lot of my time is spent more around metrics, and data, and Excel sheets than necessarily coding. I don't know if that's great, but it's part of the work. There's a balance there. So I just found that interesting.
I don't think I would have thought this is something I was interested in until now that I've been on these projects for a while. And I've started noticing a theme where I really enjoy them. Although I realize looking back at former Stephanie days when I was going through Launch Academy and learning to code, I really thought I wanted to be in DevOps. DevOps seemed like the cool kids’ corner. They knew how the internet worked. They knew what was happening. They were making it live. And I just thought it seemed really cool. For the record, it is still a cool kids’ corner.
But I have also learned that the work-life balance isn't great with DevOps because you just never know when you're going to be on call. And that really stood out to me as something that I didn't want to do. And I do like building some features. But essentially, it's that developer acceleration that I really liked because they were the ones that were coming and often building tools and making it easier for people to then ship their code and get it out into the world and triage.
And so I liked the fact that their users were developers versus the people using the application as much, although, I guess, technically both. But the people they were often striving to help the most were the internal team, and that resonated with me. So I guess I have eventually found my way into that space. It wasn't through DevOps, but it is now through this idea of projects that need some rescuing.
CHRIS: I love that you've spent enough time now to figure out what it is that draws you in the work and the shape of projects that is meaningful to you. Interestingly, I find myself not on the opposite side of things...you know, we're always looking for a disagreement, and this isn't a disagreement, but this is a thing on which we differ a surprising amount because I do like the early-stage stuff, the new, the breaking ground, all of that exciting whatnot.
But how do I not make this a more complicated statement? I appreciate that you have the point of view that you do. I think the world needs more of what you're doing than the inclination that I have, like; I want to start something bright, and fresh, and new, and I can see so much progress immediately in front of me. And this is amazing. But the hard, meaningful work like maintenance, and support, and legacy, and rescue where necessary is such a critical aspect of the work.
I see this in open source so often where there are people who are like; I made an open-source project; this is great. I hacked for a bunch of weekends, and look; I made a thing. And then the support burden builds up. And open source can be this wildly undervalued thing overall. And the maintenance of open source is even more so, and you have this asymmetry between the people that are using it and don't think that their voice is one of the thousands that are out there requesting a new feature or anything like that.
The handful of people that I see out there in the world that come along later in the lifespan of an open-source project and just step in to do maintenance, my goodness, is that heroic work, just quiet, necessary heroic work. And what you're describing feels sort of similar but at the project level. And I don't know; I'm sort of like silent. I'm out loud on a podcast, not silently at all judging myself because I'm like, I feel like you're doing the thing over there. That seems like a good thing. But I also like my early projects... [laughs]
STEPH: I think they're...I mean, we need each other. I need you to start the code, and the applications for them to then need some help down the road [laughs] to [crosstalk 24:30].
CHRIS: But I need to do a bad enough job that we have to be rescued by you.
STEPH: [laughs]
CHRIS: Hey, don't you worry, friend, I'm doing a terrible...no, I think I'm doing an okay [laughter] job. Hopefully, I'm avoiding those traps, but it's hard to know when you're writing legacy code, you know.
STEPH: It is hard for the reasons we were talking about earlier. Like, those technical discussions build-up, and then if you don't really have a space to then address it, then it just keeps getting sidelined until you suddenly get to this point of it's either we come to a grinding halt because we can't ship work, or we find ways to start bringing this into our process.
And so that's the other part of the Rails Rescue projects is often looking at the team's process and figuring out, okay, instead of hiring consultants to come in and then try to help with this, how else can they also integrate this into their own project? So then, once thoughtbot leaves, they now have ownership of this, and they can carry it forward as well.
There is an aspect of this work that I'm still working on, and it comes around to the definition of work because if you go into a team or a project that's like, hey, we really need help with X. We really need help with addressing all these errors. Or we really need help improving developer happiness or getting test coverage in place. Finding out exactly how you're going to tackle that, are you going to join a team of the other developers?
Like, are you looking for more of a mentorship? Like, hey, we're going to work alongside your team to then mentor them to then bring this into their own process and their own habits, so then they feel empowered to address this in the future. Are we doing this more as a triage where then we have a specific goal or two that then we're going to meet? And then once we get stuff out of this on fire state, then maybe we start pairing with other people. Or are we going to work closely with the people who are fighting fires with the bug reports and the errors?
There are a bunch of different ways that you can tackle that. And I think it really helps define the success of that engagement and then your outcomes because otherwise, I feel like you can get distracted by so much. Because there's so much that's going to try to get your attention that you want to work on and fix. So you have to be very upfront about there are different areas that we can work on. Let's figure out some metrics together that we're really going after to then help define what does success look like for this first iteration of our work?
And then what's the long-term plan for this work? Then how do we keep it going forward? How do we empower the team to keep this work going forward? And that's an area that I've learned just from trial and error from being part of these projects. And I'm very interested in still cultivating that skill and figuring out what's the area that we're focused on?
CHRIS: There's something that you said in there that I want to hone in on, which is the idea of you've learned from going on so many of these different projects, and you're carrying forward ideas that you have. But I think more generally, there's something interesting in what you were just saying there around you've worked on a bunch of different projects at different organizations with certain things that they were great at, with certain things that they struggled with at different sizes. And you're able to bring all that experience to bear on each project.
But I think also taking a step back, as you were describing, you're like, I think I've figured out what it is that I like and the type of projects that I want to do. I cannot say enough good things about working in a consultancy for a while because, my goodness, you get to try out a bunch of different stuff. And A, you get to learn a ton about how to do the work, and how to communicate, and different technologies and all of that. But you also get to figure out what it is that you might want to double down on and lean into in terms of the work. That's definitely a big part of my story.
Seven years at thoughtbot, I tried a lot of different stuff, worked at a lot of different companies. And I would describe it as I found a lot of things that I didn't want. And then there's that handful of things that I really did want, and I was able to then more intentionally pursue that. So for anyone out there that's considering it, working at a consultancy is fantastic, or at least it has fantastic elements to it.
It also can be complicated as you talk about finding organizations and having to, you know, if you're brought in for a certain job, but when you get there, you're like, "Ooh, I know you want me to fix bugs, but actually, I think I just need to work with your team because they're the ones writing the bugs. And why are they writing the bugs?" "Well, because the salespeople are selling things, and then we have timelines." Like, we got to start at the very top of this whole pyramid and fix it. And so it can be very complicated. But there's so much that you can learn about yourself in the process, in the work, and I adored that portion of my career.
STEPH: Yeah, I totally agree. Anytime someone mentions, they're like, "Oh, consultancy work. What's that like?" And I remember it was a couple of years ago I mentioned I was working for a consultancy, and they were like, "Oh, you must travel a lot." I was like, "No, [laughter] I stay put. I just work from an office in Boston." But I remember that caught me off guard because I hadn't considered that I was supposed to travel, but that makes sense that you think of consultants that travel.
But when I meet people or talk to people, and they're like, "Oh, you've been at thoughtbot for five-plus years, and how's that going? And what's it like to be at a consultancy?" And exactly what you just said, it's the variety that I really like and getting to try on so many different hats and see how different teams and processes work and then identify like, oh, that worked really well for that team, or this isn't working well for that team. I have really enjoyed that.
And it can be a roller coaster because you have to get really good at onboarding. You have to go through that initial phase of like; I swear I'm smart. I will get up to speed quickly, and I will learn things. But it's a period that you just have to go through with each team that you join, but you do it twice a year, maybe three times a year. And so you get comfortable with that over time.
So there are definitely some challenges that then have to fit your personality and things that work for you and bring you joy. And I completely understand that it's not for everybody, just kind of I really enjoy product work, but I also really enjoy being able to move around to different teams and help folks.
CHRIS: I love the idea that as a consultant, your job is to just walk through airports and high-five every Accenture billboard in it and just go up to the wall and pay your respects. But no, no, that is not our version of consulting. [laughs]
STEPH: That's why I have so much time for The Bike Shed. It's because I'm just, you know, I'm in different airports high-fiving signs. And then this is my real job; Bike Shed is my real job.
CHRIS: Oh, that would be fun.
STEPH: [laughs] You know, I have such a fondness of Bike Shed that now something interesting has happened where someone was like, "Oh, you're bike-shedding." And they're not being mean, but they're just like, "Oh, we're totally bike-shedding," or "This is dissolving into bike-shedding." And I'm like, oh, bike-shedding, hooray. And I'm like, oh, wait, bad. [laughter] And I have to catch myself each time.
CHRIS: Yeah, we've taken away a lot of the meaning. Well, I mean, have we or do we live up to it every single week? Who can say? But I, too, have a fondness for this phrase, perhaps not aligned with what it is actually meant to signify.
STEPH: On a slightly different tech-related note, there is a gem that I'm really excited to check out. I saw it mentioned on the parallel_tests gem, which is what helps you run your tests in parallel, and it's what we're currently using. But you can group your tests in different ways. And right now, we're using the runtime strategy where essentially then we use the output from RSpec where we know how long each file took to run. And then parallel_tests will then use that data to then figure out, okay, how should I split up your test files? So it then tries to balance them as evenly as possible.
We're at that point, though, where we've talked about tentpoles, so we have certain files that, say, take 10 minutes; other files will only take two minutes. And that balance is really throwing off our ability to then bring down the CI build time. So on parallel_tests, there's reference to another gem called parallel_split_test, where then you can run multiple test scenarios that are in one file but then split them out across different processes or different machines. And that is exactly what I want in my life right now.
I haven't checked it out yet, so I feel like I'm giving a daily sync update of like, I'm going to go off and explore this thing. I will report back and see how it goes. [laughs] In the past, I usually try to say, "I've tried this thing, and this is how it went," nope, opposite today. I am sharing the thing I'm going to try, and then hopefully, it goes well.
CHRIS: Well, either way, we should definitely report back. That's the truth. I like that you're leading us into this and giving us a preview. But then yeah, we'll see where we get to. That does sound like the thing you want, though. So I hope it goes well.
STEPH: Yeah, we've learned at this point where we are splitting work across different machines that until we address some of those tentpole concerns, adding more machines won't help us because then a machine's going to run as long as the longest file. So we've been doing some manual work to split up those files. That's not the best, but it does help you see some results. So then, at least you know you're making progress.
So now we really need to find a way to automate that because we don't want someone to have to manually figure out where are the tentpoles, split those files up, commit that, and then keep track of, like, do we have another tentpole on the horizon? We really need a gem or something to help us automate that process. So yeah, I will be happy to report back.
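For anyone who wants to try the same thing, here is a rough sketch of the pieces Steph is describing, as I understand the parallel_tests and parallel_split_test options; the paths and the giant_spec.rb file name are illustrative, so double-check the flags against the gem versions you're running:

    # .rspec_parallel -- record how long each spec file takes on every run
    --format progress
    --format ParallelTests::RSpec::RuntimeLogger --out tmp/parallel_runtime_rspec.log

    # On CI, group files by recorded runtime instead of by file count
    bundle exec parallel_rspec spec/ --group-by runtime

    # For a single tentpole file, parallel_split_test splits that file's
    # examples across processes rather than keeping the whole file on one machine
    bundle exec parallel_split_test spec/features/giant_spec.rb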
MIDROLL AD:
And now a quick break to hear from today's sponsor, Studio 3T.
When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy.
And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase.
You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free.
STEPH: Pivoting just a bit, we have a listener question. This question comes from Steve Polito. And Steve wrote in, "Longtime listener, first-time thoughtboter." Yay. Yay is my addition. Anything that goes up in voice is probably my addition, [laughs] just so people know. All right, back to what Steve said. "Why do so many developers and agencies, thoughtbot included, replace the default test suite in Rails with RSpec?
Not only does Rails provide a fully functional test suite by default," looking at you Minitest, "but it's also well-documented and even provides the ability to run system tests. Rails is built on the principle of convention over configuration. And it seems odd to me that so many developers want to override such a fundamental piece of framework." Thanks in advance, [singing] Steve Polito. Steve, I hope it's okay I sang your name [laughs] because we're here now.
That is an awesome question. I'm going to give what may be less of an awesome answer which is, well, one; Steve highlights that people will then replace Minitest with RSpec. I haven't done that. I haven't actually gone into a project and said, "Okay, we need to replace your test suite and bring in RSpec instead." But if I'm starting out a project, I do have a heavy preference for RSpec, and frankly, that's just from experience. Like, that's what I was raised on, to say it in that way. [laughs]
RSpec is what I know; it's what I'm used to. It's what, even when I joined thoughtbot, was just the framework that we used for all of our testing and what we focused on so heavily. So frankly, for me, it's just a really strong bias. I know it's something that I'm really good at. I know it's something that works really well. I know it's well-documented. I know it's also very accessible for other people to use.
But actually replacing it on a different project, I don't think I would do that. I'd have to have a really strong reason, or maybe if we haven't actually started testing anything yet, to then replace it because that feels a bit aggressive to me. But then it just depends on the situation, I suppose. But yeah, overall, I just default to RSpec because that's what I'm accustomed to, and it's the testing framework that I know.
CHRIS: Yeah, I think my answer is largely the same. It's the thing that I've worked with by far the most. Similarly, I've been on projects that were using Minitest, and therefore I used Minitest because it's definitely not worth the effort to switch. But in a lot of...well, I will say this, I've much less experience, and this may be less true over time. But there were many things that drew me to RSpec, and that continues to be interesting to me in the RSpec world.
Even things as small as the assertion syntax, assert_equal is the method that's, you know, this is how you do an assertion in Minitest, and it's assert_equal expected, actual. That's the order of the arguments. It's expected first and then actual. That makes sense, probably with the expected, but I would get that wrong constantly. I do get that sort of thing wrong. They're just positional arguments that there's nothing about this that tells me which way to go. And so it's very easy to get failure messages that are inverted, and so it's just this tiny little thing.
But with RSpec, we end up with expect and then in parentheses, the thing that we are expecting to equal the other thing, and it just reads a little more honestly. It fits within the Ruby mindset in my world. I want my code to be as expressive as possible, and Minitest feels much lower level to me. It feels more, you know, assert as a word is just...I'm not asserting. That just feels so formal. And so these are, again, to be clear, very, very small things, but they all add up.
And there's a reason that we're using Ruby overall. And there's a reason that we're using Rails is this expressiveness is a big part of it for me, so I'll cling to that. I'll hold on to that as something that's true. Also, Rspec's mocking support, rspec-mocks as the library, I found to be really fantastic, and I've grown very comfortable working with it. And I know how and where to use that.
I also have so much built-up knowledge, like the idea of when to use let and not use let in RSpec. It's just this deep thing that I know about. I'm sure there's an equivalent in the Minitest world, but I would have to have a different understanding in argument, and that conversation would just feel different.
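For readers who haven't seen the two side by side, here is a small illustration of the differences Chris is describing; the Cart class and the mailer double are made up purely for the example:

    # Minitest (positional arguments: expected value first, then actual):
    #   assert_equal 0, cart.item_count
    #
    # RSpec equivalent, plus the `let` and rspec-mocks features mentioned above:
    RSpec.describe Cart do
      let(:cart) { Cart.new } # lazily built and memoized per example

      it "starts empty" do
        expect(cart.item_count).to eq(0) # reads left to right as a sentence
      end

      it "stubs a collaborator with rspec-mocks" do
        mailer = double("mailer")
        allow(mailer).to receive(:deliver).and_return(true)
        expect(mailer.deliver).to eq(true)
      end
    end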
I think the other thing that's worth saying is this is a default for us at this point that I personally have not felt the need to reconsider. When I've worked on projects that have used Minitest, I certainly wasn't called to it. I wasn't like, oh, this seems really interesting; I'm going to lean into this more. I was like, I miss RSpec. And some of that is, again, just familiarity. But at the end of the day, we only have so much time to do things. And so, I firmly stand by my not reconsidering my testing option at this point.
Like, RSpec does the things that I want. It does it really well. Critically, I'm able to build a system and write a test suite and maintain that test suite over time and have it tell me the truth as to whether or not my application should be deployed to production. That is the measure. That's the thing that I care about. I think it's maybe a little bit slower than Minitest, but I'm fine with that. I have solutions to that problem.
And the thing that I care about is when the test suite is green, do I feel confident deploying? RSpec has helped me for years on that journey. And I've never questioned whether or not I should go back to the drawing board and revisit that consideration. So initially, it was probably because it was the thing that we were all using, and then that is for me why it has stuck around. And I love RSpec. I think how many episodes have we just said, "Thanks, RSpec," as a little aside? So we do love it in a deep way.
STEPH: Probably not enough episodes have we said that. [laughs] Yeah, I like what you said where you haven't felt the need to switch over or to move away from RSpec. And I wonder, looking back at some of the earlier projects that I joined that were using RSpec, I don't know if maybe they chose RSpec at that time because RSpec had more of those features built-in, and Minitest was still working on those. Maybe they were parallel at the time; I'm not sure.
But I like what you said about you just haven't had a need to go back and change. At this point, if I switched over to Minitest, it would definitely be a learning curve for me, which is totally fine. But yeah, I'm just happy with it, so I stick with it.
And I also appreciate that idea that, yeah, unless you're new in a project, I wouldn't encourage someone to then switch over to something else unless I feel like there's just a lot of pain for some reason with the current testing setup. There has to be a reason. There has to be a drive. It can't be just a personal bias of like, I know this thing, so I want to use it. There's got to be a better reason that benefits the whole team versus just a personal preference.
But overall, I think it comes down to for us; it's just a choice because it's the familiar choice. It's the one that we know. But I think Minitest and RSpec are both so widely supported. I was thinking about that convention over configuration. And yes, Rails ships with Minitest, but RSpec is so common that I don't feel like I'm breaking convention at that point. They're both so widely supported and used that I feel very comfortable going with either option. And then it's just my personal preference for RSpec.
So thanks, Steve, for sending in that question. And for anyone else that has a question that you would love to share with Chris and me, you can reach us in a couple of different ways. You can reach us on Twitter via @_bikeshed. You can also go to the website, bikeshed.fm/content. We will drop some links in the show notes. But if you go there, then you can send a question or also email us directly at [email protected].
And we're running a little low on listener questions, so we would love to have a listener question from you. And we would love to talk about anything that y'all want to talk about, okay, within reason, you know, triathlons, Brussels sprouts, things like that. All of that falls within the wheelhouse.
CHRIS: Normal stuff.
STEPH: Normal stuff, yeah.
CHRIS: And to be clear, despite the fact that Steve did recently become a thoughtboter, you don't have to be a thoughtboter to send in a listener question. [laughs] In fact, it's much more common to not be a thoughtboter when sending in a listener question. But we'll take them from anybody. We're happy to chat with you.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Being pregnant is hard, but this tapas episode is good! Steph discovered and used a #yelling Slack channel and attended a remote magic show. Chris touches on TypeScript design decisions and edge cases.
Then they answer a question captured from a client Slack channel regarding a debate about whether I18n should be used in tests and whether tests should break when localized text changes.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Emma Bostian
Ladybug Podcast
Gerrit
Gregg Tobo the Magician
Sean Wang - swyx - better twitter search
Twemex
GitHub Pull Request File Tree Beta
Sam Zimmerman - CEO of Sagewell Financial on Giant Robots
TypeScript 4.1 feature
The Bike Shed: 269: Things are Knowable (Gary Bernhardt)
TSConfig Reference - Docs on every TSConfig option
Rails I18n
This episode is brought to you by Studio 3T. Try Studio 3T's full suite of features for 30 days, no payment details needed.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. There are a couple of new things in my world, so one of them that I wanted to talk about is the fact that being pregnant is hard. I feel like this is probably a known thing, but I feel like I don't hear it talked about as much as I'd really like, especially in sort of like a professional context. And so I just wanted to share for anyone else that may be listening, if you're also pregnant, this is hard.
And I also really appreciate my team. Going through the first trimester is typically where you experience a lot of morning sickness and fatigue, and I had all of that. And so I was at the point that most of my days, I didn't even start till about noon and even some days, starting at noon was a struggle. And thankfully, the thoughtbot client that I'm working with most of the teams are on West Coast hours, so that worked out pretty well.
But I even shared a post internally and was like, "Hey, I'm not doing great in the mornings. And so I really can't facilitate any morning meetings. I can't be part of some of the hiring intros that we do," because we like to have a team lead provide a welcoming and then closing for anyone that's coming for interview day. I couldn't do those, and those normally happen around 9:00 a.m. for Eastern Time. And everybody was super supportive of it. So I really appreciate all of thoughtbot and my managers and team being so great about this. Also, the client team they're wonderful.
It turns out growing a little human; I'm learning how hard it is and working full time. It's an interesting challenge. Oh, and as part of that appreciation because…so there's just not a lot of women that I've worked with. This may be one of those symptoms of being in tech where one, I haven't worked with tons of women, and then two, working with a woman who is also pregnant and going through that as well. So it's been a little bit isolating in that experience.
But there is someone that I follow on Twitter, @EmmaBostian. She's also one of the co-hosts for the Ladybug Podcast. And she has been just sharing some of her, like, I am two months sleep deprived. She's had her baby now, and she is sharing some of that journey. And I really appreciate people who just share that journey and what they're going through because then it helps normalize it for me in terms of what I'm feeling. I hope this helps normalize it for anybody else that might be listening too.
CHRIS: I certainly can't speak to the specifics of being pregnant. But I do think it's wonderful for you to use this space that we have here to try and forward that along and say what your experience is like and share that with folks and hopefully make it a little bit better for everyone else out there. Also, you snuck in a sneaky pro-tip there, which is work on the East Coast and have a West Coast team. That just sounds like the obvious correct way to go about this.
STEPH: That has worked out really well and been very helpful for me. I'm already not a great morning person; I've tried. I've really strived at times to be a morning person because I just have this idea in my head morning people get more stuff done. I don't think that's true, but I just have that idea. And I'm not the world's best morning person, so it has worked out for many reasons but yeah, especially in helping me get through that first trimester and also just supporting family and other things that are going on.
Oh, I also learned a pro-tip about Twitter. This is going to seem totally random, but it was relevant when I was searching for stuff on Twitter [laughs] that was related to tech and pregnancy. But I learned...because I wanted to be able to search for something that someone that I follow what they said but I couldn't remember who said it.
And so I found that in the search bar, I can add filter:follows. So you can have your search term like if you're looking for cake or pregnancy, or sleep-deprived and then look for filter:follows, and then that will filter the search results to everybody that you follow. I imagine that that probably works for followers too, but I haven't tried it.
CHRIS: I like the left turn you took us on there but still keeping it connected. On the topic of Twitter search, they apparently have a very powerful search, but it's also hidden, and you got to know the specific syntax and whatnot. But there is a wonderful project by Shawn Wang, AKA Swyx, on the internet, bettertwitter.netlify.com is the URL for it. I will share a link to his tweet introducing it. But it's a really wonderful tool that just provides a UI for all of these different filters and configurations. And both make discoverability that much better and then also make it easy to just compose one of these searches and use that.
The other thing that I'll recommend is, I think it's a Chrome plugin. I'm guessing is what I'm working with here like a browser extension, but it's called Twemex, T-W-E-M-EX. And there's a sidebar in Twitter now, which just seems wonderful and useful. So as I'm looking at a Swyx post here, or a tweet as they're called on Twitter because I know that vernacular, there's a sidebar which is specific to Shawn Wang.
And there's a search at the top so I can search within it. But it's just finding their most popular tweets and putting that on a sidebar. It's a very useful contextual addition to Twitter that I found just awesome. So that combination of things has made my Twitter experience much better. So yeah, we'll have show notes for both of those as well.
STEPH: Nice. I did not know about those. This may cause someone to laugh at me because maybe it's easier than I think. But I can never remember that advanced search that Twitter does offer; I have to search it every time. I just go to Google, and I'm like, advanced Twitter search, and then it brings up a site for me, and then I use that as the one that Twitter does provide. But yeah, from the normal UI, I don't know how to get there. Maybe I haven't tried hard enough. Maybe it's hidden.
CHRIS: It's like they're hiding it.
STEPH: Yeah, one of those. [laughs]
CHRIS: It's very costly. They have to like MapReduce the entire internet in order to make that search work. So they're like, well, what if we hide it because it's like 50 cents per query? And so maybe we shouldn't promote this too much.
STEPH: [laughs]
CHRIS: And let's just live in the moment, everybody. Let's just swim in the Twitter stream rather than look back at the history. I make guesses about the universe now.
STEPH: [laughs] On a different note, I also discovered at thoughtbot in our variety of Slack channels that we have a yelling channel, and I had not used it before. I had not hung out there before. It's a delightful channel. It's a place that you just go, and you type in all caps. You can yell about anything that you would like to. And I specifically needed to yell about Gerrit, which is the replacement or the alternative that we're using for GitHub or GitLab, or Bitbucket, or any of those services.
So we're using Gerrit, and I've been working to feel comfortable with the UI and then be able to review CRs and things like that. My vernacular is also changing because my team refers to them as change requests instead of pull requests. So I'm floating back and forth between CRs and PRs. And because I'm in Gerrit world, I missed some of the updates that GitHub made to their pull request review screen. And so then I happened to hop in GitHub one day, and I saw it, and I was like, what is this? So that was novel.
But going back to yelling, I needed to yell about Gerrit because I have not found a way to collaborate with someone who has already pushed up changes. I have found ways that I can pull their changes which then took a little while. I found it in a sneaky little tab called download. I didn't expect it to be there. But then the actual snippet it's like, run this in your terminal, and this is then how you pull down the changes. And I'm like, okay, so I did that.
But I can't push to their existing changes because then I get like, well, you're not the owner, so we're going block you, which is like, cool, cool, cool. Okay, I kind of get that because you don't want me messing up somebody else's content or something that they've done. But I really, really, really want to collaborate with this person, and we're trying to do something together, and you're blocking me. And so I had to go to the yelling channel, and I felt better. And I'm yelling again. [laughs] Maybe I don't feel that great because I'm getting angry again talking about it.
CHRIS: You vented a little into the yelling channel; maybe not everything, though.
STEPH: Yeah, I still have more to vent because it's made life hard. Every time I wanted to push up a change or pull down someone else's changes, there are now all these CRs that then I just have to go and abandon, which is then the terminology for then essentially closing it and ignoring it, so I'm constantly going through.
And if I do want to pull in changes or collaborate, then there's a flow of either where I abandon mine, or I pull in their changes, but then I have to squash everything because if you push up multiple commits to Gerrit, it's going to split those commits into different CRs, don't like that. So there are a couple of things that have been pain points. And yeah, so plus-one for yelling channels, let people get it out.
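For listeners who haven't touched Gerrit, the "download" snippet Steph mentions looks roughly like this; the change number (1234) and patchset (3) are placeholders that Gerrit's Download tab fills in for a real change:

    # Fetch patchset 3 of change 1234 and check it out locally
    git fetch origin refs/changes/34/1234/3
    git checkout -b review-1234 FETCH_HEAD

    # Gerrit maps one commit to one change via the Change-Id trailer, which is
    # why pushing multiple commits fans out into multiple CRs; collaborating on
    # an existing change generally means amending or squashing into that single
    # commit and pushing back for review (subject to the project's permissions):
    git push origin HEAD:refs/for/main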
CHRIS: Okay, so definitely some feelings that you are working through here. I'm happy to work together as a team to get through some of them. One thing that I want to touch on is you very quickly hinted at GitHub has got a bunch of new things that are cool. I want to talk about those. But I want to touch [laughs] on an anecdote. You talked about pushing something up to someone else's branch. You're like, oh, you know, I made some changes locally, and I'm going to push them up.
I had an interesting experience once where I was interacting with another developer. I had done some code review. They weren't quite understanding where I was. They had a lot of questions. And finally, I said, you know what? This will just be easier. Here, I pushed up a commit to your branch, so now you can see what I'm talking about. And I thought of this as a very innocuous act, but it was not interpreted that way. That individual interpreted it in a very aggressive sort of; it was not taken well.
And I think part of that was related to I think of Git commits as just these little ephemeral things where you're like, throw it out, feel free. This is just the easiest way for me to communicate this change in the context of the work that you're doing. I thought I was doing a nice favor thing here. That was not how it went. We had a good conversation after I got to the heart of where we both were emotionally on this thing. It was interesting. The interaction of emotion and tech is always interesting.
But as a result, I'm very, very careful with that now. I do think it's a great way as long as I've gotten buy-in from the person beforehand. But I will always spot check and be like, "Hey, just to confirm, I can just push up a commit to your branch, but are you okay with that? Is that fine with you?" So I've become very cautious with that.
STEPH: Yeah, that feels like one of those painful moments where it highlights that the people that you work with that you are accustomed to having a certain level of trust or default trust with those individuals, and then working with someone else that they don't have that where the cup is half-full in terms of that trust, or that this person means well kind of feelings towards a colleague or towards someone that they're working with.
So it totally makes sense that it's always good to check and just to be like, "Hey, I'd love to push up some changes to your branch. Is that cool?" And then once you've established that, then that just makes it easier. But I do remember that happening, and yeah, that was a bit painful and shocking because we didn't see that coming and then learned from it.
CHRIS: I do think it's an important thing to learn, though, because for me, in that moment, this was this throwaway operation that I thought almost nothing of, but then another individual interpreted it in a very different way. And that can happen, that can happen across tons of different things. And I don't even want to live in the idealized world where it's just tech; we're just pushing around zeros and ones; there's no human to this. But no, I actually believe it's a deeply human thing that we're doing here.
It's our job to teach the computers to be a little closer to us humans or something like that. And so it was a really pointed clarification of that for me where it was this thing that I didn't even think once about, no less twice, and yet someone else interpreted it in such a different way. So it was a useful learning situation for me.
STEPH: Yeah, I totally agree. I think that's a really wise default to have to check in with people before assuming that they'll be comfortable with something that we're comfortable with.
CHRIS: Indeed. But shifting back to what you mentioned of GitHub, a bunch of new stuff came in GitHub, and you were super excited about it. And then you went on to say other things about another system. [laughs] But let's talk about the great things in GitHub. What are the particular ones that have caught your eye? I've seen some, but I'm intrigued. Let's compare notes.
STEPH: So this is one of those where I hadn't seen GitHub in quite a while, and then I hopped in, and I was like, this is different. But some of the things that did stand out to me right away is that on the left-hand side, I can see all of the files that have been changed, and so that's a really nice tree where I just then immediately know.
Because that was one of the things that I often did going to a PR is that I would see what files are involved in this change because it was just a nice overview of what part of the application am I walking through? Are there tests for this? Have they altered or added tests? And so I really like that about it. I'm sure there's other stuff. But that is the main thing that stood out to me. How about you?
CHRIS: Yeah, that sidebar file tree is very, very nice, which I find surprising because I don't use a file tree in my editor. I only do fuzzy finding to jump to files. But I think there's something about whenever GitHub had the file list; these are all the files that are changed. I'm like, this is just noise. I can't look at this and get anything out of it. But the file tree is so much more...there's a shape to it that my brain can sort of pattern match on. And it's just a much more discoverable way to observe that information. So I've really loved that. That was a wonderful one.
The other one that I was surprised by is GitHub semantic code analysis; stuff has gotten much, much better over time subtly. I didn't even notice this happening. But I was discussing something with someone today, and we were looking at it on GitHub, and I just happened to click on an identifier, and it popped up a little thing that says, "Oh, do you want to hop to the references or the definition of this?" I was like, that is what I want to do. And so I hopped to the definition, hopped to the definition of another thing, and was just jumping around in the code in a way that I didn't know was available. So that was really neat.
But then also, I was in a pull request at one point, and someone was writing a spec, and they had introduced a helper just like stub something at the bottom of a given spec file. And it's like, I feel like we have this one already. And I just clicked on the identifier. I think it might have actually been a matcher in RSpec, so it was like, have_alert. And I was like, oh, I feel like we have this one, a matcher specific to flash message alerts on the page. And I clicked on it, and GitHub provided me a nice little inline dialog that showed me all of the definitions of have_alert, which I think we were up to like four of them at that point.
So it had been copied and pasted across a couple of different files, which I think is totally fine and a great way to start, but they were very similar implementations. I was like, oh, looks like we actually already have this in a couple of places, maybe we clean it up and extract it to a common spec support thing, and ta-da, I was able to do all of that from the GitHub pull requests UI. And I was like, this is awesome. So kudos to the GitHub team for doing some nifty stuff. Also, can I get into the merge queue? Thank you.
...
STEPH: [laughs] There it is. That is very cool. I didn't know I could do that from the pull request screen. I've seen it where if I'm browsing code that, then I can see a snippet of where everything's defined and then go there, but I hadn't seen that from the pull request. I did find the changelogs for GitHub that talk about the introduction of having the tree, so we'll be sure to include a link in the show notes for that too. But yes, thank you for letting me use our podcast as a yelling channel. It's been delightful. [laughs]
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: Well, speaking of podcasts, actually, there was an interesting thing that happened where the CEO of Sagewell Financial, the company of which I am the CTO, Sam Zimmerman is his name, and he went on the Giant Robots Podcast with Chad a couple of weeks ago. So that is now available. We'll link to that in the show notes. I'll be honest; it was a very interesting experience for me. I listened to portions of it. If we're being honest, I searched for my name in the transcript, and it showed up, and I was like, okay, that's cool. And it was interesting to hear two different individuals that I've worked with either in the past or currently talking about it.
But then also, for anyone that's been interested in what I'm building over at Sagewell Financial and wants to hear it from someone who can probably do a much better job of pitching and describing the problem space that we're working in, and all of the fun challenges that we have, and that we're hopefully living up to and building something very interesting, I think Sam does a really fantastic job of that. That's the reason I'm at the company, frankly.
So yeah, if anyone wants to hear a little bit more about that, that is a very interesting episode. It was a little weird for me to listen to personally, but I think everybody else will probably have a normal experience listening to it because they're not the CTO of the company. So that's one thing.
But moving on, I feel like today's going to be a grab bag episode or tapas episode, lots of small plates, as we were discussing as we were prepping for this episode. But to share one little thing that happened, I've been a little more removed from the code of late, something that we've talked about on and off in previous episodes. Thankfully, I have a wonderful team that's doing an absolutely fantastic job moving very rapidly through features and bug fixes and all those sorts of things.
But also, I'm just not as involved even in code review at this point. And so I saw one that snuck through today that, I'm going to be honest, I had an emotional reaction to. I've talked myself down; we're fine now. But the team collectively made the decision to move from a line length of 80 characters to a line length of 120 characters, and I had some feelings.
STEPH: Did you fire everybody? [laughs]
CHRIS: No. I immediately said, doesn't really matter. This is the whole conversation around auto-formatting tools is like we're just taking the decision away. I personally am a fan of the smaller line length because I like to have multiple files open left to right. That is my reason for it, but that's my reason. A collective of the developers that are frankly working more in the code than I am at this point decided this was meaningful. It was a thing that we could automate. I think that we can, you know, it's not a thing that we have to manage. So I was like, cool. There we go.
The one thing that I did follow up on I was like, okay; y'all snuck this one in, it's fine, I'm fine with it. I feel fine; everything's fine. But let's add that to the git-blame-ignore-revs file, which is a useful thing to know about. Because otherwise, we have a handful of different changes like this where we upgrade Prettier, and suddenly, the manner in which it formats the files changes, so we have to reformat everything at once. And this magical file that exists in Git to say, "Hey, ignore this revision because it is not relevant to the semantic history of the app," and so it also takes that decision out of the consideration like yeah, should we reformat or not? Because then it'll be noisy. That magical file takes that decision away, and so I love that.
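For reference, the file Chris mentions is just a list of commit SHAs, one per line; a minimal sketch, with a placeholder SHA standing in for the real reformatting commit:

    # .git-blame-ignore-revs
    # Reformat everything for the 120-character line length
    a1b2c3d4e5f60718293a4b5c6d7e8f90abcdef12

    # Then point git blame at it locally:
    git config blame.ignoreRevsFile .git-blame-ignore-revs

GitHub's blame view also picks up a file with exactly that name at the repository root, so the noise disappears there as well.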
STEPH: I so love the idea because you took vacation recently twice. So I love the idea of there was a little coup and people are applauding, and they're like, while Chris is on vacation, we're going to merge this change [laughs] that changes the character line. And yeah, that brings me joy. Well, I'm glad you're working through it. Sounds like we're both working through some hard emotional stuff. [laughs]
CHRIS: Life's tricky, is all I'm going to say.
STEPH: I am curious, what prompted the 80 characters versus 120? This is one of those areas that's like, yeah, I have my default preference like you said. But I'm more intrigued just when people are interested in changing it and what goes with it. So do you remember one of the reasons that 120 just suited their preferences better?
CHRIS: Frankly, again, I was not super involved in the discussion or what led them to it.
STEPH: [laughs]
CHRIS: My guess is 120 is used...I think 80 is a pretty common one. I think 120 is another of the common ones. So I think it's just a thing that exists out there in the mindshare. But also, my guess is they made the switch to 120 and then reformatted a few files that had like, ah, this is like 85 characters, and that's annoying. What does it look like if we bump it up?
And so 120 provided a meaningful change of like, this is a thing that splits to four lines if we have an 80 character thing, or it's one line if it's 120 characters, which is a surprising thing to say, but that's actually the way it plays out in certain cases because the way Prettier will break lines isn't just put stuff on the next line always. It's got to break across multiple lines, actually. All right, now that we're back in the opinion space, I have a strong one.
STEPH: This is The Bikeshed. We can live up to that name. [laughs]
CHRIS: So I do want an additional configuration in Prettier Ruby. This is the thing I'll say. Maybe I can chase down Kevin Newton and see if he's open to this. But when Prettier does break method call with arguments going into it but no parens on that method call, and it breaks out to multiple lines, it does the dangling indent thing, which I do not like. I find it distasteful; I find it noisy, the shape of the code. I'm a big fan of the squint test. I know that from Sandi Metz, I believe, or maybe it's Avdi Grimm. I associate it with both of them in my mind.
But it's just a way to look at the code and kind of squint, and you see the shape of it, and it tells you something. And when the lines break in that weird way, and you have these arbitrary dangling indents, the shape of the code is broken up. And I don't feel so strongly. I actually regularly stop myself from commenting on pull requests on this because it's very easy. All you need to do is add explicit parens, and then Prettier will wrap the line in what I believe is a much more aesthetically pleasing, concise, consistent, lots of other good adjectives here that are definitely just my preferences and not facts about the world.
But so what I want is, Prettier, hey, if you're going to break this line across multiple lines, insert the parens. Parens are no longer optional for breaking across multiple lines; parens are only optional within a given line. So if we're not breaking across lines, I want that configuration because this is now one of those things where I could comment on this. And if they added the optional parens, then Prettier would reform it in a different way. And I want my auto formatter don't give me ways to do stuff. Like, constrain me more but also within the constraints of the preferences that I have, please, thank you.
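As a rough, hand-written illustration of the two shapes being contrasted here (not actual Prettier output; the method and arguments are invented):

    # Paren-less call broken across lines: the arguments hang at a continuation
    # indent, which muddies the shape of the code under the squint test
    create_invoice customer,
      line_items: line_items,
      due_on: 30.days.from_now,
      send_email: true

    # With explicit parens, the wrapped arguments read as one consistent block
    create_invoice(
      customer,
      line_items: line_items,
      due_on: 30.days.from_now,
      send_email: true
    )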
STEPH: I love all the varying levels there [laughter] of you want a thing, but you know it's also very personal to you and how you're walking that line and hopping back and forth on each side. I also love the idea. We have the idea of clean code. I really want something that's called distasteful code now [laughs] where you just give examples of distasteful code, yes. Well, I wish you good luck in your journey [laughs] and how this goes and how you continue to battle.
I also appreciate that you mentioned when you're reviewing code how you know it's something that you really want, but you will refrain from commenting on that. I just appreciate when people have that filter to recognize, like, is this valuable? Is it important? Or, like you said, how can we just make this more of the default so then we don't even have to talk about it? And then lean into whatever the default the team goes with.
CHRIS: Well, thank you. I very much appreciate that because, frankly, it's been very difficult.
STEPH: I do have something I want to yell about but in a very positive way or pranting as we determined or, you know, raving, the actual real term that wonderful listeners pointed out to us.
CHRIS: Prant for life. That's my stance.
STEPH: We had a magic show at thoughtbot. It was all remote, but the wonderful Gregg Tobo, the magician, performed a magic show for us where we all showed up on Zoom. And it was interactive, and it was delightful, and it was so much fun. And so if you need something fun for your team that you just want to bring folks together, highly recommend. I had no idea I was going to enjoy a magic show this much, but it was a lot of fun. So I'll be sure to include some links in the show notes in case that interests anyone. But yeah, magic. I'm doing jazz hands. People can't see it, but magic.
I like how you referred earlier, saying that today is more of like a tapas episode. And I'm realizing that all of my tapas are related to being pregnant, yelling, and magic shows, and I'm okay with that. [laughs] But on that note, what else is on your tapas plate?
CHRIS: Actually, a nice positive one that came into the world...I always like when we get those. So this is interesting because I was actually looking back at the history, and I had Gary Bernhardt on The Bike Shed back in Episode 269. We'll include a link in the show notes. But we talked a bunch about various things, including TypeScript. And I was lamenting what I saw as a pretty big edge case in TypeScript.
So the goal of TypeScript is like, all right, JavaScript exists, this is true. What can we do on top of that? Let's not fundamentally change it, but let's build a type system on top of it and try and make it so that we can enforce correctness but understand that JavaScript is a highly dynamic language and that we don't want to overconstrain and that we've got to meet it where it is.
And so one of the design decisions early on with TypeScript is if you have an array and you say like it's an array of integers, so you have typed that array to be this is an array of int, or it will be an array of number in JavaScript because JavaScript doesn't have integers; they only have numbers. Cool. [laughs] Setting aside other JavaScript variables here, you have an array of numbers. And so if you use element access to say, like, say the name of array is array of nums and then use brackets and you say zero, so get me the first element of that array.
TypeScript will infer the type of that to be a number. Of course, it's a number, right? You got an array of numbers, you take a number out of it, of course, you're going to have a number, except you know what's also an array of numbers? An empty array. Well, of course. So there's no way for TypeScript because that's a runtime thing, whether or not the array is full of things or not. Or imagine you get the third element from the array. Well, JavaScript will either return you the third element, which indeed is a number, or undefined because there's no third element in this array.
So that is an unfortunate but very understandable edge case that TypeScript was like, listen, this is how JavaScript works. So we're not going to…frankly, we don't think the people embracing TypeScript and bringing it into their world would accept this amount of noise because this is everywhere. Anytime you interact with an array, you are going to run into this, this sort of uncertainty of did I actually get the thing? And it's like, yeah, no, I know how many things are in the array that I'm working with. Spoiler, you maybe don't is the answer.
And so, we ran into this edge case in our codebase. We were accessing an element, but TypeScript was telling us, "Yes, definitively, you have an object of that type because you just got it out of an array, which is an array of that type." But we did not; we had undefined. And so we had, you know blah is not a method on undefined or whatever that classic JavaScript runtime error is. And I was like, well, that's very sad.
But now we get to the fun part of the story, TypeScript, as of version 4.1, which came out like the week that I recorded with Gary Bernhardt, which was interesting to look at the timeline here. TypeScript has added a new configuration. So a new strictness dial that you can configure in your tsconfig called noUncheckedIndexedAccess. So if you have an array and you are getting an element out of it by index, TypeScript will say, "Hey, you got to check if that's undefined," because to be clear, very much could be undefined. And I was so happy to find this.
We turned it on in our codebase. It found the error in the place that we actually had an error and then found a few others that I think probably had errored at some point. But it was just one of those for me very nice things to be able to dial up the strictness and enforce correctness within our codebase, and so I was very happy about it. Other folks may say that seems like too much work. And, you know, I get that, I get that take. I'm definitely on the side of I'm willing to go through the effort to have enforced correctness, but you know, that's a choice.
STEPH: Yeah, that's thoughtful. I like that, how you said you can dial up the strictness so then as you are introducing TypeScript, then people have that option. There is an argument there in the back of my head that's like, well, if you're introducing types, then you want to start more strict because then you're just creating problems for yourself down the road. But I also understand that that can make things very difficult to then introduce it to teams in existing codebases. So that seems like a really nice addition where then people can say, "Yeah, no, I really want the strictness. This is why I'm here," and then they can turn that on.
CHRIS: So TypeScript in the configuration has strict mode, so you say strict true. And that is a moving target with each new version of TypeScript. But it's their sort of [inaudible 28:14] set of things that are part of strict, but apparently, this one's not in it. So now I'm like, wait, can I have a stricter? Can I have a strictest option? Can I have dial it to 11, please? [laughs] Really rough me up and make sure my code is correct.
But it is the sort of thing like when we turn any of these on; it will find things in our codebase. Some of them, we have to appease the compiler even though we know the code to be correct. But the code is not provably correct as it sits in our file. So I am, again, happy to make that exchange. And I like that TypeScript as a project gives us configurability. But again, I am on team where's the strictest button? I would like to push that as hard as I can and live that life.
STEPH: Yeah, I like that phrasing that you just said about provably correct. That's nice.
CHRIS: That's the world I want to live in, everything you own in the box to the left, which is provably correct.
STEPH: [laughs] That's how that song goes.
CHRIS: Yeah. This is a reference to move errors to the left, which I think I've referenced before. But now that I'm just referencing Beyoncé and not the actual article, it's probably worth referencing the article, but the idea of, like, if a user hits an error, that's not great. So let's move it back to QA, that's a little further to the left in sort of the timeline.
But what if we could move it to an automated test in CI? But what if we could move it into your editor? What if we could move it even further to the left? And so, a type system tends to be sort of very far ratcheted up to the left. It's as early as possible that you can catch these. So again, to reference Beyoncé, everything you own in a box to the left.
STEPH: [singing] Everything you own in the box to the left.
CHRIS: Thank you for doing the needful work there.
STEPH: [laughs]
Mid-roll Ad
And now a quick break to hear from today's sponsor, Studio 3T.
When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy.
And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase.
You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free that's studio3t.com/free.
STEPH: I have a question for you that I'd really love to get your opinion on because I myself am waffling back and forth. Someone brought up some really great points about a concern, or just a question they had, around testing and i18n specifically. And I agree with the things that they're saying, but yet, there's also a part of me that doesn't, and so I'm Stephanie divided. And so, I'm trying to figure out where I stand on this.
So let me dive in and give you some context; I'm going to share the statement/question that they had asked. So here we go. "One of my priorities has been I should be able to review a test without having to reference any other code. References to i18n means that I have to go over to YAML and make sure the right keys have the right values, and that seems error-prone.
In some cases, a lack of a hit in the YAML defers to defaults. If the intent is to override the name of model attribute and error messages and it is coded incorrectly, the code fails silently without translating and uses the humanized attribute name, and that would go undetected. If libraries change structure, it might also fail silently as well, so to me, the only failsafe way is to be fully explicit in test."
So this goes with the idea that if you're writing tests and you're testing text that's shown on the screen or perhaps in an email, then you're actually going to assert against that string that is shown to the user instead of referencing the i18n keys. And then that also backs up this person's idea that you really want to not have to jump around. If you're reading a test, everything you really need to know about that test should live very close by.
And I really agree with that initial statement; I want everything that's very close to the test, especially if it's anywhere in that expectation line, I really want it close, so I can understand what's the expectation, what's under test, what are the inputs, what's the expected outcome. So I wholeheartedly support that idea.
But yet, I am in the camp that I then will use YAML keys instead of providing that exact string because I do look at i18n as a helpful abstraction, and I want to trust that i18n is doing its job. And so that way, I don't have to provide that string that's there because then we're also choosing, okay, well, which language are we going to always use for our test?
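[A minimal sketch of the two assertion styles being weighed here, as an RSpec feature spec; the route, key name, and YAML copy are hypothetical examples rather than anything from the episode:]

    # config/locales/en.yml (hypothetical)
    # en:
    #   mailer:
    #     welcome_message: "Welcome to the app!"

    # spec/features/welcome_spec.rb
    require "rails_helper"

    RSpec.describe "welcome page", type: :feature do
      it "shows the welcome message" do
        visit root_path

        # Option 1: assert the literal copy the user sees; readable inline,
        # but the test breaks on any wording tweak.
        expect(page).to have_content("Welcome to the app!")

        # Option 2: assert through the i18n key; leans on the abstraction,
        # but hides the copy and can keep passing against fallback text.
        expect(page).to have_content(I18n.t("mailer.welcome_message"))
      end
    end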
So this is the part where I feel divided. So I'm going to walk you through some of the reasons that I really support this idea and other reasons that I still use the i18n keys and then get your take on it. So there is a part of me that when I'm using the i18n YAML keys, it does make me sad because it reduces the readability in tests. Sometimes the keys are really well named where maybe it's a mailer.welcomemessage. And I'm like, okay, I understand the gist. I don't need to go see the actual string.
I also think they highlighted a really good use case where if you're overriding behavior and it could default to something else, your test is still going to pass, and you don't actually know. So I could see the use case there where if you are overriding, then you want to be explicit about the string that you expect back. I also think there are some i18n messages that are fairly complex, and where then I really would like to see the string.
So if you are formatting a date or a time or you're passing in just a lot of variables, then there's a chance that I do want to see how did that actually get generated for the person who's going to be reading it versus just maybe it's garbage text that came out? And I want to validate that the message that we think we're crafting is actually the one that the user is going to see.
The case against actually being explicit, my biggest one is because then I do see i18n as a helpful abstraction. And I want to trust this abstraction that it's doing its job and it's doing it well. Because then if I do use explicit strings, it makes me sad if I change text from like hello to welcome, and now I have a failing test. I don't like that idea either.
So I'm torn between these two worlds of it is very nice to have everything that you need in a test to be able to understand what is the expectation, but then I also lean into this abstraction and reference the i18n keys. So, Chris, with all of that, that was a bit of a whirlwind, [laughs] what are your thoughts? How do you test this stuff?
CHRIS: Honestly, I'm surprised that you've got that much division in your own answer because for me, this is very obvious there's one...no, I'm kidding. This is obviously complicated. Similar to you, I think I'm going to have to give a grab bag of answers because I don't have a singular thought of like it is concretely this or that.
I tend to go for explicit strings and tests all the way to...so like the readability of a test, and the conciseness of a test is interesting. I will often see developers extract. Say they're creating a user with a specific email, and then they log in with that email later, and then they expect something else. And so the email is referenced a few times, and they'll extract that into a variable called email. And I personally will tend to not do that. I will inline the literal string like [email protected], and I'll do it in a few places. And I'm fine with that duplication because I like the readability of any given line that you're reading. So I will make that trade-off within tests.
This is the thing I think we've talked about before, but the idea of DRY in tests is like I want to be careful applying that idea, Don't Repeat Yourself, to break apart the acronym. Those abstractions I will use them less than tests. And so I want the explicitness, I want the readability, I want to tell a little story, all of that feels true.
That said, to flip it around, one of the things that I'm hearing...so I think I'm hearing a part of this that is around well, we can fail silently because we fail symmetrically in both the implementation and our test. Then an assertion may actually match even though it's matching on a fallback. I think that's a configurable thing. I would actually want my test to raise if I'm referencing an i18n key that is not defined.
Now, granted, that's different for languages. And maybe this becomes a more complex story of like in production; in a different locale, it will fail because we don't have 100% parity across all our locale files. But fundamentally, I want to make sure that at least exists in our base, which I think typically would be en-US as the locale. I want to make sure all keys are looked up and found, and it's an error otherwise in our test. So that's a feeling. But am I misunderstanding that part of the story or how that configuration typically works?
STEPH: No, I think you've got it. But just to make sure we're on the same page, so if you reference a key that doesn't exist, then it is going to fail. So at least you have your test failure is going to let you know that you've referenced something that doesn't exist. But if you are referencing, like if you want to override the defaults that Rails or i18n has provided for a model and say for an error message, if you reference that, but you want to override it, but then you've forgotten, that does exist.
So you're not going to get the failure; you're going to get a different message. So it's probably not a terrible experience for the user. It's not going to crash. They're going to see something, but they're not going to see the custom message that you intended them to see.
CHRIS: Gotcha. Okay, well, just to name it, the thing that I was describing, I don't know that that would be the configuration for every system. So I would strongly encourage any system where i18n just has a singular behavior which is we fall back to the key. I want my test to absolutely tell me if that's happening. And that should be a failure of the test. But to the discoverability documentation bit, I do wonder if tooling can actually help answer the question.
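[One way to get the failure mode Chris wants in a Rails app is the framework's raise-on-missing-translations setting; a minimal sketch, with the caveat that the exact configuration name has moved between Rails versions:]

    # config/environments/test.rb
    Rails.application.configure do
      # Rails 6.1+: make the t/translate helpers raise I18n::MissingTranslationData
      # instead of silently rendering the "translation missing" fallback.
      config.i18n.raise_on_missing_translations = true
      # Older Rails versions expose a similar flag on Action View:
      # config.action_view.raise_on_missing_translations = true
    end

    # Direct I18n.t calls can also opt in per lookup:
    # I18n.t("mailer.welcome_message", raise: true)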
And as I was describing the wonderful experience I had on GitHub the other day, viewing code as just static characters in a file is both true and also, I think increasingly, a limited view of it. We have editors, and we have code hosting tools that can understand our code semantically a little bit better. There's got to be like 20 different VS Code plugins that will do the lookup for you when you hover on an i18n reference. That feels like a thing that exists, and if it doesn't, well, now I've nerd-sniped myself, and I got a weekend project. JK, I'm definitely not building that this weekend.
But that feels like can we use that to solve this? Maybe not. But that's just another thought of where we have these limitations where it's static, like those abstractions can be useful. But if we can very quickly dereference them, then the cost of the abstraction or that separation becomes smaller, and so the pain is reduced. And I wonder if that's a way to sort of offset it.
STEPH: If I can poke at that a little bit more, because I think you're touching on something that I haven't expressed or thought through explicitly, but it's the idea of, like, why do I like the abstraction? What is it that's drawing me towards using these keys? And I think it's because most of the cases, I don't care. I don't care what the string is, and so that feels nice. Like, I understand that, yes, we're referencing something. If that key didn't exist, I'm going to see a failure.
So I know that there's text there, and that's why I do lean into referencing the keys instead of the text because it feels good to not have to care about that stuff. And if we do make changes to the text, then it suddenly doesn't fail, and then I have to go update a test because we added a period or added a comma. I think that's the path of more sadness for me. And my goal is always a path of least sadness. So I think that's why I lean into it [laughs], I'm guessing. Is that why you lean into it as well? Or what do you like about referencing the keys over the explicit text?
CHRIS: No, I think I share your inclination there, and the reason that you're in favor of it, and I think the consistency like if we're going to use i18n, then we should lean in because it's a non-trivial thing to do like porting to i18n projects, and they're tricky. Getting it right from the first step is also tricky. If you're going to do it, then let's lean in, and thus let's use that abstraction overall. But yeah, same ideas as you.
STEPH: Cool. I think that helps validate where I'm at in terms of how I rationalize about this where ultimately, I do like leaning into that abstraction. And as you'd mentioned, some of those porting projects, I haven't been on one specifically, but I've seen that they are a lot of work. And so, if we have that in our system, then we want to continue to use it.
It does reduce some of the readability. Like you said, maybe there's a VS Code plugin or some way that then we can help people be able to see if they want that full context in the test and not have to jump over to YAML. But yeah, otherwise, unless it's overriding default behavior or complex, then that's what I'm going to go with is with the keys.
But I really appreciate this person's very thoughtful question and approach to testing because, normally or typically, I fully agree with I want full context in the test. And this one was one of those outliers that came up for me, and I had to really think through all the feelings and the reasons that I have for those feelings. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeee!!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris is back from vacation and gives hiring and onboarding updates.
Steph has an update about the CI slowdown and scaling CI.
They tackle a listener question regarding having some fear around potential merge conflicts.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
group_by strategy
This episode is brought to you by Studio 3T. Try Studio 3T's full suite of features for 30 days, no payment details needed.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Golden roads are golden.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Oh, I also have a new intro that I want to try out. This is thanks to Irmela from Twitter, where it's good morning and hooray; today is Bike Shed day. They technically said Tuesday, but we don't record on Tuesdays. So today is Bike Shed day, so happy Bike Shed day. And hey, Chris, what's new in your world?
CHRIS: What is new in my world? Yeah, I loved when I saw that tweet come out. It really warmed my heart. So Tuesday, in theory, is Bike Shed day, but for you and I, Friday is Bike Shed day. It's confusing breaking the fourth wall, as I so often do. But yeah, what's new in my world? I'm back from vacation, which is the thing that I did. For listeners, well, I have been absent the previous week related to vacation and all those sorts of things.
But I did what we're going to describe as a not smart thing. It wasn't intentional. The world just kind of conspired in this way. But I had two separate vacation islands that existed in my mind, and then they both kind of congealed, but as they did that, they moved towards each other, but they didn't connect.
And so what I ended up with was two weeks back to back where I was out on Thursday and Friday of one week, and then I was back for Monday and Tuesday. And then I was out for Wednesday, Thursday, Friday of the following week. Protip: that's a terrible idea. It's just not enough time to sort of catch up.
The whole of it was like the ramp-up to vacation and then the noise of vacation, then getting back and being like, oh, there are so many emails. Let me try and catch up on them. But also, on the very positive side, we had a new hire join the team, and so most of my focus on the days that I was in the office was around getting that new person comfortable on the team, onboarding, spending as much time as possible with them.
And so, all total, it was an adventure. And again, I would strongly recommend against this. The world just kind of conspired, and suddenly these three different forces in my life came together. And this was just the shape of things. But yeah, I went on vacation, and it was great. The vacation part was great.
STEPH: I will take your advice. So next time I have like two segments of PTO, I'm just going to stitch them together and just go ahead and take that whole intermittent time off.
CHRIS: That probably would have been better. Again, someone new joining the team, it was very important to me to get some time with them early on, and so I opted not to do that. But yeah, the attempt to catch up in between was a completely lost effort, I would say. But I think I'm mostly caught up now, having been back in the office for about a week, so yeah.
But let's see, what else has been up in my world? It's actually been a while since you and I have chatted based on the various timing and schedules and the nonsense vacation schedule that I had that you so kindly accommodated across a couple of weeks. Let's see, hiring and onboarding; the hiring went really well. We talked about that a bunch of weeks back. But now we're in the onboarding phase. And so next week will be the first week that all four of us on the engineering team are in the office together for the full week. I'm super excited to experience that.
We've had different portions of it, with me being on vacation and other folks being on vacation. But now, for the first time, we're really going to feel what it’s like as this team. And we're going to have our first retro as a group and all those sorts of things, so I'm very excited to do that. And thus far, all of the interactions that we've had have been really wonderful as a team. And so now it'll be the first time we're just bringing all of those various pieces together.
STEPH: I just have to clarify; you said all of y'all in the office together. Do you still mean remotely?
CHRIS: Oh, yes, yes, I just mean not on vacation, all present and accounted for on the internet. Remote is another interesting facet of what we're doing here and trying to figure out how to navigate that, particularly where there are some folks that are closer and can potentially get together in the city, that sort of thing, and then folks that are truly remote and making sure that we're...I'm very much of the opinion that if we have anyone that's remote, we are a remote team, and we must embrace async communication and really lean into that.
And I think the benefits of async communication as its own consideration are so worth it. And it's one of those things that's hard to do. It requires careful, intentional thought. It requires more purposeful communication. But I think there are a lot of good things that fall out of that. It's similar to TDD in that way in my mind, like, it's not easy. It's actually quite difficult. But all the effort that I put into trying to learn how to do that has made me a better developer, I think, on all the various fronts.
And I think similarly, async communication I believe in as a tool to force just better communication. And so I'm a big believer in it, and I've found a ton of benefit in remote that I'm also a big believer in that now. I, like everyone else, was forced into it as the world was, but I've really come to enjoy it a lot. And so yeah, so, no, not physically in the office, to answer your very short question with a long rambling aside.
STEPH: [laughs] I like that comparison. I hadn't thought about it in that way but comparing that thoughtfulness and helpfulness of async communication and then also to TDD, where it's not easy, but the payoff is so worth it, the upfront cost of it. That is something that at thoughtbot, we've had conversations around where there are folks that really value...they want to be around people. They get energy from people, and so they want that option to be able to rent a WeWork space and maybe get together with a colleague once or twice a week, and that was supported by thoughtbot.
But we also wanted to express well, if you are together, do treat everything still as a remote work environment. So let's say if you and your colleagues are on a project, but then there's a third person on that project that's remote, you still need to act like everything's remote to make sure that everyone else is still getting to participate and hear everything and be part of the conversation. So just keeping that in mind that yes, we want to support you doing your best work, and if that's around people, that's wonderful. But we are still remote-first, and communication needs to be in that fashion.
Well, that's super exciting that you'll have all of the team together. That sounds like it will be wonderful to hear about and then also retros and meetings, and yeah, it sounds like you've got a fun week ahead.
CHRIS: Indeed. I'm super excited to see what sort of new things come out of the new voices on the team and practices that each of the individuals have experienced at other companies that we can now fold together. The work that we've done so far has been very much inspired by thoughtbot ideas, and approaches, and workflows, and processes because that's what I brought to the table. But I'm super excited to bring in more voices and see what of that 100% stays on versus does anything change? Do we get entirely new things? So yeah, very excited about all of that.
But to revisit a topic that we've talked about in the past, this week is catching up from vacation, so there's a certain amount that will constrain my work. But this was definitely another week of I did not do much coding. I'm trying to think if I did any coding this week. It's possible that the answer is no. The fact that I don't even know the answer to that is an interesting one. I still have in my mind the desire to get back to it, and I think I will. But there's so much other stuff to do.
Recently, this week, there's been a lot of vendor selection and contract negotiation, which is an interesting facet of the work, but just trying to figure out, oh, we need platforms to do X, Y, and Z. And it turns out they're wildly costly and have long sales cycles. And how do you go through that, and how do you make sure that we're getting the right thing? And so that's been a big part of my work. Hiring and onboarding, again, has been a big part of it.
There's also some amount of communicating back to the broader team - what are we doing? What is the product organization or the engineering team delivering? And so I'm okay at presentations, I think. I'm comfortable with giving presentations. The thing that I struggle with is finding the optimization point in preparation. I will, of my own accord, over-prepare. And that may sound a little bit like, oh, what's my greatest weakness? That I care too much.
But I mean it sincerely as like, I would love to find that right amount of like, it's like an hour of preparation for a 15-minute presentation to the team. That's the right ratio. And I just hit that on the head, and it's great. But whenever I know that I need to give a larger presentation, it will distract me. And it's work that can expand to fill whatever time you give it, and so trying to thread that needle is a tricky one for me.
STEPH: Yeah, I'm with you. Presentations, for me, they're one of those things that it's very stressful, anxiety-inducing; all the prep feels distracting from some of the other work that I want to do. Or maybe I'm excited about the presentation, and that is the work that I want to do. But it's not until it's done that then I'm like, oh, that was fun. That went well. This was great.
It's not until after that then I feel good about it. So the lead-up to it is very stressful. And so if you can optimize that to say, well, I know exactly what this group needs, where I can cut corners, where I have to go into details, that sounds incredibly valuable.
I'm curious, so this is probably a bad idea, but it's the only way I really know how to find those boundaries is you got to experiment and tweak a little bit and let yourself fail a little bit or just be very explicit with folks about this is what the presentation is, if you expected something else, let me know. Or here's what I've got, have someone to bounce ideas off of.
But there's such a nicety if you can find that I'm going to try failing just a little bit and get some feedback. Or maybe it's not failing at all, but you are testing that boundary to find out did this work, or should I put more effort into this? I'm curious, do you have thoughts on that? How you're going to find that right optimization level?
CHRIS: Not as specific to truly honing in on whatever the correct number is. The thing that I've been doing is I...this will sound complicated, but I wait until the last minute but a specific version of the last minute. So at most, I start working on it an hour and a half before the meeting. And these are, again, not particularly large presentations, and it's a recurring sort of thing. So it's sort of engineering talking about the work that we've done recently and trying to find the right level of detail and whatnot, so giving myself a smaller time window.
I think that's enough time to tell the story and to find a meaningful way to tell the story and grab the screenshots and all of that, but it's constrained so that I don't over-optimize, over-edit, overthink. I'm using Deckset, which is a presentation tool that starts from a Markdown file. So it's just a Markdown file that I'm editing. That's great; that works really well. I do not twiddle with fonts. There's one theme that I use. It is white background with black text. That's it.
And I think I've given myself deep permission to be the CTO that has a white background with black text and no transitions. I don't even go into presentation mode for it. I'm literally showing the UI of Deckset, and then just hitting the arrow to move between them. But the window chrome and the drop-down menu at the top are still visible because I want to see people's faces as I'm presenting. And I haven't figured out how to do that correctly on my computer. So I'm just presenting the window of Deckset.
And I'm like, I have given myself permission to do all of those things, and that has been super helpful, actually. So that's a version of me negotiating what this means. Where I do invest the effort is trying to enumerate all of the things and then understand what is the story that I'm telling around the things and how do I get the message right for the collective audience? So, for a developer team, I would say much more nuanced technical things, for marketing folks, it would be at this end of the spectrum.
I do lean on the old idea of, like, let us talk about it in the mindset of the user, so it's very much user-centric, but then some of the things that we're doing are important but invisible to the users. They're part of how we broadly build the platform that we need to, but they're completely invisible to users. And so, how do I then tell that story still with ideally a user-centric point of view? So that's where I do invest the time, and I give myself complete freedom to just grab screenshots, put black text on a white background, and then talk over it.
STEPH: I love it. Because you made this comparison earlier, so now I'm thinking of a comparison of like TDD-driven presentations where it's like, what's the end goal? What's the assertion? What's the outcome that I want? And then backfilling from there. Or, in your case, you're talking about what's the story that I need to tell? What's the takeaway that I want people to have? So then you start there, and then you figure out what's the supplemental information that you need to provide to then get there.
And the fact that you don't twiddle with fonts and all that stuff, I think you're already really on your way [chuckles] in terms of finding that right optimization of I need to present a clear and helpful message but not sink too much time into this.
CHRIS: Black text on a white background is very clear. So...
STEPH: [laughs] If there are any designers listening to this, they might just be cringing to this conversation right now. [laughs]
CHRIS: I actually wonder about what the...I know that dark mode is a thing that lots of folks care about. I'm thinking about the accessibility affordance of it now. I'm actually thinking through it now that I said it somewhat flippantly. I actually don't know what I'm talking about, but it was easy, and it wasn't a choice that I allowed myself to think about. So there we are.
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: In my personal world, so Tim and I are moving. We're on the move. We are transitioning from South Carolina to North Carolina. So I think I may have shared a bit of this news, but Tim has acquired his first software developer job, which is just phenomenal. It is in North Carolina. He does need to be there in person for it. So we are currently selling our South Carolina house and then moving.
It's not too far. It's like three and a half hours away to where we're moving in North Carolina because we're already pretty far north in South Carolina. So yeah, there's always another box that needs to be packed. And there's always just something else that you forget, another thing that you want to take to Goodwill or try to give to a neighbor. It's a good way to purge. I will definitely say that every time you move, it's a good time to get rid of things.
CHRIS: That is a very cup half-full point of view on it, but yeah, it feels true.
STEPH: [laughs] It's true. I'm a very cup half-full person. For more technical news, for more client stuff that I've been working on, so I think the last time we chatted, I was sharing that we had this mysterious CI slowdown where we were going from CI builds taking around 25 minutes to spiking to 35, sometimes 45 minutes, and I have an update there. So we found out some really great things, and we have gotten it back down to probably more about 23 minutes is where the CI is running currently.
As for the actual who done it, like what caused this specific slowdown, we got to a point where we were like, we're doing so much investigative work to understand exactly what caused this that it felt less helpful because at the end of the day, we really just wanted to address the issue. And so solving the mystery of exactly what caused this started to feel less and less meaningful because we're like, well, we want to improve this anyways.
So even if we found that one line or something that happened that caused this, we want a bigger solution to this type of problem because then this could happen again like, someone else maybe adds one line or something happens, and things get thrown off balance, and then suddenly, we have a slowdown, and that just takes too long to investigate.
So I don't have a concrete who done it answer for the slowdown. But we've learned a couple of things; one of the things that we learned is we're using parallel_tests to then split our tests across all of the CPUs that are then running the RSpec test. And we realized that we weren't actually splitting tests based on runtime data.
So there are a couple of ways that parallel_tests will let you divvy up your tests, and two of those ways are file size and runtime. So you can split the files based on the size of the file, or you can use info from the runtime log. So then parallel_tests can be a little bit more intelligent about like, well, I know how long these files take, so I'm going to split it based on that versus just the size of the file.
And we realized that we were defaulting and using the file size instead of the runtime even though we all thought we were using runtime. And the reason for this took a bit of source code diving because looking at the README for parallel_tests; it looked like as long as we're passing in a file to the runtime log path, then parallel_tests is going to use that runtime data. But then there's some sneaky-sneaky in there that I'll actually link to in the show notes in case anybody's interested.
But if you are setting a particular flag and don't pass in another flag, then parallel_tests is going to be like, cool, I'm going to portion out your test based on file size instead of the runtime. So we fixed that, or we updated that, and that has had a significant improvement for the test being split out more evenly. So we didn't have a CPU that was taking 25 minutes while the next CPU was only taking like 17 minutes. And parallel_tests also provides some really helpful data that because we have that runtime log file, we could tell how long each CPU is running and how they're getting split.
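[For reference, a sketch of the kind of setup being described, using the parallel_tests gem's runtime logger and group-by-runtime option; the formatter class, paths, and flags follow the gem's README as best as can be recalled here, so treat them as assumptions to verify:]

    # spec/spec_helper.rb -- record per-file runtimes so parallel_tests can
    # split groups by measured runtime instead of by file size.
    require "parallel_tests/rspec/runtime_logger" if ENV["TEST_ENV_NUMBER"]

    RSpec.configure do |config|
      if ENV["TEST_ENV_NUMBER"] # set by parallel_tests for each worker process
        config.add_formatter ParallelTests::RSpec::RuntimeLogger, "tmp/parallel_runtime_rspec.log"
      end
    end

    # Then split by the recorded runtimes (shell command shown for illustration):
    #   bundle exec parallel_rspec --group-by runtime --runtime-log tmp/parallel_runtime_rspec.log spec/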
So the past couple of weeks, it's heavy measure, measure, measure, take all the data, create lots of graphs, understand what's happening, and then look for ways to then fix it. So figuring out how these files or how the tests were being distributed across, we had a number of graphs that were just showing us what's actually happening. So then we could track the improvement, so that was really nice. It was the measure twice and change something once [laughs], and then we got to see the benefit from it.
For scaling the CI, so we are looking at adding more machines to then process tests. That has been really interesting because we're at the point where we are adding more machines, but if we add more machines, we're not going to speed up how quickly our CI processes everything. Because we are splitting tests file by file and not by example, we're always going to have this effect of a tentpole. So if we have a file that takes 10 minutes, that's the fastest we're ever going to get.
So Joël and I are in discussions right now of where we still really want to understand what's the fastest we can achieve just by adding another machine or two versus are we at the point that, okay, scaling horizontally and adding more machines has been helpful, but we have reached the breaking point where we actually need to divvy out the tests at a smaller scale and have a queue approach? So then that way, we can really harness the power of then we don't have one file that takes 10 minutes, and we don't have to care either.
So if somebody adds a test to a file and suddenly a file goes from 12 minutes to like...well, hopefully, they added more than one test. [laughs] But let's say it goes from 10 minutes to 15 minutes; we don't want to have to manage that and understand that there's a tentpole. We just want to be able to divvy out all the examples and then have a queue approach.
That's probably going to be MVP two of this, but we're still waiting that out. But it's just been really interesting to realize that scaling horizontally really only takes you so far. Like, we've added one machine, maybe one more, so then we'll have three total. And then it's like, okay, that's great, but now we need to actually address this other bigger problem.
CHRIS: I know we've talked about this in previous episodes, but I'm super interested to hear as you progress into the queue approach because that's something that's been top of mind for me for a while. I don't know if we've talked about it before specifically, but Knapsack Pro is the one thing that I'm aware of that's available as a service that does this. Do you have other tools that you're looking at for that, or is this still in the exploratory phase?
STEPH: Knapsack is still a top contender. There's also RSpec Queue; that's another one that we have in mind. Unfortunately, I really wish parallel_tests let us do this, but parallel_tests just doesn't quite offer that feature. And someone in the team, I think, even reached out to the maintainer of parallel_tests, and they were like, "Yep, you're totally right. We're actually more focused in making sure that this works for everybody versus has specific features." And they gave a really nice thoughtful response, which we appreciated, so at least we could confirm that parallel_tests won't do exactly the thing that we need. So yeah, RSpec Queue, Knapsack, I think those are the top two that I'm familiar with.
CHRIS: Gotcha. I don't know if I've seen RSpec Queue before. I'm intrigued. So actually, an interesting thing happened. While I was away on vacation, one of the folks who just joined the team as one of their first steps joining the team, noticed that our CircleCI config wasn't actually taking advantage of the parallelism that we had configured; that's on me. I turned on parallelism and then never did anything with it, which is a complete waste.
And so I was super happy to come back and saw that CI, which had been creeping up to six or seven minutes, had suddenly dropped back down to two to three minutes sort of thing. I was like, this is amazing. But now I'm at the point where our RSpec suite is spreading across the different, I think, it's like four different cores that we have available, but it's not doing it as efficiently as we would like. So I'm like, oh, okay, can we dial it up to 11?
But I'm intrigued; I've only looked very much in passing at RSpec Queue literally now that you've mentioned it. But Knapsack Pro exists as a different service. And so, as far as I understand, the agent that's running is going to communicate and say, "Give me another test. Give me another test." But there needs to be some external process running and managing that queue. Does RSpec Queue do that? Somebody owns the queue, right? Who owns it? Do you understand how that works?
STEPH: So I was definitely familiar with this. If you'd asked me a couple of weeks ago, when I was diving heavily into the queue work, before we transitioned to focusing on adding new machines, I was very up to speed on this. So I may get a couple of things wrong, but my understanding is that with RSpec Queue, you're going to manage your own queue. So you bring in the gem and then use something like Redis, so then you are in charge of that.
And with Knapsack, then you are using their service to manage that queue. And then they have found ways to optimize around what if you can't reach their API or something; their service is down? And making sure that that doesn't impact your CI so then you can't still run your test just because you can't reach their queue somewhere. So that's my current understanding, RSpec Queue you own it, Knapsack they're going to own it.
CHRIS: Gotcha. That makes sense. That about maps to what I was expecting, and so I wonder if I could use RSpec Queue. Now I'm going to have to go research that. But it's always nice to have new things to look at on this to go at ludicrous speed. That's what I'm going for. I want to get to ludicrous speed for our CI.
STEPH: I like that name. I haven't heard of that speed. I feel like I have. I feel like you've dropped that before, [laughs] like you've used that.
CHRIS: I don't know; quite possibly, I have. It's a Spaceballs reference. It's a throwback to days of old.
STEPH: Well, then we may be investigating RSpec Queue together. Because yeah, Joël's and my goal for this week has been very much to figure out what are our boundaries with TeamCity? What are our boundaries with horizontal scaling? And I think we're both getting to that conclusion of like, okay, this has been good, it's helpful, but we really need to look into the queue stuff if we really want to see significant progress.
Also, some of the stuff we're doing because we're pushing on it, we are manually splitting files. So if there's a file that has created this tentpole that's taking 10 minutes, but we know ideally most of the other files only take six minutes, then we are splitting that file, so then we have two spec files that are associated with the same class. And then using that as a way to say, okay, what would this look like? Let's say if this were better balanced.
And that's also been pushing us in the direction of like, okay, this is fun, this is informative, but it's not sustainable. We don't want to have to keep worrying about splitting these files and doing this manually and pushing us towards that queue-based approach.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Studio 3T.
When you're developing applications, it can often be a chore to work with your underlying data. Studio 3T equips you with a complete set of tools to work with MongoDB data. From building queries with drag and drop, to creating complex aggregation pipelines; Studio 3T makes it easy.
And now, there's Studio 3T Free, a free edition of Studio 3T, which delivers an essential core of tools. This means you can get started, for free, with Studio 3T Free, and when you're ready, you can upgrade and enjoy even more features through Studio 3T Pro and Studio 3T Ultimate. The different editions unlock more tools and additional integrations with MongoDB, SQL, Oracle, and Sybase.
You can start today by downloading Studio 3T Free, which also includes a 30-day free trial of all the features of Studio 3T Ultimate, so you can try out some of the enterprise features as well. No credit card required. To start your trial, head to studio3t.com/free. That's studio3t.com/free.
STEPH: But shifting gears just a bit, we have a listener question. So this person wrote in, "I have listened and loved your podcast for many years dreaming of getting a job with people half as thoughtful and intentional as you, and finally it happened. I have my first junior dev job, and my co-workers and bosses are all super awesome. Up until now, I've been flying solo.
And in my new job, I've been finding it very unsettling to resolve merge conflicts. As careful as I am to comb through the conflict and contact the other developer if needed, I feel like I am covering my eyes and crossing my fingers whenever I select the resolve conflict button. Is there some type of process or checklist I could rely on? Is it normal to have such a high fear factor with a merge conflict? Any advice or maybe just a bit of been there felt that way...?"
All right. So one, that's fabulous, congratulations on the new job. That's very exciting. I think I've voiced this many times, getting your first junior dev job is so hard, and so I'm so excited when it works out for people, and they get there. And then, for the merge conflict, I have thoughts. Chris, do you want to start? Shall I start? How are you feeling?
CHRIS: Why don't you start? Well, actually, I'm going to add some pre-commentary, and then I think you should lead into our actual answer. But first, I just want to say a deep thank you to this listener for sending in the question. Again, we really love getting these questions. And also, thank you for the very kind words.
To be clear, listener, if you're going to send in a question, you don't have to say very kind words, but they are really wonderful to hear and especially to hear if we had any part in helping this person feel more comfortable getting into that first dev role and having an idea of what maybe a good version of that could look like.
Additionally, I really love the shape of this question because it gets into the people stuff and the tech stuff, so I'm super excited about this question. Actually, both Steph you and I responded very quickly to this one. And so it really did catch our attention because I think it crosses that boundary in an interesting way that I think is sort of The Bike Shed space in the world. But to that end, you did reply first in our email chain. So I think you should start, and then I'll follow on after that.
STEPH: I should also check with you. Wait, so you don't have a filter on your email that's like kind words only to The Bike Shed, and then you filter out anything that's negative?
CHRIS: I have a sentiment analysis, and if it's even neutral, it gets sent straight to the trash, only purely positive. No, constructive feedback is welcome too. We would love to hear that. Well, love is a strong word. We would accept it into our inboxes and then deal with it, but yeah.
STEPH: [laughs] It will be tolerated. Must require at least three hearts in all emails; just kidding. [laughs]
CHRIS: Are you kidding? I'm counting them now, and I see a lot of hearts in our emails.
[laughter]
STEPH: Merge conflicts. So is it normal to have such a high fear factor with a merge conflict? I'm going to say absolutely. Resolving a merge conflict can be really tricky and confusing. And I think; frankly, it's something that comes with just time and practice where then you start to feel more confident.
As you're resolving these, you're going to feel more comfortable with understanding what's in the branch and the code changes that you're pulling in versus something that you need to keep on your side. So I think over time, that fear will subside. But I do think it's totally normal for that to be a very scary thing that then takes practice to become accustomed to it.
As for if there is some type of process or checklist, I don't know of a particular checklist, but I do have a couple of ideas. So one of the things that I do is I will often push my code to whatever management system I'm using. So if I'm using GitHub, then I'm going to push up my branch because then, at least that way, someone has a copy of my work.
So if I do something and I completely botch it locally, I know I can always reset to whatever it is that I pushed up to GitHub, so then that way, I have more freedom to make mistakes and then reset from there. So that is one idea is just put it somewhere that you know is safe, so that way you now have this comfortable sandbox to then make mistakes.
The other one is run the test. So hopefully, the application that you're working with has tests that you can trust; if not, that could be another conversation. But if they do have tests, then you can run those, and then hopefully, that would let you know that if you have left something in, like maybe you left a syntax error, or maybe you removed some code that you shouldn't have because you weren't sure, then those tests are going to fail, and they'll let you know that something went wrong.
And you can run those while you're still in the middle of that merge conflict as long as you've addressed like...well, no, if you haven't addressed syntax errors, that's still a time that you can run it, and it's going to let you know that you haven't caught all of the issues yet. So you don't have to wait till you're done to then go ahead and run that.
A couple of other ideas, practice. So go ahead and create your own merge conflicts on purpose. So this is something that I think is really helpful because it will teach you, one, what causes a merge conflict? Because now you have to figure out how to create one, and then it will help you become comfortable because you're in a completely safe place where you have made up the issue, and now you're having to resolve that, so it'll help you become more confident in reading that merge conflict message.
And then last but certainly not least, grab a buddy so if you are just feeling super nervous. Anytime I'm doing something that I just feel a little nervous about, then I just ask someone like, "Hey, would you look over my shoulder? Would you pair with me while I do this?" And I have found that's incredibly helpful because it eases some of my fear. I've got someone else that is also looking through this with me. But I also find it really helpful because then it encourages that person to be like, hey, if they're ever in a spot that they need to pair, I want them to know that they can also reach out to me and have that same buddy system.
I guess that's my checklist. That's the one I would create. How about you, Chris? What do you think?
CHRIS: Well, first, I just want to say that basically everything you said I 100% agree with, and I think it was great that you actually replied to the email first and said those things first because everything that you said is true and is foundational. And it's the approach that I would definitely recommend taking as well.
My answer, then adding on to that, has to do with how I've approached learning about this space in my own career. To name it, to answer the core question, is it reasonable to be scared of this? Yes, Git is confusing. Git is deeply confusing. I absolutely love Git. I have spent a lot of time trying to understand it, and in understanding it, I've come to love it. But it's only through deep effort that I've gotten to that place. And actually, the interface, the way that we work with Git on a day-to-day basis, particularly the command line is rough.
I'm going to say, what does Git checkout do? Well, it does just about everything, it turns out. That command just does all of the stuff, and that's too much. It's, frankly, the UI for Git, specifically the command-line user interface; the commands that we run to manipulate the Git history are not super intuitive. But it turns out if you pop open the hood, the object model underneath the core way that Git stores your code is actually very simple. I find it's very easy to understand, but I, unfortunately, have found that I can't understand it without dropping down to that level.
And so, in my own adventures, I kind of went deep on this topic a couple of years ago, and I created a Vim plugin because obviously, that's the best way to encapsulate your knowledge about Git, and so I created a plugin called Vim Conflicted. I don't necessarily recommend the plugin. It's fine if you want to use it. I don't do a great job of maintaining my plugins at this point, to be honest. But there was a weekend where I was trying to understand the world of Git and merge conflicts in particular, and it was really sort of fighting me.
And as I started to understand it better, there's a little diagram that I drew on the README that I think is probably the most interesting artifact from it. But it's this idea that there are actually four files, four versions of a given file involved in any merge conflict. And that realization shifted my thinking a good amount. And then as I started to think about that, I was like, oh, okay, and then I want to see this version of it, and this version of it, and this combination, and the diff between these two, and that was super helpful for me.
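For reference, here's a minimal sketch of how those four versions can be inspected from the command line during a conflict; the file path is just a placeholder. Git's index holds three staged copies (the merge base, "ours," and "theirs"), and the working tree holds the fourth, the copy with the conflict markers.

```sh
# While a merge is conflicted, the index keeps three versions of the file:
git show :1:app/models/user.rb   # stage 1: the common ancestor (merge base)
git show :2:app/models/user.rb   # stage 2: "ours" -- the branch being merged into
git show :3:app/models/user.rb   # stage 3: "theirs" -- the branch being merged in

# ...and the working tree holds the fourth: the copy with conflict markers.
cat app/models/user.rb
```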
More generally, I also made a course on Upcase about Git as I tried to understand it better. And there are two particular videos from the middle of the course named the Git Object Model and Object Model Operations. And again, those two videos deal with popping the hood on Git, looking inside it, and what actually is happening to your code as you perform different Git operations.
One of the wonderful things about Git is it is immutable. So you're never going to destroy your Git history if you've committed. So one of the rules that I have is just always be committing, never worry about committing. If you've committed, you can always get back to that version. You would have to try very hard to destroy committed code in Git. It's the things that you do when you haven't yet committed the code that are dangerous.
So commit the code, like you said, Steph, push that up to GitHub, so you have a backup of it. You will have a backup locally as well, and that's a thing that you can come to be more comfortable with. But then, from there, there's actually a lot of room to experiment and play around because there's a ton of safety in the way that Git stores the code. You do have to know how to get at it, and that's the unfortunate and tricky part.
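As a concrete illustration of that safety net, the reflog records every place HEAD has recently pointed, so a committed state can be recovered even after a botched merge or reset; which entry to reset to depends on your own history.

```sh
git reflog                  # list recent positions of HEAD, newest first
git reset --hard HEAD@{2}   # example: jump back to where HEAD was two moves ago
```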
But I think, again, to sort of summarize, yes, this is confusing. Your feelings are absolutely valid and totally grounded, but it is also knowable, is what I would say. And so, hopefully, there are a couple of breadcrumbs that we've laid there in how you might go about learning about it. But yeah, find a buddy, watch a video or two, and give it a try. This is definitely a thing that you can get there but totally reasonable that your first approximation is this is confusing because it sure is.
STEPH: I often forget that Git has that local copy of my code, so I'm so glad you mentioned that. And then yeah, I saw when you linked to Vim Conflicted. The diagrams are great. I had not seen these before. So yeah, I highly recommend folks take a look at those because I found those very valuable.
CHRIS: In that case, it's a white background, but I allowed myself to use some colors in the little images to help differentiate the different pieces. And it's an animated diagram, so it's really a high bar for me. [laughs]
STEPH: So now the question is, did you go too far? Have you over-optimized? [laughs]
CHRIS: I'm going to be honest; it was a weird weekend.
STEPH: [laughs] Well, I don't think you've over-optimized. I do think it's wonderful. And I think this is definitely a reference that I'll keep in mind for folks whenever they're learning about merge conflicts or just want to get more knowledgeable about them. I think these diagrams are fabulous.
CHRIS: Well, thanks. Yeah, I hope...they frankly were a labor of love, and the course is three and a half hours of me rambling about Git, so hopefully, it's useful to folks. If anything, it was super useful to me because my understanding of Git was deeply crystallized in making that course. But I do hope that it's useful to other folks. And particularly those two videos that I highlighted, I think are the ones that have been most impactful for me in terms of how I think about working with Git and getting comfortable with it.
STEPH: Do you still receive emails every now and then from people, or maybe they are tweets from people that are like, "Hey, I watched one of your videos and found it really helpful." I feel like I still see that every now and then where people are just commenting on like, they watched some of the content that you created for Upcase a while back, and I think that's really cool. I'm curious if you still see that.
CHRIS: I do, yeah, from time to time. It is absolutely wonderful whenever I hear that. Again, listener, do not feel the need to send me anything, but it is nice when I get them.
STEPH: It does seem like I'm fishing for compliments now. [laughs]
CHRIS: It does seem like that. So I want to be clear that's not what's going on here. But it is nice because I do actually forget that they're out there. But a lot of the stuff that I produced for Upcase, in particular, I tried to do more timeless stuff, so like the Vim content was really about how Vim works in a deep way.
And the tmux course, the Git course...I look back at them, and a couple of little syntactic things have changed. But I'm still like, yeah, I agree with me from six years ago or whatever it was. Oh, that's a weird number to say, but I think it's honest. It's fine. I'll just be over here. [laughs]
STEPH: [laughs] That's helpful to hear, though, because that's always one of my fears in creating content. It's like, I don't know, it's okay if it's more opinionated and I change my mind and disagree with my past self. But it's more like, yeah, keeping up with is this still accurate? Is this still reflective of the times? And then having to keep that stuff updated. Anywho, that's a whole big thing, content creation.
CHRIS: Content creation, but there's a parallel to it that many folks will not be creating content, and I think that's a very fine and good way to go about progressing on the internet. But there's a parallel to it in learning that I think is useful. I, at this point, will typically lean in if there is something in the SQL layer that is fighting me. I have never found effort spent trying to better understand the structured query language to be wasted time.
Similarly, Git is one of those tools that is just so core to the workflow that it felt very worth it to me to spend a little bit of extra time to get to a deeper level of comfort with it, and I have not regretted one minute of that. Vim and tmux are pretty similar because they're such core tools for me.
But React, I would not call myself a deep expert of React. I follow some of the changes that are happening but not as deeply, and I'm not as worried about it. And if I'm like, I don't know how to do this thing, should I spend two hours learning about it or not? With frameworks and tools that have not been part of my toolset for as long, I will spend less time on them.
And I think that the courses that I produced on Upcase mirror that. They're the things that I'm like; I feel very true about these things versus other stuff. Maybe it was in a weekly iteration episode or something like that. But that very much mirrors how I think about learning as well. What are the things that I'm going to continually invest in versus what are the things that I’ll sort of keep an eye on from a distance but not necessarily invest as much time in?
STEPH: There's a particular article that you're making me think of as we're talking about content creation and, as you mentioned, finding the things that you always find value in investing in. There's a wonderful blog post that was recently posted on the thoughtbot blog by Matheus Richard, and it's called The Opportunity Will Find You.
And it made me think a lot about what you're talking about, find the things that you're excited about, find the things that you think are a good investment and just go ahead and lean into it. And it's okay if maybe that's not the thing that you're using currently at your work, but if it's something that gets you excited, then go ahead and pursue that.
So in this article, for example, Matheus uses the example of learning Rust, and that's something that he's very excited about and wants to learn more about. And then there's another one where he started looking into Crafting Interpreters. And that has actually led to some fruitful work around creating custom RuboCop cops because he then had more knowledge of how the code is being interpreted, so he could write custom cops.
So yeah, plus-one to finding the things that give you energy and joy and leaning into that and investing in it. And if you share it with the world, that's fabulous, and if you don't, then keep it for yourself and enjoy it, whatever makes you happy. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Steph celebrates Utah's adoption day and Daylight Savings Time and troubleshoots a CI build time that had suddenly spiked for a client project using TeamCity. She also shares a minor update regarding the work that thoughtbot is doing to scale horizontally and add more machines quickly and efficiently to process more RSpec tests.
Chris was alarmed by logs and unknown-unknowns and had some fun using Git down. Git bless his heart!
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. Today is Utah's adoption day. So officially, one year ago, we adopted Utah. He's about a year and a half old now because we got him when he was around the six-month mark. So Utah, aka Raptor, which is the nickname that you gave him, and aka UD the cutie is his other nickname...which, I've forgotten, why do you call him Raptor? Why is that a name?
CHRIS: Because there's a Utah Raptor.
STEPH: A person? [laughs]
CHRIS: No, I think it was like the fossils were found in Utah. But the Utah Raptor is a type of dinosaur. And so when I heard Utah, my brain went to Raptor, and then I dropped the Utah sort of a Cockney rhyming slang sort of thing. Shout out to Matt Sumner real quick. But yeah, Raptor.
STEPH: Cool. Cool. Cool. I'm so glad I asked. Now I know. I just accepted it when you called him Raptor. I was like, sure, he can be a Raptor. [laughs]
CHRIS: I feel like that says a lot about me that you were just like, okay, why not?
STEPH: [laughs]
CHRIS: That's different and has no apparent connection to the actual name of the creature, but that's fine. I might be a nonsense person.
STEPH: Or me for accepting it. You share a lot of nonsense, and I accept a lot of nonsense. That might be our dynamic. [laughs] So it works out.
CHRIS: That just may be our dynamic.
STEPH: That's why I'm always so nice with the good idea, bad idea, or even terrible. [laughs]
CHRIS: You're like, it's all nonsense 100% of the time, but yeah. So Utah is one year into living with you folks. So that's lovely.
STEPH: Yeah, and he's growing up so well. Oh, and I've been training him for one of his latest tricks. I'm very excited because it seems to be really sinking in. So every night, we take him out for his final potty break before we go to bed. And one night, for some reason, I started singing The Final Countdown. [singing] It's the final countdown. But I started singing it's the final potty instead.
So now, when it's time to go out for the bathroom late at night, I look at him, and I start singing. And I start singing [vocalization], and it's working. He's starting to recognize that when I started singing that tune, he's like, okay, and he gets up from his comfy spot, and we go outside. And it brings me a lot of joy.
CHRIS: That is perhaps the best use of Pavlovian conditioning that I've ever heard of. Also, I really appreciate that you both mentioned the final countdown but then said just in case anyone is unfamiliar with the tune, let me hum a few bars. Thank you for doing the service there.
STEPH: I have been singing so much this week. I don't know if Joël Quenneville, who I've been pairing with a lot, appreciates that. Sorry, Joël. But I have been singing so much. And I think that's post-vacation vibes. That's what vacation does for you. And it helps you get back into, you know, lots of singing or at least it does for me.
Let's see, what else is going on this week? So this is the week that we have DST in the USA, so Daylight Saving Time, aka summer time, where we advance our clocks. Although this is going to air late, so by the time people are hearing this, you're going to have already dealt with all those bugs that crept up. But they are creeping up this week, where people are starting to notice a lot of those flaky specs that aren't technically flaky. They're actually breaking for real reasons because they were written in a way that doesn't consider that DST boundary.
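As an illustration of the kind of fix involved, here's a minimal sketch of pinning the clock in a spec with ActiveSupport's time helpers so a DST boundary can't make it flaky; the Task model and its due_at column are hypothetical.

```ruby
require "rails_helper"

RSpec.describe Task do
  include ActiveSupport::Testing::TimeHelpers

  it "is due one calendar day after creation" do
    # The US springs forward overnight on 2022-03-13; freezing time the day
    # before exercises that boundary deterministically.
    travel_to Time.zone.local(2022, 3, 12, 12, 0, 0) do
      task = Task.create!(due_at: 1.day.from_now)

      # 1.day advances the calendar date and keeps the local wall-clock time,
      # so this passes even though that particular day is only 23 hours long.
      expect(task.due_at).to eq(Time.zone.local(2022, 3, 13, 12, 0, 0))
    end
  end
end
```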
CHRIS: It's as if you spend so much of your time fixing flaky specs that that's where your mind goes with DST. Because I'm going to be honest, part of what you're doing right now is telling me that this is coming up, and I didn't know. I had forgotten about that, which is very exciting, except you lose an hour of sleep for this one, right? Or is it that you gain?
STEPH: We're going forward. Yeah, it's fall back and then spring forward. That's how I remember it.
CHRIS: Worth it. I'll take the sunshine at night.
STEPH: Yeah, it's supposed to be so we have more sunshine during the daylight hours. That's the reasoning for the nonsense, the headaches. On some more technical news, when I came back from vacation, we noticed that the CI build time has suddenly spiked for the client project where previously we were averaging, I'd say, around 25-26 minutes. There's definitely a range there. But that seems to be pretty consistent.
And right now, builds are taking more like 35, sometimes upwards of 45 minutes. And so it's been a bit of a whodunit, or what-caused-it, adventure of figuring out why, what's causing the spike. And so Joël and I have been pairing heavily on that to investigate what's going on, and we've learned a lot about the features that TeamCity offers just by diving into this particular issue.
One thing that brought me joy is, looking through all the builds that are taking place on TeamCity, I noticed there are a number of builds that are using the RSpec selective testing that I added, where if you only change a test, then we're only going to run those tests instead of the whole suite. And it was one of those changes where I thought, okay, maybe someone's going to get use out of this. Joël and I will probably get use out of this. But I'm actually seeing it in about one in every ten builds, something like that. And I'm just like, oh, this is awesome.
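The selective-testing idea is roughly the following; this is only a sketch, not the client's actual CI script, and the origin/main base branch is an assumption.

```sh
changed_files=$(git diff --name-only origin/main...HEAD)

if echo "$changed_files" | grep -qv '_spec\.rb$'; then
  # Application code changed too, so run the whole suite.
  bundle exec rspec
else
  # Only spec files changed: run just those specs.
  echo "$changed_files" | grep '_spec\.rb$' | xargs bundle exec rspec
fi
```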
One, people are improving tests. That's amazing. And then two, they're benefiting, especially while we have this spike going on. So that was a suggestion from you that I appreciate because it is paying dividends. And so that brought me a lot of joy while looking into this other issue, which we haven't resolved yet. We think it has something to do with how the tests are being balanced across all the different parallelized processes. And we think that there is an imbalance that has happened, and that's what's really throwing things off.
So we can see that one particular process is taking around 26-27 minutes, but then the next process that's highest in time is only taking 17 minutes. So it's like, why is there suddenly ten more minutes that's being attributed to one process? And why is that not getting spread out? So still looking into that. That's the mystery for this week. But that's mostly what's going on in my world. What's up in your world?
CHRIS: What is up in my world? I'm going to say a quite alarming thing happened this week, which was we were investigating some changes, or we were investigating some behavior where the particular portion of the system ended up in the logs, just sort of combing through. And I happened to notice this one log line that...our logs tend to be somewhat verbose. They're JSON-structured log format. I've talked about the lograge setup that we use in the past, but there's a bunch. These are long lines of JSON-structured data.
But this line that caught my eye was not. It was just some text, and it said, "Unreported event: and then some other texts." And I was like, ah, what? Who didn't report which to when? I did some digging, eventually figured out that this was Sentry. Sentry was logging that it had not reported an event to us. But had we not randomly happened upon this in the logs, which is sort of a random thing to see, we would have missed this, which is scary. I mean, it was missed for a little while. And so Sentry was not reporting certain events.
We had made a change, particularly to Sentry's before_send configuration. So there's a way that you can do some amount of filtering client-side, the client being, in this case, our Ruby app. So that's the client side of Sentry, and then there's their server backend. So that, weirdly, is the way the client-server split works in this case.
But the idea is you can do some proactive filtering of being like, you know what? Rather than sending a ton of noise...because we know there's this one error that we can't stop for reasons. It's a JavaScript Chrome extension that's getting embedded in the app. That doesn't mean anything; that's just noise. Rather than even sending those over to Sentry, let's proactively filter them out. before_send is a function within the Sentry SDK that allows you to do this.
But it turns out if you raise an error in there, if you happen to have introduced something that doesn't cover all the possible edge cases, then Sentry will just not let you know and will log, interestingly, that they did not report the event. I'm going to throw it out there that I would love it if Sentry were to Sentry me about that...that's where I put the something-very-bad-happened-and-you-should-look-at-it stuff.
And they're just like, well, something pretty darn bad happened. We'll log it. Supposedly, my understanding is before_send can be used to filter out PII or other things like that. And so their failure mode is intentionally quiet. That's my understanding as to maybe why this is true. I wish there were a configuration option that said, no, please fail as loudly as humanly possible. But that was terrifying.
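For context, a before_send hook looks roughly like this; the NoisyChromeExtensionError class is hypothetical, and the rescue is one way to avoid the silent-drop behavior described above by at least logging loudly when the filter itself raises.

```ruby
# config/initializers/sentry.rb
Sentry.init do |config|
  config.dsn = ENV["SENTRY_DSN"]

  # Returning nil from before_send tells the SDK not to report the event.
  config.before_send = lambda do |event, hint|
    return nil if hint[:exception].is_a?(NoisyChromeExtensionError) # hypothetical noise

    event
  rescue => e
    # If the filter itself blows up, Sentry skips the event quietly, so shout
    # about it in the logs where an alert can catch it.
    Rails.logger.error("Sentry before_send raised: #{e.class}: #{e.message}")
    event
  end
end
```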
STEPH: Yeah, absolutely. I'm going to piggyback on what you just said for a minute because I was also thinking about it earlier in relation to the sudden spike in our CI builds, where I was like, it would be really nice if...because I suspect there's one particular change that has caused this to happen. I don't know what it is yet, but that's just my suspicion. And it would be great if, when that build ran...let's say builds went from an average of 25 minutes to suddenly one that took 35 minutes...TeamCity had alerted us, or something more aggressive had happened to say, "Hey, your team..." or maybe it's just in the logs somewhere.
Okay, not in the logs; somewhere more visible on the build where it's like, "Hey, your build took an extra 10 minutes compared to the average, just letting you know. I don't have a diagnosis for you, but we're just letting you know." So yeah, plus-one to getting those types of alerts out to people and notifying us when an average isn't being met or when things aren't getting logged like you'd expect them to.
CHRIS: As part of what we were doing in the logs...like how to get to that anomaly detection place is a really interesting question in my mind. And this is a case where we were in the logs, and we wanted to instrument more things. So we have a bunch of stuff right now that goes in. It's either a warn or error log level. And the error should be pretty rare because, ideally, those are going to Sentry instead, but we still want to keep an eye on them.
But we introduced a new search within Logentries, which is what we're using for log aggregation and searching. And the idea was to group all warn-level messages on the message string. So ideally, what this allows us to do is say, "Oh, we've seen 200 instances in the past two days of this new warning that we didn't see before." The difficulty is, as a human, I would see unhandled error blah as one bucket of warnings, or I might want to see it that way. I might want to group it on part of the message. So it becomes really hard to find the signal in the noise on these, but at least it was a start.
We now have this little graph for both warning and error-level log messages where we can see, are there any new anomalies that are occurring pretty regularly? But this, again, was just this weird edge case where we were lucky to catch it. But it was very scary that it was just throwing stuff away. So it might have been true that our error log did get a little quiet for a little while, which was nice, but it wasn't 100%. It wasn't like we were at ten an hour and then we went to zero. It was like some, and then we went to a lower number because we were still getting some. We were only filtering out certain ones.
But yeah, it's how do you know at runtime that the system is doing the thing? This is increasingly the question that I have in my mind. But yeah, so that was the thing. We fixed it. It's fixed now. I also set up an alert in Logentries to say, "If you ever see this particular phrase again, unhandled or unreported, then please tell me about that posthaste." So we've got that now.
STEPH: That's perfect. That's what I was about to ask us if there's a way that you could add a filter or add a warning for that anomaly detection. So that sounds great.
CHRIS: I've got that now because this became a known-unknown, but there are still the unknown-unknowns, and there are so many of them. And I can't know them is my understanding of how they work. I would love to know them. I would love to pin them down and be like, "Hey, what are you doing here?" Someday maybe. But anyway, that was the thing in my world. [laughs] It was fun. It was a great little time. What else is up in your world?
STEPH: I feel like you can always judge the level of fun based on how high someone's voice goes. No, it was fun. It was great. It was fun.
[laughter]
CHRIS: I believe that is an accurate assessment, yes.
STEPH: I've caught myself doing that. I'm like, my voice is extra high, so I don't think I really mean that when I'm using the word fun. [laughs]
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
I do have a small update that I can share regarding the work that we're doing to be able to scale horizontally. So we want to be able to add more machines quickly and easily so we can then process more RSpec tests. And we have discovered with TeamCity that we're pushing forward on that particular path because they have something called a composite build. And with a composite build, it's essentially your parent or your supervisor build. And then, from there, you can create other subsequent builds.
So we can then say, all right, let's have multiple builds that then run the RSpec tests, and then we can separate it that way. And right now, we're going about it in a hacky way because we just want a proof of concept. So we are saying, specifically in this particular step, we want you to run spec/models, and in this other process, we want you to run these particular tests, just because we want to see how this works. And so far, the aggregation seems great.
So when you look at that composite parent build, it's showing you how each of those builds are doing. It's also reporting back the failures. It's even de-duping them. Because initially, we set it up where we were running the full test suite in parallel on both of these builds, [laughs] not what we wanted, fixed that. But it did highlight that it was de-duping the test failures. So that part was nice.
So the UI seems great and seems quite capable of doing this. Composite builds seem to be the way that we can do this with TeamCity. But we're still diving into actually getting the metrics, like, okay, how much is this actually going to speed us up? And what does this look like if we want to scale up from, say, 5 machines to 10? And that part doesn't feel graceful because then you have to go in and change the configuration and copy the configuration to then add a new build that's going to process RSpec tests.
So other services like Buildkite make it very easy. I can't remember if it's literally a slider or if it's a number that you enter. But you can say, "This is how many processes I want to run," which would be a lot nicer for that actual scaling. Versus TeamCity, where it feels far more manual and intentional because you then have to duplicate and add those settings. But it's a really good first step because, as we'd highlighted before, there's a lot of risk in moving over from an existing infrastructure to something totally new.
So if we can have some wins with this approach and help out the team and reduce build time, then that gives us more grace period. So then we can assess, okay, do we really want to move over to Buildkite? What do we want to do next? What does this look like? And have further discussions. So that's a small update there. Next time I should have some more updates around actual data on how things are looking.
CHRIS: Oh, cool. Yeah, I appreciate the update and definitely interested to hear how this continues to play out. This is a large project that you're undertaking and all the facets and whatnot, so yeah, super interested to hear the continued journey of the test build time reduction. Let's see, other news in my world. I've been exploring something that I'm intrigued by the idea. Let's go with that. [chuckles] That's going to be my start. I always start with these lead-ins that build things up too much.
But I am finding a small tension in trying to just keep up with what the team is doing, which is a wonderful place to be. Our team is growing. We actually have someone new joining tomorrow, very exciting. But I'm trying to find the right version of I don't want to block things. I don't want all code review to have to go through me. But I do want to keep an eye on everything. I want to kind of know what we're doing collectively.
And ideally, mostly, that's me being like, yep, that makes sense. We're doing that. I remember that, cool. Wait, what's this? And rarely, occasionally, there'll be a point where I'm like, oh, I want to intervene here. I want to have a conversation. I want to rethink how we're building this. And so it's moving from a place of any sort of blocking synchronous review, or the necessity for that, to an ad hoc post-review sort of thing. And so the way that I'm trying to poke around with this, of course, is that I'm writing some code to do it, because of course I am.
So the two systems that we're using that seem most of interest are GitHub and Trello. And so it turns out GitHub has a wonderful search, and I can create a parameterized search, a URL that jumps straight into a search saying, "Show me everything that was merged in the past X amount of time," so I can say the past two days because I haven't checked it in two days. So I'll see all of the PRs that were merged, and some of them I'll have already reviewed, so maybe I could even filter further there. But for anything that I haven't seen, I'm like, oh, what was this? What was that? What was this other change?
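For example (the repository name and date here are placeholders), GitHub's pull request search accepts qualifiers like these, either typed into the search box or baked into a URL:

```
is:pr is:merged merged:>=2022-03-07

https://github.com/example-org/example-app/pulls?q=is:pr+is:merged+merged:>=2022-03-07
```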
Similarly, on Trello, there's a way via the API to get all of the card update actions. And then I can filter down to say whenever a card was moved, which in our system that means...we're doing Kanban-style, so a card being moved from this column to that column that tells me that someone is progressing forward with some work. And then I can further filter down because, again, I don't really want to be blocking on this. I'm most interested in what have we done or completed in the most recent timeframe.
And thus far, it's an interesting data set. And it's an interesting way to switch the problem around such that I'm not feeling...there was FOMO, or organizational FOMO is perhaps how I would describe it, of like, I want to try and keep an eye on stuff and make sure I'm responsive, but then I'm blocking, so I have to step away, but now I'm worried that I'm missing things. And so I'm trying to find that good middle spot. And this feels like an interesting exploration of that.
STEPH: I'm intrigued when you mentioned the card moving over, so then you can tell things are progressing. And then you're answering the question of what did we do in this particular chunk of time? When you move stuff over, is there a clear sweep of we have finished this sprint, and then you have the date of that sprint at the top, and so then you essentially have a column that represents all the work that was done in that sprint? Is that an approach that you're using? Because that's the one that immediately came to mind for me when you're wondering what was accomplished during this week or two-week period?
CHRIS: Interesting question. So we're not really doing sprints, or there are no real iterations. We're doing more of the I think Kanban is the way to describe it. But basically, we have a prioritized next up column. And then every day, I can say continuously, the work has the same shape, which is pick up the next most important thing, work on it, move it through the various columns.
I did introduce in Trello just the idea of, like, here's a month, so we can see by month what we're doing, but that's too low a granularity in my mind; I don't want to review it only a month at a time. The whole point of this, in my mind, is to see stuff as it's happening, vaguely in real time, but without requiring me to constantly be monitoring everything. So it gives me an opportunity at the end of the day to be like, what happened today? What did we do? But yeah, so there's no real sprint that I would couple this to because we're not really doing sprints.
STEPH: Got it. Yeah, that gives me more context. I understand why you're then looking for ways as to how to answer that question of, like, what did we accomplish in this week or a particular time period?
CHRIS: And to name it, this is not an intention on my part to be like, I need to control everything. I need to make all the decisions. I very much want to empower the team. And in my mind, this is actually a mechanism to empower the team. I want to give them more freedom and then have the opportunity occasionally to check back in and be like, oh, actually, there was some context that was missing here the way we did this. Let's actually unwind that, do it this other way for these reasons. But it gives me the ability to potentially have that conversation after the fact.
We're trying very hard to have the tickets be as representative and complete, and well documented as possible. But that's very difficult to get to. And there are also things that I don't even know to mention. Again, I think the critical bit is this is not an attempt to make sure everything aligns with what I think; it's more I want to empower the team to move without me most of the time. And then, where there are things that potentially should have a small conversation or a redirection, then we have the ability to do that. And so, I'm trying to build that back into my workflow while basically loosening up my connection to the work in progress at any given point in time.
STEPH: So you just touched on a topic that's really interesting to me or a particular space. You're doing a very kind thing where you want tickets to have lots of context so that people feel confident when they're picking up what's the action item to be done. And for someone that's new, that's incredibly helpful, and I think more important since they are new to that world.
But in general, my spicy take of the moment is going to be: as developers, that's part of our job. If we notice that context is missing or we're not clear about the action item, it's on us to think through what is it that I'm missing? Who do I reach out to? Who can I go to for help? How can I scope this work? All of that, to me, is very much part of our role.
And the idea that tickets always have to be perfectly curated, which I don't think you're saying, but you're just trying to be extra helpful. But if someone were to have that expectation, I think that expectation is wrong. And I do think it is part of our work that then we help make sure that tickets are well-scoped and well-defined and have those conversations with the people creating the tickets or creating them ourselves.
CHRIS: I love the clarification there, and I'm definitely in agreement with you. I don't know how picante of a take it is. I would be intrigued. Listeners, let us know. Are we breaking your mold of what things should be? But I do like the idea that it is a conversation so back and forth. And so the idea that as developers, there should just be this very clear list of things to do and you just kind of pick up a card and heads down, just get it done, I don't think that should be the mold.
But I do think; ideally, the why is the most important thing that I think should be in a card. So ideally, a card should have little in terms of technical implementation notes and should have more in terms of here's the goal that we're going for, here's the problem, or here's the thing that we're trying to solve. And then maybe a suggestion of like, I think it could be an X, Y, and Z, but I'm not sure. Or we want to be able to send transactional emails, but I don't know any more than that.
Our goal is to engage users. Like that last sentence, that last little bit of our goal is to engage users is a critical, critical data point, versus our goal is to solve for a regulatory and compliance issue. It's like, well, those are different. And they will lead to different solutions and different implementations and all that.
So yeah, I definitely share the idea that cards don't need to be perfectly specified. And if anything, I think I'm closer to that than it probably sounded like I was. But for that reason, it's totally possible in my mind, that work will be done in a way that after the fact, I'm like, "Oh, sorry, there was a misunderstanding here. Let's revisit this work."
And so, my goal is to try and stay connected and have a feedback mechanism at the end of the process. So when the work is done, be able to spot-check it rather than trying to have to watch it as it's happening or proactively define everything in excruciating detail such that exactly the right things happen all the time. So I'm moving to a place of ask forgiveness, not permission. That's the wrong analogy here. But that idea of like, we can clean it up after the fact, that's fine. And we don't need to try and prevent any sort of things, or at least that's what I'm exploring.
STEPH: Yeah, I love that you highlighted having the why. I adore that when that's on a card just because I then I want to know the goal because then that's going to help me ask questions and think about scoping versus if it's like a very specific implementation, then I feel so narrowly scoped that I don't feel as confident that I can be like, okay, I know why I'm doing this versus I just feel very directed to do a thing, and that's incredibly helpful.
I have also felt the pain that you're mentioning where it does feel like a ticket has all of the work clearly defined, and the goals, and the whys, and it can have everything there, but just something gets lost in the communication. And so someone implements something in a way that is how they interpreted the work versus it's not actually what the ticket or what the goal of the work was to be done.
So I appreciate that where you are looking for ways to tweak things to make sure that whoever is picking up that ticket will have the same interpretation that the author intended for them to have. And then if that does happen, and things get misaligned, then you chat and figure out ways to improve it.
I think that's the point that I was really thinking about, and my air quotes, "hot take," is that as developers, a big part of our job is communication, and then also sharing the knowledge that we have with other people. And so if someone is expecting that they can just always pick up work and never talk to someone, I don't know, maybe you're in the wrong business. [laughs] That's my hot take.
CHRIS: I, for one, like the hot take. It is nice and ever so slightly spicy.
STEPH: Thanks. Yeah, I just think communication is incredibly important. Earlier, you mentioned, I don't think we were on mic at the moment, but you mentioned something about a new Git alias. And I am very intrigued on hearing about what you've added, what it does, all the details.
CHRIS: All the details, that's probably too many, but some of the details I can certainly provide. So I have two new Git aliases; one is Git gone, which is probably the heart of the whole thing. And so the background of this is I found myself pushing the green merge button on GitHub more. We've introduced some branch protection stuff, which I've talked about in previous episodes. And I dream of the day that one of my good, good friends at GitHub will give me access to the merge queue beta. Please, please, I implore thee. But in the interim, still clicking the green merge button more often than not.
STEPH: Wait. I have to ask to help you in this dream. Are you forwarding these episodes to someone? You can just take a clip of you saying, "Please, please, please give me access," [chuckles] and just forwarding that or mentioning someone at GitHub or GitHub in general.
CHRIS: Just leaving voicemails for people with a Bike Shed section of me begging for access to the merge queue beta?
STEPH: Yeah. [laughs]
CHRIS: No, I'm not. But maybe I need to up my game. You're right. [laughs] Someday, I'll get there. And that will only exacerbate this issue that I'm feeling, which is again, I'm clicking the merge button. That's what's happening. And as a result, that means my local branch is now like it's done its job. You've served me well. And in the Marie Kondo sense, I need to hold you up, thank you for your service, and then let you go. But I obviously wanted to automate that.
So Git gone does that automation, and it was fun. So I found a blog post which we'll include in the show notes, that had most of the pieces here, but it was still fun to play with the shell pipeline in a way that I hadn't in a while. So it does a Git fetch and then git-for-each-ref with a particular structured format that references the upstream of the branch then uses awk to search for the word gone.
Because Git, if you print it out in this particular way using this format, it will say the local branch name and then the upstream. But if you've deleted the upstream, it will specifically say gone in brackets, so you can actually use that to filter them down. And then I pipe that to git branch -D via...well, xargs, of course. I love a little shell pipeline. As an aside, these are fun little things to build up. So that is Git gone.
And then the other one that I have is Git down, which is what I use more. And Git down works on top of Git gone, so it's git checkout main && git pull && git gone. But that means I get to type Git down into my terminal whenever a branch happens to get merged in upstream land. [laughs]
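Pieced together from that description, the aliases look roughly like this; the exact flags and the main branch name are assumptions, and the version in the referenced blog post may differ.

```
# ~/.gitconfig
[alias]
    # Delete local branches whose upstream has been deleted (shown as "[gone]").
    gone = "!git fetch -p && git for-each-ref --format '%(refname:short) %(upstream:track)' refs/heads | awk '$2 == \"[gone]\" {print $1}' | xargs git branch -D"
    # Get back to an up-to-date main and clean up merged branches in one go.
    down = "!git checkout main && git pull && git gone"
```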
STEPH: [laughs] Oh, that's adorable. I love it. I like the Git gone, and yeah, I like the Git down just for fun. You are inspiring me where I now really want a Git bless your heart that's like maybe a Git blame or a Git revert. [laughs]
CHRIS: I've definitely seen people do Git praise as an alias for Git blame.
STEPH: That's nice.
CHRIS: But Git bless your heart is...ooh, I love that.
STEPH: [laughs] I might have to add that just so I can type it, and then someone can say, "What are you doing?" [laughs] Cool, I love it.
CHRIS: Little things, little fun bits to add to your day and to automate and have a little fun while you're at it. So that's where I'm at.
STEPH: All about the communication and fun. That's what I'm here for and the singing. Let's not forget the singing.
CHRIS: And the singing, of course.
STEPH: [singing] On that note, shall we wrap up?
CHRIS: Let's shall. Oh.
STEPH: [laughs]
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Bye.
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
BIG NEWS! Steph's expecting a baby boy! 🍼🎉
Aaaand unfortunately, the rest of the show isn't nearly as exciting. Chris talks about admin pagination using Pagy, and Steph wants to delete some code and is nervous that she's going to break something.
They answer a listener question from Slash, who asks, "What are the first keyboard shortcuts you teach junior devs?"
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Let's go. Oh man, now all I can think of is the...but what's his face? The quarterback for The Patriots formerly. Oh my God, I can't remember his name.
CHRIS: Tom Brady?
STEPH: [laughs] Thank you. Tom Brady. I wanted to say Brad Pitt, but I'm like, that's not right. [laughs]
CHRIS: See, I thought of the musical...well, I think it's a musical, Encanto. Have you watched Encanto?
STEPH: Oh, I love Encanto.
CHRIS: It's so good.
STEPH: We Don't Talk About Bruno, no, no, no!
CHRIS: We Don't Talk About Bruno but...it was my wedding day.
STEPH: But also Luisa Song, that's actually a good one too.
CHRIS: Luisa Song that we have now listened to the soundtrack a lot of times, and we've only watched the movie twice, once ourselves and then once with our niece and nephew. That is the order it happened in as well, just to be clear. [laughs] But yeah, it's good, lots of slaptitude to all the music and the overall movie and really just fantastic work. Lin-Manuel Miranda does good stuff.
STEPH: Super good. I have to bring back closure to my Tom Brady confusion, though.
CHRIS: Yeah, what was that? [laughs]
STEPH: [laughs] Now when I say the let's go, I think it was the Hertz commercial where he said, "Let's go," and he was impatient. I don't know if you've seen it. But marketing works, and now it's in my brain, and I hate it. [laughs]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: What's new in my world? I got some stuff, some tech things, and whatnot to ramble about, but frankly, much less interesting. What's new in your world, Steph?
STEPH: I do have some news, and I have some exciting news. So Tim and I, we are expecting our first child, hooray!
CHRIS: Yay!
STEPH: [laughs] We found out over Christmas or around the holiday. So depending on when this airs, but I'm around 14-15 weeks along. And it's tough. Growing a little human is tough. And it's been quite an adventure. And thankfully, I've got some friends to lean on and talk to through the process. But yes, that's my big news. We're going to have our first child. And oh, we also found out recently the sex of the child, and we found out we're going to have a baby boy, which is very exciting.
CHRIS: Well, that is so wonderful. I'm so happy for you. And on behalf of the entire Bike Shed audience, I think it's very easy for me to carry forward their best wishes. But this is absolutely wonderful. And I hope you didn't mind me redirecting the questioning right back at you at the start of this episode because this seemed more important than any tech nonsense I'm going to ramble about.
STEPH: Well, and you've been in the know for quite some time. And so we've been keeping it hush-hush until it felt like the right time to share. And this feels like the right time to go ahead and share. So there might be some interesting episodes up ahead where I get to complain and talk about this some more on the mic [laughs] since I have been keeping it quiet, but now I can talk about it more publicly.
CHRIS: Absolutely. I think this has been decidedly missing from The Bike Shed content in the six, seven years that this show's been going. So yeah, let's tell some truths.
STEPH: Bike Shed baby, that'll be one of our topics for sure. [laughs] But yeah, that's some of my big, exciting news. I'm going to kick it back to you since you were so kind to let me lead. What's going on in your world?
CHRIS: Back to me. I'm going to talk about admin pagination. So it really felt wrong to me that I'll be like, let me talk about pagination for a while. Also, what's up? So that said, now that we have shared the wonderful, exciting news, I can talk about the mundane realities of pagination. So yeah, I'm going to try and tell this one in a story to make it more interesting. So we have an admin page. It lists out the users of our application, as is so often the case.
And we hit that wonderful place where the page became wildly unreliable because we had so many people sign up for the application; yay! Very exciting. I feel like it's a meaningful milestone to get to where we're like, oh yeah, I guess we have to add pagination to the admin. Unfortunately, I picked this one up as, just like, this should be a quick, easy thing. This will be fun.
Coding, you know, I've gone back and forth on individual weeks where I have space in my schedule for coding, and then sometimes I find it difficult. And I'm trying to not pick up larger, more critical pieces of work. But this one seemed like a perfect little pickup. I'm just going to grab this, and I'm just going to quickly bang out some pagination, and the admin team will be so excited. Everything's going to be wonderful. Smash cut to that did not work out so great.
So I tried to introduce pagination using the Pagy gem, which I had not used before, but it's good. It seemed useful. And in particular, I'd seen an example from another Inertia Rails application that was using Pagy, and I was like, oh, cool, I'll just crib what they're doing. Because basically, we need to both get the data for the users and then serialize down the pagination data, the metadata about pagination. What page are we on? How many pages are there? Is there a next page? Is there a previous page? All of that kind of stuff. We need to serialize that down to the front end, and then use that to build a little pagination UI.
So like, they're using Pagy, and it does seem to do a good job of yielding both of those pieces of data to me, so both the recordset that it's paginated down to and the metadata about pagination. But unfortunately, when I first tried to use it, I ran into a wall where our user page basically just lists out the user. At the top, it has a little search bar, so you can type in name, or email, or ID. And then there's a little drop-down for the account status. What's the status of this user? So we can filter down to active accounts or onboarding accounts or that sort of stuff. Running the search, everything went fine.
When I went to filter down, suddenly it broke, and that was sad. So I went and chased it down. Pagy was throwing an error that it couldn't work with the collection that it was working with. And the reason was our admin user query object was iterating over the objects in Ruby land to do the filtering, which was very sad. So it turned out that the admin user query object, when it needed to do that filtering based on status, it was actually iterating through all the records and filtering them out using select or reject, whichever side. I forget which way it was implemented. But either way, it was iterating through the entire collection.
And so, Pagy, like most pagination things, tries to use offset-based pagination in database queries. So OFFSET and LIMIT, that combination of things, allows you to move through a recordset pretty easily. This is setting aside the idea of cursor-based pagination, which I've never fully understood or implemented, but let's just stick with OFFSET and LIMIT because they're going to get us what we need in this particular case. But by virtue of the fact that we were actually working with that recordset, turning it into an array, getting all of the records, it was less efficient than it needed to be. But it also meant that we didn't have an ActiveRecord relation that we could do this with.
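Put together, the controller side of the setup being described looks something like this sketch; the query object, component name, and serialized fields are assumptions rather than the app's real code, but it shows Pagy yielding both the paginated relation and the metadata that gets handed to Inertia.

```ruby
# app/controllers/admin/users_controller.rb
class Admin::UsersController < ApplicationController
  include Pagy::Backend

  def index
    # Pagy applies OFFSET/LIMIT in SQL, so this needs to stay an
    # ActiveRecord relation rather than an already-loaded Array.
    users = Admin::UserQuery.new.call(search: params[:search], status: params[:status])
    pagy, records = pagy(users)

    render inertia: "Admin/Users", props: {
      users: records.as_json(only: [:id, :name, :email, :status]),
      pagination: {
        page: pagy.page,     # current page
        pages: pagy.pages,   # total pages
        next: pagy.next,     # next page number or nil
        prev: pagy.prev,     # previous page number or nil
        count: pagy.count    # total matching records
      }
    }
  end
end
```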
And so then began the adventure of like, okay, this should be easy. I'll just turn it into a database query, except the account status was implemented as a method spread across a few models that looked at a value and then returned something, and that's why it was doing this in-memory filtering. But this is a classic case of I just want to add pagination. It will be super easy. Never mind, let me undertake a fundamental refactoring to the entire application and unify the idea of account status across user and the background object and this other object.
And then once that giant refactoring PR lands and I deal with the fallout of how this broke analytics and other pages in the app...it was a good thing. It was necessary. That was a mess, and we knew that. Fixing that was a good thing to do not just for the pagination but for actually unifying all of those ideas. Then once I landed that refactoring PR, oh, it's so easy to put in the pagination. [laughs] Just like, oh yeah, just paginate. That'll be great.
STEPH: You got back to the happy place of where it was easy again.
CHRIS: Took me like a week and a half to do the refactoring PR, though, partly because I was in and out on it. I couldn't give it my full focus. But there was definitely a morning where I was like, oh yeah, I'm going to add pagination to the admin UI. And the admin team was like, that's fantastic. We're very excited. A week and a half later, I was like, I'm sorry, I finally got to it, though. It's really good, though, right?
STEPH: I changed a bunch of things that you can't tell that I changed, but I promise it's a lot better. So now I can actually implement the change that you want to see. Well, I'm glad you walked away with a win because I've definitely been in the space where I have entered the refactor world and walked away with an L and realized that it's something that either wasn't worth tackling at the time or was just too challenging.
CHRIS: Oh yeah, I've definitely had that. And I think if it were a different shape of refactoring that were necessary to support this, I probably would have backed away, but because it was fundamental data model cleanup that needed to happen under the hood, I was like, that feels right. We should be doing this anyway. I'm also a big believer in dealing with ActiveRecord relations. So for anyone that's not familiar with the way ActiveRecord works, query evaluation is lazy. And so you can say user.where first name, blah. And that returns an unevaluated query, the idea of a query in the future, AKA an ActiveRecord relation.
And you can keep chaining on to that and building a new relation. So you can say .where this thing and then pass that return object to something else, which then chains on another thing, .joins to something else, and then filters on an aspect of that. But again, we're not going to evaluate the query until we need it, typically until we iterate through the records that are part of it. And so, this is one of those things that I have, over time, slowly worked on and refined. And this is a skill area that I continually find value in investing in.
Again, I'll reference the wonderful Advanced ActiveRecord Querying course on Upcase that Joe Ferris hosted, and then I got to be a participant in, and still, I'm learning bits from that one years later. But the idea of really understanding what we can do with the database layer and then how we can reflect that in the ActiveRecord query syntax.
And then ideally, I have this motto in my head, which is just stay in relation land for as long as you possibly can. The minute you type .to_a to coerce it into an array or something like that, you have perhaps solved the immediate problem that you have, but at what cost? I ask, at what cost? The answer is a very big cost. You can't do other cool stuff after that.
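A toy illustration of that laziness (the model and column names are made up): each where/joins call just builds up the relation, and nothing hits the database until the records are actually needed.

```ruby
adults   = User.where("date_of_birth <= ?", 18.years.ago)          # no query yet
verified = adults.where(verified: true)                             # still no query
recent   = verified.joins(:orders)
                   .where("orders.created_at >= ?", 1.week.ago)     # still building

recent.to_sql       # inspect the single combined query
recent.each { }     # the query finally runs here, with every condition applied

# Calling .to_a earlier would have loaded everything into memory and cut off
# further chaining -- no more .where, and no OFFSET/LIMIT pagination in SQL.
```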
STEPH: I'm intrigued how you refactored it because when you're talking about having the status, in my mind, I was presuming that it was a database column on one of the models. But then you'd mentioned it's not and that it's scattered across several places. So how did you refactor it so that you could stay in relation land?
CHRIS: Primarily, we pushed the logic down. So, unfortunately, it was spread across a few different objects. That was one complicated thing. So the idea of a status was spread across a few different spots, actually in a few different models. So one thing was just to unify them all into one enum on the canonical record that should really own this idea. And then, really, it was to push it down into the database. So that was part of the work.
We also recognized that we had not done a great job with the implementation of the enum and with the naming of the key and the value in that enum in terms of how it was implemented in Rails. So there was a bunch of confusion. There was basically just a bunch of places where we had been less intentional than we probably should have been. So mostly, it was just pushing all of that together and down into the database. And then where the status changes at any point in the application, we're just updating that column in the database, and then everything else can just happily work with that value.
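As a loose sketch of the shape being described, assuming a hypothetical Account model as the canonical record and an integer status column added in a migration; the model name, the migration, and the specific statuses are invented for illustration.

# db/migrate/20xx_add_status_to_accounts.rb (illustrative)
class AddStatusToAccounts < ActiveRecord::Migration[7.0]
  def change
    add_column :accounts, :status, :integer, default: 0, null: false
  end
end

# app/models/account.rb
class Account < ApplicationRecord
  # One canonical enum instead of status logic spread across several models.
  enum status: { pending: 0, active: 1, suspended: 2 }
end

Wherever the status changes, the code just updates that column, e.g. account.suspended!, and everything else can query it in the database, e.g. Account.active or Account.where(status: :active).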
STEPH: Got it. Yeah, that sounds great. Thanks.
CHRIS: You're welcome. But yeah, it really was a case of like that PR to refactor was a bit of a slog, if we're being honest. It was not fun. I was scared I was going to break stuff. I had to be very intentional with it. But once I was on the other side, then I got to have a query object, which is a lot of fun. I love writing those.
And I got to build pagination in Inertia, which is also one of those things that I really love. This is a place where Inertia really shines. It's incredibly performant. It allows you to do all of this stuff in a familiar Rails way but still have fancy UI on the front end; it's just an intersection of some of my favorite things. So the refactoring both paid off in terms of what we got in the application but also was just fun at the end. Well, not the refactoring; the thing that came after the refactoring was fun.
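For flavor, here's a hedged sketch of what a query object feeding a paginated Inertia response might look like. The AccountsQuery name, the controller, the page size, and the component path are all made up, and the pagination is plain limit/offset rather than any particular gem; the render inertia: call follows the inertia_rails style, assuming that's the integration in play.

# app/queries/accounts_query.rb (illustrative)
class AccountsQuery
  PER_PAGE = 25

  def initialize(relation = Account.all)
    @relation = relation
  end

  # Returns a relation, so callers can keep chaining or paginating.
  def call(status: nil, page: 1)
    scope = @relation
    scope = scope.where(status: status) if status
    scope.order(created_at: :desc)
         .limit(PER_PAGE)
         .offset((page.to_i - 1) * PER_PAGE)
  end
end

# app/controllers/admin/accounts_controller.rb (illustrative)
class Admin::AccountsController < ApplicationController
  def index
    page = params.fetch(:page, 1).to_i
    render inertia: "Admin/Accounts/Index", props: {
      accounts: AccountsQuery.new.call(status: params[:status], page: page),
      page: page,
    }
  end
end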
STEPH: Nice. Well, speaking of being nervous about breaking things, I am feeling very determined right now where I want to delete some code. And I have that nervousness around I'm going to break something. But I've spent enough time with this code to feel confident that it's not truly in use. The entry point for that code, though, is part of the CI script process, so I don't have full control over the entry points. It's something that I'm having to coordinate with another team to verify, like, hey, I'm pretty sure it's a script that's never called and never used, or at least, for this particular path, we're not passing these particular flags to the script.
And going through the code caused enough confusion for me that if I can simplify this and get rid of that code, I'd really like to. So I'm at that point where I'm feeling good. I'm going to issue a change that deletes the code. But there's definitely a part of me that's nervous because it's one of those like, somewhere someone could be running this script on their machine locally. Or they could be using it as part of a different build process that I'm not aware of, which worst-case, then we realize something breaks, and then we have to roll it back.
But it feels like one of those important things: I'm going to do it while I've got the context. Let's delete the code. Let's see what breaks. Someone mentioned to me earlier there's the idea of the scream test, where you delete something, and then you just wait, and you see who starts screaming. [laughs] I was like, yeah, that's exactly what I want to do. I've done more validation upfront. I don't want anybody to scream. But that's essentially the metric that I'm then going to go off of once we do merge this in and see how it goes.
But it felt like one of those interesting conversations with myself and someone else that was then looking at it with me where we could easily leave it. We could just walk away. Because I saw this code while I was working on something else, and then I really needed to assess whether I needed to alter this code as well or whether I could leave it alone. And I've decided I could leave it alone for this reason.
So it's one of those moments of like, okay, well, I could just walk away. I've done the thing that I needed to do for my change. But it feels important to go ahead and follow through on that while I have all of this context. So let's just go ahead and delete it, so someone else doesn't have to build up all that context.
CHRIS: As you're describing this, I'm sort of thinking of my own career arc as a developer. And early on, I'd just be like, it's fine, we'll just change it now. Nothing will go wrong. And then, obviously, something goes wrong. And slowly, over time, I've built up enough battle wounds from that that I was like, you know what? I'm hesitant to change it. I'm a little scared. What if we break production? And so then, there was a period of a couple of years where I would probably be more hesitant to change things.
And then eventually I got to the place where I'd seen the cost of not changing the thing when you have the context and letting the less correct implementation or the incorrect domain modeling sit and grow and become worse over time, and then the deep pain that you can feel down the road. And so now I'm like yeah, no, we're probably going to feel some pain on this change. Somebody is going to yell, but we should do it, and that's it. And I like thinking about that arc: from brazen confidence, to oh God, everything's terrible, to a different type of confidence. Like, yeah, I know something is going to break. But sometimes you got to crack some eggs, you know.
STEPH: And it's one of those areas where if we do find out something breaks and someone reports like, "Oh, this is really critical, and you took this away from me," then that's great, at least now I've got validation we know where it's used. And then we can, I don't know, maybe document that somewhere somehow, or we don't even have to document it. But we just at least know that this is a valid code path that needs to be supported, and then I'll feel better about that.
Versus in this world right now, I'm in the I don't think this is important, but I don't have solid proof that it's not important, but I'm not going to treat it as important. And that feels like the worst place to be. I want to know if this is valid or not. So this will help push us in that direction. But yeah, I like that arc that you described. I can definitely relate to that.
CHRIS: I definitely share the hesitancy and the worry that like, man, is this going to silently break something that someone relies on? But if that's true, then that means our test suite is missing something. If this is a critical code path and I could just delete it, and the test suite is like, cool, that seems fine, then we have a gap in our code coverage.
And I don't mean code coverage in the percentage-metric sense; I mean it in the sense that an important thing is not enforced by the test suite. And so it's a complicated and messy way to find out what's missing from our test suite, but it is a way. Just remove it and then see what happens. And then we backfill the test suite to say, "Oh gosh, we should have had that. Deleting that was bad."
STEPH: So I absolutely agree with what you're saying. This particular scenario is a little tricky because the entry point isn't a traditional user-driven action, so it's not something concrete like a user flow that I could test. Even if it had test coverage, which it doesn't right now, it would be part of our CI process that then calls this script, and then we expect the script to behave and respond as expected. But even then, if we had that test, we could still have unused code. It could still be a path that just doesn't need to be supported.
I guess as I'm saying this, that could be true of a user flow as well. I just talked myself into that; cool. [laughs] Yeah, it's one of those cases where even if we had tests, that wouldn't give me the full confidence to know whether we have a valid path that needs to be supported or not. So it feels a bit tricky in that regard. Because I am so used to relying on my tests to help me know whether, yes, this is something that's important to the application or not. And in this case, I don't think that actually helps me.
I think honestly, at this point, it's talking to people who have built a lot of the infrastructure in their CI system to say, "Hey, can you help me track this down? I've looked at all the places that I know to look. I even issued a change that raises." So if that script was getting called, then something in the CI infrastructure should have blown up to let me know. So I've taken all the incremental steps that I can to see if anything breaks. And so far, nothing's breaking yet, but we just won't know until it's gone.
Comparing this to previous situations, it does feel like one of those areas where, if I was uncertain about whether something is in use, looking at the tests is always helpful, but so is having a product manager to go to, because then that person can confirm yes, this is something that I'm certain someone still uses, or that we need to support. Or they can say, "You know what? Even if it is something that someone is using, we don't wish to support this feature anymore, and we'd like to get rid of it." It feels like I'm in that space. But there's not a clear product manager.
Now, for this one, there is someone very knowledgeable who helped build a lot of this system that I can go to. So in effect, they are acting as that person who can tell me, yes, even though we have code to support this path, I'm pretty sure we don't want to support it anymore, and we can get rid of it. So that's ultimately what's given me the confidence to move forward with the change.
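One way to do the change-that-raises step described here is, roughly, a guard at the top of the suspected-dead script. The script name, the environment variable, and the messages are all hypothetical; the point is just to make any remaining callers announce themselves before the code is deleted for good.

#!/usr/bin/env ruby
# scripts/legacy_report.rb (hypothetical suspected-dead script)

# Scream-test guard: if this path is still exercised anywhere (CI, a teammate's
# machine, some other build), we want to hear about it before deleting the code.
if ENV["CI"]
  raise "legacy_report.rb was invoked on CI; it was believed to be dead code"
else
  warn "[deprecation] legacy_report.rb was invoked; please tell the team before it is removed"
end

# ...original script body would continue below...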
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Pivoting just a bit, we have a listener question. And this question comes from Slash. And they wrote in, "What are the first keyboard shortcuts you teach junior devs? What is so powerful that can explain to juniors why it's important to know them?" Yeah, all right. I have thoughts. Chris, do you want me to kick us off, or do you want to kick us off?
CHRIS: This is an interesting one because, for folks that have seen some of my internet movements, you will know that I am a fan of the keyboard and shortcuts there. And I like Vim; I like Tmux. I do not like things that require me to use my mouse. And so, my answer might be somewhat surprising, but I actually try to avoid this. I think the trap of productivity enhancements and whatnot can be, especially early on, a way to distract from the real work.
There's so much to learn as a junior dev, especially if you're a web developer, and I'm imagining this is potentially like full-stack web dev. But even if you're a front-end developer, there's just a world of complexity that you have now opened up. You need to understand about HTTP, and CSS, and HTML, and JavaScript, and TypeScript, and there are so, so many things. And so if there's anything we can take off of our plates at that point, I think that's really useful.
So my very clear suggestion to folks that are new is just use VS Code; it seems like it's great. It's got a ton of stuff built-in. It's going to work well, and you can get really far with it. And there's a point in time that you will get to where you feel like maybe that tool is slowing you down, although my understanding is VS Code is a really impressive piece of technology. And so if you're not as deeply drawn to keyboard-only as I am, then you can keep growing within VS Code. There are so many enhancements and customizations and whatnot that you can do there.
But it's weird that my answer is kind of like none, or deflect the question and say, focus on the fundamentals around Ruby and Rails and JavaScript or whatever the language or framework or whatever it is that you're working in. But that's my first approximation. I probably have a more real set of answers. But I'm interested in your response to that or your thoughts on that, Steph.
STEPH: I love that. I think you make such a good point that you are entering a world where there's so much to learn that I don't think this is of high importance. It is something that you can cultivate over the years but not something that you need to focus on. And when you highlighted using VS Code instead of saying our de facto like Vim, which is something that you and I love, I thought about that because Tim, my husband, went through a coding bootcamp recently. And I don't know if I'd shared this, but he just got his first job as a junior dev. So that's incredibly exciting.
And when we were talking about editors to use, VS Code was one of my top ones. I just know it's well built; it's popular. It does a lot for you. And that way, you can focus on everything else that you mentioned junior devs often have to focus on. Then, at that point, it really just becomes learning the shortcuts for your editor.
It's just get to know your editor. Understand how to do the fuzzy search for files or a specific method. How do you split windows? How do you make it easy to run your tests and toggle between your test and your code? But I do think it's important to become familiar with those shortcuts and commands so that way you feel very competent and productive in your editor. Whatever your editor is, just get to know those commands.
To go a bit broader with it, I do think there are some things that are really helpful to know. And I'm also working off the assumption that if you're a junior dev, you probably already know the basics. You know the copying and pasting, refreshing a page, and undoing a change. So some of the keyboard shortcuts or tooling that would be more helpful, in my opinion, is, alongside learning your editor, to learn some terminal shortcuts. So like pressing up to rerun the last command; I think that's probably one of the first things that, if I don't see someone doing it, I would remind them or let them know that they can do.
I also think it's really awesome to have a command-line tool like fzf that lets you find and filter files or search through your command history because I use that all the time. So I'm constantly just searching through my command history in my terminal so I can rerun commands versus having to remember what to type. And then also, I mean, there are a couple of basic browser shortcuts, so navigating between tabs, opening new tabs.
And then the big one is Git. If you're using Git for your job, spend time with Git, and that doesn't really fall into the whole keyboard shortcuts. That's a whole different topic. So I'm totally cheating here. But I think it's important enough to focus on that over the keyboard shortcuts is get good at writing commit messages, amending changes, viewing a history, and how to rebase, things like that.
CHRIS: I think the list that you gave there is actually a really practical one of, like, learn how to move around within your editor and in between the files because that's going to be a thing that you're just constantly doing. And so I'm also a huge fan of fuzzy finding, so Ctrl+P in the Vim world or fzf that you listed as a more generic utility that has it. I know VS Code has a command palette where you can fuzzy search for files and different variations there. And that is such a nice way to work.
I don't actually want keyboard shortcuts, if we're being honest. This is maybe a somewhat heretical thing, but I want modes. I love Vim's modes, and there's a whole language there. I made a YouTube video a while back about it because I believe in it so strongly. But it's that idea of like, if I have to remember these arcane movements of my fingers, that's not fun for me.
I want the computer to learn me rather than me it. And so fuzzy finding anything, being able to type any substring of the things that you're matching against is such a powerful way to interact with stuff that's like, it's not me knowing the magic key command to do something; it's the computer understanding me a little bit better.
You also listed being able to run a test file or an individual test. I love that one. That is something that I use constantly. And so that's one that I think would be worth investing in because it lets you get that iteration loop of make a change, run the test, make another change, run the test again. That's a really powerful one to refine. The other thing we're probably saying is take a look at your own workflow and look at what's somewhat painful and then Google, like, how do I get better at that? And the Internet will have things to say on that front.
I definitely agree with what you were saying about the command line. That's a place that is a little bit hostile to folks when they first show up. Like, what is this place, and why is it kind of mean? But it can be refined and honed, and tweaked. And so that's a place, again, fzf as a utility, there is a particular one. Again, not quite a keyboard shortcut, though. It's more of a utility; it's a command, a tool, I don't know. Pipeline some stuff; it'll be fun.
I will somewhat back out of what I said earlier, though, of I don't recommend that folks try and push on this too much early on. And the reason I'll say that is a while back, I taught a cohort at Metis, which was the bootcamp that thoughtbot was involved in many, many years ago. Uniformly in each cohort that I at least knew about, the instructor started with that ethos of like, okay, we're going to be up here and demoing things. We're using this thing called Vim. It's weird. Don't worry about it, though. You don't need to learn that. You shouldn't learn that. You should focus on the Rails and the Ruby and JavaScript that we're teaching you. That's the focus.
But essentially, without fail, the students were like, "Yeah, but that thing looks cool. Tell us more about that thing." And so they ended up, these are the folks who designed the course before I started teaching it, they ended up bringing that in as like this is a Friday show and tell sort of thing. All right, we're going to tell you about Vim because everybody keeps asking about it.
And then most of the students ended up using Vim because watching the way someone moves in Vim, if you've not seen it and if you're not familiar with that, you're like, wow, you're just moving around the file like magic. It's amazing. And that was certainly my experience before I used Vim. And watching someone use it, I'm just like, wow, okay, I want that, though. That's the thing that I need. So there's this delicate line of like, I would recommend ignoring this. But I get that if you see that, you're like, I would be so much more efficient if I could use that, and it's true to a certain extent.
So yeah, my recommendation would be don't do that. But most folks that I've seen are like, I would like to get better at these tools, and I totally get that. And I've obviously spent a lot of my own personal time [laughs] getting there. So I feel like I'm a do as I say, not as I do, maybe sort of thing. [laughs] It's roughly the space that I'm coming from right now. And I don't love being in that space, but apparently, it's where I find myself in this moment.
STEPH: That feels like a nice thing to share, though, because that is something that you've really enjoyed and cultivating that craft, and then sharing that and creating videos and content around it. So that totally makes sense that you can say, like, this is something I enjoy, and I have found it productive and helpful. Maybe sometimes you negotiate how productive it is.
CHRIS: Jury is out.
STEPH: [laughs] You've enjoyed it. And so it's something that you've chosen to invest in, but you don't feel it's critical that anybody else should invest in it for their career or their path. It does make sense, from a teacher-student perspective, that you're going to want to emulate what your teacher is doing.
So in the past, when I've taught very beginner-friendly intro to web development classes, so I would say much earlier than a junior dev, I always made sure to use VS Code or whatever editor I was using with them because I wanted to mimic exactly what they were going to do and have that same environment versus showing them something completely different and then expecting them to translate. So there could be some parallels there as well: if you're working with a junior developer, you want to cater to an environment that they can work with and feel comfortable and grow in, versus showing them all of the fancy trickery that you can do in your particular setup.
CHRIS: Another data point in "Steph is a better person than me."
STEPH: Sure, I'll take it. [laughter] There was also an interesting part of the question about what's so powerful that can explain to juniors why it's so important to know these keyboard shortcuts? And the only thing I could come up with because, again, I don't think it's super important for junior devs to learn keyboard shortcuts...but for the stuff that we do think is important around becoming familiar with your editor, making your terminal more friendly, for that stuff, I think a lot of it comes down to..., or the best reason I can think of is for your health.
Because it's been known that there's an increased risk of RSI the more that you switch between your keyboard and your mouse. So if you can use more keyboard shortcuts and use less or place less strain on your wrist and fingers by having to switch from your mouse to your keyboard, then I think that's the best reason. I mean, sure, productivity and feeling like a wizard those are cool reasons, but your health is really the only important reason. I just realized I used an initialism, but I didn't provide the definition for it. So for anyone that's not familiar, RSI stands for Repetitive Stress Injury.
CHRIS: It's interesting that you highlight the health aspect and RSI in particular because it's not something that I think about, but I think is definitely a benefit that I've had and what I like about modal editing and what I like about Vim. I think I tend to think about it in a different way, or it's the same idea but rotated around 180 degrees or something where my ability to do something quickly is not important in and of itself. But my ability to stay in context when I've figured out the change that I need to make that matters to me immensely.
And so what you're talking about of, like, being able to move quickly between files, being able to run the tests, being able to determine if the change that I made was, in fact, the correct change...I care immensely about that. Because "typing is not the bottleneck" is a phrase that gets thrown around, and I like that phrase because it's true. It's not about just being faster at the keyboard and being an elite hacker typing all day long. The hard part is the thinking. But once I've done that, I want to get that thought out of my head and into the code as quickly as possible, as directly as possible.
And having efficient tooling and the ability to work with that tooling and move between files and run the tests and all of that is critically important to me for that reason, not because any individual change needs to be made that quickly but because once I've done the hard part of the thinking, then I want to get it out of my head and into my hands and then into the editor.
So now I think I've completely contradicted myself, or I've just slowly moved around this question of like, I don't think you should; well, maybe you should; you should definitely. Those are, I think, the three different stances that I've taken. But I do kind of believe that that should be something that changes over time in your career. So maybe I've been consistent if you give me that lens.
STEPH: That's what's fun about these listener questions. They take us on journeys, and I love that answer. I love that it's more about the staying in context that then helps you feel so productive. It's not just a productivity goal that you're looking for. But it is more about keeping your context, so that if you know there's a change that you want to make, you don't feel held up by all these other little things that are preventing you from getting done whatever it is you're excited to get done.
But speaking of shortcuts, there is one that I just learned recently that's probably a pretty common one, but I haven't used it. I happened to stumble upon it. So if you're in your browser, you can open up a new tab, Command+T; I'm on a Mac. So you can do Command+T and open a new tab.
But there have been many times where I've accidentally closed a tab, or I've had a couple open, and I clicked the little close on the wrong one, and I'm like, no. And then I have to figure out where I was or go back to my history. But there's a shortcut for that, and it is Command+Shift+T, and that will reopen the tab that you accidentally closed. Didn't know that, learned that today. I'm going to lock that one away because that should be helpful.
CHRIS: The best way to lock it away is to explain it to others. So now that you've done that, this one's yours forever. Whereas everyone else hearing it, you got to try it a few times before it'll be yours forever, but, Steph, you're good now.
STEPH: Or spin up your own podcast and then share your keyboard shortcuts so that you can lock it away. [laughs]
CHRIS: I've observed that to be the easiest way to instill any learnings deeply within my brain.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph is excited to be headed on a retreat with her mom in the mountains, but before that, she details how she helped troubleshoot a production issue with her team and appreciated their process. She's also looking into tooling around spinning up more machines to process more RSpec tests.
Chris had a developer start their new job at Sagewell and highlights how they involved the new person in rectifying potentially missing and/or confusing existing documentation. He also has a gripe, and that is accounts. Handling too many accounts. Additionally, he talks about triaging an error and how it was tough initially to understand if something was actually broken. And then it was even harder to understand what was broken. So he paired through it and used the power of putting two heads together.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris, I am going on vacation next week, and I am so excited about that. It's going to be pretty much a week long. It's like a Tuesday through Friday ordeal. And it's a trip that I'm taking with my mom. So over the past year, she's gotten super serious about her health and nutrition and done a phenomenal job of being very focused on a plant-based diet, which is basically healthy vegan food is what that comes down to.
So there is a retreat that's taking place in the North Carolina Mountains that she's really excited about. I'm going to go with her. We're going to do lots of cooking, and hiking, and hanging out in the mountains, and it's going to be lovely.
CHRIS: Well, that does sound lovely.
STEPH: Yeah, it seems like a really perfect time to disconnect just because you're headed into the mountains. So all you should take with you are books and things that are not iPhones, and tablets, and computers, and screens. So I'm looking forward to that, just to be away from screens for the week.
On some more technical news, this past week, I helped troubleshoot a production issue, which was a bit novel for me because the work that Joël and I are doing with our current project is all in the testing realm. And so it was probably around 10 o'clock at night my time, and I got a ping on Slack. And it looked like I was getting called in for a production issue.
And I was like, I have touched zero production code. [laughs] So I'm very intrigued how I could have broken production at this point. And so I looked into it, and it turned out that it wasn't necessarily related to a commit that I had authored, but it was for a commit that I had reviewed and then approved. And so their strategy is they create a new channel. They'd gotten a ticket that an error was occurring.
And then the site reliability team created a new Slack channel, and then they pinged everybody who either authored, reviewed, or approved that change to be like, hey, we think the issue is related to this commit. Our plan is we'd like to roll it back. But before we do, we just want to check in with folks who have more knowledge to help us confirm that, yes, this error message seems related. And I really liked that approach.
I really like the idea that it's not just the person who merged the commit that then gets pinged on it, but it's like everybody else who happened to look at this and review it come help us too. So we spent some time looking into it, confirmed that yes, indeed, it was related to that particular commit. And then their team did the wonderful thing of then rolling it back. So then, it was no longer an escalated issue.
And so then I asked, "What else can I do to help?" And they said, "Well, from here, it's no longer a production issue. So tomorrow, just follow up with the author and let them know and issue a fix for the bug, and then merge it like normal." So we're back in that normal pull-request flow, very calm.
And overall, I just appreciated their process. I like very much how they pulled more people in because I think some of the other people that were involved weren't online, which makes sense because it was really late. So that way, you spread the request out in case some of those people really aren't available, and then hopefully you'll get lucky and one of those three or four people is available to help you troubleshoot.
CHRIS: That does sound like a really nice and thoughtful and intentional bug response, communication, procedure, rollback, et cetera. All of that sounds like it worked very well and is nice to have. And it's the sort of thing that a larger organization ideally gets to, having these sorts of processes. Spoiler alert, later in the episode, I will talk about the other side of it: being a very young organization and trying to be like, wait, is this a bug? Is this not a bug? Should we roll back? What do we do? That's actually my topic du jour.
But what you're describing sounds like the calm even in the case that there is a fire sort of like, yep, we've got procedures. We have workflows. We have communication channels and ways that even the exceptional things can be handled in an ideally as calm as possible way. So that's awesome that that's what you got to experience there.
STEPH: Yeah, getting called in at 10 o'clock is never fun for anybody. But when it happens, because it's going to happen, then I appreciate the thoughtfulness and that process that they put behind it. So it all went fairly smoothly. And it was also one of those fun things where I haven't met...like this is a very big organization, so I hadn't met any of those people.
So when I got pinged on it, and then I hopped in, I was like, hi, I don't know anything about this process and what y'all are doing, but I am here. I'm here to help. Where can I look? What can I do? So it was also a fun endeavor in that regard to just be like, I don't know what I'm doing, but I am here to help. Please let me know how I can help. And it ended up working pretty well. So yeah, that's been a fun adventure for this week. How about you? What's new in your world?
CHRIS: What is new in my world? Well, we had a developer start this week, which has been really wonderful. Unfortunately, we had scheduled their first day to be Monday, which was Presidents' Day, and that's a holiday. So we got out in front of that one and figured it out. We're like, no, no, actually, feel free to start on Tuesday. We'll not be around on Monday, so you shouldn't be around on Monday. But then, on Tuesday, they started.
And we intentionally structured things such that we have a contractor that has been working with us for like seven or eight months now. So it's been a long time, and the work with that contractor has been very formative as well. So this is their last week, and thus, we very purposefully brought the new person on the team and that contractor together to maximize the amount of pairing and overlap that we have there just to try and as intentionally as possible grab whatever is in their head, get another point of view.
This new individual on the team will be able to work with myself and the other full-time developer a bunch moving forward, so we want to maximize their overlap with the person who is on their way out. But otherwise, it's been great. We're a young organization, so the current version of onboarding is me running around setting up a lot of accounts, forgetting to set up other ones, getting pings in Slack, and then following up and setting up another account. Eventually, I hope that there are checklists and formalizations and, ideally, one-click SSO magic that makes all of that work. But for now, I'm happy to chase it down.
But really, we're just leveraging pairing as much as possible as the onboarding tool to make sure that where we don't have formalization, procedures, documentation, et cetera, as thoroughly built out as I would love to be at, we can shore that up with some time with other humans.
STEPH: That's awesome. It's always fun having someone new to join to highlight all the things you need to automate or at least have a checklist for to then help them onboard. But that's really exciting that you've got a new teammate.
CHRIS: Yeah, definitely very exciting. And they've been great. They've hit the ground running with a couple of pull requests already and are just contributing very effectively within their first couple of days. So that's always wonderful to see. We are definitely taking this moment to document what is undocumented, or update the README where it needs to be, and start to make that checklist. We have another person who will be starting in about two weeks' time. And so, ideally, that will be an even more fleshed-out process. So slowly, incrementally, we get a little bit better with each person we add.
STEPH: How much do you involve the new person in creating that documentation? Is that something that you ask them to help build, or is it something you take ownership of? What's that balance?
CHRIS: It's interesting. So definitely some of it I want to do with that person because I think the easy first PR can often be an update to the README, for like, oh, I tried to set up the app, and it did not work. For this reason, I have now updated the README, and now there's a pull request. And we get to experience that flow via the very low-stakes change of updating the README. So that's a definite one that I like to have.
The other is I'll typically ask for the individual to capture as much as possible. There's a very delicate line in my mind between empowering them and being like, yes, absolutely. We're young. We don't have everything documented. So feel free to make changes where that makes sense to you. But at the same time, I know that joining a new team can be complicated, can be intimidating in certain ways. You're not sure what's okay to change? What's not okay to change? That sort of thing.
So I simultaneously don't want to put the pressure on someone to be like, "Yeah, no, change anything you want. Literally, nothing is stable here. Nothing's glued to the ground. So feel free to pick up anything and throw it out the window." That feels too far in my mind. So I don't have an actual answer like, I'm ideally calibrated at this point. But it's sort of those two tensions that I'm holding in mind as I think about that.
STEPH: Well, I really like your answer. I like that balance because I think it's really nice to include the person in those changes and also just because they're going through it. So they happen to have that insight, and it's fresh. But I agree, when you're joining a job, you want some stability and confidence that the people that you are joining that team with are also working hard to make it a very positive onboarding experience.
And if you just were to push all of that responsibility on to them to be like, "Yeah, we know. We don't have this organized yet. So you tell us everything that we need to do," that would feel unkind to that new person. I think as a new person that I wouldn't fully enjoy that. I don't mind some of it, but I wouldn't want all of it. I'd have nervousness around ownership, around improving processes, and who that belongs with.
CHRIS: Sort of a classic case of it depends, or it's a little from Column A, a little from Column B, but definitely some, just hopefully not too much.
STEPH: The Goldilocks of onboarding, some onboarding responsibilities, but not all of them, just the right amount. [laughs]
CHRIS: Shifting gears slightly, though, I just want to gripe for a minute. I'm just going to gripe. This is not my normal mode, but I'm going to lean into it.
STEPH: Do it.
CHRIS: Accounts, just accounts. I have so many accounts now. There are so many across different systems, and I'm trying to do the good thing, which is let's stop using personal accounts for anything and only use organizational accounts for the things that are for work. And some organizations do a great job with this.
GitHub, I'm looking at you; really well done, super happy with the way that you folks have implemented accounts. You get that I am one human being that contains multitudes. I am my personal self; I am my work self. I am maybe even another version of work, and you get that. And you usually let me exist as all of those versions of myself and, man, do I appreciate that.
Heroku, you're okay. Like, it's all right. You treat the different facets of me as different accounts, but that's okay. You make it relatively easy to switch between. Although you do make me two-factor auth and re-login every single day, and I don't love that. So I don't know what's going on there, but fine.
Trello, aka Atlassian, I guess at this point, come on, what are we doing? What's going on here? So originally, I had started, and I had the one Trello account, and I had my personal boards. And then there was the Sagewell organizational account. And within that, there were some boards, and I would just bounce back and forth.
But I realized, no, I need to do the right thing. So I created a new Trello account. And now Atlassian just forces me to switch between them, and it loses the link that I'm going to often. It's a different login interstitial screen. And it constantly shows me that like, hey, you don't have access to this. Do you want to switch accounts? And I say yes. And then they take me to a screen where I can pick between two options, the one that I was that didn't have the ability to do it and another.
And as a developer, I know that the thing I'm about to say is not fair. But come on, folks, you could know the answer to this question. There are two, and one is the wrong answer, so the other one is probably the right answer. You don't need to autolog me into that; I get it. Just emphasize it because they almost look identical on the list.
I have now accidentally tried to request access with my secondary account to my other account, and I can't get out of that state. So now, one of the ways that I try and do this, it shows me a list of them to pick from. The other way, it says, "You have requested access. We're waiting to hear back." And I'm like, no. So anyway, that's a thing.
STEPH: So I know people can't see me. [laughs] So I'll narrate that I'm dying over here because I very much appreciate that we are positive people. We are very focused on bringing positive energy, but the descent into the amount of shade that you're throwing at different applications [laughter] just really made my day, and I feel that pain.
I have felt that pain with Atlassian and can relate. And we should have some gripe sessions. This feels healthy. This feels very...okay, well, I don't know for you. I'm the one that's laughing and getting joy out of this. I don't know if it's helpful for you, but it feels very cathartic to me. [laughs]
CHRIS: It is definitely somewhat cathartic. I think there's utility in having these sorts of conversations. And throwing shade at Atlassian, whatever, they're doing fine, so I'm not super worried about it. But generally, we try and keep things positive because I think that's, frankly, a more effective way to communicate.
But occasionally, it is useful to look at the things where I'm like; that is a pattern that I do not want to repeat. And I'm sure that there are complex organizational enterprise-y reasons that it has to be this way. But I can look at that and say never that. That experience as a user is like, wow, yeah, I just tripped over nine layers of your enterprise there just trying to do very simple day-to-day things for myself. So I want to avoid that. I've griped about that one login, not the company OneLogin.
But that one login page that I've experienced where I start to interact with the form, and suddenly some JWT handshake in the background happens, and I'm now logged in. And it just rips the page out from underneath me. That is unacceptable. That is not okay. And I really do think there's something worth occasionally looking at those and being like, well, not that. But anyway, I should probably stop my gripe session now.
STEPH: [laughs] Well, if I may join in, I have one that I'd like to share. Since we're on this --
CHRIS: Throw it on the pile. What else we got? [laughs]
STEPH: [laughs] So there was some code. There was a piece of code that I was looking at that was very not friendly. It was difficult to understand. It took a while to parse through what are they actually doing? What records are they creating? Why did they choose this manner? Why are we iterating over these particular numbers? What's the outcome here? And I was pairing with Joël and was going back and forth having a conversation trying to be the detectives of why this code exists, and we finally got there. And we finally understood what it's doing and why. And I just lost it for a minute once we finally got there. [laughs]
I just thought the way this code is written, it does not improve readability, and it doesn't improve performance. All it did was make my life harder because it was very difficult to read. So all they did was become really clever with the code that they were writing and essentially drying it up, and I have such a beef with DRY because it has caused me pain. And so they essentially were drying up their code, introducing a way to make it take up fewer lines, less vertical space. But overall, I was very grumpy about it.
And Joël was very kind about it and was like, "Well, this is the type of code I could see maybe why they did this." But you're right; it doesn't help with readability and performance. And he was helping balance out my grumpy goose moment. I've been having a lot this week; maybe it's just the week I'm in. I'm in more of a fiery mode this week [laughs] with some of the code that I'm seeing, and that was one of them. That was the please, please, please don't DRY up your code. If it doesn't improve readability or performance, there's just no need. There is no benefit.
CHRIS: Well, I definitely know that feeling. And I think I've probably, as a developer, gone through that arc where early on I was just trying to make stuff work, and then I learned how to be clever. And suddenly, being clever became a game that I could play.
And then, pretty early on, I realized I would come back to my own code from two weeks ago and be like, what the heck does this do? I have no idea. And that's when I was drawn to Ruby. That was one of the things. I'm like, oh, I can write code that looks so much like the clear words that I have in my head about the thing. I like that. And so much of my career has been spent in the let's make it obvious and revisitable.
I actually remember very clearly early on in my time at thoughtbot, I was working on something and was working on it with Joe Ferris, who is the CTO of thoughtbot and a very clever individual, and I mean that in the truly positive sense of the term, one of the most capable engineers I've ever worked with. He was describing an anecdote, but it was basically he'd put up a pull request. And someone replied, "Oh, that's clever."
And Joe's reaction was, "Oh, crap." Just taking that as not an insult but as someone saying, oh, that's clever in a positive way, and Joe hearing that in the negative form of I went too far here, or this is not obvious in its initial interpretation. That really stuck in my head from there, just his reaction to it immediately of that being not a good thing. And I was like, that is interesting. And all the more so over time, I've come to believe that clever is probably something to avoid in code.
STEPH: Yeah, agreed. I'm at the point that if I do see someone who's done something that I do think is clever in a positive way, I will still abstain from using that word clever because I do want to make sure they don't think that I'm saying in a bad way that this is clever, that it's not readable, and it's not friendly. So I totally avoid that word when I'm complimenting someone's code just to make sure there's no confusion.
CHRIS: It's one of those words that got away from us that we lost the definition of, and then we came back, yeah.
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: Let's see. In other news, you had mentioned this earlier, and then I had mentioned my side of it: errors, alerting, and all of those sorts of things. They're an interesting question. We had a small situation over the weekend that turned out to be kind of real, kind of not real. But I happened to be away on vacation. I did have my computer with me because, at this point, we're early enough that I'm like, I'm going to take my computer everywhere and just be ready in case it's necessary.
And in this case, I did get a ping. I looked into it and what was unfortunate is it wasn't immediately obvious if something was broken or not. And to a certain degree, that's always going to be kind of true. There's so much noise, so many requests hitting a web application. And how do you tell the good ones from the bad ones? And ideally, I could threshold around certain volumes of traffic, but even that's going to have spikes, and ebbs and flows and things like that.
So it was very hard initially to understand, is something actually broken? And then all the more so to understand what was broken. Thankfully, it was tractable. It was solvable. And we've done, I think, some good work, especially considering how early on we are, in how we've instrumented things, in Sentry in particular, and also somewhat in the logs.
But again, I think I've talked about this before, but I'm feeling this tension around there's data. There's data just kind of like, what happened? And right now, we've got logs. That's one of the places that goes to Sentry if it gets escalated up to that level. And we sort of have a weird Venn diagram between logs and Sentry. And then we also have analytics as another thing and then eventually data science, and what do we want to try and learn?
And all of these kind of want different facets of the data; it's not the same data set. But I wonder, is there a superset of data that we could then filter and slice and cut up, do all those sorts of things with? I think this is the dream of Honeycomb and platforms like that, but I'm not even certain if that's true. And so I'm in that awkward middle space is how I would describe it.
But in that particular case, I was able to resolve it. I did take away as an action that it's probably time to start thinking about PagerDuty anomaly detection, that sort of thing. When does alerting happen? When do engineers actually get called, not just during the normal nine-to-five of the workday? So I'll be investigating that in the coming weeks and see where we get to. But it's sort of the first thing that really pushed us in that direction.
The other thing I'll say is we have the idea of the point dev, which I've talked about on a couple of episodes. But the idea is for each week, one individual on the engineering team is in charge of the noise, for lack of a better term. They're looking at the error stream in Sentry. They're looking at any ad hoc requests that are coming from our admin team, et cetera, et cetera. And that's been really great.
But one thing that I've noticed is that dealing with the errors is particularly tricky, and what we did in this particular case was just to pair on that. As an individual, it is really hard, sometimes to reproduce, sometimes to just understand; these are the things you didn't expect in your code, and therefore they are, by definition, harder to understand, harder to think about.
And then sometimes you get to an understanding. You're like, ah, what do we do about that? Do we care? Do we not care? Is this just noise? Is this something we should solve? Is it something we should solve soon? Or is this something we can solve whenever we get to it in the backlog? And making that sort of determination is all the harder.
And so I'm increasingly of the mind that there should be some amount of time that is pairing on that error backlog to bring two heads together. I hadn't been thinking of it this way, but I've now come around to thinking this is a really great place for pairing because it's so hard for one individual to deal with that complexity to make the hard value judgments. And to do that, if each individual does that in a vacuum, then we have n different value systems at play that are hopefully very similar.
But if we start to pair up, then there's osmosis between those groupings. And ideally, we sort of coalesce towards a shared value structure around, like, what can we ignore? What should we snooze for a week? What should we put in the backlog? What should we prioritize and fix immediately? Because I think those are really hard things to otherwise...that's really hard to document, I would say. I would love to write up a page in the Wiki that says, "This is how you treat errors," except each error is a unique snowflake, and you just have to follow your values.
STEPH: I have been on teams where we've written up documentation that helps you triage an error because you're right; you can't write documentation around a specific error. But that I always found really helpful where it was like, here's all the links that you can look at, here are some recommendations. When we were working on an application that was falling over more often, there were some specific outlines around if you see this problem, then this is typically how you can solve it. And then we had to fix that at a larger scale, but it was a nice band-aid to get us through at that point.
I like the idea of pairing, especially as you mentioned; it's tricky. It's funny when you mentioned capturing those errors and putting them into the backlog because I like that idea that then you can prioritize and bring those into the sprint. It just made me feel a bit hesitant. If we don't work on it now, we're never going to work on it. But then that feels unfair to say because it really comes down to the team.
If you have a team that's going to be able to look at those errors and say, "Yes, we're going to bring them in and prioritize them," then that feels really good to then be able to say, "This is an error. Let's capture it. Let's provide some content around it. But it doesn't need to be addressed at this moment. It's still pretty low in terms of risk for users or at least low in impact for users." So yeah, I guess it just depends as long as the team feels good about being able to prioritize errors, which I feel confident that your team would be able to do. And if you can't, then y'all could reassess that plan.
CHRIS: That's why we definitely have that. We're revisiting the errors. They're part of the same backlog as everything else. So they're coming up in relative priority and getting worked on and getting resolved. But we're also shifting our thinking just a little bit to say, "We should take a little bit more time in the moment to try and resolve some of these where we can." I have the dream of there are just zero bugs ever. But that's hard, especially in different platforms.
And we're seeing a lot of mobile traffic from different older Android versions and so weird JavaScript edge cases and things like that. Like, why does your runtime not have Object? That feels like a thing every JavaScript runtime should have. But that's a joke. Every JavaScript runtime, I'm pretty sure, does have Object, but that sort of thing. It's like, whoa, this is weird and specific to this one device. Cool, those are fun. So yeah, giving a little bit more time to do those.
And again, so we definitely do have the document that describes here are the places to look and how to think about this category of error and this category of error. But at the end of the day, you get one that's just like, there's not a ton of detail in the error. It's hard to reproduce. It might be device-specific, et cetera. And so what do you do in that moment? And that's where we're trying to...I think pairing is a great way to share that thinking around the team. So overall, it's been great, though. I think everyone who has been involved has been like, "This was better than when I did it on my own," so cool.
STEPH: Awesome. That sounds great.
CHRIS: Yeah, I think so. This is one of those ever-evolving facets of how we work as a team and how we build the platform. So I will certainly report more in future episodes, but for now, happy with that. And yeah, what else is up in your world?
STEPH: Yeah. So we've been looking specifically into tooling around how we're going to spin up more machines to process more RSpec tests. So specifically, we have around 80,000 RSpec tests that we are processing, and we have one machine that is parallelizing those. That's just the RSpec portion of the build; there are other tests and things that get run that bring it up to a total of about 30 minutes.
But for the RSpec portion, I think it's probably around 20-ish minutes to process those 80,000 tests. So we split that across four different containers, and then we run those tests. And so we'd really like to spin up more machines to then process because we've reached the point that we have given as much power to that one machine as possible. So now we're looking to add more machines.
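For a sense of the mechanics, here is a minimal Ruby sketch of splitting spec files round-robin across parallel containers. The CI_NODE_INDEX and CI_NODE_TOTAL names are assumptions rather than TeamCity or Buildkite specifics, and real setups usually balance by recorded timing data instead of file count:

#!/usr/bin/env ruby
# Deterministically split the spec files across N containers and run this
# container's slice. Hypothetical env var names; most CI platforms expose
# an equivalent index/total pair for parallel jobs.
files = Dir["spec/**/*_spec.rb"].sort
index = Integer(ENV.fetch("CI_NODE_INDEX", "0"))
total = Integer(ENV.fetch("CI_NODE_TOTAL", "1"))

my_files = files.select.with_index { |_, i| i % total == index }

exec("bundle", "exec", "rspec", *my_files)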
And one of those solutions that we're looking at is using Buildkite, which is built with the idea that you can add these build steps so then you can more easily say, "All right, once we get to this particular build step, hey Buildkite, we'd like to run n number of machines to process all these tests." And that seems really nice. And it is something that we are interested in. It is actually what Shopify uses. They use Buildkite with ci-queue, which is built for minitest, which is what they use, and Redis to then run all of their tests.
But we are using TeamCity, so we're not using Buildkite. And we would like to see if we can grow with our current CI infrastructure versus having to move to a new one. There's a lot of just risk involved in moving to a new one. And so we've been studying hard if TeamCity will let us do this. And so far, the answer has been no.
But just recently, we found somewhere in the docs that it looks like there is a chance that with TeamCity, we can inform TeamCity that, hey, even though we have just this one build step, instead of only giving us one agent or one provisioned machine to then run these tests, instead that we actually want to spin up a couple of machines to then process these and then aggregate the results back to this one step. So we're looking into that.
But I wanted to throw this out there in case anybody else is also using TeamCity and has already invested in this particular approach. I would love to hear about it because we are currently figuring out the capabilities and if this is something that we can stay with our current infrastructure or if we're really going to have to look for a new solution.
CHRIS: Well, I'm hopeful that someone out there can give you some input. I definitely get the idea that you're stuck, and stuck is maybe too strong of a word. But if TeamCity is not ideal, the idea of moving off it does feel exceedingly heavy, and the riskiness that you talked about, that's, I think, a critical word here. Because I think it's easy to think of CI as just a very important thing, but it's absolutely critical as part of your deploy pipeline, I assume.
This is speaking generically about CI, and so it is, in fact, a critical piece of the infrastructure. If you've got a bug on production and suddenly CI is down, what do you do? I guess you can test locally and decide you're going to push past it, but then you have to circumvent it. And so I understand the intentional way that you're thinking about that and the risk associated.
I do wonder, though, if TeamCity has felt like not the right platform for a while and if there are considerations. Is there the possibility of both trying to improve the world that you have now, so it's not the big move off of it but then also in parallel start to work on an alternative implementation? This is perhaps not entirely fair, but it feels like a Rails application is this repository of code. And typically, CI is configured via a file.
And that's like, if you've got your teamcity.yaml or whatever it happens to be, could there also be a buildkite.yaml that is not on the critical path for deploying or anything like that? But it is a way to, frankly, somewhat inefficiently test on two different platforms but start to see if you can get the code moving on a different platform and be able to gradually build out and make that transition possible without it being one big swap over sort of thing, which eventually it would need to be. But just wondering, is that happening in parallel? Is that a possibility?
STEPH: I think the short answer is, I'm sure there is. There's a way to look at the existing system and then find ways that we can tweak it. But I also know that the team has already invested a lot into working with the current system and making it as efficient as possible. So I don't know if there are any big-impact but intermediary steps that we can take. We are definitely in that proof of concept world. So we're not going to move anything over for the rest of the team until we can really prove that something is working for a small subset and then start to expand from there.
But currently, our idea is to dig further in TeamCity, which I think also includes just a call to their team and say, "Hey, we'd love to talk to one of your engineers and see if the thing that we're trying to do if it's possible. Let us know if it's not and if we need to look elsewhere," which is intriguing to me because having a lot of tests isn't new. There are tons of companies that have lots of tests, and they want their CI test suite to be fast.
So for a company that has built software that helps teams execute these build steps, then the ability to say, "Hey, I want more machines to process. I want to give you more money, and you give us more machines, and we can process more things," I feel like that should be a thing. And I'm getting at the edges of my knowledge. This is why we're exploring all of this. But it has been surprising to me to realize that that doesn't seem as easy of a thing as I would have expected it to be.
There are also some other concerns around here where the client that we're working with if we're going to work with third-party vendors, then we have to get special approval to work with them. It's not just a hey, we can just go try it out. It's a lengthy contract process that we'd have to go through. So there are also some constraints that we have to keep in mind where we can't just work with anyone. We need to be careful to make sure that they're certified in a particular way.
So yes, I like your idea. I will definitely keep it in mind. But I don't know if there are any true intermediary steps yet other than the building out a proof of concept and then finding small ways that we could move over. Then I think that would be ideal for sure. And then hopefully, if there's anybody that's listening that has experience with TeamCity or Buildkite, that's the other tool that we're looking at using, let me know. I would love to chat about it and find out your experience. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris is helping with efforts to introduce security, practices, and policies at Sagewell. Right now, they are refining the usage of 1Password to standardize passwords and secure information. He also shares what he believes is a terrible idea for fixing inconsistencies between symbols and strings.
Steph shares an update around factories.
Also, at Sagewell, Chris is helping to build mobile apps, one for iOS and one for Android, and is considering whether to make them fully native. Good idea? Terrible idea? Chris and Steph riff on that a bit.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Weird stuff happens when we sing, Steph.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: Hello, Steph. What is new in my world? We are continuing with some of the efforts that we're doing to introduce security, and practices, and policies, and all those fun sorts of things at the organization. One of the things that this is pushing on is we are further refining our usage of 1Password at the company as a way to standardize passwords and secure information and how we store that, how we move it around, as well as integrating SSO, and all those other fun fancier things.
But I'm personally historically a LastPass user, and now I'm getting to experience 1Password. So now I'm a child of two worlds, and it's terrible, and I hate it. I hate every moment of this existence. So what I need to do is move over to 1Password, but now I'm in that space where I'm like, I can see the flaws of both systems. This is terrible. I don't like it. 1Password does seem to be great; I will say that. There's one really interesting thing about 1Password. I'm interested...you're a 1Password user, right?
STEPH: I'm not; I use LastPass. I'm also a child of two worlds because we use 1Password for thoughtbot stuff, but then I use LastPass for my stuff.
CHRIS: Gotcha. Okay, so you survive in the middle space. I'm slowly trying to move everything over because I think 1Password has a little bit more of what I'm going for. And I would like, frankly, to be in one cohesive, consistent space, although having two different accounts seems interesting. I definitely can handle it. But knowing which I'm in and how to save a password to one versus the other, it's a whole thing.
The one thing that I find really interesting though is 1Password has a feature where it will do two-factor, two-factor authentication. It will do that for you. Specifically, it's doing, as far as I can tell, the TOTP. I don't know what that acronym stands for, but it's the fancy type of two-factor, so not SMS, not text message-based, and not others like WebAuthn is a thing that I've heard of, which I don't know if that is distinct from YubiKey or hardware keys. So there's a bunch I'm trying to learn about this space a little bit more.
I'm very interested in the hardware keys because those seem cool. WebAuthn seems like a new standard. That sounds cool. Don't know anything about it, though. So mostly, I know about SMS, and I do not like that one. I do not want to use text messages because, as far as I understand it, they're not super secure. So that's not the space I want to be in. But the TOTP, the Google Authenticator, or Authy, or that space of password or two-factor code generation tools those seem good.
And 1Password has a feature where they're like, hey, yeah, sure, we'll have your password and your two-factor. And so they grab the QR code, which, as far as I understand it, is typically a way to share the seed. And then, that seed is used by an algorithm to generate the current code value for a given point in time. So it's like, given that seed and the current timestamp, we will generate you the relevant code, which can then be verified on the far side. But that code only exists for one moment in time, et cetera, et cetera.
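For illustration, here is a minimal Ruby sketch of the TOTP scheme being described, roughly following RFC 6238. It's a simplified sketch of the algorithm, not a claim about how 1Password or Authy implement it, and it assumes the shared seed has already been decoded from its usual base32 form into raw bytes:

require "openssl"

# Given the shared seed (what the QR code carries) and the current time,
# derive the six-digit code for the current 30-second window.
def totp(seed, at: Time.now, digits: 6, step: 30)
  counter = at.to_i / step
  hmac = OpenSSL::HMAC.digest("SHA1", seed, [counter].pack("Q>"))
  offset = hmac[-1].ord & 0x0f                        # dynamic truncation
  code = (hmac[offset, 4].unpack1("N") & 0x7fffffff) % (10**digits)
  code.to_s.rjust(digits, "0")
end

Both sides hold the same long-lived seed, so the server can run the same computation and compare; only the derived code is short-lived.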
But I've always thought of it as this separate thing. The idea of having that all in one system is interesting and kind of scary to me. But as I think about it, I'm like, if 1Password or LastPass, in either case, gets compromised, we're all done. Like, this is over. We should throw in our cards, give away the internet. This whole experiment has failed is my sense. But it was very interesting because I had not seen this. I've always had these as separate systems.
So for me, I have had LastPass, and I have Authy on my phone for the two-factor. But it's frankly very clunky, and I don't like it. And the 1Password thing is fantastic where I say like, yeah, 1Password, fill in my password and username, and then also fill in my two-factor because you have it. This is great. But, and this is where I hesitate, and I don't know, I will say this: I trust that 1Password has thought about this deeply way more than I can and have come to a place of deep confidence that this is a fine and okay thing to do. But I'm still intrigued. What's going on here?
STEPH: That was a lot. I have so many thoughts. [laughs]
CHRIS: Sorry, that was a lot of words, a lot of ideas, a lot of space there. It's just where I'm at.
STEPH: People couldn't hear me, but I was laughing when you were talking about LastPass or if these accounts get hacked in. And I'm imagining someone who uses the combination of their cat's name and their birthday as their password and then like, aha, I win. [laughs] It's like, no, we just all lose. [laughs] But that amused me.
Going back, you talked about having it all in one place. And that actually doesn't surprise me that we're different in this area. Because you also like all of your email...you like one source of everything, which makes so much sense, but I'm different. And with these accounts, I like that I have the distinction between all of thoughtbot is in 1Password while all of mine is in LastPass because it's just a very clear delineation between those two accounts. And I'm sure both of these platforms have figured out a really good way to then separate those two.
But I just remembered there was someone at thoughtbot that accidentally...because they have everything in 1Password, they accidentally shared their personal vault with a client. And so they were just typing in Slack. They're like, "Oh, shit, oh shit, like, how do I undo this?" And we're all just watching like, "We don't know. But please let us know how it turns out." [laughs] It turned out fine. I think they actually realized they hadn't fully shared it but based on the UI they thought that they had. So it all turned out okay. So that just lives with me. I'm a little scared of that now now that I know that story. So watch out, friends.
CHRIS: Oh, wow. Well, now, yeah, I'm also now scared of that. I wasn't, but now I am.
STEPH: And I forgot the other thoughts now. Those were my two main thoughts based on the journey that you've shared.
CHRIS: Particular to the thing you were sharing there, yes, now I will have nightmares about it. But also, it feels manageable because they're both entirely different accounts, and then also within that, there are different vaults. So as I'm building up the password infrastructure at Sagewell, there's going to be different...like, the dev team will probably have one vault and then a shared vault for the dev team. And then other teams within the organization will have that. And so it feels like there are at least structures within the tool to manage that.
But mostly, my consideration is around the two-factor thing. And like, is this reasonable to do? And again, I'm sure 1Password has thought way harder than I have about it. And I trust that they're like, yeah, this seems fine that they're not just like, I don't know, it doesn't seem bad. They're like, no, no, definitively for information-theoretic reasons, this is fine. But it was surprising.
STEPH: That was it. The other comment that you made about two-factor auth that resonated with me because there was a point not that long ago where we have one of those, either New Relic or I forget which account it was, but it was with the systems. We really only needed one person to have access, but every now and then, someone else may need to access that account. And so we wanted to be able to store it in 1Password or LastPass somewhere like that.
But then the two-factor auth was a problem because then you had to coordinate with that other person to say, "Hey, I just need to check something. Would you let me in?" And because we could then leverage that feature, then we could just store all of it. And then that person could just go to 1Password or LastPass and then have access to all of it, and that was really nice. That was a very nice solution to I want to say it was a small problem but yet also very important for team happiness. So that was really nice.
CHRIS: The amount of times that I've been like, "I just tried to sign in to the shared account, and it says that it sent a two-factor request to somebody's phone, but it didn't tell me whose phone. And I'm not sure if we know who that person is or if that person's still around," that version of the story feels true. And so, the idea of being able to centralize two-factor seems great. It almost feels too good to be true, is perhaps where I'm at. I am putting on my tinfoil hat, and I'm saying, yeah, but oh man, security, though.
And again, I will 100% defer to 1Password on this. They've thought about it. But it's mostly I want to get to the place where I understand the thought process that they went through to decide that this is perfectly fine because they definitely did that work. I'm certain of that. I just want to read a white paper or something, and I haven't found it yet. [laughs] I'm like, let me get to that deep place of trust because that's what I want to be at with security tooling and those sorts of things.
STEPH: Yeah, I haven't looked for something like that, but that sounds...I'm kind of surprised that doesn't exist.
CHRIS: Oh, it quite possibly exists. I haven't done much of a search, frankly, at all. Mostly, I'm in the space of like, huh, that's weird and then moving on with my day. Because there's not a lot of free time to go search for the white papers on the internet. But yeah, so moving from 1Password or LastPass or 1Password, or maybe I'll just end up with both for a while. I really hope I don't end up in that space, although you're describing it as a positive, so maybe I will.
STEPH: I have found it helpful for me. When you find that white paper, because you are more likely the type of person to read that white paper than I am the type of person to read it, then I would love a summary. That would be much appreciated.
CHRIS: I'm so intrigued by the persona that you're describing of me of; like, you're the kind of person who would read a white paper. I'm like, well, I don't know if that feels true or if it's definitely true or definitely not true. But if I do happen to find it, and especially if I happen to read it, [laughs], I will share it with you and perhaps with the listeners as well.
Let's see, one other small thing. I have a bad idea. I don't want to share the bad idea with you. I want to more share it with the audience, and then I want the audience to tell me exactly how bad of an idea it is.
STEPH: [laughs]
CHRIS: Because I'm sure it's a bad idea. I'm just not sure how bad.
STEPH: I love that there's not even a scale of goodness here. It's just nope, this is terrible, but I don't know how terrible it is. [laughs]
CHRIS: What's fun is in the later parts of this episode, we're going to go into a segment of good idea, bad idea, sorry, good idea, terrible idea because I like that framing. No, this one is firmly bad idea, but how bad is the question. So we're working on the app, and we keep running into inconsistencies around symbols and strings.
As any Rubyist who has worked in the language for any amount of time, especially in a Rails app, you have experienced this unpleasantness. There are strings; there are symbols. They're often used somewhat interchangeably, and yet they're different. You’ll hit bugs. You'll hit edge cases. You'll hit nils that you didn't expect to be there because you tried to fetch a symbol. It, in fact, was a string, et cetera.
So, what if we just applied HashWithIndifferentAccess everywhere, just deep in the internals of the app or in the Ruby runtime? What if we were to just turn this on? My sense is this would be terrible for performance reasons. My understanding is that's why symbols exist: because they are a more performant mechanism. Strings are complicated within the object model of Ruby because they're mutable. These are things that I understand very loosely, as you can tell by the tone of voice that I'm using. But symbols and strings, they're separate. They're separate for reasons, performance being, I believe, the main reason.
But what if we were to just say, well, what if it could be like easy, though? That's what I want. Like, this is the promise of Ruby is that I want to express my code in a way that feels like the words I would use to describe to another human. That's the way I always think of Ruby is it's as close to the words I would use to describe the sort of business logic as possible. And yet these symbols versus strings thing it's just annoying, frankly. And again, I think very good reasons for it, I'm sure.
But what if we were to just do the silly thing and turn on HashWithIndifferentAccess for everything? I don't even know that that's fundamentally possible. I don't know that there's the relevant hook or the way to do that. But I would love that because we're using it somewhat regularly throughout our app right now, where we're getting data from one API. And in our test suite, it's one way, and in our code, it's the other way. And granted, that speaks to us being inconsistent in our usage. But overall, I would just love for this to not be a thing.
And so, how bad of an idea would it be? How much of a performance hit? That's my guess as to what it would be. Maybe there's actual fundamental correctness that would go wrong here. But my sense is by collapsing the space together; we would actually get more correct. I don't know. Anyway, how bad do you think of an idea this is?
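For anyone who hasn't reached for it, here is a quick sketch of what HashWithIndifferentAccess buys you and roughly where the cost Chris is guessing at would come from:

require "active_support"
require "active_support/core_ext/hash/indifferent_access"

plain = { "name" => "Steph" }
plain[:name]        # => nil -- the string/symbol mismatch being described

indifferent = plain.with_indifferent_access
indifferent[:name]  # => "Steph"
indifferent["name"] # => "Steph"

# The convenience isn't free: keys are converted to strings on the way in,
# and nested hashes get wrapped as well, which is extra allocation and work
# if it were applied to every hash in the runtime.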
STEPH: I was thinking through some of the bugs that you're running into. And I think you provided some nice insight around that around it's the fact that you're fetching data from API. So it's typically you're parsing. That's how you're getting the string and symbol differences is because when you're parsing JSON and then you have a mixed case of maybe you have a symbol, maybe you have a string, or maybe you're parsing it differently. Are there other places in the application where that's a concern?
CHRIS: I want to say one other place that we're running into it specifically is we're using a lot of enums, particularly ActiveRecord::PGEnum backed enums. So these are Postgres enums at the database level. And then, within our Rails models, we define them as enums. And the enum is typically defined within the model as a mapping of symbol to string. It could be symbol to symbol. I'm not even sure. I think this might be in terms of our implementation.
But you say like, it's an enum. The key is foobar with an underscore, and it's a symbol, and then the value is foobar, but it's a string. And maybe both the key and the value could be symbols; maybe that's a thing, maybe this is our fault. But certain times, when you're interacting with the value, it's a symbol. Certain times I find it to be a string. I feel like that's true. I don't think I'm making that up. [laughs] It's possible I'm making it up.
But that's another place where I feel that inconsistency or other values within the system that like as they go through certain type coercion layers, they'll start as a symbol, and then they get saved to the database, and then they get reflected back, and they come back as a string. And it's like, well, that's unfortunate. It was a symbol a minute ago, and now it's a string. And so our tests suddenly break in this way, or our code is inconsistent. And it's enough of a nuisance that I had the bad idea the other day. And so, I wanted to bring the bad idea to this space.
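A hypothetical model shows the kind of inconsistency Chris means; the model, column, and values here are invented for illustration:

# Hypothetical model whose status column is backed by a Postgres enum type.
class Order < ApplicationRecord
  enum status: { pending: "pending", shipped: "shipped" }
end

order = Order.new(status: :pending)
order.status              # => "pending" (a String, even though we assigned a Symbol)
order.status == :pending  # => false -- the surprise that breaks tests
order.pending?            # => true  -- the generated predicates sidestep it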
STEPH: I think you're right. I think the main reasoning for not having everything just be strings is for looking for that performance benefit. And so then using that HashWithIndifferentAccess then you'd have to loop over everything and then convert it. So I imagine, like you said, there would be a performance hit there. I don't know how bad of an idea it would be.
But when you said this, it brought up a memory because I remember someone proposing or the Ruby community talking about the fact, like, what if we didn't have strings? What if everything was just a symbol? Or can we just have one over the other? And there is a ruby-lang issue; it is 7792. And we shall also put it in the show notes and send it to you. [chuckles] And this person is proposing make symbols and strings the same thing.
And then some people call out specifically the idea of using HashWithIndifferentAccess and saying, yes, that works wonderfully, but then you are going to have a performance hit for it. So it sounds exactly like everything you're saying. I don't know the outcome. I mean, clearly, the outcome is we're not there. But it seems like a really good place to see the reasoning or different approaches that maybe people have tried in this space.
CHRIS: Ooh, I love that. I definitely want to read that and see what sort of deeper thinking folks have done on this. Because again, this feels like another one where definitely folks have thought about this, folks who know more about it and have chosen the current path that we're on for reasons. But I would be really intrigued if I could be like, yeah, I would just like it to be easy to start, and then have the performance optimization be something that I could opt into. Again, that's probably not tractable within the language.
Like, oh, we have a hot code path here that we want to actually have immutable symbols only. And that's the sort of thing if we've done this HashWithIndifferentAccess everywhere, you can't back out of it. And so, therefore, you're stuck in a performance low point. That feels like a bad case. And so maybe that's the reason is like, you will shoot yourself in the foot with this definitely.
But yeah, I'm intrigued. So I will definitely read what you're sharing here. And we'll include it in the show notes, of course. I'm probably not going to do this, just saying that out loud because it seems like a bad idea. I just want to know how bad of an idea.
STEPH: I do love it, for when I'm building a class that's working specifically closely with an API, I do reach for HashWithIndifferentAccess frequently. Because like you said, I just don't want to worry about it. I want to set it up top. It's one of the rare times that I actually will use something in an initializer where I'm like, hey, pass in the data. I'm just going to run it through this method. And then all the data from here on forward you can access it in either way. So the class doesn't have to care; a tester doesn't have to care. So I do feel your pain, or I at least will always reach for it whenever I'm building a class specifically around interactions with JSON.
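The pattern Steph describes looks roughly like this; the class name and payload are hypothetical:

class PaymentResponse
  # Wrap a parsed JSON payload once, up front, so callers and tests can use
  # string or symbol keys interchangeably from here on.
  def initialize(payload)
    @data = payload.to_h.with_indifferent_access
  end

  def amount
    @data[:amount]
  end

  def status
    @data["status"]
  end
end

PaymentResponse.new("amount" => 100, status: "paid").amount # => 100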
CHRIS: So for a segment that I framed as how terrible of an idea this is, you're like, hmm, I don't know how terrible. That seems to be your take, which is interesting.
STEPH: Good point. Let me assess for a moment. I'm going to go just from skimming this issue, although I think partially this issue is talking about the fact that if you merge symbols and strings, it's like, hey, friend, you're going to break a ton of stuff and break a bunch of libraries, and these two things do serve a purpose. So this may not be exactly what you're looking for, but it has some interesting conversation on there.
But embedding it deep down in the app so that just happens naturally sounds like it's just a performance concern. So yeah, it comes down to what is the question? How big is the performance? So I feel like I can't say it's a terrible idea until I actually know what the performance hit is.
CHRIS: So, plausible. That's the category we're going to put this in. [laughter]
STEPH: Plausibly terrible, but still worth researching.
CHRIS: Not obviously not terrible. But anyway, these are some of the ideas at the top of my head right now. That's a rough summary of my week.
Mid-roll Ad
Hey, friends, let's take a quick break to hear from today's sponsor, New Relic.
All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem.
So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy.
That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software.
So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's newrelic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed.
STEPH: I have an update that I can share around factories because the last time we were chatting, I was sharing that strategy that we're pursuing where we're trying to minimize factories and then speed up the CI time by reducing the work that those factories are doing. So Joël Quenneville has done some phenomenal work this past week, specifically improving factories. And he found one particular factory that he was digging into.
So some stats before the change. The factory was taking around two seconds, which I know on paper doesn't sound so bad, but it gets more interesting. So total database time is around 1,000 milliseconds. And 833 total database queries were being made, which includes reads, creates, and updates. So then after, Joël was diving into this looking mainly to reduce the number of database queries because that's such a big number.
So after the change, which took a lot of research on Joël's part, the factory is now taking around one second, so half of that time. The total database time is around 666 milliseconds. And the total database queries went from 833 down to 647, so a nice improvement there. But the real wonderful outcome of the story is not just those stats, but okay, so how did we impact CI? So we spent time working on this factory. And we have reduced, and we can see some of that in the stats. But how does that apply to the bigger picture?
And so Joël took the time of the last 20 successful builds, and based on those builds, we average 27 minutes and 37 seconds for each build. With the factory change that he made, that same test suite was now averaging 21 minutes and 33 seconds. So shaved off six minutes from the build time, which is about a 22% decrease in the build time which is just fabulous. So that was a really nice win from all the work that had been invested in improving that one factory.
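Not necessarily what was done in this case, but as a hedged illustration, the kind of edit that cuts factory database queries often looks like this (model names invented):

# Before: every invoice quietly creates its own customer and billing account,
# so one create(:invoice) cascades into many INSERTs and SELECTs.
FactoryBot.define do
  factory :invoice do
    association :customer
    association :billing_account
  end
end

# After: skip the database when the test doesn't need persistence, and reuse
# one associated record instead of creating a fresh one per invoice.
invoice  = FactoryBot.build_stubbed(:invoice)             # no INSERTs at all
customer = FactoryBot.create(:customer)
FactoryBot.create_list(:invoice, 3, customer: customer)   # one customer, not three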
CHRIS: That's a heck of a haircut there so glad to see that the efforts are paying off.
STEPH: Yeah, it was a really nice win to see that we had researched which factories we should pursue, and then we were methodical about that. And then Joël worked hard to improve this factory and saw such a large payoff. It's one of those areas where the team has already invested a lot of effort and hours into improving the test suite. And it's challenging when you have so many areas that you'd like to improve and 100-plus engineers also contributing to that same codebase. So how do you improve and keep up with it all at once?
They had spent about a year, so I think they were recognizing that yes, there are still a lot of areas to improve but also felt like small efforts wouldn't move the needle. So it was a nice data point to remind ourselves that we can still reduce the CI build time in a significant way. We just need to be very strategic about where we invest our time in those improvements.
There is also an interesting conversation that Joël and I were having because we have a daily sync with each other each day. We've now been embedded with a team with a client, which is wonderful, but before then, we were also chatting with each other. And we like to chat about code, so we've had lots of fun conversations around code. And one, in particular, this week, came up about how people view code differently. And there's even a tweet that Joël shared that I can link to in the show notes.
And there's one view that code is a liability, and if a line can't justify its existence, then it should be deleted. And then there's another view that code is an asset. If a line isn't causing any immediate issues, then why not keep it? And part of the reason that came up was while I was going through and reading pull requests, there was a particular change where someone was memoizing an expensive call, which was great, something that we wanted to do.
But then they were also memoizing a very fast operation in two other places where it was just like parsing some params something that, you know, superfast and only getting called in maybe two places. And it was one of those that just caught my attention to be like, hey, I love that you memoized this other call, but this one, I don't think we need the additional overhead or complexity of adding memoization.
And I found myself when I was writing that suggestion for the author that I was already looking for more than just to say, like, hey, this is more than we need. Because I've realized that often I take that stance of code is a liability. So if we don't need it, let's just get rid of it. But I've definitely run into other people where they're like, well, it's not hurting anything, so why can't I just leave it? And getting that kind of pushback on suggestions about removing code.
So it was a fun opportunity to think through okay, well, why is this memoization not just unnecessary, but how could it actually cause us problems? And what's the cost of keeping it in, not just the cost of removing it but also the cost of keeping it in? And that was fun to talk about.
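For context, the shape of the review comment is something like this; the class and method names are hypothetical:

class ReportPresenter
  # Worth memoizing: an expensive query we don't want to run twice per request.
  def account_totals
    @account_totals ||= AccountTotalsQuery.new(account).call
  end

  # Probably not worth memoizing: parsing two params is already fast, and the
  # cached instance variable is one more thing that can silently go stale.
  def requested_range
    Date.parse(params[:start_date])..Date.parse(params[:end_date])
  end
end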
CHRIS: I'm so glad you're bringing this particular conversation up because if we're being honest, I saw Joël tweeted about this. I saw it. I sent an email to myself linking to the tweet with the subject of the email being ahhhh, just A-H-H-H-H, which I believe was me being like, oh my God, we got to talk about this. I apparently didn't want to write all of those words, so I just wrote ahhhh.
But as a handful of asides, one, if you're not following Joël Quenneville on Twitter, @joelquen, that is a mistake, because Joël is one of the clearest, most concise, and effective thinkers about code that I've ever seen. The writing that Joël produces is absolutely fantastic. And having worked with Joël for forever, I still will look at his Twitter feed and be like, well, this is fantastic. You're saying amazing things that I have not heard you say. So, again, strongest recommendation I can make; please follow Joël on Twitter and also via the Giant Robots blog and all of those other places.
But in particular, I saw this one come through, and I was like, oh, man, we have to talk about this. So I actually have it up in my email app right now behind the scenes. [laughs] I was like, oh, I want to mention this to you, Steph. So I'm very excited that you're bringing it up in this moment. It is such an interesting thing. It's such an interesting case of like; I deeply believe both of these truths, and yet they do seem to be in contradiction. And so what do we do with that?
More generally, I feel like that's true of a lot of stuff in life, like, the ability to hold two competing ideas in your head and be able to know where one applies and where one doesn't. That is a critical thing to get to in life and to figure out how to do, and that's some of the hard work of thinking. But in particular, this one, the idea that code is a liability. You have a line of code...I'm going to read it precisely as Joël wrote it, "Code is a liability. If a line can't justify its existence, it should be deleted. Code is an asset. If a line isn't causing any immediate issues, why not keep it?"
And I think for me, if I were to try and interpret this, because I do believe both of those sides, I would apply one during code review. When code is coming into the application or when I'm writing code, do I need this? Do we need this? Is this necessary? Because it really should be necessary to come into the app. But then once something has made it in, especially the longer something's been in there, I think code sort of ages and matures. And so, the longer it's been part of the app and not causing an issue, the more I am liable to just leave it at rest. Just say, sure, or not at rest but as part of the runtime production code.
But these are two competing ideas, but I think they apply at different times in the conversation. And so I'm definitely on memoization. In particular, memoization is a form of caching. Caching I have run into a handful of caching bugs in my life, let me tell you. I'll probably run into a few more. So if we can avoid caching, let's do that. So that's a particular question around that thing. But again, that idea of like the point in time to have that conversation is during code review or initial authoring or when it's about to come into the app.
But if we've had some memoization in the app for forever and you're like, do we need this memoization? I don't know, but don't remove it because maybe it's very important at this point. Maybe it's one of the cornerstones holding up our application. So that's a bunch of thoughts about that. But also super glad that you brought this up because I was very excited about this particular tweet.
STEPH: Yeah, there's someone that said something very similar to what you just said around they agree with number one for all new code. And they agree with number two, where code is an asset for refactoring. And I thought, yep, that's a great way to look at it. And I hadn't really thought about that specific perspective. And so it was one of those moments. Because I do like when people will push back on something that I so firmly believe on, not that this person did. I was, frankly, having a conversation with myself based on previous conversations with other pull requests authors that I've had that it's not related to this particular pull request.
But in general, when people do push back on something that I do have such a firm belief in...and early eager optimization around memoization is something that I'm just like, I don't want to do it, especially for something that's so cheap and in such a fast execution and something that we're only calling twice. There's no benefit to it at that point. But then when someone says, "Well, but it's not hurting anything," then I appreciate that question because then it's more of not just pushback, but it's sort of well, tell me more. What is the pain that I'm introducing by keeping this in?
And then that can be a really nice conversation to have with someone around; like you just said, I've seen caching bugs, and this could be a caching bug, and they are painful to then triage. And so we've introduced this optimization, but it's actually just going to cause us debugging pain later. And we really didn't even get the reward from it in the first place. So I really like those conversations when I feel like there's a little bit of a challenge of where I'm like, oh, I hold this as a deep truth, and somebody doesn't, and I would like to have that conversation with them.
There are also some other fun conversations; one was around introducing a query object, which, as you know, we're both really big fans of. And then there was another great question because not everybody who works on this team is really familiar with Ruby and RSpec. They work in Scala, but then sometimes they hop over to the Ruby side. And so then they hop into the Ruby channels, and they're asking questions. And one of them was around the idea of introducing an RSpec Matcher. And they're like, "Am I doing this right? Is this how you would extract something to then improve your test?"
And so that was a really fun conversation around like, yes, you did it right. This is exactly how you write a Matcher. But let's talk about use cases because extracting something to an RSpec Matcher to me means it meets the most generalized sense of usefulness that you want the whole team to use this and that you're willing to put in the extra overhead to then introduce this essentially like new RSpec DSL for the rest of the team to use and then maintain that. So it is the most aggressive step that I take when I'm trying to introduce a helpful tool.
So then I shared my progression for when I'm extracting something for a test. And first, I will start with just a local method to that test because then it's scoped to just that test. And from there, then I will think about extracting to a shared helper. So maybe it's a module that can get included. But then its scope can still be confined to a couple of tests, but then we've also increased some of its observability.
So then other developers will notice it and be able to share with it. And then from there, if I'm like, oh, this is super generic, it is testing time, and it's something that everybody is going to benefit from, then I reach for something like an RSpec Matcher or introducing a custom RSpec Matcher. So lots of fun testing conversations this week.
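To make the last rung of that hierarchy concrete, a custom matcher might look like the following; the matcher name and behavior are invented for illustration:

# spec/support/matchers/have_flash_message.rb
RSpec::Matchers.define :have_flash_message do |expected|
  match do |page|
    page.has_css?(".flash", text: expected)
  end

  failure_message do |page|
    "expected page to show a flash message containing #{expected.inspect}"
  end
end

# Usage in a feature spec, available to the whole team once it's loaded:
#   expect(page).to have_flash_message("Account created")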
CHRIS: That was a wonderful hierarchy. I like that a lot. I feel like that would make a good blog post.
STEPH: There are some things that I realize that I just think of inherently about that I realize that would be fun to share. I'm much better at podcasting than I am at blog posting. [laughs]
CHRIS: There's this friend I know, Joël Quenneville, very good at the blogging. He could probably help talk you through writing this up as a quick blog post. But you just described this heuristic hierarchy that you have. And you could probably provide quick examples of each, and I think encapsulate that knowledge. I, too, default to podcasting because it's easy for me to just say stuff here, and then it's there it is.
But what you just said also mirrors exactly what I would think of as sort of the hierarchy and the reasons you're like, I'm not sure I'd go all the way to an RSpec Matcher. That hesitation is meaningful and comes from experience that you've had. And again, that seems sort of a trade-off of like, well, why not? Is it hurting anyone? What's the cost here? You know that cost. You have that in your head. And so now if you can capture...I don't want to put work on your plate. But I think that would be a great blog post. I would be happy to read that blog post and share it with other folks.
STEPH: Cool, cool. Cool. So I totally hear you. So here's my hierarchy. Typically, I start with a podcast, and then I share it there. And then maybe it'll go to a tweet. And then once I'm like, okay, this is super generic, it can help everybody, then we've reached blog post status.
CHRIS: I love how tweet is higher in the hierarchy than a podcast for you. That somehow the throw away let me just have 140 characters or 280, or whatever we're at these days, that somehow that's next in your hierarchy. But I agree; I share that place in the world.
STEPH: Yeah, just writing is hard. Here I get to show up, and I say things. And then we have wonderful Mandy, who is then editing all of our words, so there's a safety net here. If it's just me and a keyboard, who knows what's going to happen?
CHRIS: Then you'll probably think about the switches that you're using on the keyboard. And do you need a new keyboard? Should it be silent? What do we do?
STEPH: I was thinking more how many exclamation marks do you use? That's always a question.
CHRIS: Not too many, not too few. It's a difficult question.
STEPH: [laughs]
Mid-roll Ad
Hi, friends, and now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive user interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Pivoting just a bit, [laughs] what else is going on in your world?
CHRIS: What else is going on in my world? So we are building out a whole platform over here at Sagewell, and one of the things that we need to build is a mobile app or, frankly, two mobile apps, one for iOS and one for Android. And I'll be honest; I resisted this for a while. I am a big, big believer in the web as a platform like deeply in my heart of hearts. That's the place that I want to spend my time. That's the thing that I believe in.
And there are absolutely cases where truly native mobile apps shine, completely outshine what we can do on the web platform sometimes for reasons that are, I think, not great, limitations of the available mobile web platforms, et cetera, reasons that I'll slam my fist on the table or whatever it is.
But there are plenty of really great mobile experiences, offline, et cetera, that we just can't...offline is not even a great example. See, I can't even find a great example. There are definitely things, though, where truly native mobile apps are 100% superior. But again, I'm such a big fan of the web platform that that's what I wanted to do. I wanted to hold on to this dream of, like, what if we just make a really great web app and it's just great?
And then consistently, our backend is one singular thing. Our frontend is kind of one singular thing. And yeah, we got to deal with responsive design. But that's to me a much more tractable problem than fracturing our entire application architecture across a bunch of different platforms and having all of the logic of our domain splintered and especially depending on how you implement it. That's sort of a big question.
I've talked a ton about Inertia.js on this podcast, and that's because I believe it's a really great example as to how to pull some of the logic back to the server-side, which, in my experience, that's where I want the logic to be implemented, our deep domain logic. I just want that to be on my server in a Rails controller, or a Rails model, or a command object, or any of those sorts of things, query objects, all of these wonderful things but server-side that's centralized in one space.
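Assuming the inertia_rails gem's render helper, the keep-the-logic-server-side style Chris is describing looks roughly like this; the controller, models, and props are hypothetical:

class DashboardsController < ApplicationController
  def show
    # The domain logic stays in Rails; Inertia hands the result to the
    # client-side Dashboard component as props.
    render inertia: "Dashboard", props: {
      balance: current_user.account.balance,
      recent_transactions: current_user.transactions.recent.as_json
    }
  end
end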
Nonetheless, though, we had to build a mobile app. These are the truths of the world. Sometimes it just comes down to the expectation of your user base. And there are certain things that by building a mobile app we will get so, for instance, in our case, having biometric login, so fingerprint, or facial ID, or any of those sorts of things. Those are actually material security differences. They are actually, as far as I can tell, available on the web but not consistently on every browser, et cetera. So that's something that we can get by having our app as a native app.
Push notifications are another one that certain platforms, certain web platforms have dragged their feet on, Apple Safari. iOS Safari, specifically, I'm looking at you, but that's an example of something that by going the truly native route, we'll get that. Similarly, access to some of the lower-level things, cameras, et cetera, that is something that we'll get a better experience of. And again, you can hear in my voice I don't want to really cede it to the native platform, but it is true right now, at a minimum.
So we had a decision to make as to how we would implement these applications, and we went with an interesting route. So for anyone that's familiar with Turbolinks native, or I believe Turbo iOS is pretty similar. But I'm more familiar with Turbolinks native as there was a talk I Can't Believe It's Not Native I think is the name of the talk that was given a while back talking about the Turbolinks native architecture.
So basically, what's happening under the hood is let's still render these things server-side. Let's send down some HTML. In our case, it's a weird sort of hybrid of HTML and not HTML. But broadly, let's say that the server is rendering things. And our native application is going to then be a native shell that wraps around WebViews. But it does so in not just a single WebView sort of way. It's instead trying to find that optimum hybrid spot where let's do native things where they make sense.
So, for instance, we have introduced a tab bar at the bottom of our application that is a truly native UI. We similarly have push notifications, biometric login, et cetera. Those are features of the native platform that we're using. But then, for most of the screens, most of the screens that are just some text, maybe a button, maybe a form, et cetera, we are using the server-rendered code that we have. And so server-rendered, in our case, because we're using Inertia, it's sort of a misnomer because technically it is being rendered on the client-side in the WebView. But, I don't know; we're now getting too nuanced and in the weeds for it.
But what we've opted for is to reuse the same views, controllers, et cetera. All of that is still being reused. Our iOS and our Android codebase at this point are wrappers around those WebView stacks. So it's not just a singular WebView; it's a stack of WebViews. So if you're doing swipe to navigate thing on iOS, that'll work...or Android. I think Android has an actual back button, though, within the applications.
But most importantly, we've introduced a tiny little bridge layer. So from our WebViews, we can communicate to the wrapping native context. And similarly, from our native context, we can send messages into our WebView. So we can have a button in our native UI. And when a user clicks that button, it will send a message to the WebView that it's wrapping around and vice versa. We can do push notifications. We can do all that sort of stuff. For any given view, like, say, the login view, we can say, "Hey, don't render the normal server-side thing. Instead, render this truly native, local Swift or Kotlin view that we want to use there."
So it's an interesting choice. I think it's something that I've certainly seen applications that are just like, let's take some HTML and wrap it in a WebView, and it'll be fine. And they don't make great apps. But I think this time it might just be a good idea. I actually do think that the approach that we're taking, at a minimum, is buying us a ton of simplicity in terms of having to duplicate what are somewhat nascent domain concepts across multiple platforms.
We're not entirely certain as to what our platform and what our business is going to be. So we'd love not to enshrine that across three different platforms that are hard to update. Like the web, I can kind of change that every day. But iOS and Android, because I have to go through review cycles, because I have to get them out to devices, because there are slow update cycles that individuals will be on, I'm going to be stuck supporting whatever versions of these applications are out there.
And so if more of that is the dynamic content that's driven by the server, frankly, I just feel way better about that, at least for now, at least for the point in time that we're at. But I kind of believe that this may be a really useful architecture for us long term.
That was a bunch of me rambling about the architecture. Let me pause there, thoughts, questions, comments, concerns?
STEPH: First, I really appreciate the thoughtful approach and explanation. Also, you highlighted the reasons that y'all are pursuing having a native app, and all of that makes a lot of sense. Because there is that user expectation of you told me about a service that then there must be an app that I can download because that's what I'm accustomed to using versus having to go to a browser and then having to then remember the URL of the site that I'm supposed to go to. So there's that convenience factor.
There's also the idea that some people go to the App Store and search for their solutions instead of going to a browser and searching for a service. So having that presence in the App Store can seem like a really huge win because then even if it maybe slowly pushes them back to use the website or as long as they get a decent experience, they've now at least been exposed to the idea of the service and that it's out there.
But then, as you pointed out, building a mobile native application is a lot of work. And then it becomes a question of like, well, are you going to hire people to work specifically on these platforms? And then, is it really worth that investment at this point? Or is it worth the approach that you're taking where you're going the more hybrid approach? I am curious; maybe this is something that you'll know. So as you are investing in this hybrid approach and you are starting to collect more users that are then using the app versus going to the browser, then what does that pivot look like, or how does that further investment look like?
If you realize that the UI isn't quite delivering on the expectations you'd meet if you'd actually built a native iOS or Android application, then what does that investment look like? Can you still reuse some of the work that you've done? Is it totally scrapping that work? I think that would be my biggest question around taking this first approach. Is it an all-in bet that we are now stuck with? Or are there some salvageable pieces to then move this forward into native apps should we need to do that?
CHRIS: That's a heck of a question. Have you made a terrible decision or just like an iffy decision? I think that the framework that we're choosing or, frankly, building right now will actually be amenable to a potential transition entirely into the native world in the future. So again, one of the options that we have here is the ability to say, no; this facet of the application is entirely native. We're going to opt-in.
And so it actually happens at the navigation layer. So we can say, if a person transitions to the /user/signin route, instead of just rendering that WebView right in place, push a native Swift or Kotlin. Depending on the context that we're in or the platform that we're in, push the native view onto the stack and use that. And so we're able to, on a screen-by-screen basis, make a decision of no, we'd like to opt into native behavior here.
And so, if we did eventually see that the vast majority of the users of the platform are using it via the native app, we should probably continue to invest in that and push in that direction. I think we could do it in sort of a gradual style, and that is critically important to me. I don't want to make a big bet and then be like, oh no, we got to rewrite from the ground up. And there's no way to do that incrementally. It's going to be a whiz-bang Friday launch that everyone's going to hate. That's the thing I want to avoid most in the world.
And so I think what we found now is this seems great for right now because it allows us to avoid this complexity explosion of three different platforms and trying to keep them in sync and trying to keep them up to date. But it does, I think, give us an opportunity as we move forward to slowly sort of transition things over. To state it plainly, this isn't just wrapping a WebView around things. We are building essentially a mini framework on both iOS and Android, in Swift and Kotlin respectively, to work with Inertia, because Inertia is the core technology that we're using.
Inertia, thankfully, has a nice little event system in there, so we can say, Inertia on navigate. And when a navigate event happens, we can hook into that and then connect it to whatever Swift or Kotlin runtime that we're building here. And there are a couple of different events that we can opt into. And so that's giving us the hooks that we need in the current architecture.
But longer-term, if we needed to, we could just, I think, slowly transition everything over to be truly native mobile, and then that would probably be backed by more traditional API endpoints and that sort of thing. I want to avoid that. My dream is to stay in this happy place because we're always going to need some web presence, and I would hate for those to be fractured, distinct things.
I've worked with enough mobile apps that are wonderful native experiences, and yet I'm like, could you just give me the desktop view? Just scaled to...like, I'll even pinch and zoom because you're hiding data from me, and that makes me very, very sad. Please give me the buttons, and the text, and the content that you would give me on the web. And the fact that you're not is just breaking my heart right now. And, frankly, for our user base, consistency of experience is something that I think is really important. So that's another facet of the conversation that is really interesting to me of like; I don't want it to be different on each platform.
Certainly, a three-column layout doesn't work on an iOS app that is zoomed in 150%. But we can turn that into each column is just floated down and then otherwise have all the content in there. And I believe in that as sort of a fundamental truth of let's reshape the content but not fundamentally rethink it. I say that as something that I believed deeply. But as I said it out loud, I was like, yeah, but also, I don't know, make it work on the platform it's on. So I can see both sides. But I have had enough experiences personally where I'm sad about the app that I'm using.
STEPH: Yeah, I could also see an argument for both ways where you don't want it to be fundamentally different, but then also, you want it to fit the platform. And then there may be some advantages to the fact that there is a different platform, and you want to utilize that. I also agree with the not hiding of the data. I have felt that pain where I have an app, but I really want to go to my desktop, and I really want to use it there. But then on mobile, it's then hiding, and I realize it's hiding. And that inconsistency really frustrates the heck out of me. So I can understand that as well.
Overall, I really like this. You're taking a bet in a direction of we should have a mobile presence, and we should start attracting people through this new marketplace. But we want to reuse a lot of the logic that we already have before we go so far as then we're going to have to start building for each different platform. Because while I don't have a lot of experience in that area, the times that I have been part of teams that are building native apps, it's a big investment.
I mean, they hire people very focused on that; designers have to design for browser, for mobile, and then for native, and then everything has to stay in sync across all of them. You have to think about how a feature is going to work across all three of those different views. And so it is certainly not something to go into lightly, which I think is exactly what you're describing: you're looking for that in-between of how can we start working our way in this direction but also do it in a way that we're reusing a lot of the work that we have versus having to invest full scale into then building out these different platforms?
So I'm going to go with this is not a terrible idea. [chuckles] I'm excited to see how it feels once I can download this and check it out. I'm excited to then see how that feels from a UX perspective. But overall, everything you're saying really jives with me. It makes a lot of sense. I am curious, what about React Native? Is that something that you considered using?
CHRIS: Oh yeah, great question and definitely something that we considered. We're not using React on the backend, so that was actually a consideration when I was thinking about Svelte initially: I assumed we'd be building a React Native app eventually for the native platforms. But I talked myself into Svelte for the web, and that is not the reason that we're not using React Native for the native apps. But it is an interesting sort of constellation of technologies that we have now.
We're not using React Native because I'm clinging to this idea of what if we could have a singular experience? So with React Native, fundamentally you're building a native app: a bundle that you download that's got all of the UI and front-end logic in it. And then when it wakes up, it makes some calls back to some APIs to get some data or to decide if it can do an action or to actually do an action, all those sorts of things. But you're building out a REST or GraphQL or one of those APIs.
And with my explorations of Inertia, I found, what if I didn't need to do that? What if I could do a more traditional Rails CRUD-like experience, but CRUD in a good way (I mean it in the very positive sense of the familiar architecture), and still give users a delightful experience but not have to build a distinct API where all or the majority of the logic was on our client side? Because if I built that distinct API, then my web client would need to be that much smarter. And each of the iOS and Android clients would need to be that much smarter because that's fundamentally how these technologies work.
UI components can give a higher-fidelity, more native-like experience, but they tend to own a lot more of the smarts. And one of my core beliefs is, for however long I can get away with this, I want to keep as much intelligence on the server as possible and have my view layer be as minimal and as simple as possible.
So I think React Native is a really fantastic technology for that sort of work. But my goal was to avoid that sort of work entirely. What if we had a singular way where the logic exists on the server side, and then we render pretty minimal view layers? From a user experience standpoint, the view should still do all this stuff and show all of the things that users want. But I want that view layer to be as naive as possible. And by naive, I mean it in the positive sense of, I want to be able to change this very rapidly. I want to be able to evolve it and iterate on it.
And so this is more of a buy-in to the idea that the thing Inertia gives me is valuable enough, and if I can keep using it and reuse it, especially on these mobile platforms...now if we add a new fundamental part of our Sagewell platform, it just exists on each of iOS, Android, and the web, and that's fantastic. And we're going to keep a really close eye on what experience that gives to the user. And is it still great? But presuming it is, the complexity savings there are so huge.
Our team is a team of web developers that is able to think about things holistically and singularly. We implement it once within our stack, and it just works. And if we can do that, that is worth a ton. We may not be able to do that forever. But for now, especially while we're figuring things out, while we're super early on as a company, I think that savings in complexity is worth a lot. So it'll be interesting to see how it plays out, and we'll certainly report back. But I'm a big believer in this little adventure we're on.
STEPH: Yeah, you said it perfectly there at the end; you're a team of web developers. And so as long as you can stick to that, then that's what's best for y'all and the team and the product. So that's wonderful. I have a short segue because I had a little bit of inspiration when we were talking about terrible ideas. I want to circle back to your other terrible idea because I have a terrible idea for your terrible idea about strings and symbols.
Okay, so my terrible idea is this: you're talking about using HashWithIndifferentAccess for everything. What if you had a class or method that will first try to access via string, and if that fails, access via symbol, and then if that fails, it fails loudly? So you now have this let's try this, and then let's try the next thing. I have strong feelings about this as I'm saying it.
CHRIS: [laughs]
STEPH: But we're in the terrible idea segment, so I'm going to embrace it. This is my terrible idea.
CHRIS: HashWithIndifferentAccess with runtime exceptions. I think HashWithIndifferentAccess under the hood probably does what you're describing: like, checks one and then checks the other, or checks has_key; that's probably the underlying implementation. I haven't actually looked at it. But some version of that makes sense. Falling back to the KeyError gets interesting.
I did see a different thing recently of a deep fetch, which is something that I want. Stop trying to make fetch happen, except I'm going to try and make fetch happen. We thought about this a bunch where we have these objects that we need to traverse into. So we use dig to get into the third layer of the object, but dig doesn't care. And it's just going to happily nil out whatever. So I'm like, no, dig, but then right at the end, fetch. Deep fetch. I saw somebody post this recently. So deep fetch is something I want to make happen. HashWithIndifferentAccess, which raises at the end, is also intriguing.
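(For reference, a minimal sketch of the deep fetch idea Chris describes: like Hash#dig, but raising at whichever level is missing instead of quietly returning nil. The module and method names here are made up for illustration; this isn't from any particular library.)

  module DeepFetch
    def deep_fetch(*keys)
      # Walk down the nested structure, raising KeyError (or IndexError for
      # arrays) as soon as a key is missing, rather than nil-ing out like dig.
      keys.reduce(self) { |value, key| value.fetch(key) }
    end
  end

  payload = { user: { address: { city: "Boston" } } }.extend(DeepFetch)

  payload.deep_fetch(:user, :address, :city) # => "Boston"
  payload.deep_fetch(:user, :address, :zip)  # => raises KeyError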
STEPH: So yes, but this will be a little different because, with this one, you don't have to do the transformation process upfront like HashWithIndifferentAccess, where you have to pass the data in first, and then it transforms it so that it can do these two different lookups or the fallback. With this one, you're skipping the transformation process, and you're using your own custom method that does that first check for a string or a symbol, then defaults back to the other one, and then fails loudly, yeah, if both of those fail.
CHRIS: Interesting, and I have to see what it looks like in practice. But I mean, broadly, I'm into something in this space. Let us find some simplicity. That is what I want.
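(What that might look like in practice, as a rough sketch of Steph's idea: look the key up as a string first, fall back to the symbol, and only raise if both are missing, with no upfront transformation of the hash the way HashWithIndifferentAccess does it. The helper name is hypothetical.)

  def indifferent_fetch(hash, key)
    hash.fetch(key.to_s) do
      # String lookup missed; try the symbol form, raising KeyError if that
      # misses too.
      hash.fetch(key.to_sym)
    end
  end

  params = { "name" => "Steph", role: :host }

  indifferent_fetch(params, :name)  # => "Steph" (found via the string key)
  indifferent_fetch(params, "role") # => :host (falls back to the symbol key)
  indifferent_fetch(params, :email) # => raises KeyError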
STEPH: Let's find some terribleness and see which one feels not so terrible. [laughs]
CHRIS: Some terrible simplicity. Well, I like that idea. We'll see where we get to with it. But I think on that note, and we've said a bunch of stuff today, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeee!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph joins Chris in trying new things! For her, it's a new email client – the Newton email client – because she really wants to love her inbox. She also talks about implementing a suggestion from Chris on improving CI speed.
Chris continues his search for the perfect to-do list app. (It's not going great.) But he has made hiring progress and is excited to move on to the next step: onboarding.
Together they answer a question from a listener who asked for advice on crafting project estimates for clients.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: I am now recording.
STEPH: Me too.
CHRIS: [laughs] That's my recording voice.
STEPH: [laughs]
CHRIS: That's how you can tell.
STEPH: I just like how it sounds suspicious where we're like; I'm now recording, so be careful. [laughs]
CHRIS: This is now on the record.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hello. Happy, happy Friday. Oh, I have something that I'm excited or intrigued about. I don't know. Okay, I'm hyping it up. [laughs] But I'm realizing I'm also very skeptical of it.
CHRIS: This is the best sales pitch I've ever heard. I'm so excited to hear what this is. [laughs]
STEPH: I am trying a new email client; it is the Newton email client. And I so want to love my inbox. I want to check on it. I want to help it grow. Okay, that's the opposite. I want to help get through all the emails that come through, but I just want to love it. I want it to be a good space that I want to go to. And I just hate email so much. And it always feels like this chore that it's really hard for me to bring myself to do, but yet it's really important because a lot of good things come through email.
So this is my rambly way of saying I'm trying the Newton email client because I saw it on Twitter from Andrew Mason, who has very similar feelings to mine about email, where we are just not fans of it. And we rarely check it and have declared email bankruptcy at several points in our lives. And he's also one of the co-hosts of Remote Ruby. But I saw on Twitter that Andrew was talking about the Newton email client and how it actually made him feel like he enjoyed writing and looking through his inbox. And I was like, yeah, that's the sales pitch I need. So I'm giving it a go. It's been only a couple of days.
But one of the nice things I have noticed about it is it's very focused, and there's not much noise, and it actually feels like a very minimal design where if you open up like a new email, so you're opening up a new draft, there's not much noise. You get to just focus, almost like you're writing a little blog post or journal post or something. It takes away a lot of the noise.
While in Gmail, it's going to open up a small window on the right, but then you still have the rest of the noise that feels distracting. So I like that very intentional like, hey, you're just doing one thing, just focus on this. And then also you can integrate other email accounts as well. So you can have one-stop shopping versus Gmail, where you have to click around and sign in, sign out, or visit different email accounts. So we'll see if it helps improve my email life, but that's something new I'm trying.
CHRIS: Very interesting. So you're fully on inbox zero life now. That's what I'm hearing. [laughs]
STEPH: Ah, hmm. I don't want to lie to you. [laughs] We have a good friendship. I won't start lying now.
CHRIS: I appreciate that. So you're halfway to inbox zero. You're not even entertaining that idea, right? This is just you want a better tool to do email.
STEPH: Exactly. Inbox zero is not incredibly important to me. But I do want to make sure that I know that I've seen everything important, and I know where to find things. And then making sure that I am responding to people in a timely manner. Those are more my goals. Inbox zero, if that supports it, then great, I'll work on it. But not necessarily that has to be the goal that I reach.
CHRIS: Gotcha. I've not seen Newton, but I'm intrigued. Particularly on mobile, I have the Gmail mobile app, and that has a unified inbox, which I appreciate. But Gmail on the web does not, and I find that odd. And I've never found a mail app that I enjoy because I want some of the features of Gmail. I want to do Gmail snoozing because I still want that to be consistent and whatnot. And to be honest, that's the main way that I get to inbox zero. I just say future me will have more time.
I actually tweeted recently. It was a screenshot from my Saturday inbox, which I think was 15 emails that I'd snoozed from the previous week into Saturday morning. Because I'm like, Saturday morning me will have so much time, and energy, and coffee, and it'll be great. And then it became Saturday morning and, ooph, what a view.
STEPH: [laughs] Yeah, your snoozing tip has been life-changing for me because that's not something that I was using all that much. The two things are, one, schedule send so that way if I do have a sudden burst of energy and I want to write an email, but I want to make sure that person doesn't get notified until a decent time. Being able to schedule an email and snoozing is amazing.
I think Newton and Gmail have pretty much similar features. I was trying to do a comparison. I was like, is there something really snazzy that Newton does that Gmail doesn't already give me? But it looks like they all do about the same, having those important features like snoozing and then also being able to schedule emails.
So I think it really just comes down to a lot of the UI, and there may be some other stuff I'm missing since I'm new to it. But that's the main appeal for me right now is the focus and the look and feel of it. So then maybe I will find looking through my inbox a more zenful experience, I think is how I saw them advertise it.
CHRIS: Well, I definitely look forward to hearing more as you explore this space. I will say looping back to what you were just commenting on around deferred send, which is definitely something that I use, but you described one of the reasons that I use it. So the idea of wanting to be respectful of someone else and not send them an email on Sunday night because you happen to be working at that point. But you don't want to put that on their plate. I would say equal amounts; that's the reason I use scheduled send.
And then the other reason that I use scheduled send is please, for the love of God, I do not want another email back in my inbox. So I will reply to something such that now I'm done with that, but I will schedule send it for the next morning. Because tomorrow morning me can deal with whatever reply this generates.
There's some adage; I don't know if it's an adage, but the idea is that every email that you send generates 1.1 emails in reply. So emails just have this weird way of multiplying. And so if you send one out there, you're probably going to get something back. And so often, if I'm trying to clear my inbox, I don't want to get another email in my inbox at that moment. So I will not actually send the reply. I will schedule it for a future time because I do not want to hear back. I want no new inputs at this point. I'm trying to process them. So that's part of why I use deferred send.
STEPH: I had not thought of that, that yeah, that if you schedule it for tomorrow, you've really gamified this inbox zero because you're like, yeah, if you send something, then you might get an email back. But you're like, if I wait till tomorrow to send it, then I'm less likely to have another email, and then I've hit inbox zero, and I'm set for the day. I like it. It seems helpful.
CHRIS: Yeah, inbox zero sounds like an altruistic thing, but it is not. It's a way to force myself to have to make decisions, which is something that I want to get better at broadly. And that's part of the role that I have now. A lot of what I'm interested in exploring is just getting better at making decisions, being more decisive, being more action-oriented. Because I just have a tendency to make many, many spreadsheets and think about stuff for a while and take a long time to make a decision. But I don't get to do that, particularly now.
But broadly in life, that's probably not the right mode to be in. So inbox zero is another thing that forces me to deal with things rather than just be like, I don't know, I don't know, I don't know, and keep looking at the same thing over and over. So just more thoughts about inbox zero, but now I'll stop talking about it.
STEPH: I do like that, though. And you're totally right; it can be a very helpful constraint. And I think that's sometimes why I fight it because then I haven't curated my inbox enough that then when I go to it, there are so many interesting things that then I feel a little bit overwhelmed where I'm like, oh well, I want to read this, and I will look at that. And this seems interesting, and maybe I should be a part of this.
It feels like one of those, like, you could be a part of these ten amazing things. Do you want to be a part of all of them? And as a person who finds it hard to say no or to recognize that, no, I'm just going to not do anything with this, that is hard for me and would be a good skill for me to hone in on and practice: making quick decisions and being very realistic.
Because I used to be subscribed to more newsletters, and then I finally had to stop subscribing to them because it had that same effect on me of that FOMO of like, I'm missing out on this great article or this great video. And I've become more honest with the fact that my Saturday morning self isn't going to want to read through a bunch of newsletters and videos about coding, that I'm going to want less screen time. So that is a really good constraint and helpful skill to cultivate for sure.
CHRIS: All right, I said it was done, but one more thing. I feel like I've mentioned this in the past, but Feedbin is the thing that I use for RSS. I still believe in RSS as a technology. But everyone's moved to newsletters these days that go via email. Feedbin gives you an email address that you can use to subscribe to newsletters, and then they do the job of converting that into an RSS feed.
So for me, I take something that was now a push into my inbox, and now I can pull whatever I want from that RSS feed. And on Saturday morning, if I'm feeling like, with a cup of coffee, I can enjoy some newsletter about all the new hot tips in Svelte land or whatever it is or not. But it's not clogging up my inbox. And with that, I think I'm actually done talking about inbox zero. [laughter]
STEPH: Yeah, that's a nice separation. We could keep going. I have full faith in us that we could keep going about this. But I'll share a slightly different update. I've been implementing a suggestion that you provided a couple of weeks back when we were talking about RSpec's selective test running and how some applications will speed up their test suites.
If you change one part of the codebase, then perhaps you only need to run this chunk of tests. You don't actually need to run the full test suite. And that is complicated and seems hard to get right, and really requires understanding boundaries. But then also, knowing Ruby, how do you really identify that? Do you really know where this method is being called, and can you identify all the tests that need to be run?
I think we'd mentioned before there's a really good article from Shopify where they have worked on this and created an open-source project called Packwerk. So we can link to that article in the show notes. But more specifically, you suggested, well, what if you just change a test file? That seems very low stakes and also has the benefit of creating a reward where if someone does see something that they can improve in a test, then that's very quick feedback: let me just get this change in. It's going to be fast on CI. I can merge it right away. And it also saves time on CI.
So I've been working on implementing that change. And it's one of those where the actual change is easy, like checking with Git to say, "Hey, what files have changed?" Do they all have _spec.rb at the end? Great. Do they not? Okay, we've changed some application files, so let's run the full test suite. That part's easy. Getting it integrated into the build system has been more complicated just because this team has done a lot of work around trying to improve and speed up their tests. And there's a fair amount of complexity that's there.
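(As a rough illustration of the check Steph describes, here's a minimal sketch. The base branch name, the RSpec invocation, and how this would hook into a particular CI pipeline are all assumptions that would vary by build setup.)

  #!/usr/bin/env ruby
  # If every file changed on this branch is a spec file, run only those specs;
  # otherwise, fall back to the full suite.
  changed_files = `git diff --name-only origin/main...HEAD`.split("\n")

  spec_files, other_files = changed_files.partition { |path| path.end_with?("_spec.rb") }

  if other_files.empty? && spec_files.any?
    # Only spec files changed, so running just those is enough.
    exec("bundle", "exec", "rspec", *spec_files)
  else
    # Application code changed (or nothing did), so run everything.
    exec("bundle", "exec", "rspec")
  end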
So then figuring out a way to stitch my change into all the different build processes that take place has proven to be more difficult. But it's also been insightful just because it has now helped me really understand and forced me to learn, okay, what are all the different steps? What's important for each one? Where can I cut off the rest of the running of the test and instead just focus on running these tests? So in some ways, it's been challenging, but then on the positive side, it's been like, okay, well, this has taught me a lot about the existing system.
So at this moment, it is still a work in progress. I'll have more updates in the future. I am excited to see the rewards. I've gotten to the point where I just have a proof of concept that I've pushed up, but it's not production-ready. But at least I wanted the feedback that I'm in the right spot and that we're running just the right tests.
And so far, it does seem like it's going to be a nice win, even if it's maybe not used by everybody because it's probably rare that someone is altering just a spec file. But for people who are looking specifically to improve the CI build time and working on tests, it will be very helpful to them. So yeah, I'm sure I'll have some more updates in the future. What's going on in your world?
CHRIS: Well, I definitely look forward to hearing more about that. However, we can improve CI speed; I'm super interested in that as a topic.
Mid-Roll Ad
Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem.
So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy.
That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software.
So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's newrelic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed.
CHRIS: Well, similar to your email adventures, I continue on my search for the perfect to-do list. It's not going great, if we're being honest. [laughs] To be clear, because I've mentioned this on a few different episodes, I'm not spending much time on this at all, some but not much. And so it's not really moving.
But there are two interesting things. I took a look at TickTick, which was one that I mentioned in the past, a tool for this. It seems good. It seems like an intersection between things, which is what I'm currently using, Todoist, which I've used in the past, and some other tools. So I think I'll probably explore that a little more. It seems like a good option.
Decidedly, the most interesting thing is a tool called Sunsama, which is different in some interesting ways but very interesting. So one thing to note about it is it's $20 a month, which is a lot of money for one of these tools because most of them are like, "We're $20 forever, and then it's free." And it's a surprisingly low-cost space. And so, they're definitely positioning themselves as a more costly entry. I would be fine with paying $20 a month for a tool if it really is like, no, cool, I feel great. I'm more productive. I'm happier when I'm not working, et cetera.
But what's interesting is they seem to do a let's reach out to all the places that tasks can live for you. So there's your inbox for email. There's your Trello board that you've got. There are GitHub issues. There's Slack. There are all these different sources of potential tasks. And they do a really good job of integrating with those other tools and then allowing you to pull that list into Sunsama and then make each day you have a list. And those items can be like, this is a reference to a Trello card on that board. This is a reference to a Slack conversation over there.
So I'm super intrigued by it. It's also got a very intentional plan your day mode, which I like because that's one of the things that I'm really looking for is at the end of the day, I want to clean everything up, make sense of all of the open items, and then reprioritize and set up for the next morning so that I can just hit the ground running. That said, I tried it, and it just didn't quite click. And I think it's one of those it takes some effort to understand how to use it. So I'm not sure that I'm going to get there.
But it is super interesting because that idea of our work lives in all of these different tools these days feels very true. And so, something that is trying to act as a hub between them to integrate them is very interesting to me. Again, I haven't really gotten anywhere on this. I'm kind of just reading blog posts, as it were. So I'll report back if that changes, but --
STEPH: The search continues for the right to-do app. Yeah, that seems interesting. I don't know why I'm feeling hesitant towards it. I'm one of those individuals...you're right; there are so many tools. And the fact that they integrate with a lot of them seems really nice.
I'm at the point where I just grab links to stuff, and I'm like, hey, if this is my priority, I grab a link to a Trello ticket, and then I just copy that into my to-do. I guess I like that bit of work over having to integrate with a bunch of different platforms. Because once you get used to integrating...I don't know; I'm just rambling. But I wish you the best on this journey. I'm excited to hear more. [laughter]
CHRIS: Thank you. I will certainly report back. But yeah, nothing pointed to share at this point. But I do have something pointed to share on the hiring front, which is that we have hired some folks.
STEPH: Hooray!
CHRIS: Yay. So this has been a fun saga across a couple of different episodes. And in my mind, it feels like this much longer, more drawn-out thing, but it's actually, I think, come together relatively quickly, all things considered. We've got someone who's starting in a little over a week's time, and then someone else who's starting in, I think, two or three weeks after that. So that'll be great. Hopefully, we can transition into onboarding, which is a whole different approach.
But hiring as a distinct activity can scale back significantly. As we discussed last week, I want to be in the always be hiring mindset but in the more passive mode of having conversations with folks, staying connected. And if a great candidate comes along and it's the right time, then bring them on the team but not actually actively reaching out and all that sort of stuff, which will be great. Because it turns out that takes a lot of time and also a lot of energy for me. Having those first conversations, going into it very intentionally trying to communicate about something, and there's a tone of salesmanship to it that is not my natural resting state.
So I come away from each conversation being like, that was fun, but also, I'm drained now. Why am I so drained? So not having that be a thing that is filling up my calendar is great. And also super excited with the folks that'll be joining the team and to be able to now grow our little team and define the culture and the shape of the groups that we will be collectively. I'm excited for that work and what we can build together. So yeah, it's an exciting time.
STEPH: That's awesome. Congratulations. Because yeah, everything you're saying sounds like it's just been a lot of work. So that's very exciting. There's someone that I was chatting with earlier today where they were talking about the value and the importance of understanding what your natural skills are and the things that bring you energy. And so you're mentioning there are certain activities that you enjoy them, but they're also draining because perhaps they are on the outer boundary of what you might define as your own natural skill or the things that get you really excited. And I found that all very interesting.
It had me thinking about that today about where are the natural areas that I find that I get energy that are easier for me? And then making sure that I'm trying to prioritize my day so that I am more focused on the activities that just align with who I am and also that I'm engaged with and then also looking for ways to stretch. But they made the point that if you are always in a space where you are not using your natural talents, and you're always having to stretch, then that can be what leads to burnout. Versus if you're in that sweet spot, that zone of where you are using your natural skills, but then also stretching a bit.
And I think there are some assessments and things like that that will help you then determine what are my natural skills, and what do I like to do with my time? I just like that style of thinking and recognizing, like you said, like, hey, I did a thing. It was fun, but I'm drained. So now I know that this is something that requires more effort for me. Like hiring, that's one for me.
I really like interviewing. I like talking with people, but I'm so nervous for them because I know interviews suck. [laughs] I just have so much empathy for them where I'm like, this is going to be a hard day. We're going to make it as pleasant and positive as possible, but I know this is a hard day. And so I feel like I'm in it with them. And so afterwards, I feel that same relief of like whoo, okay, interview day is over.
CHRIS: I don't know that I quite achieve the same level that you do but in no way am I surprised that that is your experience of hiring. And just to name it, you're a wonderful human being that feels for the people on the other side of the hiring table. Like, oh my God, this must be so stressful for you. It's so kind of you to be in that space with folks.
But coming back to what you were saying a moment ago, that idea of, like, understanding where your strengths are and where there are areas that you're not quite as strong. And I think, critically, the question of, like, which are the ones that I want to just kind of say no to? I'm like, that's fine. This is not going to be a competency of mine. And I'm going to just avoid that or find other people to work with who balance that out. So for me, sales is the thing that I don't think is ever going to be my bag. I don't think I'm ever going to move in that direction, and that's totally fine.
Whereas decisiveness, which I was describing, is like, I think that's the thing I could get better at. That is one that I don't want to sleep on that. I don't want to say, "That'll be fine. I'll just have other people make the decision." No, I need to get better at making decisions, making decisions with less information or more rapidly, having a bias towards action. All those things I think will be deeply beneficial. So I'm trying to really lean into that. Whereas yeah, again, the sales stuff I'm like, yeah, and there's plenty of examples of this otherwise.
But I've also been coding a bit more this week, which has been lovely because the hiring stuff has ramped down. And that has freed me up amongst some other stuff that's been going on. And you know, I like to code, it turns out. It's fun. I just clack about on my cherry brown keys, and it's great.
STEPH: Do you remember when we first got introduced to mechanical keyboards, and we had co-ownership of one of the keyboards? And we literally had days of where it was like your turn to use the keyboard. And then it was my turn to use the keyboard. How long did we keep that up before we were finally like, we should just buy our own keyboards?
CHRIS: It was a while because we were working with a colleague who was trying out a Kinesis, I want to say, one of the split little bowl of keys. But yeah, we had a shared custody over a keyboard, and it was fantastic. I remember that very fondly.
STEPH: The days that it was my keyboard, I would go to the office and be like, oh, today is my day at the keyboard. This is great. This is going to be such a wonderful day. [laughs] And now I'm just spoiled.
CHRIS: It went on for a while, though. And this was something where we both obviously enjoy this keyboard. Why don't we just buy one of these keyboards? We totally could have done that. And yet, for some reason, both of us were like, no, but what if...I got to think about this. Again, decisiveness. [laughs] We come back to this topic of well; I had to really think about it. And then somebody got the 92-Key test or whatever it was in the office. And so I just went over and poked every one of those for a while.
STEPH: Exactly. It was option overload where we're like, well, okay, we're going to buy one, and then you open it up and search, and you're like, oh, you want options? We have options. Do you know about the blues, and the browns, and the colors, and these different options? Like, I don't know any of this language that you're talking about. I just want to clackety-clack. So yeah, it took time. We had to do our research.
CHRIS: And then I ended up on basic browns. So here we are. Let's see, popping back up the stack a couple of levels, hiring that went on for a while. Now it is less going on. Although to be clear, like I said, always be hiring. So if anyone out there in the world is hearing what I'm talking about with Sagewell or seeing any of the stuff that I'm putting on Twitter, which isn't much, I occasionally just post screenshots of my commit messages, which recently included better snakes as a commit message. [laughs] I have to dig into that or not. But we were just doing some snake case to camel case conversion. But the commit message was better snake, so here we are.
Anyway, if any of that sounds interesting, please do reach out. But I'm excited to transition back to focusing more on the work. On that note, actually, an interesting thing that is happening right now organizationally is we are working with an external security firm to help with some...they helped us out with a penetration test when we needed that. And then they have stayed on retainer and are helping with various different configurations, taking our AWS S3 buckets and making sure those are nice and secure, and all that kind of stuff.
But we've recently started to focus more on organizational security, specifically a bowl of acronyms. We've got SSO for single sign-on, MDM for something device management. I don't know what that first M is. I probably should learn it, but it's fine. That's why I've got help on this: I think they know what the acronym stands for. But so we're working on each of those.
And on the one hand, they're probably going to be kind of annoying, like having to go through the single sign-on. It's a whole thing, and it's harder to sign into stuff sometimes. I mean, ideally, it's actually easier. But in my experience, it adds some friction at some points. And then MDM means that there's now some profile manager on the computer. So I can say like, "Every computer must have full disk encryption or else you can't use it. And we need a passcode, and it must be this long and those sorts of things." So it's organizational controls that I think are good for us having a robust security setup throughout the organization.
But yeah, they're the sort of things that I think historically, I probably would have, as someone working in an organization, had been like, do we have to? Do we need these things? Couldn't I just do whatever? But now there's something about it that I really like. I'm trying to name it in my head, but I'm kind of like, I don't know. This feels like growing up as an organization.
And there's this weird corollary that I've been thinking about with the Rails app that we've been building, being intimately familiar with just about everything that's going into it. And I know the vast majority of the lines of code. I haven't written them all, but I've had an eye on all of the different features that we're building in.
And it's hard to get out of that headspace where it feels like a bunch of pieces. It doesn't feel like a whole to me, even though it definitely is. But when does a bunch of boards that you nail together become a boat? To make a really weird analogy, because that's what I do; it's a hobby of mine. But when does that transition happen? At some point, certainly. But that's harder for me to see on the code side.
And organizationally, somehow getting these things in place feels like sort of an inflection point for us, a growth point, which I'm really excited about. Even though they're probably going to mean a ton of annoying nuisance work for me because I'm the person in charge of making sure it all gets rolled out. And anytime anyone locks themselves out of an account, I have to help with that. And so it's probably just putting a bunch of annoying work on my plate. And yet, I don't know; I'm kind of excited about it.
STEPH: I feel like that shows our roots in terms of how we approach projects that we work on, where you mentioned, do we need this? Do we need this yet? Because I feel that, as developers and consultants, we're constantly trying to advise on those simpler questions: do we need this? Is this the right thing to spend the money on? How do we know? What are the metrics? What does success look like? And all those questions.
So I feel like the way you just phrased all of that just really shows that sort of mentality that you grew up with in terms of checking in, and yeah, it's cool. Like you said, you're at a growth point where then it's like, yes, we are at this point that I've asked myself all those questions, and we're here. This feels like the right next step.
CHRIS: I like the way you described it as that you grew up with, my formative growing years at thoughtbot.
Mid-roll Ad
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Well, switching gears just a bit, we have a listener question for today, and this one comes from Stephanie. So not me, another Stephanie in the world. Hello, other Stephanie out there in the world. And they wrote in, "Hi, Steph and Chris, fellow software consultant here. And I'm wondering if you'd consider talking about how to craft a project estimate for a client on the pod. It's such an important aspect of consulting." Amen. I added the amen.
"And I feel like I'm very much impacted as a project team member when the estimate isn't accurate." Double amen. So true. [laughs] "Would appreciate any and all thoughts, especially since it might be part of my job in the future. Thanks." I just realized I put us in consultant church by adding all those amens, but here we are. [laughs]
CHRIS: I'm glad you clarified that they were additions by you and not part of the original question coming in.
STEPH: Sure. I don't want to speak on behalf of Stephanie. So I have some thoughts on the matter. I think there are a couple of different ways that we can talk about this particular question because I think there are different formats as to when you're estimating and who you're providing the estimate for. But I'm going to pause because I'd love to see what you think. How do you go about approaching crafting an estimate?
CHRIS: Sure. I'm happy to share some thoughts. And for a bit of context, this question came in to us, frankly, many months ago, but I did send an initial reply to Stephanie because I know that sometimes we take a little bit of time to get back to folks. So if ever you do send in a question, know that one of us will probably respond via email earlier, and then eventually, will make it on the show. And again, just to say, we do so appreciate when folks send in these questions. It's an interesting way to shape the conversation and a way to get topics that you're more interested in into the fold here.
But so the two main ideas that I shared in my initial reply were, first, is an estimate really necessary? I think that's a critical question because an estimate implies that this thing is knowable. And as many of us, probably all of us, have found out at some point in our lives as software developers, it's really hard to do software estimation, like wildly difficult. And it's not just a thing that we'll eventually get better at, which you do; there's just some chaos. There's some noise in this work that we do that makes it so, so difficult to get it right.
So pretty much always, I will ask, like, do we need to estimate here? What if, instead, we were to flip the whole question on its head and say, let's set a deadline. Let's say two months from now that's our deadline. And let's ruthlessly reprioritize every single week to make sure that we're building something that's meaningful, and we're getting there.
And obviously, we have to have some general idea of what we're doing. Is two months a meaningful amount of time to build a rocket to go to Mars? Probably not. But is it enough time to build an app that can allow users to sign in and manage a simple list of items? Yeah, we can definitely do that, and we can probably add a bunch of more features.
The other thing that I think is worth highlighting is there's a bunch of stuff that is table stakes and very easy to do. But I would, whenever doing estimation, emphasize unknowns. So, where are the external integrations with other systems? Where are the dependencies that rely on other folks to provide inputs into this process, where we can't be certain what those will be?
In my experience, the places where estimates go awry are often these little intersection points where you're like, well, this will probably take a day, maybe two. And it turns out, actually, this can somehow balloon into a month. That's not a thing that feels comfortable saying in an estimation process, but it is definitely real. I've seen it happen so many times.
And so it's those unknowns. It's those little bits that I would emphasize as part of the process if you do need to do an estimate and say, all right, here's the boring stuff. I think we can do that pretty easily. But this part, I don't know, it could be a week, could be three months. And frame it in that way that there is this ambiguity there. Because if someone's asking you for an estimate and they're looking for like it is seven days and two hours exactly, it's like, well, that's not realistic. That's not how this thing works. Unfortunately, I wish it did.
But pushing back and changing the conversation is the thing that I have found valuable. I think there's some other really interesting stuff in here around the team dynamics that Stephanie is talking about. But I want to send this over to other Stephanie to see your thoughts because I'm super interested to hear what you have to say as well.
STEPH: Oh, I like how you hinted at the team dynamics. Yeah, that could be a fun one to circle back to. So I love how you called out highlighting the unknowns. There are a couple of ways that this comes to mind for me. So there's the idea of the weekly or the bi-weekly estimates that we make as developers and designers. So let's say we as a team are getting together to focus on a chunk of work and decide what we can and can't get through. And that feels like one of those things where the more frequently you get to practice it, the more you get to ask a bunch of questions. And that feels like a good rehearsal and exercise for how to go through estimates.
And I know you and I have pretty similar strong feelings around how those estimates are then treated by the company. They should really just be used for the team to talk through the complexities in the work to be done versus used to communicate outwardly as to this is when it's definitely going to ship. So there's that more immediate practice of providing estimates. And then there's the idea for more of a consultancy or a company, and someone is coming to you, so thoughtbot being a great example of then how do we work with teams that are looking to come to us and gain an estimate for getting a certain feature implemented?
So actually, I went to the source on this one. I went to Josh Clayton, who handles a lot of the conversations for the Boost team when it comes to talking with clients about the potential work that they would like to be done. And mostly, our work is that teams will hire us. They have specific goals in mind, but they're really looking to hire ongoing development services. So they really want to add to their existing staff. And then it's going to be an ongoing relationship versus a, hey, we need you to quote us for how long it's going to take to implement this particular feature.
And on that note, we don't do fixed-bid work. So we don't say it's X dollars for specific features. But on the realistic side, customers are often capped by a budget. And so that estimate is very important to them because it could be a difference between it's a go versus a no go. So if you have larger companies that are like, "Yep, we want to engage with thoughtbot. We really just want additional development power and design services," that's great.
For those that are smaller, it could be an individual product owner, and they need to say, "I really want this feature, but I only have this much money. And frankly, if I can't ship it by this time, I'm not going to do it because it's not worth the investment to my company." And then, in those cases, those are the ones that we're going to spend more time with them to talk about what does the fallback plan look like? And what's our opportunity for simplifying the features?
And Josh, in particular, referenced this as systems thinking. So he will go through the idea of drawing out the set of steps, understanding the complexity of the different screens. So what are the validations? What are the external dependencies? What is owned by us and what isn't? What is the likelihood that we're going to get permission to simplify or remove complexity? And even then, when we start to provide some estimates, it's going to be in weeks. It's not in hours; it's not in days. It's going to be in a slightly larger time frame.
And then we're also going to spend more time in the discovery phase to say, okay, well, we know you need to fix this particular issue, or you need to integrate with this particular service. So we're going to need to ask a lot more questions about your codebase. What problems have you already run into? Have you tried to do this before? Do we have experience doing this? Is this something that we can lean on and ask someone in the team? And, say, how long do you think it would take for us to work on this?
And that's knowledge that isn't privy to everybody. It depends on where you're at in your career as to like, oh yeah, I've done this like five times before, and I know exactly how this stuff can fall apart. I know where the complexity lies. So I think that's why estimation is so difficult is just because it does often pull from that existing experience. And so, if you don't have that experience for a particular set of work, of course, it's going to be hard to estimate because you just don't know. So that was a very broad scope of as day-to-day developer and designers; I feel like we're constantly getting practice and estimating and communicating the progress of our work.
And then, on the larger scope, if you are a consultant who's looking to give estimates to clients, it's understanding what they need: can you sell them just ongoing development services? Or, if they are a smaller team and very focused, then what legwork can you do ahead of time to de-risk the project? And then understand how much control you're going to have to be able to simplify as you learn more as you go. Because you're going to uncover some things, and you're going to learn some things. And what's that collaboration going to look like?
I do have one more concrete example I can provide around some of the smaller projects that we take on. So when we are helping someone that's, say, getting a new product out to market, then we do have a more deliberate three, four-phase approach where we first focus on discovery, and ideation, and validation. And then, we move on to iteration and then launching.
And I really like how you said about providing a deadline because then that helps us scope aggressively as to what is the minimum thing that we can get out into the world that will be valuable? And then there's usually some post-launch support as well. But that's often how we will structure those smaller, more specific engagements.
CHRIS: I think one of the critical things that you highlighted in there is that thoughtbot doesn't do fixed-bid work. So we're going to do these 20 features, and it's going to take four months. thoughtbot does not do that, and frankly, that's a privilege to be able to take that position and say, "No, no, no," we're not going to work that way. But it is, I think, a trade-off. It's not just something that thoughtbot does to be like, listen, that doesn't sound fun. So I'm not going to do that. It's a trade-off.
Not doing that comes in concert with saying, "But weekly, we're going to talk about the work that we have done and the work that remains and constantly, ruthlessly, reprioritize and re-decide what we're doing." And it's that engagement, the idea that you can have a body of work, look at it and say, "Yeah, that'll take about six months," and then go away for six months, and then come back with the finished software.
Our strong belief is that that's not the way good software gets built. But instead, it's a very engaged team where the product owner and the development team are in constant communication about each of the features that are being developed. And then again, ideally, on a weekly cadence, coming up for air and saying, "How are we doing? Are we moving in the right direction? Are we getting towards the goals? If not, do we need to simplify? Do we need to change things?"
And similarly, as I mentioned deadlines, I feel like deadlines is probably a word that many people think of as very bad because deadlines often come with also a fixed scope, but that can't happen. That's two constraints, and you can't have them fighting that way. But a deadline can be super useful as a way to say we're going to put something out there in the future and say we're heading towards that moment. And let's, again, cut scope. Let's change what we're building, et cetera.
But critically, not say, "We got a deadline and a fixed scope. We're going to do that." And so it's, again, just ways to gently shift the conversation around and say, what if we were to look at this from a different angle? Because just having a pile of work and saying, "That'll take six months," I've never seen that play out well.
STEPH: Yeah, to me, deadline is a bad word when the deadline is set by a team that's not doing the work. So if you have leadership or someone else setting this deadline and then just passing it down to someone else to fulfill, regardless of the feedback or how things are going, then yeah, it can be a nasty thing. And I think that's a little bit of what's in that question you picked up on, where there could be some interesting team dynamics; Stephanie called out that she's very much impacted as a project team member when the estimate isn't accurate.
And I'm making some assumptions here because I don't actually know the exact situation that Stephanie is experiencing. But it sounds like someone else externally is setting these team estimates. And so then you're handed this deadline, and then stuff goes wrong, but you're still pressured to meet this deadline. And I've certainly been part of projects that are like that.
And then that is one of the number one things that often comes up in a retro: we don't have control over these deadlines, or we don't know why these deadlines are being set. And then people are working extra hours and working nights and weekends to meet this arbitrary deadline that none of us signed up for, and it's just not fair to treat deadlines in that way.
So I wholeheartedly agree that deadlines can be a very positive thing, but they need to be set by the people doing the work. And then there have to be discussions and updates about how is this going? Do we have control to simplify this? We thought we could do this with this particular external provider. It turns out that that's a nightmare. Is there another provider we can go with? Can we ship this incrementally? Some features you can't; they may have to go out wholesale. But is there a small chunk of this that we can deliver that is then a success that leadership and others can brag about? And then we can keep working on the rest of it.
So it's always identifying what the smallest wins are, how we get there, and getting buy-in from the team. Going back to something that you said earlier about it being a privilege: as thoughtbot, we don't do fixed-bid work, and that is a nice position to be able to take. But for people who do need to do fixed-bid work and are relying on that, I think that often requires more legwork. And maybe that becomes part of your estimate. I'm just making up how I might approach this if I were trying to do fixed-bid work.
But there's a discovery phase that's very important. So maybe the first part of your estimate is I need to really understand the feature and see the different screens and know what materials we do or don't have. What does the codebase look like? Do I feel like this is a codebase that I can work quickly in, or is it going to hinder me? Answering a lot of those questions then helps me paint a picture of, okay, this is a feature that I've implemented before, so I feel pretty confident that I could do this in a month.
And then also communicating that this is my estimate but just know it's an estimate. And I will continue to update you each day or each week as to how things are going, and things may adjust. And we can always talk about ways of simplifying this. But I think that's how I would go about it. Frankly, it's going to require more legwork for me to feel confident telling someone how long the work will take.
I think that's a nice, broad scope of the different types of estimate work to be done, with the general idea of if you can avoid estimates and go for more frequent updates, then that's wonderful. But then, if you are forced into a corner where you need to provide an estimate, then do as much research as possible, be as honest as possible, and still include the frequent updates.
CHRIS: Yeah, that I think summarizes it quite well.
STEPH: As a side note, it's been a lot of fun to feel like I'm referring to myself in the third person as Stephanie works through this problem. So that's been novel. But yeah, thank you, Stephanie, for the great question. I hope that was helpful. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeee!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Chris is making hiring progress and loves asdf and M1 laptops. Steph is anticipating the arrival of one dongle to rule them all and talks about moving away from having a lot of Bluetooth connections.
Two other big things on Steph's mind are education around factories, because they're very important, and shared examples and how they can be overused. She and Chris agree that it is better to tell stories in tests instead.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed. [laughs]
CHRIS: Hello, and I'm singing, and I love singing.
STEPH: It's Buddy the Elf; what's your favorite color? [laughter] For reals, here we go.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris. What's new in your world?
CHRIS: My world continues to be focused on hiring as a pretty core aspect of things. We have happily had one offer extended and accepted, so that's great. We've got a person who will be joining the team in a couple of weeks. That's very exciting. And we're continuing in conversations with some other folks.
So I look forward to being on the other side of this, having that team and growing the team, and not having to focus on hiring, because hiring takes a lot of effort. It is something that I believe should be done as well and as intentionally as possible, and then there's the outreach and all that. So yeah, I'll be fine with being on the other side of that. But it's going well, so that is nice.
STEPH: That's awesome that you're making progress. Once you have hired your team, will you then add to the agenda to hire someone to help with hiring?
CHRIS: I don't actually know if the organization, if the whole company has someone who's focused on hiring. I think that can make sense. Working through recruiters and things like that is something that I've seen in the past. I've seen it work for certain organizations.
I've also been on the receiving end of plenty of obviously copy and pasted very generic "Hey, person, I saw that you do lots of Java and other enterprise code software. Would you like to come work with us?" I'm like, none of those are true, and I do not want to go work with you. But thanks, I still appreciate the outreach. [laughs] So I am intrigued to see how we think about it.
More generally, this is something that you and I have talked about offline but the idea that you kind of always want to be hiring. We do have specific roles that we've identified that the budget has space for. But more generally, ideally, we're going to need to hire more people down the road, and that will happen at a particular point. But having those conversations, starting to talk to people, now planting the idea of like, hey, you're great, and I would love to work with you someday and just keeping those lines of communication open.
Networking is perhaps what the people call it. I don't know; I've never felt super comfortable with that word, but I think it's that and being friendly and staying connected with people whose work I respect and would love to work with more. So that's part of what I will come out of this with is yeah, let's always be hiring in a certain sense.
STEPH: I'm glad you expanded on it because I was just thinking I have specific ideas as to what always be hiring means to me and what those activities would include. So I was curious what it means to you. And I agree, I think it's a lot of networking. It's a lot of taking chats and social chats with folks and just talking about the company and finding out where they're at. And then one day, if it works out that then they want to make a shift, then you've already got that relationship that started, and they're already potentially interested in your team.
I guess some of the other big stuff that comes to mind, too, is that at thoughtbot, we have the blog. I feel like that's always really helpful too. When you help somebody, when you publish information that then helps them in their career, I feel like that will draw people towards you as well.
CHRIS: Yeah, the thoughtbot blog and basically everything that thoughtbot does, the podcast here, or Upcase, or all those things were so incredibly helpful in hiring. But I also know they're hard to spin up, is what I would say. The thoughtbot blog has had I don't even know how many hours put into it, hundreds of thousands maybe. It's weird to try and put a number to it.
But I've written a handful of posts for it, and I'm not great at writing them. They take me way longer than they should, many hours each. And then I had wonderful peer review by other developers at thoughtbot. And so, the amount of effort that goes into the thoughtbot blog absolutely produces wonderful benefits. But it's not free by any means, and similarly for the podcasts or Upcase or any of those sorts of things.
Similarly, the one that's actually most interesting that I see a lot of organizations go for initially and then often walk back is open source. Like, oh, we have this internal library that we built to do something. What we'll do is we'll just package it up and share it with the world, and then it'll be great.
And the maintenance burden and support necessity of an open-source project is so high. I've actually historically gotten into the mode of suggesting...when I was working with clients, they would start to mention this and be like, "Oh yeah, we think we'll open source this thing, and it'll be great." I'm like, "Are you sure, though? Do you definitely want to?"
One thing I would say is that there's definitely a difference between open sourcing and just putting an idea out there. Can you just write a blog post that has code snippets but not reusable code that you have to maintain, code that people, unfairly I think, expect responsiveness and maintenance on over time? And what if you stopped using that technology? What if you stop using this thing, but your name is still attached to it? And people have expectations of what that looks like.
Or people come in and say, "Hey, this is great, but I want to change it in this way." And you're like, "Yeah, but that actually doesn't work for us. That's not how we use it. But we would be on the hook to maintain that code if we accept your pull request." And so, as wonderful as open source is, I tend to be on the more conservative end of the spectrum of like, are you definitely sure you want to open source this? Is there another way that you can share this with the world? Can it be a conference talk, or a blog post, or something like that? But it is an interesting one.
STEPH: Yeah, I've been a part of several teams that have started with that; let's start an engineering blog. And their hearts are totally in the right place, and I understand why they want to do it. But like you just said, there's a cost to that. And if you don't have something like thoughtbot's investment day, a time for engineers to be able to contribute to that blog, then they're just not going to, because they won't have the downtime to do it. And it is hard to write and publish and be happy with what you're putting out into the world.
I really like what you're talking about in terms of the maintenance burden because I can't remember if it was an Upcase conversation or if there was something...but I was early on at thoughtbot and had a similar thought of why can't we just open source it? Why can't we make it public? And there was a very big thoughtful discussion around well, we have to have all these considerations in place. Who's going to maintain it?
Just like FactoryBot is a really big open-source project maintained at thoughtbot. And there's typically a rotation of folks who will take ownership and then onboard other people who are interested in it and curate the issues. And it's very important work, but you have to allocate time for it. All of that to say, I totally agree. There's a big burden that goes with it.
CHRIS: Yeah, it's interesting that this has been an evolving thought in my head, and it makes me sad is another thing I'll say about it. I wish it were easier to just put code out there in the world and have the expectations properly calibrated for like, hey, I did this thing. Here's a code sample. It worked for me.
Actually, I found dropping something in a Gist...a Gist just has a point-in-time connotation to it that I like. Like, if I see a code sample in a Gist, I'm like, I have no expectation that that person is going to do anything or respond to anything I have to say. But this is great because I now have this sample code that helps me get a little bit further.
And I may have to vendor that code or take it on myself, and I now own it. It's not this person's responsibility. But the minute you have a repo with a README that says stuff and like, here are the installation instructions, the expectations just flip in a way that I don't think is...at least I become cautious around. And that does make me sad, though.
STEPH: Yeah, it feels like you went from offering an example to I'm offering a product. And so then as soon as people feel like, oh, you're giving me something as a product that you maintain, then I'm going to have higher expectations of it should work how I expected it to work. I'm going to ask questions. And yeah, you make a lot of good points.
CHRIS: Would you like to pay me $0 for me to build software for you? That sounds fun.
STEPH: [laughs]
CHRIS: And open source is such a wonderful thing. And so I'm interested in...like, I follow a lot of folks who are in the open-source world and have found ways to make it make sense financially or otherwise or organizationally. Open Collective and things like that is one option or OpenCore and then paid pro models and things like that like Sidekiq as an example. Sidekiq just celebrated ten years with some wild numbers in terms of the revenue, and it's like, yeah, that's fantastic. This is a cornerstone piece of software in the Ruby and Rails community. And also, Mike Perham had a great outcome from it. I think that's a win.
So maybe blogging, maybe, but not sure. Probably not open source is my suggestion, at least for me. But one thing that I am interested in, that hasn't been an option in my mind for a long time but I'd love to get back to, is conferences and going there, especially with a small team from an organization. Like, three developers go, and we hang out at a conference, and the company has a space there. And there's room to have conversations and meet people. That is one that I would love to continue as a way of making sure that our name is in people's minds as a place that they could work.
It is interesting, though, that it gets scoped a little bit; we are definitely a Rails shop. But that's not all that we are, or that's not the complete totality of our technical identity, so it becomes interesting. But I think it's probably the most representative. And I definitely see the Ruby and Rails community as having a good product-centric mindset, which is definitely the sort of thing that I want in the teams that I'm building.
STEPH: Yeah, I think that's an awesome idea because it's a way that you could focus on creating content. It'll likely have a big impact. But then you can also replay that content, but it's not the commitment of a blog or a commitment of open source.
CHRIS: But yeah, so hiring has been, I would say, most of what I've been doing. One other thing that was fun this week, so I have my new laptop that I've had now for a couple of months, I'd say. And just this week, we had a very frustrating issue where Heroku stopped deploying our application. Just one day, it was like, nah, it doesn't work anymore. And I was like, well, that's less cool than I want it to be.
And so one of the developers on the team dug into it, and it turned out Node-sass was the answer, which we're not even using, which makes it extra unpleasant. It's just part of Sprockets and Webpack or something like that. There's some downstream dependency sequence. We're using Tailwind and PostCSS. So we don't even need Node-sass. I think maybe PostCSS does.
But anyway, it turns out Heroku had switched to using version 16 of Node just without telling us. We were previously on 14, and Node-sass didn't build on 16. There was just this weird dependency chain that stopped working one day. And we weren't pinning the Node version within our application. So one of the developers figured this out, pinned us back to version 14 something of Node, and that was fine. But then my computer got confused because the versions were out of sync.
Anyway, asdf is great. That's the first thing I'm going to say. So I use asdf to manage the versions of Ruby, and Node, and Yarn, and Elm, and basically everything else that I use. And I love that it's all under one hood, so asdf, wonderful. Also, my laptop, wonderful. I really love the M1 fancy laptop. But what was fun was I had to install the new Node version.
And this was the first time in the three months I've had this computer that I've heard the fans come on. Finally, I asked it to do something hard enough that it was like, whoa, whoa, whoa, I'm going to need some backup here. And so the fans finally kicked in. So I don't know what's going on installing Node, but good for everyone involved, [laughter] impressive to make such aggressive use of all of the hardware in my computer.
STEPH: Yeah, I love asdf. I miss it right now because I'm on my client machine, and we're not using asdf. Instead, we are using Chruby, C-H Ruby to manage Ruby versions. asdf is awesome. That's fun. It's the first time that the fans kicked on. I'm intrigued with my machine. I haven't really paid attention to it when the fans kick on except the one time where I had like a Ruby process that was running away, and I had to figure out what was going on there. Because then all the CPU was just being dedicated to Ruby even when I wasn't using Ruby. But since then, I haven't heard the fans. It's been very, very quiet. It's lovely. I like when it's quiet.
CHRIS: Oh, it's been great. It was interesting because it was this weird noise that I'd forgotten about.
STEPH: [laughs]
CHRIS: My previous computer was so old that this was happening regularly whenever my backup process would run. Apparently, that is a very computationally intensive activity. So I would hear the fans kick in, immediately go find the backup process and say pause for 60 minutes or whatever it was. Just like, leave me alone. Stop it. The computer is getting too hot. You need to calm down. But now, with the new computer, there was nothing I could do to make it happen. And then finally it happened, and I was like, oh yeah, I guess this computer has fans. That's neat. But yeah, so things that are great, asdf and the M1 laptops.
STEPH: Nice. Yeah, you're one of the few individuals I know that's using one of the M1 chip. So it's been reassuring to hear how well it's going because I did not opt in to that new-new. I opted in to the give me something stable and steady that I know so that way I don't have to fuss with it because I can then fuss with all the other things that I need to fuss about.
CHRIS: So much fussing to do.
STEPH: Lots of fussing. Fussing and cussing is what I do over here.
CHRIS: [laughs]
Mid-roll ad
Hey, friends, let's take a quick break to hear from today's sponsor, New Relic. All right, so you've probably experienced this before where you're just starting to fall asleep, and it's a calm, code-free peaceful sleep, and then you're jolted awake by an emergency page. It's your night on call, and something is wrong. But I have some good news because you have New Relic, which means you can quickly run down the incident checklist and find that problem.
So let's see, our real user monitoring metrics look good. And that's where New Relic measures the speed and performance of your end-users as they navigate the site. But it looks like there's an error in application performance monitoring. If we click on the error, we can find the deployment marker where it all began, roll back the change, and, ooh, problem is solved. We can go back to bed, back to sleep, and back to happy.
That's the power of combining 16 different monitoring products into one platform. You can pinpoint issues down to the line of code so you know exactly why the problem happened and can resolve it quickly. That's why more than 14,000 other companies, including GitHub and Epic Games, use New Relic to improve their software.
So you know that next late-night call is just waiting to happen, so get New Relic before it does. And you can get access to the whole New Relic platform and 100 gigabytes of data free forever. No credit card required. Sign up at newrelic.com/bikeshed. That's New Relic N-E-W-R-E-L-I-C .com/bikeshed, newrelic.com/bikeshed.
CHRIS: Well, speaking of, what have you been fussing and cussing about this week, Steph?
STEPH: So this is more in the pranting area, which is our portmanteau for praise and rant, where I'm super excited. I have a delivery coming from Amazon today. So I'm that person that keeps checking and waiting for it to show up. But I'm finally going to have one dongle to rule them all.
I have a very messy approach right now [laughs] where I have all the dongles and have to plug everything in. And you know what? Normally it's fine. It's fine because I do it once, and I don't have to mess with it that much. But because I now have my thoughtbot laptop and I have a client laptop, and I needed to be able to switch back and forth, it is just too much. And I was realizing how many dongles I'm having to use. So I have one dongle to rule them all. It's showing up today. It's a very exciting day.
CHRIS: I'm very excited for you. I recently made a similar switch when I got this new laptop. I was like, you know what? I'm going to look into it because power can come over USB-C and whatnot. And I was like, all right, it's finally time. I want to be able to just click in. And it's one of those things that feels trivial, or at least in my mind, I'm like, this doesn't feel like it'll make that big of a difference.
But it makes it so much easier to disconnect my laptop, go somewhere else, and then come back. And I noticed myself doing that more, which I think is a positive thing. Otherwise, I'm just anchored to my desk. I'm like, I don't want to unplug everything and then have to replug it. That's like a whole thing. But now that it's not, I am more mobile, more flexible in where I'm working from, and I found benefits from that. So I'm a fan. I'm very happy that this is going to show up for you [laughs] and really change the way you're working.
STEPH: Well, I've started moving away from a lot of Bluetooth connections as well because my keyboard will support Bluetooth, my headphones support Bluetooth. And I liked the idea of being wireless. But then, especially from switching laptops back and forth and then having to reconnect and all of it, it was just too tedious to go back and forth.
And yeah, I'm with you where I didn't want to have to leave my desk and unplug everything and then bring it back where I'm playing, you know, like the game Operation where you had to reach in and then you had to grab different little bones? If you don't know the game Operation, that sounds really weird. But it felt like a game of Operation where then I was having to find all the dongles and connect them and plug them all in. And yeah, so it's going to be wonderful.
CHRIS: Even knowing the game Operation, that still sounds kind of weird.
STEPH: [laughs]
CHRIS: But I really love that there are people out there listening that are like, what are they talking about?
STEPH: What weird childhood did you have?
CHRIS: Yeah, I'm definitely Team Wired-Almost-Everything. The only thing that I have that's wireless is my headphones. And it only works kind of, and I have to trick them sometimes. And the worst thing is occasionally my computer will have control, whatever, they're connected. So I'm listening to music on my computer and then suddenly, my phone will just steal it. It's like, what are you doing? No.
Or, randomly, my headphones will be sitting away from me, and they'll just connect. And I'll be in the middle of a call on something else. Like, I'm here talking to you, and suddenly my headphones are like, hey, we wanted to join the party. It's like no, absolutely not, [laughs] not at this moment, under no circumstances. So I don't really believe in Bluetooth as a technology. I'm very much a fan, particularly with things like keyboards and whatnot. Bluetooth I've yet to be convinced that it is a sound technology.
STEPH: I have the headphones where they try to be very smart, and they are pretty smart where they will block out sound. But then, if I am talking, then it will put me in more of an auditory space where then I can more easily hear, and it won't filter out sound as aggressively. But I've noticed a problem. And it's when I'm watching anything that's funny that then I'm laughing.
So every time I laugh, my headphones think I'm talking to someone, and then it will switch over to where it's trying to let me hear more sounds out in the universe. And then it kicks back on because it's like, okay, she's done talking. It's a very jarring experience. [laughs] And I haven't figured out how to turn that setting off. It's like, oh, I just can't watch funny stuff with my headphones right now, which is also problematic with pairing because I tend to laugh a lot with pairing. It's a thing. I'm working on it. The struggles of Shteph.
CHRIS: Well, at a minimum, it sounds like your dongle life is going to be improving very soon, and that's exciting.
STEPH: Dongle life, it'll be single dongle life. That's it. [singing] All the single dongles, all the single dongles. Put your adapters up. [laughter] On a different note, talking about some of the work that I've been doing this past week with Joël Quenneville on our client work: we have been looking for ways to bring down CI time. We've talked about the fact that we're working on some of that horizontal scaling. And I don't have an update there.
But the other update I have is that we want to be very strategic about where we invest our time because improving the test suite is not trivial work. A lot of the low-hanging fruit has already been done, so triaging a flaky test can be very difficult, and it can take us a while. So we want to verify, before we invest a lot of time into a portion of the test suite, that we know what the outcome is going to be. Are we improving developers' lives by this much? And how do we measure that? Are we reducing the CI build time, and how do we know that?
And one of the areas that I really wanted to focus on is FactoryBot because there are a lot of factories. The factories tend to do a ton. So they are calling out to the database and building a lot of associations. And that's something that the team knows about as well is that there are just so many SQL queries that get executed in tests. And it would be great if we could reduce the number of SQL queries that are going out.
And FactoryBot includes some ActiveSupport notifications, which means you can subscribe to factories being run, which then gives you access to details like which factories are being used, what build strategy is used (are you calling build, build_stubbed, or create?), and the factory's execution time.
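For reference, subscribing to that notification can look something like this minimal sketch; the file location, the stats hash, and the at_exit report are illustrative assumptions rather than anything taken from the client codebase:

# spec/support/factory_profiling.rb (hypothetical location; assumes it gets
# required by rails_helper so it runs alongside the suite)
factory_stats = Hash.new { |hash, key| hash[key] = { count: 0, total_time: 0.0 } }

# FactoryBot instruments each factory run as "factory_bot.run_factory";
# the payload includes the factory name and the strategy (build, create, ...).
ActiveSupport::Notifications.subscribe("factory_bot.run_factory") do |_event, start, finish, _id, payload|
  stats = factory_stats[[payload[:name], payload[:strategy]]]
  stats[:count] += 1
  stats[:total_time] += (finish - start)
end

# Print the most expensive factories once the run is over.
at_exit do
  factory_stats.sort_by { |_key, stats| -stats[:total_time] }.first(20).each do |(name, strategy), stats|
    puts format("%-30s %-12s calls: %5d total: %8.2fs", name, strategy, stats[:count], stats[:total_time])
  end
end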
So then the idea of this is that if we can harness a lot of the data that we can collect from FactoryBot, then can we ask questions around what's our slowest factory? How long does it take, and how many places is it being called? Because then ideally, we can calculate to say, okay, if this factory takes this long and it's used in this number of places, then we can have a formula to figure out how many minutes of our test suite is spent just on executing that factory.
And then if we can reduce the time of that factory, let's say by half, then we know how much time we're shaving off of our CI build. And then we have this more concrete, verified okay, this is worth our investment. We want to pursue this, even if the factory may take us a full day to improve because it does so much. And it's just gnarly. So it's going to take some time to really refactor it into a more simplified state. So, in theory, this sounds really, really great, and a lot of the thanks goes to Josh Clayton, who was the one who pointed out that we could use the ActiveSupport notifications to find a lot of this data.
And so Josh and I paired on this for a bit to look into can we answer some of those other questions as well? And we were testing it on a small side project that he had, which was great because the other codebase is very big, and feedback is just a lot slower. So we wanted to first prototype it and have a proof of concept in a very quick space and just to be able to look through the data and make sure the assumptions that we had and the value would be there. So we applied that first, and that was going really well.
And then Joël Quenneville took that strategy and applied it to all the specs in the spec/models directory and ran it for the much larger client codebase and got some really great results. And we also used a low-fidelity approach where we wanted to be able to see which factories were the most popular, so how often they're getting called and their average execution time. That way, we could quickly look at a scatter plot, and we could see, okay, who's in the far upper right quadrant? Because those are the factories that are causing the most pain.
But we started looking into a graphing library and what are we going to pull in. And Josh had the great idea. He's like, "I wonder if Google Sheets has a scatter plot. Can we export this to CSV data and then copy it from the terminal and import it into Google Sheets?" And it turns out that you can. So then we grabbed it and put it in Google Sheets and then just converted it into a scatter plot, which was really nice because then we didn't have to incorporate any chart library or any graphics or anything. We could just plop it into Google Sheets and then easily share it.
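As a rough sketch of that export step, assuming the per-factory counts and times have been collected into a hash (factory_stats below is an assumed name, keyed by factory name and strategy), Ruby's standard CSV library gives you something paste-able:

require "csv"

# Emit one row per factory so the output can be copied from the terminal
# straight into a spreadsheet; the column names are just illustrative.
csv_report = CSV.generate do |csv|
  csv << ["factory", "strategy", "count", "avg_seconds"]
  factory_stats.each do |(name, strategy), stats|
    csv << [name, strategy, stats[:count], (stats[:total_time] / stats[:count]).round(3)]
  end
end
puts csv_report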
So we now have this list, thanks to Joël, because he ran it through the spec/models directory, of all the factories that are getting called. And it's really interesting. And there's one, in particular, that is high on the list. And it was actually one of the first ones that we worked with when we were troubleshooting a test that took us a while when we first joined the project. And the average time for this factory is four seconds, and it gets called over 500 times; it's like 527 times. So then if we multiply that, if we say, all right, it takes about 4 seconds times 527 and then divide by 60 for minutes, that's 35 minutes, 35 minutes for that factory.
Now, granted, these are getting parallelized across different processes. But still, if you divide that up across four processes, that's a non-trivial amount of time. So I think this is going to be really helpful and really interesting data that we can then use to drive our decisions to say, okay, we want to take this factory and let's say even if we can cut it into half, let's say if we bring it from 4 seconds to 2 seconds, it'll go from 35 minutes to 17 and a half.
CHRIS: Oh wow, I love the methodical approach. I love actually having a number, like, this is how much pain, the cost of this, right now. And so we've identified that this is this high-level thing. I love the intentional starting with, like, let's measure it. Let's understand where the most bang for the buck is.
In particular, the graph that you're describing reminds me of one, I haven't actually worked with it much, but Code Climate has a graph that they use, which is churn versus complexity. So it's like, you may have a very complex piece of your code, but someone wrote it once, and it just sits in the corner. And you know what? It quietly does its job. And yes, it's very complex, but nobody needs to touch it. So it's not a big deal.
And then you have stuff that changes constantly, but it's super simple, so that's fine too. Your UsersController is probably going to change a bunch; that's fine. But the stuff that is constantly changing and very complex that's the magic quadrant that, like, pay a lot of attention to that. And similarly, which are the ones that are being used a lot and take a while? That's the magic quadrant. I'm intrigued now. I want to search for more magic quadrants that deserve attention. But for now, that sounds like a lot of fun.
So now, what's the approach that you're going to take? I imagine you need to alias that factory and have it exist because some tests will rely on certain details of it. This is my guess. So let me see if this is the way that you're thinking about it, alias the factory, so you have a representation that does all the stuff that the current one does. But then you have a new one that is much more pared down.
And then, on a test by test basis, you start switching it over and trying to move things to the lower weight, the slimmer version of the factory. But I would think you would want to do a gradual process if there are 520-ish usages. Because otherwise, just changing that factory out from under all the tests, I imagine you'd break some tests if you just were like, what if it did less?
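To make that concrete, one way the incremental swap Chris describes could look in FactoryBot is sketched below; the factory and association names are invented for illustration, and the real heavyweight factory presumably does far more than this:

FactoryBot.define do
  # Pared-down factory: only what's needed for a valid record.
  factory :project do
    name { "Example project" }

    # The existing heavyweight behavior preserved under its own name so that
    # current tests keep passing while they're migrated over one at a time.
    factory :project_with_full_world do
      after(:create) do |project|
        create_list(:task, 3, project: project)
        create(:billing_account, project: project)
      end
    end
  end
end

Tests that only need a valid record can reach for create(:project), while the ones that genuinely depend on the built-up world keep using create(:project_with_full_world) until they can be simplified.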
STEPH: Yeah, I like that idea of the incremental approach. And that all sounds great, especially the alias because you're right; we want to change it incrementally and not all of them at once. But then essentially implement one, because I want to see what the pared-down factory looks like. What is the basic factory that we can get away with? And then how long does it take for that factory to execute? Because that will help confirm, can we really get it down to two seconds? Or is this just a factory that's always going to take three and a half seconds, and then it's not really that much of a payoff? Maybe we should look for a different factory to investigate.
And then also understanding from the tests whether people are reaching for this factory all the time because it builds up the world and all of these tests need the full world, or whether people are just reaching for it because it does the one or two things that they need, and we can get away with a much slimmer factory. So right now, it's in the space of understanding why people are reaching for this, what the tests actually need, and yeah, how we can do it incrementally.
At one point, we may even be able to try to programmatically switch it out. Maybe we just find 50 tests that are using this once we have the slimmed-down version and we replace...50 is probably too big. But if we replace X number of tests with this factory, how many of them fail? Maybe 10% of them pass. Cool, let's just take those 10% as a win and issue those as a PR. So that could be a strategy as well, just to find if there are any that are super easy to change, where all we had to do is literally change the name of the factory.
The other big part that's on my mind is education around factories. I think a lot of people on the team understand that factories are very important. They can be very helpful. They can also be very cumbersome. But it feels like a good opportunity to say, "Hey, we are specifically working on these factories. Here's the reasoning that led us to work on these factories.
When you're in the space of factories, please be mindful about what are you reaching for? Is there a slimmed-down factory that you can reach for? Maybe you can implement your own slimmed-down factory if one doesn't already exist." So I like the idea of coupling it with also just broader awareness because we are but two people. So I would love for more people to be part of the changes.
CHRIS: Unsurprisingly, there are some wonderful blog posts on the thoughtbot blog that speak to this topic. One that I'm a fan of is Factories Should Be the Bare Minimum. This was written by Matt Sumner. And it describes basically that idea that factories shouldn't build the world. They should give you the pieces that you can use to build the world but not build the world entirely. And so I'm a big believer in that, having your factories be as minimal as possible. They should be valid, but that's about it.
And then I will often reach for extracted helper methods, keeping those as locally scoped as possible, often in the spec file itself or, if not, maybe in shared spec support. But being intentional with where we reach for them and not having everyone use the same thing that just slowly gets added to. And it's like, do I actually need everything that's in there?
The other thing that's interesting is that the idea of having a factory that does a ton is, in my mind, in direct contradiction to what I believe factories exist for. When I think of factories, they're useful to fill in the rest of the details such that you don't have mystery guests in your test.
But you can explicitly say build me a user who has an email that looks like this because, in this test, I care about the email, but I don't care about the rest of the details. I don't care about their name. I don't care about their password, or the roles, or any of the other details. Just let the factory deal with that because it's not important to the test. But I want to make sure that the relevant detail is present and specified within the spec.
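A tiny sketch of that pattern, with hypothetical model and method names since the exact domain doesn't matter: the test spells out the one attribute it cares about and leaves everything else to the factory.

RSpec.describe Receipt do
  it "addresses the receipt to the purchaser's email" do
    # Only the email is spelled out; the factory quietly fills in every other detail.
    user = create(:user, email: "buyer@example.com")

    receipt = Receipt.for(user)

    expect(receipt.recipient_email).to eq("buyer@example.com")
  end
end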
If you have a factory that builds everything in the world, and a test is like, build a user and then grab the first action from the project that that user has, because I know they have one because they use the big factory, that is just in direct contradiction to what we want factories to do. We want tests to tell their story. We want to avoid mystery guests. Factories are a great way to do that while still remaining concise. But if your factories just build the world, then there are some mystery guests in the world, I can assure you.
STEPH: Yeah, I agree that factories serve as an abstraction for what I think is important to the test. But then there comes this moment where someone thinks, well, I need to build up these records, but I don't really need to reference them directly. I just have some coupled code that's going to rely on them. And so I don't explicitly need them, but they need to be there. So I'm going to extract it away, and a factory feels like a good place to extract that to.
And I would take the very hard opposite approach where if you have coupled code and you have these dependencies that aren't necessarily explicitly used in the test, but they are required for the test, I'd rather see a painful test setup than have all of that extracted away from me. Because then if I do need to triage or troubleshoot that test, it's going to take a lot of mental overhead to work through what do I actually need here and why? So I'd rather see that painful test setup than have it moved somewhere else.
But I think a lot of people take the opposite approach of where if I abstracted away, my test looks prettier. And I'm like, yeah, but maybe to you in the moment, but it's going to cause me a lot of pain further down the road when I have to work with this. So show me all the crap that you had to do upfront. Just let me know. [laughs] I'd rather the test be honest with me.
And then it's a really nice jumping-off point because you can see a test that does all of this. And instead of blaming the test and thinking it's the test's fault, you recognize this test has a lot of complicated setup, and it's probably because of the code and how the code was written. And we should look at refactoring the code, not at how can we make our tests look prettier?
Mid-roll Ad
Hi, friends. And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is an application performance monitoring tool that's designed to help developers find and fix performance issues quickly. With an intuitive interface, Scout will tie bottlenecks to source code, so you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, and memory bloat.
Scout also recently implemented external service monitoring, adding even more granularity when it comes to HTTP requests and API calls. So give Scout a try today with a free 14-day trial and experience first-hand why developers worldwide call Scout their best friend.
And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. To learn more, visit scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Well, here's one more that maybe you'll agree with, maybe you won't. We'll see. I'll try not to lead you in either direction, but shared examples. If I'm going to rant for a little bit, shared examples are in that space of where they just get used so heavily, and they abstract away important information about the test. And it makes the test so succinct that I don't actually know what the test is doing.
And I've seen a number of places where a shared example has been extracted, and it is only used within that test file once, maybe twice. And I'm just like, friends, too much abstraction. Please keep it close. [laughs] We don't need to move it away. We want our test to be friendly and just full of context, which is what I mean when I say friendly. Full of context is what I'm looking for, well-named variables. And I want to be able to read the test and see what's happening.
So my little complaint for today would also be about shared examples and how they can be overused. And they do have a really neat purpose. They can be helpful if you're testing, say, a controller action and you want to extract that authentication check, making sure that a controller always requires authentication, and then that shared example gets included. Sure, that feels very helpful. But that's really one of the few cases I can think of where a shared example comes into play.
And if you are testing code over and over throughout different parts of your codebase, there's probably a part of your codebase that needs to be pulled out into a class so you can test that class in isolation. And then you don't need to retest it throughout all of your other classes. Have I already ranted about shared examples? I can't recall at this point if I have or not. [laughs]
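For the one case Steph carves out, the authentication check, a shared example can stay small; this sketch uses invented route and path names and RSpec request specs:

RSpec.shared_examples "an authenticated endpoint" do
  it "redirects to sign-in when no one is logged in" do
    get path

    expect(response).to redirect_to(new_session_path)
  end
end

RSpec.describe "Reports", type: :request do
  # The block passed to it_behaves_like supplies the one detail that varies.
  it_behaves_like "an authenticated endpoint" do
    let(:path) { reports_path }
  end
end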
CHRIS: I don't think we have talked about shared examples before. And I appreciate you not leading the witness here. But I think I'm in agreement with you, particularly the way you refined it there at the end because that controller example is the one rare case where I might reach for it. But in general, I think this is one of those things that I saw early on in my career. I was like, oh, cool; this is a way to clean stuff up and DRY and all those wonderful things. And then I've definitely felt the pain of just overuse of shared examples and ways to pull details out of tests. But it's like, I want to see the details.
And I think broadly, that's the theme that you and I are very aligned on is like, no, no, no, tell me the story in tests. I am much less interested in having these concise tests that have a single line, and it's like, expect foo to have bar. And it's like, why? Because...oh, there's a let and then a subject, and it's a shared...oh okay. Now that I can put it together, I can tell the story, but I cannot look at this test and see a story. I want to see a story, friends. So yes, I'm totally in alignment, especially with the slight caveat at the end of like, there are cases where it's useful.
Similarly, I've used let. I definitely have not even that long ago. And I stand by the usage, but it was very rare. It's very rare, and it is something that I'll look at and be like, am I sure? Definitely, is this the right thing, or did I do something wrong? Because if I find myself leaning towards let, it's like there's something that I don't think is important to the story of this test that still needs to happen. Why is that? What's going on here? Something feels off about that. And similarly, with the shared examples, it's like, is there not a different way to extract this such that I can test it in a way that I have confidence in, and then we're good?
I occasionally will talk myself into using shared examples or something like it where I'm like, oh, but it's really important that everything in the app has that authentication layer put in. And so, I should definitely have this very easily reusable test that can ensure that I have it. But there's a tautology there of well, if I write the test, then I'm definitely thinking about the implementation. But if I forget the implementation, I might also forget the test. And so, it actually doesn't provide any real safety.
And in those cases, that's a rare case where I would reach for some weird metaprogramming thing that's like, controllers must do the thing. And we say that in the ApplicationController, and then everything that inherits from it will raise if it doesn't implement the authentication layer. Something like weird code that says, "You shall not pass. You must, in fact, implement the authentication layer." Rather than saying, "Oh, we'll just make it really easy to test it so that we always test and, therefore, always use the necessary authentication layer." But yeah, that's a hard one to describe on the radio. So I don't know if that came through clearly. But that's sort of my headspace on this.
STEPH: Yeah, and all of that makes sense. I'm trying to think of a good example. And it's been a while since I've used Pundit, but I feel like Pundit may have a really good example of this where it's very easy to document to say, hey, all of these controllers need to make sure that they call out to this class or that there's authentication. I can't remember the exact code and how that works. But I feel like Pundit has a really good example of that behavior.
CHRIS: I think they do. It's something where I think it's a configuration level thing, but you say, "Hey, Pundit, we should definitely authorize any access to models." And so Pundit then has a before action, or it's an around filter one of those. But it will raise an unauthorized error, I want to say. Like, you did not do the authorization dance in this. And that's a great example of like, I like that it is loud and annoying and in your face.
And it is not possible for me to forget it because we configured it throughout all controllers. And so it's that sort of thing that I would probably reach for even though that code gets complicated and messy, and actions at a distance. But it's worth that trade-off in my mind to have, like, I don't want to forget to do the authorization stuff. Permissions matter.
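What Chris is reaching for here is Pundit's verify_authorized hook, which is wired up as an after_action rather than a before or around filter; a minimal sketch:

class ApplicationController < ActionController::Base
  include Pundit

  # Raises Pundit::AuthorizationNotPerformedError whenever an action finishes
  # without calling `authorize`, so a forgotten permission check fails loudly.
  after_action :verify_authorized
end

Newer Pundit versions spell the mixin Pundit::Authorization, and real apps typically skip the check for things like health checks or authentication controllers; those details are left out of this sketch.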
STEPH: That was a really nice pre-emptive approach as well. Because in most cases that we're describing, it's I'm going to write a controller, and then I need to add this test to verify and prove that yes, I didn't forget the authentication stuff. Versus upfront, you're setting a configuration to say, "Hey, please remind me to do the authentication step so that I don't miss this." So that's also a really, really nice approach.
CHRIS: Yeah, the same version of me that's going to forget to write the test is going to forget to write the implementation. So I don't want to trust that version of me to save that version of me. I'm equally untrustworthy in those situations.
STEPH: You want to trust the version of you that's going to get yelled at by the code if you don't do it.
CHRIS: Yep, I'm going to trust the version of me that was like, I don't trust any future version of me. I will yell at myself if I have not done the necessary things.
STEPH: [laughs]
CHRIS: To be clear, this is like a life philosophy of mine. I don't try to remember things because I forget stuff a lot. It just happens. And so if I need to take something out the door with me, it goes in front of the door but extra critically, and this is the subtle line. Because plenty of people do that trick where you put a thing in front of the door because then you can't leave without it. There's no way to forget it. But by virtue of that, you cannot put something in front of the door until it is time to use it.
Like, if ever you have to go and be like, oh, I don't need it now, though, so I'm going to move it out of the way, open the door, and then leave. No, no, no, because then you've broken the magic of the thing in front of the door must leave with you. So it's a very subtle line. I will play games with myself. I'll be like, I am forgetful. I will not remember this. I do not trust future me, so I'm going to play a trick on them. But you got to calibrate it just right.
STEPH: That's really funny because I totally [laughs] didn't think about it until now how you described it. But I have definitely done that where I set a rule for myself, but then I'll break it. And then, of course, everything all of it collapses. There is a time when Tim, my husband, was going through a developer bootcamp. And as he was learning the whole world and everything that's out there, he would ask me all these questions. And he's like, "Do you know this?" And I'd be like, "No." He's like, "Do you know that?" I was like, "No."
He was like, "I thought you knew this stuff." He's like, "I thought this was your job." And I was like, "Yeah, I'm really good at finding it and Googling it. But I work really hard to not store this in short-term memory because I'm filling it up with other stuff. So I work really hard to be able to find this stuff and track it and Google it."
But now, there's a lot of stuff that I try very hard to not hold on to until I need it. But that was a funny moment where he seemed very upset that I didn't know stuff. And I was like, "Well, welcome to web development. There is too much to know. You're going to have to have a really good catalog system."
CHRIS: Also, just so we're clear, it's going to change by next Thursday, so don't hang on to anything like it's just true forever.
STEPH: [laughs]
CHRIS: SQL will probably be around. That's about it. That's the one thing that I'm really confident in.
STEPH: Yeah, that feels fair. Get really good at understanding HTTP forms, SQL, all that feels like some really good groundwork.
CHRIS: There are some foundations. We should have a foundations episode where we talk about what we think the foundations are, the stuff that we bet won't be different in 10 years. But everything else is going to change by next Thursday, specifically.
STEPH: Yeah, I like the idea of foundations. I'd be intrigued to see what we talk about and what happens there because I feel like that's going to be very representative of already what we talk about. We often will sprinkle in some new-new, especially thanks to a lot of the adventures that you go on. But I feel like a lot of the stuff that we talk about we always bring it back to the foundation because we do want the experiences that we're having to be applicable to everyone else as well. So yeah, that would be interesting to see what comes out of that.
On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeee!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Steph is super excited about changing her schedule to dedicate a full day to focus on being a great team lead. Chris talks about his continued adventures in the world of hiring.
Together they answer a listener question about what they consider a “large” table in a database and how they review schema changes.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Services down? New Relic offers full stack visibility with 16 different monitoring products in a single platform.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: I just feel like every time I listen to Celine Dion, there are lots of dramatic hand gestures that have to go with it.
CHRIS: Yep, definitely that. I'm strong team Power of Love.
STEPH: Ooh, yeah, yeah, that's a good one.
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. Oh, I have some exciting news. I am changing up my schedule, and it's going to start next week. As team leads at thoughtbot, we have been working on finding ways that we can have more time to invest in the team and team-specific initiatives. And most of us spend four days billing on client work, and then we have investment day, which is delightful.
But we're finding as team leads, that's really not enough time to then have the impact that we want in terms of supporting our team and then also having time for mentorship and all the other things that go along with being a team lead and one on ones. So we have been incrementally working towards reducing billing.
So team leads only bill three days a week, and then they have an additional day to really focus on being a great team lead. And I start that new schedule, that new-new schedule next week, and I couldn't be more excited. I think it's going to be wonderful.
I do think there are some challenges that go with it in terms of really balancing, at least this is from the others who have gone before me where they then find it a bit harder in terms of client expectations of saying, "Well, I was billing four days, and I had a larger impact on the codebase and the team. Now I'm dropping to three days. I still need to stay within that constraint. And I want to keep the client team happy." So that seems to be a thing. But I will find out next week how it goes.
CHRIS: Well, I'm very excited for you. That sounds wonderful, frankly. The balancing of the client expectations and then there's sort of now three slices to your work, which there always were, but now you have it delineated in an interesting way. Do you have specific plans for the team lead? So let's say now, nominally, there's one day a week that is dedicated to team lead time. Do you have ideas of what that looks like? Are you planning to pair with your team? Is it longer one on ones? I don't want to seed the question too much with potential answers. So what are you thinking about there?
STEPH: [laughs] Ideas are great. And yes, so I think number one is structure. So right now, one on ones and any support that I need to provide others is more ad hoc, or at least the one on ones those are not ad hoc; they are structured. But they are spread out throughout the week, and then I just context switch between client work and then checking in with others.
Now I can stagger everything on a Thursday or whichever day is going to be my really focused team lead day. So that way, I have all the one on ones on that day. And then yes, I can have more time to pair. So I can say, hey, let's just schedule every other week where you and I hang out, and we pair for an hour, and it can be on their client work. It can be on anything that they'd like to work on. Or, if there's a particular topic they'd like to talk about, we can pair on consulting issues or discussions.
But yes, ultimately, I'd love to do more pairing and then structure one on ones to a specific day and essentially, just really protect that time. Because right now, it feels that client initiatives and work always come first, and then team lead comes second. And I'm excited to balance that more so they have equal priority.
CHRIS: Yeah, that sounds great. I'm super intrigued to see what specific structures fall out of it and how you're experiencing it. I'll be interested to hear how investment time changes for you as a result of this because I remember when I started in the management role, four days a week billing, and then one day a week of investment time. But the investment time then basically went to one on ones and other things like that.
And when I switched to a three-day week, I was able to reclaim some amount of investment time. And it was interesting having that open back up and have that be a consideration. Because definitely one on ones and things like that I think firmly fit within the idea of investment time or investing in the organization and whatnot. But still, there's the like; I'm going to go explore a new framework or something like that that also certainly fits within investment time. So I'll be interested to hear if you find that changes in sort of a specific way.
STEPH: Yeah, I'm really interested in that as well. Because right now, as you mentioned, my investment activities are really focused more around the team and other folks and then Bike Shed. Bike Shed is a really big investment time activity. So I've noticed since becoming a co-host for the show, I talk a lot about code, but I don't necessarily contribute to open-source projects or other internal projects at the rate that I used to. It's now more focused about here and being a co-host and talking about all the things, and that requires some prep for me.
So I'm also interested to see if this will shift my investment time a bit where I do find a little more time to code and then explore just things that I'm interested in. But in the experiment of doing something new, it's always important to then have a way to measure is this a good change? Is this a bad change? So we have been checking in with team leads to say, "Hey, we've changed your schedule to where you're billing one day less. How's that going for you?" Because there's the assumption that this will be great, but you really have to check in with folks to find out.
So Edward Loveall has been sending out a helpful survey and checking in to say, "Hey, how are you feeling about your client work? How are you feeling about your team lead responsibilities? How are you feeling about investment time?" So then you can track your own growth and see is this really helping me? Is this really going in the right direction, or am I just more stressed about everything now? So that's helpful that we are also just looking back to make sure that this is supporting the initiatives that we said it would support.
But that's some of the newness in my world. What's going on in your world?
CHRIS: What's going on in my world? Continued adventures in the world of hiring. So we've got a couple of people in the pipeline now. We've got some folks in the technical interview phase, which we're structuring our technical interview very much inspired by the thoughtbot interview. So it's a pairing session as well as some code review, which is great because I think it's really representative of the actual work that we do.
I believe strongly in not having an interview that is trivia or anything of that sort of thing. I want to see folks at their best as opposed to finding the rough edges. Because I think it's critical to have an interview that really represents the work that we're doing and then also gives candidates an opportunity to show themselves at their best as opposed to trying to hunt out gaps in knowledge or things like that because I think it's easier to shore up a gap of knowledge. But I really want to know what is this person like when they're firing on all cylinders?
So, so far, that's going great. But hiring is a complicated long game. So it will probably be a thing that I'm talking about for some weeks to come. And if anyone out there is listening and is potentially interested in a new adventure, I would love to chat with you. Sagewell Financial is hiring. And it's a wonderful Rails codebase and lots of new opportunities, et cetera, et cetera.
STEPH: As someone that has worked with you, I can absolutely vouch that you are amazing to work with. And I can only imagine the codebase must be...everything we've talked about is really interesting and stellar. So yeah, I love that you're talking about this. I think it's awesome and a great opportunity for folks to get to join Sagewell.
CHRIS: Oh, thanks, Steph. That's very kind of you. But in other unrelated to hiring news, one of the things that I talked about in last week's episode was my search for a new to-do list or a new application to use. And I listed some of the ones that I've been exploring. We got more feedback about that particular segment than any other by like 2X. And there's something to be said there. Maybe the show is just living up to its name.
But so many people are reaching out like, "Oh, have you looked at this one?" And to be clear, I very much appreciate all of the feedback that folks have given. And actually, it has given me a few new things to look at or ways to think about this question. But mostly, I find it very funny that even though we've dabbled in topics like agile, and is it good or bad? Or other contentious ideas [laughs] like that, somehow this idea of what to-do list application I should use has gotten by far the most engagement we've seen with our audience.
STEPH: I think it makes sense. Everybody has an opinion. Like you said, we're living up to our name, which is great. Was that great? I don't know. [laughter]
CHRIS: It's something, I'll say that.
STEPH: It's something. But yeah, everybody has felt this pain. They get it. It resonated. But since we do have some people that shared their strategies and their thoughts, did that sway you at all? Are you still going to keep with what you have, or are you going to explore new things?
CHRIS: I consider this project open. I have a project in Things, which is the current to-do list application that I'm using to explore the landscape. But it's basically like, I want to timebox it, find a version that works for me. And right now, I moved to Things, and it's fine. I'm more intrigued by the jobs to be done aspect of it. So as opposed to a particular piece of software and the features that it has or doesn't have, I really want to think about the habits and workflows that I want to make easier and more repeatable.
So particularly, each day, I want to wrap up by cleaning everything up. I like my inbox zero, as you probably know, so doing a little bit of that, and then planning the next day. So I want to have a tool that supports that idea of I want to queue up what I'm going to do in the morning so that tomorrow morning when I start back up, I have a very clear list of things to do. And I can just dive in with what I find to be some of my best thinking time early in the morning.
Similarly, I want to be able to review on a regular basis and know if things are getting stale or overdue. So there are a couple of different workflows that I'm really focusing on. And it's unfortunate because then I look at each piece of software, and I'm like, well, you kind of support this but not totally. So I'm more in a collecting phase right now. I'm thinking about the workflows that I want to have and then finding the different tools and comparing them across those.
But the one thing that I have done at this point is I wrote a little Siri shortcut I think is the name for it. They're called Shortcuts is the name of the application, but if I try and Google that, Google doesn't really know what I'm talking about. They think I'm talking about my phone, but I'm not talking about my phone. I'm talking about my actual computer, but it's little workflow automation stuff on OS X.
And so I have a shortcut now that prompts me for the amount of time, and it defaults to 45 minutes. And then, it will turn on Do Not Disturb for 45 minutes, minimize Slack, because I can't be trusted, and turn on a particular Spotify playlist.
And then there's a little menu bar application that...I wrote a tiny bit of AppleScript; I found it on the internet and actually read it, that finds the top task in my to-do list and puts it in the menu bar. And so now I have all of that. I push a magic button, and I say, "Yes, so I would like to work for 45 minutes on the thing that is at the top of my to-do list.” And then all of the noise of the world goes away for 45 minutes or however long I say.
STEPH: I think you just created the next new hot to-do app. [laughs] This sounds like something that I need and love, especially when you're like, it autoplays a playlist for you and shuts down the world and then has you focus. Yeah, I like this. I think this also rings a bell. I feel like Momentum, or something also has similar prompts. But this sounds delightful.
CHRIS: If we're being honest, it's an absolute hodgepodge of a kludge. You have some weird shell scripts and some AppleScripts. And I had to install a weird command-line utility for Spotify to make it happen. But it was one of those like; I'm spent at the end of the day. I just want to tweak on some piece of code. And this was a perfect, productive distraction, is how I would describe it. And when I've used that, I've been very happy. I know the days that I actually lean into that mode of working are better days.
The days where I allow myself to be distracted by Slack throughout the day, although I'm responsive to certain questions, things are not moving as well as they should. And so, I'm trying to be really intentional with having more of these Do Not Disturb sessions throughout the day. I feel bad saying that. I shouldn't because we all should be in agreement that this is the way that we work. But even saying that, I'm like, I'm not that special. I should be reachable, right? [laughs] But I should take even just a short 45-minute break to focus on the work that I actually need to do. It's a struggle.
STEPH: I have struggled with that where I used to always feel such an urgent need that I had to respond to someone as soon as they messaged me. But over time, I've learned that one, things typically aren't as urgent as I will feel that they are. And then two, if you have that type of environment where people aren't expected to just immediately reply to stuff, then you learn to write things in a way that says, "Hey, when you see this, and here's context, and here are the things that I'm looking for. And here's an easy way for you to give feedback."
It just improves the overall communication. I could go on a rant about this. I think we've actually ranted about this before in a very positive way. [laughs] Yes, I think that's great that you are fighting the good fight and turning off the world for 45 minutes to focus on a task.
CHRIS: What's a positive rant? I feel like there's got to be a word for that. [laughs] But I'm trying to try and come up with that. A celebration isn't...is this one of those gaps in language where we don't have a word for a positive rant?
STEPH: Oh, this is going to bother me. [laughs] There's got to be something for a positive rant.
CHRIS: Well, I'm sure German has...some Scandinavian language or German has a more specific version of when one goes off on a rant for many hours about things that they love and are joyous about in the world or something like that. But maybe English is just lacking this, or maybe this is a market opportunity. And we can coin the word, and then it's ours.
STEPH: I think it's just praise or accolades, although that doesn't feel strong enough. Rant feels like such an emotional word that I agree praise doesn't feel strong enough.
CHRIS: It's also spacious. You don't just rant, and it's one word. It's not just like one swear that you yell in the word. No, it's this long rambling thing, and I want that but positive. Maybe it's just called The Bike Shed [laughter] because I think that might be what we do.
STEPH: I love that. I'm trying to smash it together, and all that I can come up with is prant, so that leads with a P.
CHRIS: Yeah, I went there real quick. [laughter] Portmanteau is where I spend most of my time. But prant is just not enough. Okay, we're going to take this offline. I think we should come up with a word. This is our market opportunity. I don't know that we'll make a lot off of this, but we'll have a word then.
STEPH: It's okay. Free things are good. Oh my goodness, this is going to be so trivial and silly. But I've been playing Wordle as the rest of the world has. If you're not playing Wordle, check it out. [laughs] It's delightful. And it's free. But I started playing without really researching who created it and didn't have all of the details behind it.
And then it was earlier this week I found out that the creator of Wordle is Josh Wardle. And that just blew my mind and made me so happy that it just had that alliteration and similarity. And I just hadn't put it together until that moment. And it was just this wonderful, happy bubble of a moment where I was like, oh, that's delightful. [laughs] And I'm pretty sure I texted some people who were like, "Yeah, yeah, we know that." [laughs]
CHRIS: Yes, that was a wonderful positive rant or prant as it were there. And Wordle really is just such a delightful phenomenon that popped out of nowhere and is given away for free by the kindness of Josh Wardle. So yeah, wonderful things on the internet.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: We have a listener question this week. Once again, just as a reminder, everyone, we love getting these listener questions. Feel free to send them into [email protected] or ping Steph or I on Twitter or any number of different ways. There's, I think, a form that you can go to the website, lots of different ways to ask us your questions. But again, we really love them. They let us have more pointed topics to talk about, such as today's topic, which is "What do you consider a quote, unquote, "large table" in a database?" Which is an interesting question, I think.
And so, let me read the question here. "Hey, Steph and Chris, I’ve listened to you (and most of your predecessors) for a while now. I've really been enjoying the conversational style about your actual development struggles." Thank you so much. This comes from Matt, by the way.
"Anyway, something Chris said in Episode 301 triggered a thought for me around large tables and databases and handling them for development tasks. What do you consider a quote, "large table" in a database? What questions/considerations come to mind when you're doing PR work that has a database interaction in it? We recently needed to delete a lot of rows out of a large table, and the team has a lot of discussion around how to handle it without impacting our production users. Curious on your thoughts. Thanks."
So, Steph, what do you think? What's a large database table in your mind?
STEPH: So I don't have a scientific answer for that, but I can give you my gut instinct. So typically, if there's a table that has a million or more records, I'll refer to that table as a large table. And then, if a table has around half a million records, then I start to be more cautious about data changes and how I'm rolling out schema changes. So that's my very loose; this is my feeling of when we're getting into large territory. How about you? Do you have more of a concrete answer?
CHRIS: I don't. And I think it would actually, in a lot of cases, be defined based on the database system that we're working with and, frankly, the RAM available on that system. There are two different sides of it; one is on the write side, like, how quickly are we inserting data into this table? And how quickly is it growing? Is probably a better question. Maybe there's a ton of data in it, but it's not growing that quickly. And so, we don't need to worry about any runaway characteristics.
The other side of it is how easily can we read from it? And that is the one that's going to be RAM-constrained. Where can we maintain an index efficiently? Can we query effectively and use RAM and whatnot? So a million starts to become an interesting number, probably. But I've worked on plenty of databases where hundreds of millions of rows existed, and we've got efficient indices in place and enough RAM that the database just happily works with that, and there are no problems.
So really, it's a question of like, if we start thinking about having to need to delete data, then that's a large table. If we have one table that is wildly larger than the others in the system, then that is something that I'll keep an eye on. I want to make sure, like, how's that table doing? How's the special table doing?
And often, there is one or two special tables similar to the idea of god objects within a system where these are the one or two classes that have just method after method after method after method. Similarly, there are one or two database tables that often have the lion's share of the data within the system. And so those are the ones that I'm really focused on.
And especially as we get closer to the RAM limit, there's this drop-off that I've seen happen where a system is like, it's fine. We got 250 gigs of RAM; there's no problem. And our database is only 100 gigs. And then a couple of weeks later, suddenly, had a bunch of new users sign up, and suddenly, your data and your indices no longer fit in memory. And now we're paging to disk, and suddenly the performance characteristics of your system just tank. And so it's that sort of thing. Watching growth rates is perhaps more important than the absolute size of any individual table. So yeah, those are some loose thoughts.
STEPH: I like how you used the word interesting. I think that's a nice replacement for the word large. When we get around a million records, things start getting more interesting in how we're rolling out schema changes. And then there's also you touched on usage, which aligns well with I often don't think so much about how many records that we have in a table.
But what's the usage of that table? How many queries or transactions are being executed against that table? Is this a very popular table like the users table? And will running a migration that renders that table inaccessible for a couple of seconds will that be problematic, or is this a table that we write to a lot, but we don't read from very often?
And even if it runs a couple of seconds, it's not likely to have an impact on people using the application. So that's one area I tend to think about first is what's the popularity of this table? And how cautious do I need to be in making sure that we don't block other people from accessing this data?
I also really like how Matt asked the question about what considerations come to mind when you're doing PR work that has a database interaction? That's one of those areas that, honestly, I lean pretty heavily on Strong Migrations to remind me how I can rewrite a migration to avoid a blocking operation or turn it into a non-blocking one.
So a really good example is setting a NOT NULL constraint on an existing column. I know that it can be very blocking if you try to do that by default when you first run it, and I will look it up every time. I will check Strong Migrations and say, "Hey, I know you've got some really great docs that will walk me through about adding a check constraint instead," and then making sure that I can then add this new column.
So going forward, inserts and updates will have the constraint applied, but it doesn't validate all the existing data. That particular example is also a really good reminder to start with stricter constraints because it's a lot easier to remove a constraint than to add one later. So that's one consideration that comes to mind.
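As a concrete sketch of that check-constraint approach (assuming a Rails version recent enough to have the check constraint helpers, and Postgres; the table, column, and constraint names here are illustrative, not from the episode):

  class AddNameNullCheckToUsers < ActiveRecord::Migration[7.1]
    def change
      # validate: false means only new inserts and updates are checked,
      # so Postgres doesn't scan and lock the whole table right now.
      add_check_constraint :users, "name IS NOT NULL",
        name: "users_name_null", validate: false
    end
  end

  class ValidateNameNullCheckOnUsers < ActiveRecord::Migration[7.1]
    def change
      # Run later: validates the existing rows without holding an exclusive
      # lock for the duration of the scan.
      validate_check_constraint :users, name: "users_name_null"
    end
  end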
I also think the fail fast and fail loudly idea applies nicely here. So if I'm looking at a PR that is making a schema change, then I want to validate that the application has low timeout values so that way if a migration does take more than 30 seconds to run, then the migration will time out. And then that will alert the developer to say, "Hey, do you need to think of a new approach or see if there's a way to improve this?" Versus if that migration didn't time out, then that slowness is going to become user-facing as they start to experience problems with the site.
And then also looking for more performant methods so using find_in_batches, update_all, delete_all, just checking for the more performant ways that we can update large sets of data. Those are, I think, the top things that I really look for. How about you?
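A tiny sketch of that batched-write idea, since it comes up repeatedly in this kind of cleanup work; the model, scope, and batch size are assumptions for illustration only:

  # Backfill or clean up without one giant UPDATE/DELETE that holds locks for ages.
  User.where(plan: nil).in_batches(of: 10_000) do |batch|
    batch.update_all(plan: "free")
    sleep(0.1) # a little breathing room so other traffic isn't starved
  end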
CHRIS: Yeah, I think very similar to everything you just said. And broadly, there's a point in time that happens frankly pretty early on in the growth of an application and the data set behind it where you need to start behaving differently with regard to migrations. There's a small period of time where I can just get away with anything.
I actually really love the part before we have any actual users where I'm like, oh, we need to change this fundamentally. I'm just going to drop the table and rebuild it because it's easier than trying to think about how to migrate this data. But so quickly, you get into a place where it's like, nope, sorry, can't do that have to treat this as realistic.
So a bunch of the strategies that you're describing, like indexes concurrently, is one of the things that I'll reach for often because that allows me to decouple the timing there and not...again, the migration timeout that you're talking about is absolutely something that I want to have. Migrations should go through quickly, and if they can't, then we need a different approach. Maybe we need to introduce the new column, write to that one in parallel to the existing column, and then eventually do a switchover. It's definitely more work and involves a couple of deploys to get that done, but that's the unfortunate reality that we have to move to.
I will say one of the things we talked about is like, if we hit that timeout, then we're going to stop that migration. This is a critical feature that I rely on deeply in Postgres, which is that schema migrations or DDL transformations; if I'm saying that correctly, I'm not sure I am, but throwing an acronym out there, it'll be fun. This is actually one feature of Postgres that I really rely on.
My understanding is that Postgres has this; MySQL does not, but I may be off. I know that Postgres has transactional DDL transformation, so schema migration sort of things. I'm adding a column; I'm removing a column, et cetera. Those inherently happen within a transaction, and that's wonderful because if they do timeout, we want to be in a consistent state.
The worst thing I can possibly imagine is being like, we got halfway through, but then we failed, or we lost connection, and so it's half migrated. It's like, oh God, I want to trust my database deeply. That's sort of one of the fundamental things that I have. And I've, over time, pushed more and more into the database and saying let's have check constraints. Let's have null: false and all of these sorts of things so that the data in my database can be deeply trustworthy.
The idea that a schema migration could go awry, and suddenly we've got like, well, half of the rows have these extra columns. What does that even mean? How do you live in that world? So I love this feature of Postgres. I really rely on it. I feel very bad whenever I have to disable it. I think there are some enum-related things that require disabling DDL transactions. And whenever I type that in a migration, I'm like, I don't like this. I'm not happy about this, but it's the world we live in for now.
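For context, the escape hatch being described looks roughly like this; a minimal sketch assuming Postgres, using a concurrently built index (which Steph brings up next) as the classic case where the wrapping transaction has to be disabled. The table and column are hypothetical:

  class IndexOrdersOnUserId < ActiveRecord::Migration[7.1]
    # CREATE INDEX CONCURRENTLY can't run inside a transaction, so this one
    # migration opts out of Rails' automatic wrapping transaction.
    disable_ddl_transaction!

    def change
      add_index :orders, :user_id, algorithm: :concurrently
    end
  end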
STEPH: If we're sharing our truths, yeah, adding an index concurrently also requires you to disable that DDL transaction. For a previous project that I was working on, we often ran into that timeout where we'd run a migration, and then it would time out. And we would then just specify and be like, "Hey, for this migration, I'm going to bump you up to a minute. I'm just going to make it longer."
And that felt questionable at times, but I at least appreciate the explicitness of it where you're making that decision to say, nope, I think this is fine. It’s not going to impact anybody, or we're going to run it in off-hours. I do want to extend this to a minute, and then make sure you do reset it, so it doesn't affect it globally from there on out.
But that's something that you can do, and I have done before, which I feel is important. You still want to know some of your outs in case you do need something like that just to fix things in a moment but then at least be intentional for when you're using it and then communicate to the team like, "Hey, I'm doing this and let me know if you have concerns about it."
For this specific scenario that Matt provided about we recently needed to delete a lot of old rows out of a large table, and the team had a lot of discussion about how to handle this without impacting production users; I happened to have a really nice conversation with Steve Polito, a fellow thoughtboter, about this particular question. And he had a very thoughtful response that I hadn't considered where he suggested starting with deleting the data for a small set of records.
So, for example, if you're working with a users table, you could scope the data deletion to only inactive users and then use a feature flag to disable any interactions that would be affected by that data loss, run that change to delete the data for those inactive users, and then check for unexpected errors or side effects.
So then that way, you have this moment to pause to say, "Hey, did we forget something? Is there something about this application that's still relying on that data that we forgot about? We've only done it for a small amount of users, so we're in a safer space." So then, at that point, you can either repeat those steps for another batch of records or use that feedback to then drop the column with confidence. And that was an approach that I hadn't considered, but I really liked that idea.
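A rough sketch of that staged approach, assuming a Flipper-style feature flag; the flag, model, and scope here are hypothetical, not from the episode:

  # 1. Turn off anything that still reads the data that's about to go away.
  Flipper.disable(:inactive_user_reports)

  # 2. Delete a small, low-risk slice first, then watch error tracking and logs.
  inactive = User.where(active: false).where("last_sign_in_at < ?", 2.years.ago)
  inactive.in_batches(of: 1_000) { |batch| batch.delete_all }

  # 3. If nothing surprising shows up, repeat for the next set of records,
  #    or drop the column/table with more confidence.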
CHRIS: Yeah, it's a nice, I'd say methodical approach to what can be a very complex and difficult to wrangle task. I will say I haven't actually explored this too much, but I've always had in the back of my mind, like, if we're deleting data from the application, ideally, we're saying this data is no longer needed.
But I wonder if using table partitioning is an alternative that can be useful in these cases. What if we're able to figure out the correct partitioning? It's often time series sort of stuff. What if we're able to lean into that and say, "Let's partition this by year." And then yeah, we don't use the old data anymore, but it lives in a separate partition. And therefore, I think Postgres is able to do reasonable things with that.
And again, like disk space, we can have a lot more storage on disk, but RAM is really going to be the constraining factor of how much of the index fits in memory. And again, I haven't pushed on this. But I think that's an alternative approach that can be really interesting. But if we do have time-series data, in particular, Postgres is wonderful. But it's not necessarily honed exactly to that use case.
And so, there are a couple of tools that I've kept an eye on right now: ClickHouse, Timescale, and InfluxDB being the three of them. And I think most if not all of them are based on Postgres, but then they build on top of it. And they add some deeper understandings of time series data and how to optimize querying and storing, and all of that. And so, is that an alternative that allows us to still stay in this world but then have a different approach and alleviate some of the burdens that might come with this heavy data that we have?
STEPH: Yeah, all those sound interesting. I haven't heard of some of those. This is why I love chatting with you. You always bring interesting perspectives that I had not considered before, like the partitioning. Just to clarify, partitioning the data is a way of keeping that data, but then it's not indexed. So that way, your system isn't spending as much time making sure that data is easily readable. But then that way, you don't actually delete it, so then it's there should you wish to be like, oh, I wish I hadn't gotten rid of that data.
CHRIS: I think so. I'll be honest; I don't deeply understand it. But I think you basically can say given a giant projects table within your system; we actually may have that logically grouped by user sort of thing. And so we can shard and partition and say, there are ten different buckets of these. And if we optimize it well such that all of the things that are logically together actually live together on disk, then it allows Postgres to be much more efficient.
Similarly, with time-series data, then you can say, use this sort of windowing where it's each month, we get a new bucket. And then it's much easier to query across just that bucket because it's already sort of partitioned down in that way.
But I'll be honest; I'm now speaking well past my actual knowledge. I've never actually worked with it. But it's one of those things that I have in the back of my mind. Like if all of my other tools fail me and if I cannot solve these performance problems in a Postgres system with indexes, and tuning, and other things like that, then I will look to partitioning. So I look forward to that day when the data problems are so massive that I need to table partition.
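For anyone curious what the partitioning Chris is gesturing at looks like, here is a very rough sketch using Postgres declarative partitioning inside a migration. The table and column names are made up, and real setups need more care (unique indexes must include the partition key, for example):

  class CreatePartitionedEvents < ActiveRecord::Migration[7.1]
    def up
      execute <<~SQL
        CREATE TABLE events (
          id         bigserial,
          payload    jsonb,
          created_at timestamptz NOT NULL
        ) PARTITION BY RANGE (created_at);

        -- One bucket per year; stale years can later be detached or archived
        -- instead of deleted row by row.
        CREATE TABLE events_2021 PARTITION OF events
          FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');
        CREATE TABLE events_2022 PARTITION OF events
          FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');
      SQL
    end

    def down
      execute "DROP TABLE IF EXISTS events CASCADE"
    end
  end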
STEPH: Got it. Like they say, it's a good problem to have. While adding to the list of tools, there's one that I discovered recently; it's called Safe PG Migrations. And it's very similar to Strong Migrations, where Strong Migrations will warn you and say, "Hey, this is not safe. There are other ways to write this migration." Safe PG Migrations takes a more aggressive approach and will rewrite your migration to be a safer version. And I don't know how I feel about it. I love it, and I hate it. [laughs]
It's one of those the magic is there, and that could be phenomenal. But I get squeamish when things want to rewrite something as important as my migrations. But on the other hand, it is like a really nice default for the team because it's more than a warning. So that way, if you're trying to put something more strict in place for people to follow, then this would be a good way to do that.
CHRIS: I'm very intrigued by that as a tool because if it were obvious, then Postgres would do it. The team behind Postgres does absolutely amazing work. And so if I tell them, "This is the change I want to make to the system," and they're like, "Cool, we're going to do that super inefficiently," and someone else is like, "No, no, no, I can trick it." Postgres is good at tricking itself, is my stance.
So I'd be intrigued as to what secret knowledge they have or what are their caveats where Postgres has to handle every possible edge case. And therefore, it's slower because of pessimistic concerns that it has. But this tool says, "No, no, as long as you're not doing this very terrible thing, you're fine. And we'll rewrite it to a safer, faster version." But I'm just kind of intrigued, like, why do you think you're better than Postgres?
STEPH: [laughs] Why do you think you're better? Well, I do you have an example I can provide. It's one that they have on their README. And this one highlights that if you're adding a column to an existing table and that you're adding a default value and no constraint, then they show you how it's rewritten where they set explicitly the lock timeout, and then they will add the column.
And then they will set the default value but not in a way that it's going to do a table scan where it's going to add it for all the existing records; it's going to be for new records. And then they, let's see, they also update the users in batches to then set a default value, and then they will reset the statement timeout because it looks like they are...yeah, because initially, they change it, so they're resetting it to an original value. And then, they set the column Null constraint. I know I just said a lot of things reading from their README.
But they have a good example here that kind of highlights that this is how they rewrite it. So I do find that more reassuring as long as I can then see how it was rewritten, and then I can validate it and confirm it with what I think is appropriate. Then I still have full control. Then it's more of a hey, we rewrote this thing for you. Feel free to review it and then change it as you see fit. As long as I have that final authority, then that makes me feel better about this.
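Reconstructed loosely, the sequence Steph is reading off amounts to something like the following. This is a hedged sketch of the general pattern rather than the gem's actual output, and the table, column, and timeout values are illustrative:

  class AddAdminToUsers < ActiveRecord::Migration[7.1]
    def up
      execute "SET lock_timeout TO '5s'"           # fail fast rather than queue behind other locks
      add_column :users, :admin, :boolean
      change_column_default :users, :admin, false  # new rows get the default; existing rows untouched
      User.unscoped.in_batches(of: 10_000) do |batch|
        batch.update_all(admin: false)              # backfill existing rows gradually
      end
      execute "SET lock_timeout TO DEFAULT"         # put the setting back the way we found it
      change_column_null :users, :admin, false      # finally add the NOT NULL constraint
    end
  end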
CHRIS: Got it. That makes sense. And the thing that you're describing, I think, is a semantically different thing than the first migration where it's like, do this thing. And they're like, well, okay, you could. But if instead, you did X, Y, and Z, then it would go way faster and be way easier. That makes a lot of sense. And it feels like shared knowledge wrapped up into a tool which I'm always a fan of that.
STEPH: Yeah, in general, when I think about general strategies for schema changes, there are really three areas that come to mind or three strategies that come to mind. The first one is that we take incremental steps to avoid blocking reads and writes to the table, which then allows you to deploy during business hours or off business hours. That often means just more manual steps that you have to take to make sure that it's safe. And then the other one is scheduling downtime to run a migration. That is a very real option, something that you can do. Or have a fancy setup that utilizes followers for seamless migrations and upgrades.
I feel like that's like the three big buckets that you can fit your strategy within. And it just depends on the needs of your application and users as to which one of those you're ready for or which strategy you need to use. What do you think? Are there any other big buckets that I left out of that list?
CHRIS: No, I think we covered a bunch there. Hopefully, that was useful. Hopefully, it, I don't know, maybe introduced folks to some new ideas or ways to think about this sort of work. And yeah, with that, shall we wrap up?
STEPH: Yeah, I've still got my Wordle to play for the day. So let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
ALL: Byeeeeeeeeeee!!!
ANNOUNCER: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris updates us on his new window manager of choice, Moom, and tells us what's good with it. He's also giving yet another task manager a go: OmniFocus. (Sorry Things.) Steph talks about defining test classes in RSpec and readdresses flaky tests to improve CI build time.
Chris is worried about productivity. He's still not coding as much as he'd like to be. Steph lends an ear, and together, they discuss potential ways Chris could gain back a little bit of coding time at work.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris, what's new in your world?
CHRIS: What's new in my world? Well, hey, Steph. Oh, I have an update on a thing that I think I talked about a while back or at least asked on Twitter. But I've been looking for a window manager for forever. And in that way that I sort of overcorrected a while back, I think where I'm no longer allowed to do anything related to productivity or dev tools. I was just forbidden because it was a time sink. I'm slowly trying to correct back and be like, you know what? I regularly think about how it would be nice to have a better window manager.
So previously, I had used Divvy, D-I-V-V-Y, which is fine. It did an okay job, but it just didn't have quite the level of control that I wanted, or maybe I didn't investigate it enough. But it felt like it was lacking. So I did a little bit of research. A bunch of people recommended different things. There was Spectacle; there was Rectangle. There was a whole bunch of other things that I'm forgetting now because I have settled on Moom, M-O-O-M. Those are fun words.
STEPH: I feel like you keep bringing interesting words [laughs] because last time, it was Things where you're tracking all the things. And now we have Moom to track the space. All right.
CHRIS: If this is my legacy as a podcaster, then I feel like I will have done well just, you know, weird sounds mostly that's what he's going for. But yes, I've been using Moom now for…[laughs] God, it's just ridiculous to say, but here we are.
STEPH: [laughs]
CHRIS: I've been using it. I've been enjoying it. In particular, the thing that I liked about it...a bunch of the other ones that I looked at were like, oh, we've got all these different configurations. And you can move things any which way, and you can have any number of hotkeys. And I was like, wait, wait, wait, say more right now. You want to take over my global namespace of hotkeys and just clutter it with 19 different things? You know that that is a limited space that I'm working with here.
And so Moom, somewhat uniquely, at least in the ones that I experienced, was what I would describe as a modal window manager. So much like Vim is modal where you start out in normal mode, and you're moving around and you kind of bounce and search and all of that, and then you enter insert mode. And in insert mode, keys do different things. And then in command mode...it's got all these different modes. And so there are lots of different namespaces for hotkeys. It's one of the things that makes Vim so powerful.
Moom is similar in that there's one global activation hotkey. And then, within that, I can have a whole namespace of hotkeys. So like M will put something in the middle of my screen now. F will put something full-screen. And I don't need to remember weird multikey combinations for that. There's just the one to get started, and then I've configured it such that the tab will bounce to a secondary display and sort of rotate through them.
M and F and Q and P I've got it physically laid out on the keyboard. So it looks like my screen. Q being on the left side will push something to the left side, P to the right side. And I'm very happy with that. I don't need a lot out of this tool. I don't need very complex management or scripting or any of that, which are very nice features that exist in the other ones. But that combination, the one hotkey to rule them all, and then the sub hotkeys within it, and the ability to mostly move between the screens and then put stuff where I want it is great. I'm very happy.
STEPH: I think I've figured it out. So Moom, I think it's a combination of move and zoom, and that's how they got Moom.
CHRIS: You're probably right.
STEPH: That does sound really nice. I'm a Spectacle fan. And I have enjoyed it and just stuck with it because I haven't felt a need to change from it. And it's really nice where I use my arrow keys for which direction I want to go. So that has been easy for me to recall. But that sounds really nice, all the things that you're describing with Moom.
CHRIS: Does Spectacle have the, like, is it some Command Option Control and then left or right or up or down? Or is it you type something, and then you type left, right, up, down?
STEPH: I have to actually touch my keyboard to answer that question because I have the muscle memory, which is an interesting thing that my muscles know it, but my brain has to really think about it. So I think it's like the Option Command, and then yeah, then use the arrow keys.
CHRIS: Gotcha. That's roughly what I had when I was using Divvy previously, but I found just enough of a limitation there. And so Moom has been great as another tool. But I think Spectacle has a lot more features in terms of scripting and other fancier stuff that you can do, which is both super intriguing and, again, sort of the thing that I'm not allowed to do. [laughs] So I went with, like, this tool seems fine and has the one feature that I really want.
That said, you brought up Things, which is the to-do list app that I've been looking at. I've been using it for a week now. It's great. I'm enjoying having a more structured way to say, like, here's what I'm doing today. Here's what I'm doing tomorrow. It's been wonderful. But I'm already looking at OmniFocus as a better version.
STEPH: [laughs]
CHRIS: Because I think there's some stuff that I don't love, and yes, I can hear my own voice in the back of my head that's like, always chasing that next thing. But I haven't actually made the effort to switch over or even tried. I've used OmniFocus in the past. But anyway, I'll let you know if I do make additional moves there.
STEPH: Yeah, I'm enjoying this journey. Keep me up to date on it. I've heard of OmniFocus, but I know nothing about it. But I feel like I've heard good things. So I like this journey you're going on where you just keep switching and trying new things. That's fun for me [laughs], and there's chasing productivity. So I'm into it; I'm here for it.
CHRIS: If I just invest enough hours to save a handful of minutes down the road, then I will have...oh no, wait, that's not how this goes. There's, of course, an xkcd about this which we can include in the show notes. But I'm trying to be very intentional with it. I waited for many years before I allowed myself to reinvestigate the world of to-do lists. And I'm hopefully going to keep it to just a couple of weeks of nonsense and then back to a few years of stable. That's the dream.
But yeah, that's some of the smaller things that are up in my world. I have another topic that I want to chat about. But I'd love to hear what's new in your world?
STEPH: Yeah, I have some interesting bits that I can talk about with the project that I'm working on. But more concretely, I have something that's been on my mind that I don't think that I've talked about here on the show, but I think would be fun to talk about because I just happened to run into it this week while working on some code.
And it's the idea of defining test classes in RSpec so as you are testing part of your code, but then you want to create just like a fake class, something that you can use as a substitute for real application code. And so it's a really nice way that then you can have this replica behavior, but then maybe it's just one particular method or some behavior that you need to use in the class but then doesn't actually go to the real code. That's wonderful. That's great.
One thing that I've learned is that with RSpec is when you are introducing a test class, so let's say if you have your RSpec describe and then either a string or it's the name of a class, and then you have a block so do, and then within that block is where you write your test. If you create a temporary class, say, like I have my class test class, and then I have some behavior, that gets defined in the global namespace. It's not scoped to that particular RSpec example. And the reason for that it's not specific to RSpec. RSpec is not the one that's doing this; it's actually Ruby behavior.
So for Ruby, when you're defining within a block like that, if you're defining a constant, if you're defining another class inside of a block, it's going to use the outer namespace as its namespace. So if you had a top-level class that you were defining, but if you define a class in a block, and then inside of that block you define a constant, that constant is then defined in the Object namespace instead of within that particular class that you have written. And so that's why RSpec has this behavior.
Because someone brought up a really great question about this on RSpec::Core asking about it, and they're like, yeah, that's actually how Ruby works. And so we're not going to change RSpec's behavior since that is how Ruby has decided to handle this. And the part where this becomes important is when you define a test class within an RSpec example.
While it may be unlikely that someone is going to use that exact same name for their test class that they're going to create in their RSpec example, if they were to use that same name, then you're going to have a collision between the two. One of them's going to win, and you're probably going to end up with some really weird test failures because it's going to get confusing as to which class is being used, and they may not match up with each other.
So one way around this, and this is going to be one of the rare times that I suggest this, but let. Let is scoped to an RSpec example. And so you could define a class inside of a let, and then that will scope it to the example. There are probably some other approaches as well, but that's the one that I'm most familiar with to ensure that when you define that class or constant, it's not getting defined in the global namespace and ensuring that none of the other tests have access to it.
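A condensed sketch of both sides of what Steph is describing; the class and spec names are made up for illustration:

  # Looks local, but Ruby defines FakeWidget at the top level, so every other
  # spec file in the suite can now see (and collide with) ::FakeWidget.
  RSpec.describe "WidgetPresenter" do
    class FakeWidget
      def name
        "stub"
      end
    end
  end

  # Scoping it with `let` keeps the class anonymous and local to this example group.
  RSpec.describe "WidgetPresenter" do
    let(:fake_widget_class) do
      Class.new do
        def name
          "stub"
        end
      end
    end
  end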
CHRIS: Well, this is certainly interesting. I'm pretty sure I've been operating under the opposite assumption for the entirety of my career. This is good to know. I feel like I probably have had tests that failed because of this. And then I learned this truth, and then I subsequently forgot it. I don't know if you know this, but if you define a method within just a helper method that you extract in RSpec, are those also on the global namespace?
I don't define classes in RSpec blocks that often. It's pretty rare. Like if I have a controller concern sort of thing that I want to test, I might say random controller and inject the thing there or some other abstracted piece. That is the only case I can think of where I have a fake model or a fake controller or something like that for test purposes. But it doesn't come up that often. I do extract a heck ton of local helper methods. And I'm wondering now, are those all in the shared global namespace?
STEPH: I'm pretty sure they're not. And I'm getting on the edges of my knowledge here, but I think it has to do with the fact of when you're defining a constant. So if you're defining a class versus an actual constant, that will get into the global namespace because it's using the outer scoping. But in my experience, I'm pretty sure that's not true for the method just because I remember one time I did some funky stuff with RSpec. And I remember seeing that I couldn't access those methods from another example.
CHRIS: I like the honesty. And you're like, to be clear, I was doing something weird, but I learned that day. Okay, that's good because at least that part maps to my understanding. So methods may be safe, but classes get shared. Very interesting.
STEPH: And it's something that I rarely think about or had worried about just because if I'm defining a fake test class, I often will put it somewhere that's intended to be more global. So I'll stuff it somewhere in like spec support. So then other people can see, hey, I've already mimicked this behavior. So if you need to use the same thing, just go ahead and use this. It's not often that I am adding that class directly to the RSpec example group.
So I think I've been fortunate where I haven't actually run into that conflict for that reason. But this came up while giving an RSpec course. And while we were just in a very small, tiny codebase and replicating some examples, someone in the class was like, "Hey, by the way, do you know that that's in the global namespace?" And I was like, "No, friend. Tell me more." So thanks to that person, they're the ones that actually enlightened me about how it's going into that namespace and how it can actually pollute your testing namespace.
There's a really good article that's written by Ken Mayer. And we'll be sure to include a link in the show notes that talks about it and also provides the let example as a way to work around this. And also links to the GitHub discussion on RSpec::Core, where they talk about this behavior and why things are the way that they are.
Circling back to some of the more general project-y things that I alluded to earlier, I've shared a bit about the project that I'm working on. But just to recap it, it is focused on helping a very large team that has a large number of tests, around 85,000. And they are looking to address flaky tests that they have and overall really improve their CI build time. So right now, it takes about 30 minutes for the build to take place. But they also have flaky tests, and then that slows things down. And so, the re-verify rate has been painful for them.
There's been some really great work that has improved that, particularly there is a, I think we've talked about this before, but where they're re-verifying certain flaky tests, which isn't great because they're still flaky tests, but at least they're not preventing people from moving forward and shipping code.
But some of the bigger stuff that is just on my mind is when you have a very large team and a very large application, by large team, I'm talking about 100 developers, and they are all contributing to this codebase. And there are around 85,000 tests, and that has grown substantially in the last 12 months. And so, if you think about the trajectory of the addition of those tests, it is just going to continue to grow. So there's a concern there of even if we address flaky tests and we improve things, there's an architecture concern of how do we really reduce the CI build time?
And so there's that aspect, and then there's also the aspect of then well, how do we still work to improve the tests and the codebase as well as we go across all of these disparate teams? And right now, there is a bit of a culture where engineers don't feel empowered where they can necessarily address all of the flaky tests or things that they run into. And so there is a bit of a mindset of I'm stuck on this, or this test failed, or it's flaky, or I don't understand it. So I'm just going to mute it, or I'm going to hand it off to someone else to work on it.
So there are three big areas that are on my mind. The first one is architecture. You can throw architecture at it. There's also the code quality that's a concern. And then how do you improve the code quality in a way that you're improving it fast enough that then you've got 100 other developers that are also contributing to it at the same time? And then individual IC empowerment where then people feel like, hey, I ran into a slow test or a flaky test, and I feel like I can triage this, and I can make changes.
For the architecture piece, we're still in the infancy stages of how to approach this and the strategy that we're using. But one of the ideas that has come up is how do we reduce tentpoles? And the tentpole is like when you're running your test and, let's say that it's parallelized, all of the various tests. But there is one process that takes like 20 minutes, and then the other process is completed in 5 minutes as a drastic example.
And overall, you could have reduced your time if you had managed to split that 20-minute process across all the other workers who are then available for that work. So there are some tentpoles that are taking place. And that could be one first step in reducing the CI build time.
There are also discussions around how to scale horizontally. Right now, we don't think that's something we can do with the service that we're using to run the tests. But it's something that maybe we need to manually look into is then how do we build a queue of all these tests and not where we just split tests by file, which is typically how the Parallelize gem does it.
But you could actually split up tests within a file. So if you had a particularly large file, that doesn't necessarily matter. But then building a queue of all these tests so then as each test finishes, a worker can just grab that next test. And then also you can easily scale up and scale down workers. As I'm saying that, that feels big, that's a lot to invest in. But that as an idea is how can we essentially then scale the architecture? So even as we continue to invest in the tests, in the system, and they continue to grow, our architecture can keep up with it.
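As a toy illustration of that queue idea (assuming the redis gem; a production-grade tool would also handle timing data, retries, and failure reporting):

  require "redis"

  redis = Redis.new

  # Coordinator: push every spec file onto a shared list once per build.
  Dir.glob("spec/**/*_spec.rb").sort.each { |file| redis.rpush("test_queue", file) }

  # Each parallel worker: pop work until the queue is empty, so faster workers
  # naturally take on more tests and no single process becomes the tentpole.
  while (spec_file = redis.lpop("test_queue"))
    system("bundle exec rspec #{spec_file}")
  end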
CHRIS: That last bit there is super interesting to me. It's something that I've looked into and haven't pursued yet. We're currently running on CircleCI with our test suite. And I don't even know that we pushed on parallelization because we're early enough on that. And we turned off bcrypt recently, which super-duper helps with the speed up. But overall, the test suite time is fine, is where I would put it.
It had crept up, though, to a place where it was starting to be painful, is how I would describe it. And I think it's very easy for that to just continue growing and suddenly, it's 20 and 25 minutes. And then, depending on your merge strategy and all of that, it can be all the more complicated, and this gets in the way of deploys. And so, I think it is a super important thing to keep an eye on. I know Charity Majors pushes really hard for 15 minutes from merge to deploy to production. And so if your CI suite takes 25 minutes, then already you're stuck.
As an aside, I just once more want to say out into the ether, CircleCI or any other CI platform, if you would allow me to say yes, we've already tested this Git hash, this Git SHA, or the working tree, ideally, because that's also deterministic, I would love that feature. I would love to not have to rebuild the same code when it gets merged into main, just saying once more out into the world. Also, GitHub, if you want to put me on the merge queue beta, I would love that if anybody out there is listening. [laughs]
STEPH: I like how this has become a special requests hotline for all the things [laughs] that you're hoping to get a part of or features you'd like to see added.
CHRIS: Hello, internet. I have some requests.
STEPH: [laughs]
CHRIS: I would love to see those things, but in the world where those don't exist, the particular thing that you're talking about, sort of a test queue, is something that I've seen. So Knapsack is a...what's the word? It's a tool; it's a service. It's a combination of things. But it does that essentially where it starts up a local build agent. And then it basically says like, all right, give me all of the tests that you need to run, and then I will feed them back to each of the individual agents, and there's one agent running per parallelized process.
And so say you've got five of them. The first one says, "Hey, give me a test," and runs it. And the second one says, "Give me a test," and et cetera. And so, the queue manager on the other side is in charge of that orchestration. And it means that they basically all finish in identical time, with one being an outlier, whichever one happens to be the longest.
But it's only going to be however long your longest test is; basically, that's the outlier, versus what you're describing of, like, well, if we split it by file, we can end up with more naive things where there's a bunch of feature specs on one of them, and it skews by two minutes. We obviously don't want that. So Knapsack, in particular, is a tool that I've looked at, but generally, I'm very interested in that as a solution to how do we maximally take advantage of parallelization there?
STEPH: Interesting. I have not heard of Knapsack. There is one that sounds similar. It's called RSpec Queue. And it does some really interesting work where it will split the individual tests, so it won't do it by file. It will also look at historical data to try to be intelligent about how it's going to split it and find the longer-running tests. And I believe it uses Redis to keep track of the tests that are set up to run and the ones that still need to be run. That is a gem that the team is looking into using as well.
I don't know how that works if that can integrate with the current platform as we're using TeamCity to run tests. I don't know if that's something that can integrate with TeamCity, if it's a replacement. I don't have all of the knowledge about RSpec Queue yet. But it seems to do a number of the things that we're interested in. So even if we can't use the gem, then maybe it's something that we can still imitate.
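To make that queue idea a bit more concrete, here's a minimal sketch of a shared test queue, assuming a Redis instance that both the coordinator and the workers can reach. The key name, the per-file granularity, and the plain rspec shell-out are illustrative assumptions, not the TeamCity setup discussed here.

    require "redis"

    QUEUE_KEY = "ci:spec_queue" # hypothetical key; any shared list would do

    # Coordinator: push every spec file (or individual example id) onto the queue.
    def enqueue_specs(redis, spec_files)
      spec_files.each { |file| redis.rpush(QUEUE_KEY, file) }
    end

    # Each parallel worker: pop the next spec until the queue is empty, so fast
    # workers naturally absorb the slack left by slow ones and no single
    # "tentpole" process dominates the build time.
    def run_worker(redis)
      failures = []
      while (spec = redis.lpop(QUEUE_KEY))
        failures << spec unless system("bundle exec rspec #{spec}")
      end
      abort("Failed: #{failures.join(', ')}") if failures.any?
    end

Splitting by individual example instead of by file would just mean enqueueing example IDs (something like spec/models/user_spec.rb:42, a made-up path) rather than whole files.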
CHRIS: The other thing that I'm surprised we haven't said yet is this is one of the places where people would often reach for microservices. I feel like we have to have the microservice conversation at this moment. Microservices can actually be a great solution to organizational problems. As a team scales, it does become really hard to manage a large group of developers. And so microservices introduces a very fixed boundary that then draws nice lines that you can have around things. And so, the individual build time for a portion of your application can be much more manageable by virtue of that.
But it has this huge cost of technical complexity and overhead and et cetera, et cetera, all of the reasons that we may not want to go that route. And so interestingly, I was just looking at Shopify's Deconstructing the Monolith blog post, which I think at this point, they've skewed a little bit more into the microservices. Shopify is huge, one of the largest Rails apps out there. And so looking at them and being like, oh, what are they doing? It's interesting to sort of plot their course and see how long they waited before they even started thinking about the much deeper things and even exploring microservices.
But in this blog post, they talk about a different approach where they stuck with sort of a monolith. But then they started to introduce Rails engines and clear encapsulation within the large codebase such that then you can actually start to say, well, we don't need to run all the tests every time because if we're making a change within this section of the application, then we just need to run those tests.
I've also heard of organizations having some logic that can determine, based on the code change, the associated test files that we should run. I'm scared of that is how I would describe it. I want to trust my test suite. I want to be able to deploy on a Friday and say if tests are green, it's going out to production. That's great. And I worry about that sort of thing. That's hard to get right. That feels like caching, right? And that's one of those things that we historically get wrong a lot.
But nonetheless, that is an approach that I've heard large organizations having good success with. So some way to determine what's the affected code and what tests we need to rerun, et cetera. And that can really drastically reduce down the scope of each CI build. But those are some larger things that I have not had to reach for on any of the applications I've worked with. I've taken different approaches, different ways to reduce the time or otherwise parallelize, et cetera. But it's interesting for when you get to a certain scale.
STEPH: Yeah, it's funny that you bring up that idea because that came up in conversation with some of the other developers as well, was the idea of, like, what if we could just not run all the tests? You changed one file, and you don't need to run everything. And I immediately was like, that sounds very cool and super hard to be able to get right.
And a lot of this code is extremely coupled, which then moves to the code quality area. So I suspect a lot of the test times could be improved by creating smaller objects because right now, a lot of the tests will load the entire world because they have to. They have to test everything. And so that is creating a ton of data, and then taking a long time to run versus if we were able to split out that code into smaller objects and test in unit tests, then that would also help speed up. But that's also hard to do.
Where do you look first? We do have some great data, thanks to RSpec. RSpec is letting us know how long each test file takes to run, and then we are capturing that data. So I can go look at which files and say, oh, this file takes 10 minutes to run. Let's look at that file first versus some of the other ones that are performing better. But that is a battle that will take a long time to win. And it's something that takes consistency and then also encouraging others to join that battle. So while it's very important, it doesn't address the concern of tests growing rapidly and then being able to support that.
Something that you said in a previous episode also was on my mind in talking about building processes in a way that encourages people to make small, quick changes. And I think that's really important. So if we can build out the architecture to help scale this so that the tests were running in, say, 15 minutes, then if someone saw a test and they wanted to make a small refactor, they saw a FactoryBot.create, and they're like, oh, that could be a FactoryBot.build_stubbed instead, and put that into a pull request or change request and get that merged.
I don't know if people feel as comfortable doing that right now because it takes them 30 minutes or longer to run the test. But that idea of how do we get a structure in place where people can make tiny, little improvements and do that as a whole, as a team, to then work on the code quality concerns?
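For anyone unfamiliar with the kind of tiny refactor Steph is describing, here's an illustrative before-and-after; the model name is made up, and whether build_stubbed is safe depends on whether the example actually needs persisted records.

    # Before: every example pays for a database INSERT (plus any callbacks).
    user = FactoryBot.create(:user)

    # After: build_stubbed builds the object in memory and assigns a fake id,
    # so examples that never touch the database get noticeably faster.
    user = FactoryBot.build_stubbed(:user)
    expect(user.id).to be_present
    expect(user).not_to be_new_record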
CHRIS: That last little bit is so interesting where you're saying, like, oh, we have a FactoryBot.create that could be a FactoryBot.build_stubbed, but it has the overhead of having to go through the 30-minute test suite. But coming back to the thing we were talking about before, what if we didn't have to run all the tests? Although I find it very hard to tell, given a code change in actual production code, what tests I need to run. When I'm just changing a test, I'm pretty sure I know which test I need to run in order to determine if that test still runs correctly.
So that feels like...is there an optimization that can happen there? Which is, I've only made test changes; therefore, only run the changed tests. And then that's an encouragement to say, like, this is a part of our codebase that we are trying to improve on. Let's optimize the iteration speed there. You'd have to figure out how to write that. And so it's probably much like my productivity adventures, maybe not a good investment.
Although given that this is such an organizational concern, maybe that is the thing that's worth spending an afternoon on and seeing if it could happen. Because if you can speed that process up, get more [inaudible 23:46] and more iteration in fixing the tests, that feels like it could be a win.
STEPH: I think that's a really good idea. I think we could certainly tell, if a file's changed, that it's only a test file that has changed. And then I've heard very good things from the other developers that TeamCity has a wonderful API to work with. And so there's a way that we could then tell TeamCity to say, hey,...or it may not even be a TeamCity command. It may just be somewhere in the universe we have to say, "Hey, RSpec, only run this test," or "TeamCity, we're only going to feed you this one RSpec test to run, so, say, the user spec, but only run this particular test."
So I really like that idea. I think that's really intriguing. And I'll bring it up with the team because that would be a huge win, especially as Joël and I are really focused more on tests. That would just improve our lives. So selfishly, I'm excited about that idea because we are touching less of the application code and more focused on improving the test at this point.
CHRIS: I mean, if right now you're getting, say, 5 or 10 pull requests through a day which frankly feels like a high bar on this, if suddenly that's 10 to 20, that's material right there.
STEPH: Yeah, I don't know how large of an impact it would have for the rest of the team because I don't know how often they're only making changes to a test file, but it still feels like a nice optimization to have. Cool. Well, thanks. I appreciate that idea.
CHRIS: My pleasure.
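As a rough sketch of the check Chris and Steph are describing, something like the following could run as an early CI step, assuming it can shell out to git. The branch name, the spec/ prefix, and the rspec invocation are assumptions rather than the team's actual TeamCity configuration.

    # Compare the branch against main and collect the changed paths.
    changed = `git diff --name-only origin/main...HEAD`.split("\n")

    if changed.any? && changed.all? { |path| path.start_with?("spec/") }
      # Only spec files changed, so run just those files.
      exec("bundle", "exec", "rspec", *changed)
    else
      # Application code changed (or nothing did); fall back to the full suite.
      exec("bundle", "exec", "rspec")
    end

Even this conservative version only skips work in the safest case: when nothing outside spec/ was touched.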
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: What else is going on in my world? I continue to not code a ton which is interesting and probably makes sense for right now. But to share a small anecdote from this week, we had retro, and I ended up attending retro ever so slightly late. I was doing a hiring interview, which is super exciting. Again, for anyone that's out there, we are hiring at Sagewell Financial. And I would love to chat with you if that sounds interesting. But so I was having a wonderful hiring conversation that ran a little bit long.
So I was a little bit late to retro, and I arrived, say like eight minutes in, and someone was expressing a concern. And the concern was, I very sincerely know this to be true, but they were saying in the most positive way. But they were like, "It'd be great if Chris could code more," and not in the judgmental like, Chris, why are you not getting as much done? Not in that way at all, very much in the it would be great if Chris had more time, if there wasn't as much pulling my attention in different directions.
But then it kind of went into this interesting direction. So we then go back through and address the concerns and talk as a group about how we resolve them. But this one was like, my name was in the concern, again, in a very positive way, in a very supportive way. And we had a wonderful conversation, and there were really great ideas that were passed around. But man, did I feel weird having my name in a retro item. [laughs]
STEPH: So one thing I've learned is that you do a really good job when you are giving presentations and being in the spotlight. But I don't think you actually love it. You love sharing content and things that you have learned. But I could see how being a focal point, especially if there's a concern or something that could have a negative connotation, that would feel squeamish. It would make me feel squeamish.
CHRIS: I hadn't thought about it in that way. But as you say it, also, this conversation is a meta version of that. Like, let's talk about me talking about me. I don't want to be the center of attention. But I love technology or process. I love talking about the work. That's great. And so I'm happy to do that. I'm happy to stand in front of a room and talk about it. But yeah, when it's about me, that's weird. And so now I'm going to move...well, no, I'm not going to move on [laughs] because this is the topic right now.
But so there's a bunch of things that we have been trying to introduce. And I think this is a useful part of the conversation more broadly and less about me. So one of the things that I think I mentioned in a previous episode was the introduction of point-dev, which is each week, we rotate through a person. And that person is in charge of triaging the errors, making sure that nothing is stuck in Sidekiq, responding to any support requests, et cetera, et cetera. But they're meant to be the frontline such that everyone else can be heads down and really focus on the work.
And what was interesting of the three developers that are working on the project, I am point-dev this week. So I was like, yes, that's awesome this week because I'm the person on the frontline. That has not helped me, but in the future, it will. And then one of the other developers mentioned that they feel like it's really useful but also feel like it's been noisy. And we realized the previous week was their week on point-dev. But the other developer was like, "Yeah, it's been great. I haven't had to think about anything." And so they have been off of that rotation for two weeks now. They'll be taking it over next week.
But it is doing exactly its job of providing that attention coverage so that they can keep their focus on the code, and that's really wonderful. So I'll be honest, when we started talking about it, there was a tiny voice in my head that was like, is this a failure mode? Should we be dealing with the noise rather than having a process to address it in the moment? Should we be dealing with the root cause rather than the symptoms? And I still think that's a good point of view. But we found so much value from this.
And as I've mentioned it, many people are like, oh yeah, we have that. It's great. I've heard enough positive things. So I've backed away from that. But there was a voice in my head that was like, are we failing right now? But yeah, so point-dev has been really wonderful. And next week, I will have to...well, frankly, the next two weeks, I'm off of point-dev appointments, so I'm very excited about that.
I've been doing some of the product management or sort of the tech side of the product management and helping to triage cards and make sure that there's very clear work lined up for the engineering team when they're ready to do that. I'm trying to back away from that just a little bit.
And one of the things that we did there was introduced an inbox column in our Trello board. You know how I love a good inbox. You know how I love to get to inbox zero. But that is a good way for me, for anyone now in the organization, which I don't want everyone to have to learn our processes, but just saying, "This is the place that you put requests, and we will deal with them. I assure you of that."
It has been great because that means I don't need to be quite as responsive in Slack. I can just gently redirect people, "Hey, if you don't mind, please put this in the inbox column, and that'll be great." That thing, though, that gentle pushback in Slack is one of the things that I've struggled with.
And this was one of the more personal aspects of the conversation that happened in retro was me being, like, if we're being honest, I tried to do that. But it's not my favorite thing to do in the world. Whenever someone asks me something, I want to be helpful. I don't want to seem rude or brisk or like I'm too busy for you, et cetera, et cetera.
So I will often respond to the question or do the thing that they're asking and then say, "In the future, if you could go to this other place." And ideally, I'm slowly moving forward and being like, "No, no, no, please go to the other place. We've talked about this a few times." But it is an interesting example of one of the specific aspects of my personality coming through in this. But that introduction of an inbox has been great. Love me a good inbox, as I said.
And then, more generally, we just tried to talk through what are the things that I'm doing? Do I need to own all of those uniquely? And some of them the answer we decided was yes but some of them we decided no. And we started to sort of distribute the work there or some of the meetings or different aspects of it. And so overall, it was a really great conversation but also very weird for me.
STEPH: Yeah, because then you wonder, am I not doing the right thing? Am I not spending my time the right way? But then hopefully, that meeting helped reinforce that yes, you are spending your time the right way and that you're doing a lot of productive things. There are just too many productive things for you to do, and so you have to prioritize those aggressively. I like all the things that you just highlighted.
There's one in particular, the last one that you mentioned about finding things that you can hand off to others. And I love that for a couple of reasons. It came up in a recent conversation that I was having with some other thoughtbot developers around when someone's on a project, typically someone just falls into being the point person. They just happen to be the person that the client talks to and asks questions of and goes through the most. And that's something that is okay. But we want to make sure that that's not a bad thing, that everybody is treated equally, that everybody is given equal opportunities and room to grow.
And so, in my mind, whenever someone is that point person, or you have fallen into that role, it is your job to then pull other people up. So if you have been given the responsibility of running a particular meeting each week, then go ahead and do it once or twice, so you can demo it and show it to someone else as to how you do this. But then tag somebody else and say, "Hey, I'm going to let you or ask you to run this next time." So then that person can experience it. They can demo their style, and then it continues on to have more people.
So I really like that you are highlighting it's not just beneficial for you to then distribute those tasks, but it's empowering for everybody else on the team as well. I'm curious, so what was the final outcome? It sounds like there are some really good things in place, and you're transitioning, handing some things off. But I can't imagine that things have gotten...all of your priorities are still there. So do you think you'll actually code more, or what's the outcome for next week?
CHRIS: Short term, maybe probably not, if we're being honest, but trending in that direction. So one of the things that's going on right now is hiring. That is just an activity that takes a lot of time. And I care a lot about doing that well, both for the organization and then for individuals on the other side.
I want to be respectful of their time and communicate in reasonable timelines and not leave people without an answer or follow up or those sorts of things. It probably makes sense for that to sit with me as the starting contact. And then from there, folks that are continuing on in our hiring process they're going to talk to many other members of the team, and that won't just be me.
But there are a lot of first conversations that I'm having. And so right now, my schedule has a bunch of that, which is fine and good. And that will hopefully, at some point, we'll hire some great people. And then we'll be on the other side of that. And that piece of the work that I have right now goes away.
Some of the other outcomes that we named there were a couple of action items. And so I think those will help, but they're sort of...we've got to work towards that. One is transitioning a meeting, but it's a biweekly meeting. And I'm not going to just not attend the next one. So it'll be me and one of the other developers attending to transition ownership of that meeting moving forward. And then from there, so like, two weeks from now, I will not have that consideration on my calendar. And that's like one 30-minute block that I get back or, depending on how you think about it, one block that that 30-minute meeting broke up.
I do want to touch back just on something that you're saying there. I think you're being very kind to me in saying like, no, but you've got so many things, and so it's hard to do that. I think that's true, but that's kind of the work overall, and my version of that is one thing. But everyone sort of has, as a team, we have a version of like, how are we being most productive? Are we making sure we're doing the most important things? And so it was interesting in the moment, but I think it was a very good conversation.
And I want to make sure that both we as a team and then me as an individual, wherever that happens to be the case, are open to these sort of constructive things. Like, frankly, to do the work to figure out how to get work off my plate that hasn't felt like the most important thing. It felt like close to the most important thing, but then there were all the other things that I had to do. So I wasn't doing the work to figure out how to not do the work. It is a complicated sentence that I just said.
But this was a case where retro, I think, very usefully highlighted that this was a good thing for us collectively to put effort into such that we can be more productive moving forward. It happened to be slightly more focused on me rather than the entirety of the team. But broadly, that kind of thinking is why I'm a huge fan of retro. I think it's a great place to take a step back think about how we're doing the work rather than just being in the work day-to-day.
STEPH: So if I'm internalizing what you said correctly, let me know if not, but it sounds like you're in one of those places, and I've witnessed this with other people and myself where someone is overwhelmed. They have a lot to do, and they're very focused in that grind and in that moment of doing all the things that they have to do. And it's very hard to then say, "I'm in the weeds right now. And then I also have to figure out how to get out of the weeds." And that's a very different skill and mental space to be able to do that.
Because often, when you're just in that mode, all you can focus on is a bit on survival at that time. And then it may take other people to notice to say, "Hey, you're in the weeds. We need to figure out a way to help you not live there and to find ways to distribute some of the work."
Does that sound like a fair assessment? Because I think I say all that because I've just seen people in that position. And then they think back, like, oh, I should have offloaded stuff earlier. And it's like, yeah, true, totally. And it often takes a retro or someone else coming to you and saying, "Hey, I've noticed...I looked at your calendar today; how can I help?" [laughs]
CHRIS: I think that's probably the right calibration. And mostly, my emphasis was just I want to make sure that broadly, any team that I'm on has the space for this sort of conversation. And that thing that you're saying exactly that phrasing of like, "Hey, I saw your calendar. How are you doing? How's that going, though? Are you feeling okay? [laughs] You can't sleep and whatnot." That can be a really useful thing to have and to have organizational norms about what are our expectations of how many meetings someone should have in a week. And where do we start to think about different things?
You did use the phrase overwhelmed. I want to say that I'm like 101% whelmed. So I'm just ever so slightly overwhelmed, but it is like I'm in the weeds. I need to figure out how to clear some of the weeds so that then I can get out of it. And it was a great conversation that came from that.
STEPH: That's awesome. I'm glad you got a good team that, frankly, felt comfortable bringing it up, and then that you could lean on them for ways to talk about how you could code some more and talk about priorities and where you want to focus your time.
CHRIS: It will be an interesting thing. As the team grows, I don't expect this to get easier. We talked about this a number of weeks back. And I think for a while, hopefully, we clear a little bit of dust here, and then I get back to being a little bit more on the code, and that's going to happen for a while. But as I think about the longer sort of the future of the company, this is something I'm going to have to revisit a handful of times.
And it's a really interesting question that I'm still struggling with internally. And where do I want to be versus what will be needed and whatnot? So it'll be interesting to see how it evolves. But for now, I think I can gain back a little bit of coding time, a little bit of maker time versus manager time, as Paul Graham's essay goes. And yeah, I think that'll be good.
STEPH: Yeah, I like how you're already looking forward to the fact that it will probably fluctuate because, yeah, right now, you are sort of paying a tax. You are building up to then where you can have more people on the team. And then that may give you back some of your time where then you can code because you can outsource some of the work to them. But then, as the team grows, so are other responsibilities. And traditionally, being in a CTO role and most CTOs I know will code here and there because they want to, and they enjoy it, but it is not their full-time job.
So I think you're really wise to have already noticed that and start thinking about how that's going to trend in the future. And it sounds like you might need to figure out how to throw some architecture at it. So then you can scale horizontally, and then you can just have more time to do all the things. Yeah, that's right. [laughs]
CHRIS: You're suggesting microservices, right? That's how my job becomes easy?
STEPH: Yeah. Well, I'm thinking more like RSpec Queue, but we'll have RSpec Chris or some version of that.
CHRIS: Chris Queue.
STEPH: Chris Queue. [laughs]
CHRIS: And then I just parallelize my human, and then it'll be great.
STEPH: Yeah, that's always worked out well in the movies. Whenever somebody clones themselves, that goes super well.
CHRIS: Multiplicity is a fantastic piece of cinema, and I stand by that.
STEPH: I haven't seen it, but I feel like it doesn't end well for the main character.
CHRIS: I feel like every time I mention a movie, you haven't seen it. I feel like we need to do a movie marathon at some point just to catch up so that we've got shared analogies. But yeah, it's a fun movie. It's fine. It turns out fine in the end. But there are some humorous adventures that happen in the middle. Cloning maybe [laughs] isn't the most direct option to solve productivity problems.
STEPH: [laughs] Yeah, I think I've got Labyrinth, Hackers, and Multiplicity now on the watch list. And I appreciate the fact that you know that I'm not likely to watch them, although out of the three, Hackers will probably happen.
CHRIS: All right, what if I were to get a bunch of Pop-Tarts, non-frosted?
STEPH: Ooh.
CHRIS: Does that change --
STEPH: Wait, are you going to send them to me? Because if you just have them, that's no good. [laughter]
CHRIS: Eat Pop-Tarts on a video call and be like, "Look at this movie. It's great." [laughter]
STEPH: All right, bribery definitely works for me. [laughs]
CHRIS: Okay, so got it, noted. And based on the nature of the conversation that we have devolved into here, I think we've probably reached a good point. What do you think? Should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeee!!!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph talks about winter storms and thoughts on name pronunciation features. Chris talks about writing a query to add a new display of data in an admin panel and making a guest appearance on the Svelte Radio Podcast.
Finally, Chris decided that his productivity to-do list system was failing him. So he's on the search now for something new. He asks Steph what she uses and if she's happy with it. How do you, dear Listener, keep track of all your stuff in the world?
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. We have Winter Storm Izzy headed our way. It's arriving in South Carolina early tomorrow morning. So that's kind of exciting just because it's South Carolina. We rarely see snow. In fact, I looked it up because I was curious because I've seen it every now and then. But I looked up the greatest cumulative snowfall in 1 season, and it was 19 inches in the winter of 1971. I was trying to add an old-timey voice there. I don't know if I was successful.
CHRIS: Does 1971 deserve a full old-timey voice?
STEPH: Apparently.
CHRIS: I feel like people from 1971 would be like, "We were just people in the 70s." Like, what do you...
STEPH: [laughs]
CHRIS: Wait. Nineteen inches, is that what you said?
STEPH: 19 inches. That was total for the season.
CHRIS: Yeah, we can bang that out in an afternoon up here in the North. So yeah, okay. You were here for the terrible, terrible winter, right?
STEPH: Oh, Snowmageddon? Yes.
CHRIS: Yeah, that was something, oof.
STEPH: I don't remember how many inches. Was it like 100 inches in a month or something wild like that? I've forgotten the facts.
CHRIS: I, too, forget the facts. I remember the anecdotal piece of data, the anecdata as it were where we shoveled our driveway, and then another storm came, we shoveled our driveway. And then finally, I was living in an apartment, and it was time to shovel the driveway again. But the pile of snow on the lawn was too big. So we had to shovel the pile of snow further up the lawn to make room for the snow that we were shoveling out of the driveway.
But I also remember that being a really nice bonding moment, and I met more of the people living in the...it was a house that had been converted into six apartments. So I actually met some of the people from the house for the first time. And then we hung out a little bit more in the day. So I actually have weirdly fond memories of that time. But to be clear, that was too much snow. I will officially go on record saying too much snow. No, thank you again.
STEPH: It was a lot of snow. I think it broke Boston for a while. I remember I don't think I went to...I worked remotely for two weeks because they were just like, "Yeah, don't even try to come into the office. Don't worry about it." So it felt like my first dabbling into understanding quarantine [laughs] except at least with less complicated reasons, just with lots of snow. We also went snowboarding in Charlestown, where we were living. And that was fun because there are some really great hills, and there was so much snow that that was delightful.
But I'm not expecting Snowmageddon in South Carolina, although people may act like it and rush out and get their milk and bread. But hopefully, we'll get a couple of inches because that'll be lovely. I don't know that Utah has ever seen the snow. So this will be fun.
CHRIS: Oh, that'll definitely be fun. I imagine you've got like even if you do get some amount of accumulation, a day later the sun will just be like, "I'm back. I got this," and clear it up, and you won't have any lingering. The year of Snowmageddon, if I remember correctly, the final pile of snow left in July, the shared one that the city had collected. So you'll probably do better than that turnaround time. [laughs]
STEPH: Yeah, it's perfect. It's very ephemeral. It snows, it's beautiful. It's there for a couple of hours, and then poof, it's gone. And then you're back to probably 70-degree weather, which is typically what's here, [laughs] and I have no complaints. There's a reason that I like living here.
But in some other news, I have something that I'm really excited about that I want to share. So there's something that you and I work really hard to do correctly, and it's pronouncing someone's name. So whenever there is either a guest on the show, or we are referencing someone, we will often pause, and then we will look for videos. We'll look for an audio clip, something where that person says their name. And then we will do our best to then say it correctly. Although I probably put a Southern twang on a lot of people's names, so sorry about that. But that's really important to say someone's name correctly.
And one of thoughtbot's projects is called Hub, which is something that we use internally for all of our project staffing and also for profiles and team information. There's a new feature that Matheus Richard, another thoughtboter, implemented that I am just so excited about. And now that I have it, I just think, I don't know how I lived without this. And I want it everywhere.
So Matheus has added the feature where you can upload an audio file with your name pronunciation. So you can go to someone's profile, and you can click on the little audio button and hear them pronounce their name. And then a number of people have taken it a bit further where they will provide, say, the American or English pronunciation of their name. And then they will provide their specific pronunciation; maybe it's Greek, maybe it's Spanish, and it's just phenomenal. And I love it so much. And I can't wait for just more platforms to have something like this. So really big shout out to Matheus Richard for that phenomenal feature.
CHRIS: Oh, that is awesome. Yeah, we definitely do pause pretty regularly to go scan through YouTube or try and find an example. And often, people just start into talks, or they'll only say their first name. We're like, oh, okay, keep searching, keep searching. We'll find it. And apologies to anyone whose name we still got wrong regardless of our efforts.
But it's making this a paramount idea similar to people putting their pronouns in their name. Like, okay, this is a thing that we should get into the habit of because the easier we make this, the more common that we make this. And names absolutely matter, and getting the pronunciation right really matters. And especially if it can be an easier thing, that's really wonderful. I hope Twitter and other platforms just adopt this; just take this entirely and make it easy because it should be.
STEPH: That's what I was thinking; if Twitter had this, and then I was thinking if Slack had this, that would be a wonderful place to be able to just see someone's profile because we can see lots of other helpful context about them. So yeah, it's wonderful.
I want to hear more people how they pronounce their names. Because I'll always ask somebody, but it would just be really nice to then be able to revisit or check-in before you talk to that person, and then you can just say their name. That would be delightful.
CHRIS: I do feel like creating it for my name would be interesting. I actually had someone this week say my name and then say, "Oh, is that how you pronounce it?" And I stopped for a minute, and I was like, "Yes. I'm really intrigued what other options you were considering, though. I would like to spend a minute and just...because I always thought there was really only the one approach, but I would love to know. Let's just explore the space here," but yes.
STEPH: [laughs] You ask them, "What else you got? What other variations can I hear?"
CHRIS: [laughs] I would like three variations on my desk by tomorrow so that I can understand what I'm missing out on, frankly. There's a theme or an idea that I've seen bouncing around on Twitter now of people saying, "Yeah, I really just want to apply, get hired, work for one day, make this one change to a platform, and immediately put in my resignation."
And I could see this like, "All right, I'm just going to go. I'm going to get hired by Twitter. This is it. This is all I'm doing," which really trivializes the amount of effort that would go into it for a platform like Twitter. I can't even imagine what engineering looks like in Twitter and how all the pieces come together. I'm imagining some amount of microservices there, and that's just my guess. But yeah, that idea of just like, this is my drive-by feature. I show up; I work for a week, I quit. And there we go; now we have it.
STEPH: Well, we are consultants. Maybe we'll get hired for all these different companies, and that will be our drive-by feature. We'll add it to their boards and be like, "Don't you want this? Don't you need this?" And then they'll say, "Yes." [chuckles]
CHRIS: I am intrigued because I can't imagine this hasn't come up in conversations at Twitter. And so, what are the trade-off considerations that they're making, or what are the reasons not to do this? I don't have any good answers there. I'm just asking the question because, for an organization their size, someone must have had this idea. Yeah, I wonder.
STEPH: Yeah, there are also, I'm sure, malicious things that you then have to consider as to how people...because, at the end of the day, it's just an audio file. So it could be anything that you want it to be. So it starts to get complicated when you think about ways that people could abuse a feature.
On that peppy note, what's going on in your world?
CHRIS: I had a fun bit of coding that I got to do recently, which, more and more, my days don't involve as much coding. And so when I have a little bit of time, especially for a nice, self-contained little piece of code that I get to write, that's enjoyable. And so I was writing a query. I wanted to add a new display of data in our admin panel.
And I was trying to write a query, and I got to build a nice query object in Ruby, which I always enjoy. That's not a real thing, just in case anyone's hearing that and thinking like, wait, what's a query object? It's just a class that takes in a relation and returns a relation but encapsulates more complex query logic. It's one of my favorite ways to extract logic from ActiveRecord models, that sort of thing.
So I was building this query object, and specifically, what I wanted to do here is I'm going to simplify down the data model. And I'm going to say that we have users and reservations in the system. This may sound familiar to you, Steph, as your go-to example [laughs] from the past. We have users, and we have reservations in the system. So a user has many reservations. And reservations can be they have a timestamp or maybe an enum column. But basically, they have the idea of potentially being upcoming, so in the future.
And so what I wanted to do was I wanted to find all users in the system who have less than two upcoming reservations. Now, the critical detail here is that zero is a number less than two. So I wanted to know any users that have no upcoming reservations or one upcoming reservation. Those were the two like, technically, that's it. But say it was even less than three, that's fine as well. But I need to account for zero.
And so I rolled up my sleeves, started writing the query, and ActiveRecord has some really nice features for this where I can merge different scopes that are on the reservation. Reservation.upcoming is a scope that I have on that model that determines if a reservation is upcoming because maybe there's more complex logic there. So that's encapsulated over there.
But what I tried initially was User.left_joins(:reservations).group("users.id").having("COUNT(reservations.id) < 2"). So that was what I got to. And thankfully, I wrote a bunch of tests for this, which is one of the wonderful things about extracting the query object. It was very easy to isolate this thing: write a bunch of tests that execute it with given data.
And interestingly, I found that it worked properly for users with a bunch of upcoming reservations. They were not returned by the query objects which they shouldn't, and users with one upcoming reservation. But users with zero upcoming reservations were being filtered out. And that was a surprise.
STEPH: Is it because the way you were joining and looking where the reservation had to match to a user, so you weren't getting where users didn't have a reservation?
CHRIS: It was related to that. So there's a subtlety to LEFT JOINS. So a JOIN is going to say like, users and reservations. But in that case, if there is a user without reservations, I know they're going to be filtered out of this query. So it's like, oh, I know what to do. LEFT JOINS, I got this. So LEFT JOINS says, "Give me all of the users and then in the query space that I'm building up here, join them to their reservations." So even a user with no reservations is now part of the recordset that is being considered for this query.
But when I added the filter of reservations.upcoming, I tried to merge that in using ActiveRecord's .merge syntax on a query or on a relation, as it were. That would not work because it turns out when you're using the LEFT JOINS...and as I'm saying this, I'm going to start saying, like, here's definitively what's true. I probably still don't entirely understand this, but trying to do the WHERE clause on the outer query did not work. And I had to move that filtering logic into the LEFT JOINS.
So for the definition of the JOIN, I now had to actually handwrite that portion of the SQL and say, LEFT OUTER JOIN reservations ON users.id = reservations.user_id AND then whatever the logic is there for an upcoming reservation, so reservations.completed IS NULL or reservations.date greater than the current date, or whatever the logic is there. But I had to include that logic in the definition of the LEFT OUTER JOIN, which is not a thing that I think I've done before. So it was part of the definition of the JOIN rather than part of the larger query that we were operating on.
STEPH: Yeah, that's interesting. I don't think I would have caught that myself. And luckily, you had the test to then point out to you.
CHRIS: Yeah, definitely the tests made me feel much more confident when I eventually narrowed down and started to understand it and was able to make the change in the code. I was also quite happy with the way I was able to structure it. So, suddenly, I had to handwrite a little bit of SQL. And what was nice is many, many, many years ago, I recorded a wonderful course on Upcase with Joe Ferris, CTO of thoughtbot, on Advanced ActiveRecord Querying. And I'm still years later digesting everything that Joe said in that course. It's really an amazing piece of content.
But one of the things that I learned is Joe shows a bunch of examples throughout that course of ways that, where you need to, you can drop down to raw SQL within an ActiveRecord relation. But you don't need to completely throw it out and write the entire query by hand. You can just say, in this case, all I had to handwrite was the JOINS logic for that LEFT JOINS. But the rest of it was still using normal ActiveRecord query logic. And the .having was scoped on its own, and all of those sorts of things. So it was a nice balance of still staying mostly within the ActiveRecord query builder syntax and then dropping down to a lower level where I needed it.
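As a rough sketch of the query object Chris describes, with the upcoming condition folded into the join itself, the shape looks something like this. The column names (starts_at, user_id) are guesses for illustration, not the app's actual schema.

    class UsersWithFewUpcomingReservationsQuery
      # Handwritten join so the "upcoming" filter lives in the ON clause,
      # keeping users with zero reservations in the result set.
      UPCOMING_JOIN = <<~SQL
        LEFT OUTER JOIN reservations
          ON reservations.user_id = users.id
          AND reservations.starts_at > CURRENT_TIMESTAMP
      SQL

      def initialize(relation = User.all)
        @relation = relation
      end

      # Users with fewer than `limit` upcoming reservations, including zero,
      # because a user with no matching reservations still appears with a
      # COUNT of 0 rather than being filtered out by a WHERE clause.
      def call(limit: 2)
        @relation
          .joins(UPCOMING_JOIN)
          .group("users.id")
          .having("COUNT(reservations.id) < ?", limit)
      end
    end

    # Usage:
    # UsersWithFewUpcomingReservationsQuery.new.call(limit: 2)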
STEPH: I love that you mentioned that video because I have seen it, and it is so good. In fact, I now want to go back and rewatch it since you've mentioned it just because I remember I always learn something every time that I do watch it. On a side note, the way that you represented and described query objects was so lovely.
I know you, and I have talked about query objects before because we adore them. But I feel like you just gave a really good mini class and overview of like, this is what a query object is, and this is what it does. And this is why they're great because you can test them.
CHRIS: Cool. I'm going to be honest; I have no idea what I said. But I'm glad it was good. [laughs]
STEPH: It was. It was really good. If anyone has questions about query objects, that'll be a good reference.
CHRIS: Well, thank you so much for the kind words there. And for the ActiveRecord querying trail, really, I was just along for the ride on that one, to be clear. I did write a bunch of notes after the fact, which I've found incredibly useful because the videos are great. But having the notes to be able to reference...past me spent a lot of time trying really hard to understand what Joe had said so that I could write it down. And I'm very glad that I invested that time and effort so that I can revisit it more easily.
But yeah, that was just a fun little bit of code that I got to write and a new thing that I've learned in the world of SQL, which is one of those topics that every little investment of effort I find to be really valuable. The more comfortable I feel, the more that I can express in SQL.
It's one of those investments that I'm like, yep, glad I did that. Whereas there are other things like, yeah, I learned years ago how to do X. I've completely forgotten it. It's gone from my head. I'm never going to use it again, or the world has changed. But SQL is one of those topics where I appreciate all of the investment I've put in and always find it valuable to invest a bit more in my knowledge there.
STEPH: Yeah, absolutely same. Just to troll regexes for a little bit, they're powerful, but they're the thing that I will never commit to learning. I refuse to do it. [laughs] I will always look it up when I need to. But Postgres or SQL, on the other hand, is always incredibly valuable. And I'm always happy to learn something new and invest in that area of my skills.
CHRIS: Yep, SQL and Postgres are great things. But let's see. In other news, actually, I had the pleasure of joining the Svelte Radio Podcast for an episode this week. They invited me on as a guest. And we got to chat about Svelte, and then I accidentally took the conversation in the direction of inertia as I always do.
And then I talked a little bit more about Sagewell, the company that I'm building, and all sorts of things in the world. But that was really fun, and I really enjoyed that. And I believe it will be live by the time this episode goes live. So we will certainly include a link to that episode in our show notes here.
STEPH: That's awesome. I haven't listened to the Svelte Podcast before. So I'm excited to hear your episode and all the good things that you said on it. I'm also just less familiar with them. Who runs the Svelte Podcast, and what's the name of the show?
CHRIS: The show is called Svelte Radio. It's hosted by Antony, Shawn, and Kevin, who are three Svelters from the community. Svelte is a really interesting group where the Svelte society is, as far as I can tell, a community organization that is seemingly well-supported by the core team and embraced as the natural center point of the community. And then Svelte Radio is an extension of that.
And it's a wonderful podcast. Each week, they talk about various things. So there are news episodes, and then they have guests on from time to time. Recently, having Rich Harris on to talk about the future of Svelte, Rich Harris being the creator of Svelte.
Interestingly, if you search for Svelte Radio, they are the second Google result because the first Google result is the tutorial docs on how to use Svelte with radio buttons. But then the second one is Svelte Radio, the podcast, [laughs], which is an interesting thing. Good on Svelte's documentation for having such strong SEO.
STEPH: I was just thinking there's something delightful about that where the first hit is for documentation that's a very helpful; here’s how you use this. That's kind of lovely. Well, that's really cool. I'm really looking forward to hearing more about Svelte and listening to you be on the show.
CHRIS: Yeah, they actually had some very kind things to say about The Bike Shed and, frankly, you as well and our co-hosting that we do here. So that was always nice to hear.
STEPH: That's very kind of them. And it never fails to amaze me how nice podcasters are. Everyone that I've met in this community that's a fellow podcaster they're just all such wonderful, nice, kind people, and I just appreciate the heck out of them.
CHRIS: Yeah, podcasts are great. The internet is doing its job; that’s my strong belief there. But let's see. In other news, I actually have more of a question here, sort of a question and an observation. My work has started to take a slightly different shape than it has historically. Often, I'm a developer working on a team, picking up something off the top of the Trello board or whatever we're using for project management, working on that thing, pushing it through to acceptance. But all of the work or the vast majority of the work is encapsulated in this one shared planning context.
But now, enough of my work is starting to spill out in different directions. Like right now, I'm pushing on hiring. That's a task that largely lives with me that doesn't live on the shared Trello board. Certainly, the rest of the team will be involved at some point. But for now, there's that that's really mine. And there are other pieces of work that are starting to take that shape.
So I recognize, or at least I decided that my productivity to-do lists system was failing me. So I'm on the search now for something new, but I'm intrigued. What do you use? Are you happy with what you have for to-do lists? How do you keep track of stuff in the world?
STEPH: Oh goodness. I'm now going to overanalyze everything that I do and how I keep track of the things that I do. [chuckles] So currently, I have two things that I use to track, and that is...okay, I'm going to expand. I have three to-do lists that I use to track. [laughs]
Todoist is where I add most things of where whatever I just think of, and I want to capture it Todoist is usually where it goes. Because then it's very easy for me to then go back to that list and prioritize or just simply delete stuff. If I haven't gotten to it in a while, I'm like, fine, let it go. Move on.
And then the other place I've started using just because it's been helpful in terms of linking to stuff is Basecamp. So we use Basecamp at thoughtbot, and we use it for a number of internal projects. But I have created my own project thanks to some advice from Mike Burns, a fellow thoughtboter, because he created his own and uses that to manage a lot of his to-dos and tasks that he has. And then that way, it's already one-stop shopping since you're in Basecamp a lot throughout the day or at least where you're going to visit some of the tasks that you need to work on. So that has been helpful just because it's very simple and easy to reference.
And then calendar, I just live by my calendar. So if something is of the utmost importance...I realize I'm going in this in terms of order of importance. If something is critical, it's on my calendar. That's where it goes. Because I know I have not only put it somewhere that I am guaranteed to see it, but I have carved out time for it too. That's my three-tier system. [laughs]
CHRIS: I like it. That sounds great, not overly complicated but plenty going on there. And it sounds like it's working for you, sounds like you're happy with that.
STEPH: It has worked really well. I'm still evaluating the Basecamp, but so far, it has been helpful. It does help me separate between fun to-do items which go in Todoist and maybe just some other work stuff. But if it's really work-focused, then it's going in Basecamp right now. So there's a little helpful separation there between what's going on in my life versus then things I need to prioritize for work. What are the things that you're currently using, and where are you feeling they're falling down or not being as helpful as you'd like them to be?
CHRIS: My current exploration, I'm starting to look for a new to-do list-type things. Specifically, I've been using Trello for a long time for probably a couple of years now. And that was a purposeful choice to move away from some of the more structured systems because I found they weren't providing as much value. I was constantly bouncing between different clients and moving into different systems.
And so much of the work was centrally organized there that the little bit of stuff that I had personally to keep track of was easy enough to manage within a Trello board. And then slowly, my Trello board morphed into like 10 Trello boards for different topics. So I have one that's like this is research. These are things that I want to look into. And so I can have sort of a structure and prioritization within that context in my world.
And then there's one for fitness and one for cooking. I'm trying to think which else...experiments, as I'm thinking about I want to try this new thing in the world. I have a board for that. So I have a bunch of those that allow me to keep things that aren't as actionable, that are more sort of explorations. But then they each have their own structure. And that I found to be really useful and I think I'll hold on to.
But my core to-do board has started failing me, has started being just not quite enough. And then, more so, I wanted a distinct thing for work for a professional context. So I was like, all right, let me go back to the drawing board and see what's out there. And I did a quick scan of Todoist and Things, respectively. And I've settled on Things for right now. It just matches a little bit more to my mental model.
Todoist really pushes on the idea of due dates or dates as a singular idea associated with most things. Almost everything should have a date. And I kind of philosophically disagree with that. Whereas Things has this interesting idea of there is the idea of a due date, but it's de-emphasized in their UI because not everything has a due date; most things don't.
But Things has a separate idea of a scheduled date or an intent date. Like yeah, I think I'll work on that on Wednesday. It's not due on Wednesday; that's just when I want to work on it. It can have a separate due date. Like, maybe it's due Friday.
STEPH: Is the name of the application that you're saying is it Things? Is that the name of it?
CHRIS: Yeah, it is.
STEPH: I haven't heard this one. You kept saying Things. I was like, wait, is he being vague? But I realized you're being specific. [laughs]
CHRIS: It's one of the few things that...yeah, one of the few things that I think is not great about Things. It's from a company Cultured Code, and the application is called Things. And that is all I will say on that topic. Different names maybe would have been better, but they seem to have carved out enough of an attention space.
Enough people know of it that if you search for Things and to-do list, it will very quickly pop up. But yeah, that's a pretty ambiguous name. They maybe could have done a different one there. But the design of the application is really nice. It's on my desktop. And now I have it on my phone as well, and they sync between them and all the stuff.
So there's never going to be a perfect system. I'm certain of that. I've at least talked myself out of trying to build my own because, man, have I fallen into that trap before. Oh goodness, so many times.
STEPH: I'm very proud of you.
CHRIS: Thank you. I'm trying.
But yeah, it'll be interesting to see how it evolves. I continue to struggle with the fact that there are these things that come to mind, and I want to capture them during the day. But some of them are just stories I'm telling myself, which would probably be best captured in a journal tool. And then there are notes that I might want to keep on remote work and how people think about that.
And so I'm starting to think about Obsidian or a note-taking system for that. And then I've got this Trello board concoction. And now I've got a to-do...and suddenly I'm like, well, that's too many things. And so I'm trying to not overthink it. I'm trying to not underthink it. I'm trying to just find that perfect amount of thinking. That's what I'm aiming for. I'm not sure I'm going to hit it directly, [laughs], but that's what I'm aiming for.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Some of the topics that you mentioned earlier did stand out to me when you're talking about recipes and working out some other topics. Those are things for me that I often just put in notes. So I liked the word that you used for stories that you're telling yourself or things that you're interested in. Is that something that...I don't put it in Todoist or put it somewhere because I don't really have an action item. It's more like, yeah, this recipe looks awesome and one day...so I'm going to stash it somewhere so I can find it.
I'm currently using Notion. I used Bear before. It is beautiful. I really liked Bear, but I needed a little bit more structure, and Notion gave me that structure. And so I will just dump it in Notion. And then it's very searchable, so I can always find whatever recipe or whatever thought that was as long as I try to add buzzwords to my own notes. Like, what would have Stephanie searched for looking for this? So I will try to include some of those words just so I can easily find it.
CHRIS: I love that you're defining yourself as a Stephanie. For a random Stephanie walking through the woods, what search terms? How can I SEO arbitrage a Stephanie?
STEPH: What would she look for?
CHRIS: Who knows?
STEPH: That Stephanie, she's sneaky. You never know.
CHRIS: You never can tell. Obsidian is the one that I'm looking at now. But I'm currently using Apple Notes. And it's really nice to be able to search directly into a note very quickly. I have that both via Alfred and then on my phone. And I'm finding a lot of utility in that, particularly for notes, for things I want to talk to someone about.
But now there are seven different things, and how are they connected? And where is something? And to the question of where would a future Christopher look for this, let's make sure I put it in that place. But I don't know what that dude's going to be up to. He's a weird guy. He might look in a completely random place. So I'm trying to outsmart him, and oof, good luck, me.
STEPH: [laughs] I have heard of Obsidian, but I don't recall much about it. So I'd have to look into it. I do feel your pain around Todoist and where it really encourages you to set a date. Because there are often things where I'm like, I saw something I want to read. And I know there are tons of tools.
There are so many tools and videos and things that people could watch if they really want to invest in this workflow. But right now, I've told myself no, and so I use Todoist. And I see something I want to read, and so I just link to it. And I don't have a particular date that I want to read it. I'm like, this looks cool, and so then I add it to a reading list. But that also, I guess, could be something for notes.
More and more, I'm trying to shove things into notes, so it feels less like a task and more of a I'm curious, or I'm interested in things that have piqued my interest. Let me go back and look at that list to see if there's something I want to pull from today or I need inspiration. That's what my notes often are; they're typically inspiration for something that I have seen and really liked, or maybe it's a bug that I looked into, and I want to recall how that happened or what was the process.
But yeah, my notes are typically a source of inspiration. So I try to dump most things in there. I don't know if that's particularly helpful for your task, though, because it sounds like you're looking for a way to manage the things that you actually need to do versus just capturing all of your thoughts.
CHRIS: Honestly, part of it is having a good system for those like, oh, I'd like to read this sometime. Ideally, for me, that doesn't go into my whatever to-do list system. But if my brain doesn't trust that I'll ever read it or if I feel like I'm putting it into a black hole, then my brain is just like, hey, you should really read that thing. Are you thinking about that? You should think about that and just brings it up.
And so having a system externalized that I trust such that then the to-do list can be as focused as possible. It's a sort of an arms race back and forth battle type thing of like, I've definitely done the loop of like, all right, I want to capture everything. I want to have a perfect, lossless productivity system, and that is not possible. And so then I overcorrect back the other way. I'm like, whatever, nothing matters. I'll just let everything fall away. And then I'm like, well, then my brain tries to remind me of stuff or tries to remember more.
And there's a book, Getting Things Done, which is one of the more common things recommended in the productivity world. And that informs a bunch of my thinking around this, the idea of capturing everything that's in your head so that you can get it out of your head and, in the moment, be focused and not have to try and remember. And so that's the ideal that I'm searching for. But it's difficult to build that and make that work.
STEPH: It seems the answer is there's no perfect system. It's always finding what works for you. And I feel like it's always going to change from hopefully not month to month because that would be tedious. But it may change year to year depending on how you're prioritizing things and the types of things that you need to remember or that you need to accumulate somewhere. So I feel like it's always this evolving, iterative process of changing where we're storing this.
But I feel like where you store the notes and inspiration, that's something that, ideally, you want to make sure that you can always continue to keep forward. So even if you do change systems, that's something that's usually on my mind. It's like, well, if I use this system to store all of my thoughts, what if I want to move to something else? How stuck am I to this particular platform? And can I still have ownership of the things that I have added here?
But overall, yeah, I'd be intrigued to see what other people think if they have a particular system that works for them, or they have suggestions. But overall, it seems to be whatever caters best to your personality and your workflow. That's why there are so many of these. There are so many thoughts, so many videos, so many styles.
CHRIS: Yeah, I think a critical part of what you just said that feels very true to me is this is something that will change over time as well. Life comes in seasons, and my work may look a certain way, or my life may look a certain way, and then next year, it may be wildly different. And so, finding something that is good enough for right now and then moving forward with that and being open to revisiting it. And yeah, that feels true. So I'm in an explore phase right now. I'll report back if I have any major breakthroughs. But yeah, we'll see how it goes.
STEPH: I will say I think the main tool that I have really leaned into, while some of the others will change over time, is my calendar. There are certain things I've let go. My inbox is always going to be messy. My to-do list is always going to be messy. But my calendar that is where things really go to make sure that they happen. And I will even add tasks there as well.
CHRIS: Yeah, the calendar is definitely a core truth in my world. Whatever the calendar says, that is true. And I'm actually a...I hope I'm not annoying to anyone. But I'm very pointed in saying, "This recurring meeting that we have if we keep just canceling it the day before every time, let's get this off our calendars. Let's make sure our calendars are telling the truth because I trust that thing very much."
And two apps that I'm using right now that I've found really useful in the calendar world are MeetingBar, which I've talked about before. But it's a little menu bar application that shows the next meeting that's upcoming. And then I can click on it and see the list of them and easily join any video call associated, just a nice thing to keep the next thing on my calendar very top of mind, super useful, really love that. That's just open source and easy to run with.
The other that I've been spending more and more time with lately is SavvyCal. SavvyCal is similar to Calendly. It's a tool for sharing a link to allow someone to schedule something on your calendar. And, man, it is an impressive piece of technology. I've been leaning into some of the fancier features of it of late. And it has an amazing amount of control, and I think a really well-designed sort of information architecture as well. It took me a little while to figure out how to do everything I wanted to do in it.
But I wanted to be able to define a calendar link thing that I could share with someone that really constraints in the way that I wanted. Like, oh, don't let them schedule tomorrow, and make sure there's this much buffer between meetings. And don't let this calendar link schedule too many things on my calendar because I need to control my day, and give me some focus blocks. And they're not actually on my calendar, but please recognize that. And it basically supports all of these different ways of thinking and does an incredible job with it.
As an aside, SavvyCal is created by Derrick Reimer, who is the co-host of The Art of Product Podcast, which is co-hosted by Ben Orenstein, former thoughtboter, creator of Upcase, and a handful of other things. So small world and all of that. But yeah, really fantastic piece of technology that I've been loving lately.
STEPH: That's really cool. I have not heard of SavvyCal. I've used Calendly and used that a fair amount. And that is so awesome where you can just send it to people, and they can pick time on a calendar and do all the features that you'd mentioned. So it's good to know that there's SavvyCal as well.
Well, pivoting just a bit, we have a listener question that I'm really excited to dig into. This question comes from fellow thoughtboter, Steve Polito. And Steve writes in that, "Hey, Bike Shed, I've got a question for you. I find it difficult to know if there's an existing method in a large class or a class that includes many concerns. How can I avoid writing redundant methods when working on a large project?"
And Steve provided a really nice just contrived example where he's defined a class user that inherits from ApplicationRecord. And then comments, "Lots of methods making it really hard to scan this giant class. And then there's a method called formatted name. So it takes first name, adds a space, and then adds the user's last name. And then there are a lot more methods in between.
And then, way down, there's another method called full name that does the exact same thing. Just to provide a nice example of how can you find a method that has existing logic that you want and avoid implementing essentially the same method in the same class?
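To make that concrete, a contrived sketch of the duplication Steve is describing might look something like this (the method names come from his example; the rest of the class is hypothetical):

    # app/models/user.rb -- a contrived sketch of the duplication in Steve's example
    class User < ApplicationRecord
      # ...lots of methods, which makes the class hard to scan...

      def formatted_name
        "#{first_name} #{last_name}"
      end

      # ...lots more methods in between...

      # Way down the file, the same logic shows up again under a different name.
      def full_name
        "#{first_name} #{last_name}"
      end
    end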
So as someone who has worked on some legacy systems this year, I feel that pain. I feel the pain of where you have a really giant class, and that class may also include other modules. So then the range of all the methods that you may be looking through gets really wide. And you are looking for particular logic that you feel like may exist in the system, but you really just don't know.
So I don't know if I have a concrete method for how you can find duplicate logic and avoid writing that other method. But one of the things that I do is I will initially go to the tests. So if there's some logic that I'm looking for and I think it's in this class or I have a suspicion, I will first look to see what has test coverage. And I find that is just easier to skim, where I'll use grep and search and just look for anything.
In this particular case, let's use first name as our example. So I'm looking for anything that's going to collaborate with first name. Some of the other things that I'll do is I'll try to think of a business case where that logic is used. So, where are we displaying the user's full name? And if I can go to that page and see what's already in use, that may give me a hint to do we already have this logic? Is there something there that I should reuse, or is it something new that I'm implementing?
And then if I really want to get fancy about it, if for some reason I really want to see all the methods that are listed but I'm trying to get rid of some of the noise in the file, then I could programmatically scan through all the available methods by doing something like calling instance methods on the class and passing in false, so we don't include the methods that are from superclasses, which can be very helpful. So that way, you're just seeing what's scoped to that class.
But then, let's say if you do have a class that is inheriting from other modules, then you may want to include those methods in your search. So to get fancier, you could look at that class' ancestor chain and then collect the classes or models that are custom to your application, and then look at those instance methods. And then you could sort them alphabetically.
But you're still really relying on is there a method name that looks very similar to what I would call this method? So I don't know that that's a really efficient way. But if I just feel like there's probably already something in this space and I'm just looking for a clue or some name that's going to hint that something already exists, that's one way I could do it.
To throw another wrench in there, I just remembered there are also private methods, and private methods don't get returned from instance methods. I think private instance methods is the method that you'd have to call to then include those in your search results as well. So outside of some deeper static analysis, this seems like a hard problem. This seems like something that would be challenging to solve.
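Roughly, that console spelunking Steph describes might look like the following; the ancestor filter is a guess you'd tune to however your own modules and concerns are named:

    # In a Rails console: methods defined directly on User, without superclass noise.
    User.instance_methods(false).sort

    # To also see methods mixed in from your own modules/concerns, walk the ancestor
    # chain and keep just the app-defined pieces (this filter is a rough heuristic).
    app_ancestors = User.ancestors.select { |mod| mod.name.to_s.start_with?("User", "Concerns") }
    app_ancestors.flat_map { |mod| mod.instance_methods(false) }.uniq.sort

    # Private methods aren't included above; they need their own lookup.
    User.private_instance_methods(false).sort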
And then I guess the other one is I ask a friend. So I will often lean on if there's someone else at the team that's been there longer than me is I will just ask in Slack and say, "Hey, I want to do a thing. I'm worried this already exists, or I think it already exists. Does anybody have any clues or ideas as to where this might live?"
I know I just ran through a giant list of ideas there. But I'm really curious, what are your thoughts? If you have a messier codebase and you're worried that you are reimplementing logic that already exists, but you're trying to make sure you don't duplicate that logic, how do you avoid that?
CHRIS: Well, the first thing I want to say is that I find it really interesting that I think you and I came at this from different directions. My answer, which I'll come to in a minute, is more of the I'm not actually sure that this is that easy to avoid, and maybe that's not the biggest problem in the world. And then I have some thoughts downstream from that.
But the list that you just gave was fantastic. That was a tour de force of how to understand and explore a codebase and try and answer this very hard question of like, does this logic already exist somewhere else? So I basically just agree with everything you said. And again, I'm deeply impressed with the range of options that you offer there for trying to figure this out.
That said, sometimes codebases just get really large. And this is going to happen. I think the specific mention of concerns as sort of a way that this problem can manifest feels true. Having the user object and being like, oh man, our user object is getting pretty big. Let's pull something out into a concern as just a way to clean it up. That actually adds a layer of indirection that makes it harder to understand the totality of what's going on in this thing.
And so personally, I tend to avoid concerns for that reason or at least at the model layer, especially where it's just a we got 1,000 methods here. Let's pull 200 of them into a file and maybe group them somewhat logically. That tends to not solve the problem in my mind. I found that it just basically adds a layer of indirection without much additional value.
I will say in this particular case, the thing that we're talking about presenting the full name or the formatted name feels almost like a presentational concern. So I might ask myself, is there a presenter object, something that wraps around a user and encapsulates this? And then we as a team know that that sort of presentational or formatting logic lives in the presentation format or layer. Maybe I'm not entirely convinced of that as an answer. But it's just sort of where can we find organizational lines to draw within our codebases?
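As a rough illustration of the presentational layer Chris is floating here, a tiny hand-rolled presenter could look like this (the names are illustrative, not from the episode):

    # app/presenters/user_presenter.rb -- a minimal, hypothetical presenter
    require "delegate"

    class UserPresenter < SimpleDelegator
      # Formatting logic lives here, so the team knows where to look for it
      # (and where to add to it) instead of scattering it through the model.
      def full_name
        "#{first_name} #{last_name}"
      end
    end

    # Usage, e.g. in a controller:
    #   @user = UserPresenter.new(User.find(params[:id]))
    #   @user.full_name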
I talked about query objects earlier. That's one case of this is behavior that I'll often see in classes as, say, a scope or something like that that I will extract out into a query object because it allows me to encapsulate it and break it out a little bit more but still have most of the nice pieces that I would want.
So are there different organizational patterns that are useful? I think it's very easy to start drawing arbitrary lines within our codebases and say, "These are services." And it's like, what does that mean? That doesn't mean anything. App Services, that's not a thing, so maybe don't do that. But maybe there are formatters, queries, commands, those feel like...or presenters, queries, commands. Maybe those are organizational structures that can be useful.
But switching to the other side of it, the first thing that came to mind is like, this is going to happen. As a codebase grows, this is absolutely going to happen. And so I would ask rather than how can I, as the developer, avoid doing this in the first place...which I think is a good question to ask. And again, everything you listed, Steph, is great. And I think a wonderful list of ways that you can actually try to avoid this.
But let's assume it is going to happen. So then, what do we do downstream from that? One answer that comes to mind is code review. Code review is not perfect. But this is the sort of thing that often in code review I will be like, oh, I actually wrote a method that's similar to this. Can you take a look at that and see can we use only one of these or something like that? So I've definitely seen code review be a line of defense on this front.
But again, stuff is still going to sneak through. And someday, you'll find it down the road. And that's the point in time that I think is most interesting. When you find this, can you fix it easily? Do you have both process-wise and infrastructure-wise the ability to do a very small PR that just removes the duplicate method, removes the usage of it, and consolidates on the one?
It's like, oh, I found it. Here's a 10-line PR that just removes that method, changes the usage. And now we're good. And that can go through code review and CI very quickly. And we have a team culture that allows us to make those tiny changes on the regular to get them out to production as quick as possible so that we know that this is a good code change, all of that.
I found there are teams that I've worked on where that process is much slower. And therefore, I will try and roll a change like that up into a bigger PR because I know that's the only way that it's really going to get through. Versus I've been on teams that have very high throughput is probably the best way to describe it. And on those teams, I find that the codebase tends to be in a healthier shape because it just naturally falls out of having a system that allows us to make changes rapidly with high trust, get them out into the world, et cetera.
STEPH: This is that bug or inconsistency that's going to show up where on one page you have the user's full name. And then on another page, you have the user's full name, but maybe the last name is not capitalized, or there's just something that's slightly different. And then that's when you realize that you have two implementations of essentially the same logic that have differed just enough.
I like how you pointed out that this is one of those things that as a codebase grows, it's probably going to happen, and that's fine. It's one of those if you do have duplicate logic, over time, based on your team's processes, you'll be able to then identify when it does happen, and then look for those preventative patterns for then how you organize your code.
How quickly can you make that change? Can you just issue a PR that then removes one of them? But then look for ways to say, how are we going to help our future selves recognize that if we're looking for a user's full name, where's a good place to look for that? And then what's a good domain space or naming that we can give to then help future searchers be able to find it?
I also really like your code review example because it does feel like one of those things that, yes, we want to catch it if we can, and we can leverage the team. But then also, it's not the end of the world if some of these methods do get duplicated.
There's one other thing that came to mind that it's not really going to help prevent duplicate methods, but it will help you identify unused code. So it's the Unused tooling that you can run on your codebase. And that's something that would be wonderful to run on your codebase every so often.
So that way, if someone has added...let's say there was a method that was full name but is not in use. It didn't have test coverage; that's why you didn't find it initially. And so you've introduced your own formatted name. And then, if you run unused at some point, then you'll hopefully catch some of those duplicate methods as long as they're not both in use.
CHRIS: I think one more thing that I didn't quite say in my earlier portion about this. But in order to do that, to use Unused or to have these sorts of small pull requests that are going through, you have to have test coverage that is sufficient that you are confident you're not going to break the app. Because the day that you do like, oh, there's a typo here; let me fix it real quick. Or there's this method I'm pretty sure it's not used; let me rip it out.
And then you deploy to production, and suddenly the error system is blowing up because, in fact, it was used but sneakily in a way that you didn't think of, and your test coverage didn't catch that. Then you don't have trust in the system, and everything slows down as a result of that. And so I would argue for fixing the root problem there, which is the lack of test coverage rather than the symptom, which is, oh, I made this change, it broke something. Therefore, I won't make small changes anymore.
STEPH: Definitely. Yeah, that's a great point.
CHRIS: So yeah, I don't have any answer. [laughs] My answers are like, I don't know, it's going to happen, but there's a lot of stuff organizationally that we can do. And granted, you gave a wonderful list of ways to actually avoid this. So I think the combination of our answers really it's a nice spectrum of thoughts on this topic.
STEPH: I agree. I feel like we covered a very nice range all the way from trying to identify and then how to prevent it or how to help future people be able to identify where that logic lives and find it more easily. Also, at the end of the day, I like the how big of a problem is this? question. And it is one of those things where, sure, we want to avoid it.
But I liked how you captured that at the beginning where you're like, it's okay. Like, this is going to happen but then have the processes around it to then avoid or be able to undo some of that duplicate work. But otherwise, if it happens, don't sweat it; just look for ways to then prevent it from happening in the future.
On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Happy New Year (for real)! Chris and Steph both took some end-of-year time off to rest and recharge.
Steph talks about some books she enjoyed, recipes she tried, and trail-walking adventures with her dog, Utah. Chris' company is now in a good position to actually start hiring within the engineering team. He's excited about that and will probably delve into more around the hiring process in the coming weeks.
Since they aren't really big on New Year's Eve resolutions, Steph and Chris instead reflect on their own toxic traits, a topic inspired by a previous listener question about large pull requests.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey, Chris, what's new in your world?
CHRIS: What's new in my world? Well, spoiler, we actually may have lied in a previous episode when we said, "Hey, happy New Year," because, for us, it was not actually the new year. But this, in fact, is the first episode of the new year that we're recording, that you're hearing. Anyway, this is enough breaking the fourth wall. Sorry, listener.
STEPH: [laughs]
CHRIS: Inside baseball, yadda, yadda. I'm doing great. First week back. I took some amount of vacation over the holidays, which was great, recharging, all those sorts of things. But now we're hitting the ground running.
And I'm actually really enjoying just getting back into the flow of things and, frankly, trying to ramp everything up, which we can probably talk about more in a moment. But how about you? How's your new year kicking off?
STEPH: I like how much we plan the episodes around when it's going to release, and we're very thoughtful about this is going to be released for the new year or around Christmas time, and happy holidays to everybody. And then we get back, and we're like, yeah, yeah, yeah, we can totally drop the facade. [laughs] We're finally back from vacation. And this is us, and this is real.
CHRIS: Date math is so hard. It just drains me entirely to even try and figure out when episodes are going to actually land. And then when we get here, also, you know, I want to talk about the fact that there was vacation and things, and the realities of the work, and the ebb and flow of life. So here we are.
STEPH: Same. Yeah, I love it. Because I'm in a similar spot where I took two weeks off, which was phenomenal. That's actually sticking to one of the things we talked about, since one of the things I'm looking to do is to just take more time off. And so having the two weeks was wonderful.
It was also really helpful because the client team that I'm working with also shut down around the end of the year. So they took ten days off as well. So I was like, well, that's a really good sign of encouragement that I should also just shut down since I can. So it's been delightful.
And I have very little tech stuff to share because I've just been doing lots of other fun things and reading fiction, and catching up with friends and family, and trying out new recipes. That's been pretty much my last two weeks. Oh, and walks with Utah. His training is going so well where we're starting to walk off-leash on trails. And that's been awesome.
CHRIS: Wow, that's a big upgrade right there.
STEPH: Yeah, we're still working on that moving perimeter so he knows how far he can go. Before then, he needs to stop and check on me. But he's getting pretty good where he'll bolt ahead, but then he'll stop, and he'll look at me, and then he'll wait till I catch up. And then he'll bolt ahead again. It's really fun.
CHRIS: I like that that's the version of it that we're going for. This is not like you're going to walk alongside me on the trail; it's you're obviously going to run some distance out. As long as you check back in once every 20 feet, we're good; that’s fine.
Any particularly good books, or recipes, or talks with friends to go with that category? But that one's probably a little more specific to you.
STEPH: [laughs] Yes. There are two really good books that I read over the holidays. They're both by the same author. So I get a lot of books from my mom. She'll often pick up a book, and once she's done with it, she'll drop it off to me or vice versa.
So the one that she shared with me is called The Midnight Library. It's written by Matt Haig, H-A-I-G. And it's a very interesting story. It's a bit sad where it's about a woman who decides that she no longer wants to live. And then, when she moves in that direction to go ahead and end her life, she ends up in this library.
And in the library, every time she has made a different decision or made a decision in life, then there is a new book written about what that life is like. So then she has an opportunity to go explore all of these lives and see if there's a better life out there for her. It is really interesting. I highly recommend it.
CHRIS: Wow. I mean, that started with, I'm going to be honest, a very heavy premise. But then the idea that's super interesting. I would, actually...I think I might read that. I tend to just read sci-fi. This is broadly in the space, but that is super interesting.
There's an image that comes to mind actually as you described that. It's from Tim Urban, who's also known as Wait But Why. I think he posts under that both on Twitter, and then I think he has a blog or something to that effect.
But the image is basically like, all of the timelines that you could have followed in your life. And everybody thinks about like, from this moment today. Man, I think about all of the different versions of me that could exist today. But we don't think about the same thing moving forward in time. Like, what are all the possibilities in front of me?
And what you're describing of this person walking around in a library and each book represents a different fork in the road from moving forward is such an interesting idea. And I think a positive reframing of any form of regret or looking back and being like, what if I had gone the other way? It's like, yeah, but forward in time, though. I'm very intrigued by this book.
STEPH: Yeah, it's really good. It definitely has a strong It's a Wonderful Life vibe. Have you ever watched that movie?
CHRIS: Yes, I have.
STEPH: So there's a lot of that idea of regret. And what if I had lived differently and then getting to explore? But in It's a Wonderful Life, he just explores the one version. And in the book, she's exploring many versions.
So it's really neat to be like, well, what if I'd pursued this when I was younger, had done this differently? Or what if I got coffee instead of tea? There are even small, little choices that then might impact you being a different person at a point in time.
The other book that I read is by the same author because I enjoyed Midnight Library so much that I happened to see one of his other books. So I picked it up. And it's called How to Stop Time. And it's about an individual who essentially lives a very long time.
And there are several people in the world that are like this, but he lives for centuries. But he doesn't age, or he ages incredibly slowly, at a rate where, say, he's 100 years old, but he'll still look 16 years old. And it's very good. It's very interesting.
It's a bit more sad and melancholy than I typically like to read. So that one's good. But I will add that even though the first one I described has a sad premise, I found The Midnight Library a little more interesting and uplifting, versus the other one, which I found a bit more sad.
CHRIS: All right. Excellent additional notes in the reading list here. So you can opt like, do you want a little bit more somber, or do you want to go a little more uplifting? Yeah, It's a Wonderful Life path being like, starts in a complicated place but don't worry, we'll get you there in the end.
STEPH: But I've learned I have to be careful with the books that I pick up because I will absorb the emotions that are going on in that book. And it will legit affect me through the week or as I'm reading that book. So I have to be careful of the books that I'm reading. [laughs] Is that weird? Do you have the same thing happen for when you're reading books?
CHRIS: It's interesting. I don't think of it with books as much. But I do think of it with TV shows. And so my wife and I have been very intentional when we've watched certain television shows to be like, we're going to need something to cut the intensity of this show.
And the most pointed example we had was we were watching Breaking Bad, which is one of the greatest television shows of all time but also just incredibly heavy and dark at times, kind of throughout. And so we would watch an episode of Breaking Bad. And then, as a palate cleanser, we would watch an episode of Malcolm in the Middle.
And so we saw the same actor but in very different facets of his performance arc and just really softened things and allowed us to, frankly, go to bed after that be able to sleep and whatnot but less so with reading. So I find it interesting that I have that distinction there.
STEPH: Yeah, that is interesting. Although I definitely feel that with movies and shows as well. Or if I watch something heavy, I'm like, great, what's on Disney? [laughs] I need to wash away some of that so I can watch something happy and go to sleep.
You also asked about recipes because I mentioned that's something I've been doing as well. There's a lot of plant-based books that I've picked up because that's really my favorite type of thing to make. So that's been a lot of fun. So yeah, a lot of cooking, a lot of reading. How about you? What else is going on in your world?
CHRIS: Well, actually, it's a super exciting time for Sagewell Financial, the company that I've joined. We are closing our seed financing round. The whole world of venture capital is a novel thing to me, and I'm still not super involved in that part of the process. But it has been really interesting to watch it progress, and evolve, and take shape. But at this point, we are closing our seed round. Things have gone really well.
And so we're in a position to actually start hiring, which is a whole thing to do, in particular, within the engineering group. We're hiring, I think, throughout the company, but my focus now will be bringing a few folks into the engineering team. And yeah, just trying to do that and do that well, do that intentionally, especially for the size of the team that we have now, the sort of work that we're doing, et cetera, et cetera.
But if anyone out there is listening, we are looking for great folks to join the team. We are Ruby on Rails, Inertia, TypeScript. If you've listened to the show anytime recently, you've heard me talk about the tech stack plenty.
But I think we're trying to do something very meaningful and help seniors manage their finances, which is a complicated and, frankly, very underserved space. So it's work that I deeply believe in, and I think we're doing a good job at it. And I hope to do an even better job over time. So if that's at all interesting, definitely reach out to me.
But probably in the coming weeks, you'll hear me talk more and more about hiring and technical interviews and all of those sorts of things. I got to ramp myself back up on that entire world, which is really one of those things that you should always be doing is the thought that I have in my head. Now that I'm in a position to be hiring, I wish I'd been half-hiring for the past three months, but I'll figure it out. It'll be fine.
STEPH: That's such a big undertaking. Everything you're saying resonates, but also, it's like that's a lot of hard work. So if you're not in that state of really being ramped up for hiring, I understand why that would be on the backburner. And yeah, I'm excited to hear more.
I've gotten to hear some more of the product details about Sagewell, but I don't think we've really talked about those features here on the show. So I would love it if we brought some more of the feature work and talked about specifically what the application does.
I am intrigued speaking of how much energy goes into hiring. Where are you at in terms of how much...like, are there any particular job boards that you're going for? Or what's your current approach to hiring?
CHRIS: Oh, that's a great question. I have tweeted once into the world. I have a draft of a LinkedIn post. This is very much I'm figuring out as I go. It's sort of the nature of a startup as we have so many different things to do.
And frankly, even finding the time to start thinking about hiring means I'm taking time away from building features and growing out other aspects. So it's definitely a necessary thing that we're doing at this point in time.
But basically, everything we're doing is just in time compiling and figuring out what are the things that are semi-urgent right now? And to be honest, I like that energy overall. I've always had in the back of my mind that I like this sort of work and this space, especially if you can do it intentionally.
It shouldn't feel like everything's on fire all the time, but it should feel like a lot of constraints that force you to make decisions quickly, which, if we're being honest, I think that's something that is not my strongest suit. So it's something that I'm excited to grow that muscle as part of this work.
But so, with that in mind, at this point, my goal is to just start getting the word out there into the world that we are looking to hire and get people interested and then, from there, build out what's the interview process going to look like? I will let you know when we get there; I will. I will figure that out.
But it's not something that I've...I haven't actually very intentionally thought about all of this. Because if I were to do that, it would delay the amount of time until I actually say into the world, "Hey, we're hiring." So I very purposely was like, I just need to say this into the world and then continue doing the next steps in that process.
I'm prone to the perfect is the enemy of the good thing, like, I want to have a complete plan and a 27-step checklist, and a Gantt chart, and a burndown before I take any first action. And I'm really trying to push back on that and be like, no, no, just do something, just take a step in the right direction.
There's actually a blog post that comes to mind, which is by Dave Rupert, who is a former guest on this podcast. It was wonderful getting to interview him. But he wrote a blog post. The title of it is Do the Next Right Thing, which is a line from a song in the movie Frozen 2, I believe. He is like, all right, stick with me here. And I know this is a movie for kids, maybe. But also, this is a very meaningful song.
And he framed it in a way that actually was surprisingly impactful to me. And it's that idea that I'm holding on to of you can't do it all, and you can't do it perfectly. Just do the next right thing. That's what you're going to do. So we'll link to that blog post in the show notes. But that's kind of where I'm at.
STEPH: I love that. I'm looking forward to reading that because that has been huge for me. I used to be held back by that idea of perfection. But then I realized other people were getting more work done more quickly. And so I was like, huh, maybe there's something to this just doing the next thing versus waiting for perfection that is really the right path.
So, how do folks reach out to you? Should they reach out to you on Twitter or email? What's best for you?
CHRIS: Oh yeah, Twitter. This is all probably going to be said at the end of the show as well. But Twitter @christoomey. ctoomey.com is my blog. I'm on GitHub. I make it very easy to contact me because I haven't regretted that up to this point in my life.
So basically, anywhere you find me on the internet, you will be able to email me or DM me or any of the things. I'm going to see how long I can hold on to that. I want to hold on to that forever. I want just a very open-door policy. So that's where I'm at right now, but any of those starting points.
And bikeshed.fm website will somehow link to me in any of the various forums, and they're all kind of linked to each other, so any of those are fine. I will happily take inquiries via any of the channels.
STEPH: Cool. Well, I'm excited to hear about how it goes.
CHRIS: Me too, frankly. But in a very small bit of little tech news or tech happenings from my holiday time, this was actually just before I started to go on break for the holidays. I had noticed that the test suite was getting very slow, like very, very slow but on my machine.
It was getting a little bit slow on CI, but the normal amount where we just keep adding new things. And we're adding a lot of feature specs because we want to have that holistic coverage over the whole application, and we can, so for now, we're doing that. But our spec suite had gotten up to six-ish minutes on CI and had a couple of other things. We have some linting and some TypeScript and things like that.
But on my machine, it was very slow. So I hadn't run the full spec suite in a long time. But I knew that running any individual spec took surprising amounts of time. And in the back of my head, I was like; I guess I hadn't configured Spring. That seems weird. I probably would have done that, but whatever.
And I'd never pushed on it more until one day I ran the specs. I ran one model spec, and it took 30 seconds or something like that. And I was like, well, that's absurd. And so I started to look into it. I did some scanning around the internet.
There was a wonderful post on the Giant Robots blog about how to look through things from Mike Wenger, a wonderful former thoughtboter. Unfortunately, none of the tips in there were anything meaningful for me. Everything was as I expected it to be. So I set it down.
And there were a couple of times that this happened to me where I'd be like, this is frustrating. I need to look into this a little bit more, but it was never worth investing more time. But I mentioned it in passing to one of the other developers on the team. And as a holiday gift to me, this person discovered the solution.
So let me describe a little bit more of what we've got going on here. On CI, which in theory is less powerful than my new, fancy M1 MacBook, we take about six minutes for the test suite. On my computer, it was taking 28 minutes and 30 seconds. So that's what we're working with. The factories are all doing normal things. We're not creating way too many database records or anything like that. So any thoughts, anything that you would inspect here?
STEPH: Ooh, you've already listed a number of good things that I would check.
CHRIS: Yeah, I took all the easy ones off the list. So this is a hard question at this point. To be clear, I had no ideas.
STEPH: Could you tell if there's a difference if it's like the boot-up time versus the actual test running?
CHRIS: Did that check; it is not the boot-up time. It is something that is happening in the process of running an individual spec.
STEPH: No, I'm drawing a blank. I can't think of what else I would check from there.
CHRIS: It's basically where I was at. Let me give you one additional piece of data, see if it does anything for you. I noticed that it happened basically whenever executing any factory. So I'd watch the logs. And if I create this record, it would do roughly what I expect it to.
It would create the record and maybe one or two associated records because that's how Factory Bot works. But it wasn't creating a giant cascade or waterfall of records under the hood. If we create a product, the product should have an associated user. So we'll see a product and a user insert. But for some reason, that line create whatever database record was very, very slow.
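For anyone less familiar with Factory Bot, the small, expected cascade Chris describes looks roughly like this; these factories are hypothetical:

    # spec/factories.rb -- hypothetical factories
    FactoryBot.define do
      factory :user do
        first_name { "Stephanie" }
        last_name  { "Viccari" }
      end

      factory :product do
        name { "Widget" }
        user # building a product also builds its associated user
      end
    end

    # create(:product) therefore issues two inserts: one user, one product.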
STEPH: Yeah, it's a good point, looking at factories because that's something I've noticed in triaging other tests is that I will often check to see how many records are created at a certain point because I've noticed there's a test where I think only one record is created, but I'll see 20. And that's an interesting artifact. But you're not running into that. But it sounds like there's more either some callback or transaction or something that's getting hung up and causing things to be slow.
CHRIS: I love those ideas. I didn't even know those were sort of ideas in the back of my head. I didn't know how to even try and chase that down. There was nothing in the logs. I couldn't see anything. And again, I just kept giving up. But again, this other developer on the team found the answer. But at this point, I'll just share the answer because I think we've run out of the good bits of the trivia.
It turns out bcrypt was the answer. So password-hashing was incredibly slow on my machine. What was interesting is I mentioned this to the other developer because they also have an M1. But there are three of us working on the project. The third developer does not have the M1 architecture. So that was an interesting thing. I was like, I feel like this maybe is a thing because we're both experiencing this, but the other developer isn't.
So it turns out bcrypt is wildly slow on the M1 architecture, which is sort of interesting as an artifact of like, what is password hashing, and how does it work? And in normal setups, I think the way it works is Devise will say by default, "We're going to do 12 runs of bcrypt." So like take the password, put it into the hashing algorithm, take the output, put it back into the hashing algorithm, and do that loop 12 times or whatever.
In test mode, it often will configure it to just run once, but it will still use the password hashing. Turns out even that was too slow for us. So we in test mode enabled it so that the password hashing algorithm was just the password. Don't do anything. Just return it directly. Turn off bcrypt; it's too painful for us. But it was very interesting to see that that was the case.
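In Devise terms, that "run it fewer times in test" knob is the stretches setting; the generated initializer includes a line along these lines (a sketch, not their exact change):

    # config/initializers/devise.rb
    Devise.setup do |config|
      # One bcrypt round in test; keep the full cost everywhere else.
      config.stretches = Rails.env.test? ? 1 : 12
    end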
STEPH: Yeah, I don't like that answer. [laughs] I'm not a fan. That is interesting and tricky. And I feel like the only way I would have found that...I'm curious how they found it because I feel like at that point, I would have started outputting something to figure out, okay, where is the slow process? What's the thing that's taking so long to return?
And if I can't see it by tailing the test logs, then I would start just using a puts statement to figure out what's taking a long time, and start trying to troubleshoot from there. So I'm curious, do you know how they identified that was the core issue?
CHRIS: Yes, actually. I'm looking back at the pull request right now. And I mentioned that this was related to the M1 architecture, but I don't think that's actually true because the blog post that they're linking to is a Collective Idea blog post: Tests Oddly Slow? Might be bcrypt. And then there's a related Rails issue.
They used TestProf, which is a tool that you can run that will examine, I think, the stack trace and say where we're spending the most time. And from that, they were able to see it looks like it's at the point where we're doing bcrypt. And so that's the answer.
As an aside, my test suite went from 28 minutes and 30 seconds to 1 minute and 30 seconds with this magical speed up.
STEPH: Nice. That's a great idea, TestProf. I don't know if I've used that tool. It rings a bell. But that's an awesome sales pitch for using TestProf.
CHRIS: Similarly, I don't think I'd ever use it before. But it truly was this wonderful holiday gift. Because the minute I switched over to this branch, I was like, oh my God, the tests are so fast. I have one of those fancy, new fast computers, [laughs] and now they're so fast.
STEPH: Wait, you had to switch to a branch? I figured it was something that you had to do special on your machine. So I'm intrigued how they fixed it for you, and then you switched to a branch and saw the speed increase.
CHRIS: So they opened a pull request. And that pull request had the change in the code. So it was a code-level configuration to say, "Hey, Devise, when you do the password hashing thing, maybe just don't, maybe be easy for a moment," [laughter] but only in the test configuration. So all I had to do was check out the branch, and then that configuration was part of the Rails helper setup, and then we were good to go from there.
I added an extra let me be terrified about this because the idea of not hashing passwords in production is terrifying. So let me raise...I put a couple of different guards against like, this should only ever run in test. I know it's in the spec support directory, so it shouldn't. Let me just add some other guards here just to superduper make sure we still hash passwords in production.
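The episode doesn't show the actual patch, but one hypothetical way to get that "just return the password" behavior in test, with the kind of guards Chris mentions, is to prepend over Devise's Encryptor from a spec support file:

    # spec/support/plain_text_passwords.rb -- hypothetical sketch, not the PR from the episode
    raise "Test-only password hashing patch loaded outside the test environment!" unless Rails.env.test?

    module PlainTextTestEncryptor
      # Skip bcrypt entirely: the "hash" is just the password itself.
      def digest(_klass, password)
        password
      end

      def compare(_klass, hashed_password, password)
        hashed_password == password
      end
    end

    Devise::Encryptor.singleton_class.prepend(PlainTextTestEncryptor)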
STEPH: Devise has a bcrypt chill mode. Good to know. [laughs] And I like all the guards you put in place too.
CHRIS: Yeah, it was really frankly such a relief to get that back to normal, is how I would describe it. But yeah, that's a fun little testing, and password hashing, and little adventure that I get to go on.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: So I have something that I've been wanting to ask you, and it's not tech-related. But we can make this personal and work however we want to tackle it. But there is a previous episode where we read a listener question from Brian about their self-diagnosed toxic trait being large pull requests. And Brian was being playful with the use of the term toxic trait. But it got me thinking, it's like, well, what is my toxic trait?
And it seems like a fun twist since you and I aren't really big on New Year's Eve resolutions. And in fact, I think you and I are more like, if we're interested in achieving a goal, we'd rather focus on building a habit versus this specific, ambiguous we're going to publish ten blogs this year. But rather, sit down and write for 15 minutes each day.
And it seemed like a fun twist instead of thinking about what are my toxic traits, personal, at work? Large pull request is a really fun example. So I'll let you choose. I can go first, or you can go first, but I'm excited to hear your thoughts on this one.
CHRIS: I think I've been talking too much. So let's have you go first at this point. Also, I want a few more seconds to think about my toxic trait.
STEPH: [laughs] All right, I have a couple. So that's an interesting place to start [laughs], but here we are. So I was even bold because I asked other people. Because I'm like, well, if I'm going to be fully self-aware, I can't just...I might lie to myself. So I'm going to have to ask some other people. So I asked other folks.
And my personal toxic trait is I am tardy. I am that person who I love to show up 5, 10, 15 minutes late. It's who I am. I don't find it a problem, but it often bothers other people. So that is my informed toxic trait. That might be a strong term for it. But that's the one that gives people the most grief.
CHRIS: Interesting. I do find the framing of I don't find my own tardiness to be a problem as a really interesting sort of lens on it. But okay, it's okay.
STEPH: I see it as, as long as I'm getting really good quality time with someone, if I'm five minutes late, I'm five minutes late. I think my voice going higher means I'm a little defensive. [laughs]
CHRIS: But at least you're self-aware about all of these aspects. [laughs] That's critical.
STEPH: I am self-aware, and most of the people in my life are also self-aware, although I do correct that behavior for work. That feels more important that I be on time for everything because I don't want anyone to feel that I am not valuing their time. But when it comes to friends and family, they thankfully accept me for who I am.
But then, on the work note, I started thinking about toxic traits there. And the one I came up with is that I'm a pretty empathetic person. And there's something that I learned that's called toxic empathy. And it's when you let people's emotions hijack your own emotions, or you'll prioritize someone else's physical or mental health over your own. So, for example, it could be letting another person's anxiety and stress keep you from getting your current tasks and responsibilities done.
And there's a really funny tweet that I saw where someone says, "Hey, can I vent to you about something?" And the person telling it from their perspective is crying in the middle of a breakdown. And they're like, "Yeah, sure, what's up?" And I felt seen by that tweet. I was like, yeah; this seems like something I would do. [laughs]
So over time, as something I'm aware of about myself, I've learned to set more boundaries and only keep relationships where equal support is given to both individuals.
And this circles back to the book anecdote that I shared where I had to be careful about the books that I read because they can really affect my mood based on how the characters are doing in that book. So yeah, that's mine. I have one other one that I want to talk about. But I'm going to pause there so you can go.
CHRIS: Okay, fun. [laughs] This is fun. And it is a challenging mental exercise. But it is also, I don't know, vulnerable, and you have to look inside and all that. I think I poked at one earlier on as we were talking, but the idea of perfect is the enemy of the good. And I don't mean this in the terrible job-interview way of "What's your worst trait?" where you answer, "I'm a perfectionist." I don't mean it in that way.
I mean, I have at times struggled to make progress because so much of me wants to build the complete plan and then very meticulously work through it in exactly the order that I define, sort of like a waterfall versus agile sort of thing.
And it is an ongoing, very intentional body of work for me to try and break myself of those habits, to try and accept: what's the best thing that I can do? How can I move forward? How can I identify things that I will regret later versus things that are probably fine? They're little messes that I can clean up, that sort of thing.
And even that, construing it as if there's a good choice and a bad choice and I'm trying to find the perfect choice, it's like almost nothing in the world actually falls into that shape. So perfect is the enemy of the good is a really useful phrase that I've held onto that helps me.
And it's like, aiming for that perfection will cause you to miss the good that is available. And so, trying to be very intentional with that is the work that I'm doing. But that I think is a toxic trait that I have.
STEPH: I really like what you just said about being able to identify regrets. That feels huge. If you can look at a moment and say, "I really want to get all this done. I will regret if I don't do this, but the rest of it can wait," that feels really significant.
So the other one that I wanted to talk about is actually one that I feel like I've overcome. So this one makes me happy because I feel like I'm in a much better space with it, but it's negative self-talk. And it's essentially just how you treat yourself when you make a mistake. Or what's your internal dialogue throughout the day?
And I used to be harsh on myself. If I made a mistake, I was upset, I was annoyed with myself, and I wouldn't have a kind voice. And I don't know if I've shared this with you. But over time, I've gotten much better at that.
And what has really helped me with it is instead of talking to myself in an unkind voice, I talk to myself how someone who loves me would talk to me. I'm not going to talk to a friend in a really terrible, mean voice, and I wouldn't expect them to talk to me that way.
So I channel someone that I know is very positive and supportive of me. And I will frame it in that context. So then, when I make a mistake, it's not a big deal. And I just will say kind things to myself or laugh about it and move through it.
And I found that has been very helpful and also funny and maybe a little embarrassing at times because when pairing, I will talk out loud to myself. And so I'll do something silly, and I'll laugh. I'm like, "Oh, Stephanie, that was silly." And the other person hears me say that. [laughs] So it's a little entertainment for them too, I suppose.
CHRIS: Having observed it, it is charming.
STEPH: It's something that I've noticed that a lot of people do, and we don't talk about a lot. I mean, there's imposter syndrome. People will talk about that. But we don't often talk about how critical we are of ourselves.
It's something I notice when I talk to people who I highly admire and just think are incredibly good at what they do. When they give me a glimpse into how they think about themselves at times, or how they will berate themselves for something they have done or because they didn't sit down for that 15 minutes of writing per day, it really highlights how common this is.
And I hope that if we talk about this more, the fact that people tend to have such a negative inner critical voice, then maybe we can encourage people to start filtering that voice into a kinder, more supportive voice, and not have this unhelpful energy that's holding us back from really enjoying our work and being our best selves.
CHRIS: That's so interesting to hear you say all of that for one of your traits because it's very similar to the last one for myself, which is I find that I do not feel safe unless (this is going to sound perhaps boastful, and I definitely do not mean it as boastful) I'm perfect. I guess the standard that I hold myself to versus the standard that I hold others to are wildly different.
Of course, for other people, yes, bugs will get into the code, or they may misunderstand something, or they may miscommunicate something, or they may forget something. But if I do that, I feel unsafe, which is a thing that I've slowly come to recognize. I'm like, well, that shouldn't be true because that's definitely not how I feel about other people. That's not a reasonable standard to hold.
But that needing to be perfectly secured on all fronts and have just this very defensible like, yeah, I did the work, and it's great, and that's all that's true in the world. That's not reasonable. I'm never going to achieve that. And so, for a long time, there have been moments where I just don't feel great as a result of this, as a result of the standard that I'm trying to hold myself to.
But very similarly, I have brought voices into my head. In my case, I've actually identified a board of directors which are random actual people from my world but then also celebrities or fake people, and I will have conversations with them in my head. And that is a true thing about me that I'm now saying on the internet, here we are.
STEPH: [laughs]
CHRIS: And I'm going to throw it out there. It is fantastic. It is one of my favorite things that I have in my world. As a pointed example of a time that I did this, I was running a race at one point, which I occasionally will run road races. I am not good at it at all.
But I was running this particular race. It was a five-mile January race a couple of years back. And I was getting towards the end, and I was just going way faster than I normally do. I was at the four-mile mark, and I was well ahead of pace.
I was like, what is this? I was on track to get a personal record. I was like, this is exciting. But I didn't know if I could finish. And so I started to consult the board of directors and just check in with them and see what they would think about this.
And I got weirdly emotional, and it was weirdly real is the thing that was very interesting, not like I actually believed that these people were running with me or anything of that nature. But the emotions and the feelings that I was able to build up in that moment were so real and so powerful and useful to me that it was just like, oh, okay, yeah, that's a neat trick. I'm going to hold on to that one.
And it has been continuously useful moving forward from that of like, yeah, I can just have random conversations with anyone and find useful things in that and then use that to feel better about how I'm working.
STEPH: I so love this idea. And I'm now thinking about who to put on my board of directors. [laughs]
CHRIS: I'm telling you, everybody should have one. As I'm saying this, there is definitely a portion of me that is very self-conscious that I'm saying this on the internet because this is probably one of the weirdest things that I do.
STEPH: [laughs]
CHRIS: But it is so valuable. And it's one of those like; I like getting over that hump of like, well, this is an odd little habit that I have, but the utility that I get from it and the value is great. So highly recommend it.
It's a fun game of who gets to go on your board. You can change it out every year. And it is interesting because the more formed picture that you have of the individual, the more you can have a real conversation with them, and that's fun.
STEPH: So, as I'm working on forming a board of directors, how do you separate? Is it based on one person is running work and one is finance? How does each person have a role?
CHRIS: So there are no rules in this game. [laughs] This is a ridiculous thing that I do. But I find value in it. It's sort of vaguely the same collection of individuals. Some of them are truly archetypal, even fictitious characters. As long as I can have a picture in my head of them and say, "What would they say in a situation?"
If you're considering, say, moving jobs, what would Arnold Schwarzenegger have to say about that? And you'd be surprised; the minute you ask it in your head, your brain is surprisingly good at these things. And it's like, let me paint The Terminator yelling at you to get the new job.
STEPH: [laughs]
CHRIS: Not get to the chopper, but get the new job. And it's surprisingly effective. And so I don't have a compartmentalized like, this is my work crew, this is my life crew. It's a nonsense collection of fake people in my head that I get to talk to. I'm saying this on the internet; here we are. [laughs]
STEPH: That makes sense to me, though, because as you're describing that situation, I do something similar, but I've just never thought about it in these concrete terms, where I have someone in mind, and it's a real person in my life who is my confidence person.
They're the one that I know they are very confident. They're going to push for the best deal for themselves. They're going to look out for themselves. They're going to look out for me. They're going to support me. I have that person. And so, even if I can't talk to them in reality, then I will still channel that energy.
And then I have someone else who's like my kind filter, and they're the person that's going to be very supportive. And you make mistakes, and it's not a big deal, and you learn, and you move on. And so I have those different...and in my mind, I just saw them as coaches.
Instead of board of directors, I just see them as different things that I don't see as strong in my character. And so I have these coaches in those particular areas that then I will pull energy from to then bolster myself in a particular way or skill.
This was fun. I'm so glad we talked about this because that gives a lot of insight into you, and into myself as well.
CHRIS: Yeah, we went deep on this episode.
STEPH: No tech but lots of deep personal insight.
CHRIS: I talked a little bit about bcrypt. [laughs] You can't stop me from talking about tech for an entire episode. But then I also talked about my board of directors and the conversations I have with myself, so I feel like I rounded it out pretty good.
STEPH: It's a very round episode.
CHRIS: Yeah, I agree. And with that roundedness, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph tells a cute story about escape artist huskies and, on a technical note, shares a journey involving class variables and module inheritance.
Chris talks about how he's starting to pursue analytics, and one of the things he's struggling with, and has always historically struggled with, is the idea of historical data. He's also noticed a lack of formalization of certain things and is working with his team to remedy that.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, it's an entirely new year. What is new in your new year?
STEPH: Well, the year is off to an interesting start because we helped rescue a husky.
CHRIS: Rescue as in now this is your dog or rescue as in the dog was trapped in a well, and another dog told you about the dog being trapped in a well, and then you helped the trapped? [laughs] Which of those situations are we working with?
STEPH: [laughs] I'm really wishing it was the second version [laughs] where there's a dog that tells me about another dog trapped in a well. No, this is a third version where there was a husky that was wandering around the gym that we go to. And so Tim, my husband, called and said that "There's this husky, and he's super sweet, but he seems very lost." And our gym is located near a major road, and so we were worried that he was going to wander about and get hit.
So I hopped into our car and took a crate and a leash, and he hopped right in. Clearly, he belonged to somebody; he'd just escaped. So he hops right in, and then we bring him home. But I put him in the backyard because I want to keep him separate from our dog, Utah, just because I don't know this dog, and I want to keep him safe. And I go back inside to grab a few things. I come back out, and the husky is gone. And I'm like, well, shit. [laughs] Now I'm starting to understand why this husky is missing or why this husky seemed lost.
So then I started looking for the husky, and Tim comes home. He's helping me look for the husky. And it was one of those awful moments where we live near...it's not a major road, but people tend to speed on it. And the husky and I happen to see each other across the road. And so the husky was like, oh, human friend and starts coming across the road towards me. And there's this large SUV that's also coming from the other direction. I'm like, oh, this is it. This is my nightmare. This is becoming real. This dog is about to get hit.
Thankfully, the driver saw the husky and stopped in time, so everything was fine. And the husky just finished trotting across the road to me, brought him in, kept him in the kennel in the garage. We didn't have any backyard adventures after that. The husky then thanked us by howling most of the night. [laughs] So this poor husky has had an adventure. We've had an adventure.
And then, around 4:30 in the morning, I go out because I'm checking on the husky and going to let him out. And I'm scrolling on the app called Nextdoor. And I see that someone posted a picture of this exact husky that's like, "Please help me find my dog." And I was like, yes. Because we were going to have to take him to a county shelter or at least go see if he had a chip so then we could return him. But thankfully, we found the owner. I found out the husky's name is Sebastian. And then we had him for a few more hours, and then we had a wonderful husky and human reunion.
CHRIS: That story had everything. It had ups; it had downs; it had huskies. It had escape artist huskies, in fact. I have...this is only through Reddit because that's how people learn about things in the world, but huskies are a rather vocal dog breed. So when you say the dog was howling, huskies have a particular way of almost singing, and it kind of sounds like yelling rather than more traditional dog sounds. Was that the experience you had?
STEPH: Luckily, it wasn't too bad. His howling was more just; he didn't want to be in the crate. He seems like an indoor dog. So he's like, what am I doing outside in the garage? I should be indoors. And so he wasn't too loud. It was more he was just bemoaning his situation.
But our dog Utah could hear him upset in the garage. And so that was also getting Utah upset because he didn't understand why there was a dog so close. And that was what led to the sleepless night because we couldn't get both of them to calm down. Because then, as soon as one of them calmed down, the other one would get him riled up again.
CHRIS: As it so often happens.
STEPH: I'm so grateful that it turned out to be a happy story, though. That part was wonderful. And if we see the husky again, now we know his name is Sebastian and that he'll just come home with us. [chuckles] And we'll know how to return him since he seems to be an escape artist.
CHRIS: And we were best friends forever.
STEPH: On a more technical note, I have quite the journey to share in regards to class variables and module inheritance. But before I dive in, I'm curious, what's new in your world?
CHRIS: Oh. Well, I'm excited to dig into that story. But I've got two smaller things in my world this week that are top of mind. I don't really have answers on them. I have more questions. One is we're starting to pursue analytics. We want to try and understand our system a little bit better. What is the experience of our users? How are they coming into the system? What are they doing? How long does it take them to do the things that we want them to do? All those sorts of questions you want to be able to answer about your application.
And one of the things that I'm struggling with that I've always historically struggled with is the idea of historical data. So data changes over time, and often we actually want to know about those transition points. We want to know about the different states that a user or any record in the system has been in. And I'm finding myself feeling the same pain that I felt many times and starting to think again about the relevant options out there in the world.
To give a slightly more pointed example of what we're dealing with, users come in, and then there are a few steps for them to actually sign up for the application. And so their user record or their application, if you will, will go through a couple of different states. So they can be basically approved directly, and now they're an active user of the system, that's one option.
But they can also end up in a state where they're pending review. And then eventually, depending on the outcome of that review, whether it's manual or someone intervenes or what have you, then eventually they can transition to either being denied or being accepted. And then they'll again be an active user. And so there's a question now of how many of the users that end up in that pending state end up transitioning into active.
And as I looked at the database, I was like, I do not have this information right now. I know their current state. And the logs could tell me all of this. We don't have proper log archiving right now. And I also don't have a system for, like, let me pull down gigabytes of logs and try and sift through that to understand the answer, especially for something domain level like this.
But this is one specific example that represents a category of things in my mind. The stuff that I've looked at in this space otherwise is Event Sourcing. So the idea that rather than having a discrete representation of the state of your application, you store every event as an individual log, essentially of like user did X, thing happened, Y occurred. And then, at any given point, you need to know about the state of your system; you just reduce all of those events through some magical reducer that produces the current state.
I also very recently read an article called Event sourcing is Hard. So I have that in my head as a counterpoint. This seems like a thing that is non-trivial to do, makes sense for a certain scale. But of course, like anything else, it has its trade-offs.
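As a toy illustration of that "reduce the events" idea, here is a minimal Ruby sketch; the event names and timestamps are entirely made up, not from the system being discussed:

    require "time"

    Event = Struct.new(:name, :at)

    # A made-up event log for one application working through a signup flow.
    events = [
      Event.new(:applied,        Time.parse("2022-01-02 09:00")),
      Event.new(:pending_review, Time.parse("2022-01-02 09:05")),
      Event.new(:approved,       Time.parse("2022-01-05 14:30")),
    ]

    # The current state is whatever folding over the full history lands on.
    current_state = events.reduce(:new) { |_state, event| event.name }
    # => :approved

    # The same history answers the transition question directly.
    went_from_pending_to_active =
      events.any? { |e| e.name == :pending_review } &&
      events.any? { |e| e.name == :approved }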
Another thing that I've looked at and never really pursued mostly because it's in a different ecosystem, is Datomic, D-A-T-O-M-I-C, which I think I've mentioned before. But it's a database that actually stores data in this historical format. And so you can ask for the current value, but then you can also ask for what are all the states that this user has been in? And what are the timestamps of those changes?
One small thing that we do have that I really like...so this is one example of us, I think, leaning into wanting to have more information, higher-fidelity information. Often we want to know something like, was this ticket paid? Did someone pay for this ticket? And so paid is a Boolean property on this ticket record within our system. So the ticket can be held for a little while and eventually gets paid. And now, yes, it has been paid for. It is good. You can use it. But often, we want to know not just that it's paid but when it was paid.
And so there's a gem that we are using on the project called time_for_a_boolean by former thoughtboter Caleb Hearth. And it does a wonderful job of basically, instead of storing a Boolean value in the database, you store a timestamp. But then the Boolean can be inferred. If there's a timestamp for that record in the database, then there are a bunch of helper methods that get introduced, like paid? That's now a method that I can ask, and it will tell us that. But we can also get the paid_at value.
And so we have this higher fidelity data when we need it, but we can also collapse it down to the simpler representation. Because most often, all we need to know is, have they paid for it? Cool, then they're good. They can come into the concert, that sort of thing. But yeah, this is a broader question that I don't have a great answer to. I think Postgres and Rails and just the nature of how we approach these applications pushes us in a certain direction.
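A minimal hand-rolled sketch of the pattern that gem wraps up (this shows the general idea, not necessarily the gem's exact API, and Ticket is just an illustrative model name):

    class Ticket < ApplicationRecord
      # Backed by a paid_at datetime column instead of a paid boolean column.
      def paid?
        paid_at.present?
      end

      def pay!
        update!(paid_at: Time.current)
      end
    end

    # Illustrative usage, given some ticket record:
    ticket.paid?    # => false
    ticket.pay!
    ticket.paid?    # => true, the collapsed Boolean view
    ticket.paid_at  # => the higher-fidelity timestamp when you need it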
Another thing I'm exploring is downstream analytic systems. What if I send a bunch of events to them, and they act as a half-event sourcing type thing? But yeah, this is going to be, I think, an open question for me for a while.
STEPH: Yeah, you said a lot of really good options. When you're talking about in our ecosystem, we get pushed in one direction or the other that makes me think of the projects that I've been on. Typically, what they'll reach for first is something like a Papertrail. So then, that way, they can check for the historical versions of an object and how it was changed and see who changed it. That's one way to track the logs. I like the idea that if you can outsource it and send all of those events to a logging system and then essentially ask for that data back as you need it.
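For anyone unfamiliar, that usually means the paper_trail gem; a rough sketch of what it provides, assuming a Patient model, a hypothetical status column, and the gem's standard versions table:

    class Patient < ApplicationRecord
      has_paper_trail  # records a row in the versions table on create, update, destroy
    end

    # Illustrative usage, given some patient record:
    patient.update!(status: "matched")
    patient.versions.last.whodunnit       # who made the change
    patient.paper_trail.previous_version  # what the record looked like before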
You made me think of a recent project as well where we needed to track the state. So it was a patient matching system. And we really needed to know when a patient match was created or disconnected and then who did that and perhaps for what reason. And to ensure that we had as much information as possible, we took that opportunity to just create a record for it.
So we had a patient match record or...I forget the name of the other one we created for when a patient did not have a match. But we were creating a record every time someone did that. Granted, that's probably not going to happen nearly as often as someone paying for an event or the situations that you're describing.
This was ideally infrequently that someone was going to unmatch a patient because it meant that our system had matched people that shouldn't be matched, and then a human had intervened. But yeah, it's interesting the space that you're in. And you listed all the good things that I would have thought of.
CHRIS: I think you listed Papertrail, which is one that I hadn't actually thought of yet for this particular instance. This only came up earlier today also. So this is new in my head that I'm really being pushed in this direction. But I think Papertrail could be a good solution for where we're at. But it is one of those where you often don't know the thing you want to know.
And I'm terrified of losing data of like; I had the data. I knew it at one point in time, but now I can't reknow it in the future because I didn't write it down. That's one of the things that I just don't want to happen in the world. And so finding those ways of like, how can we architect a system so that we can do the normal, straightforward, boring things most of the time but then when we need to expand out the analytics dimension of the system that we're working on...and trying to thread that needle and find the ideal optimization on both sides is a tricky one.
But yeah, I'll definitely take another look at Papertrail and see if that...at a minimum, I think that's a good solution for where we're at now. And then this is going to be a thought that's going to roll around in the back of my head for a while. So if I come up with anything else, perhaps a grander solution, I'll certainly bring that back to The Bike Shed. But yeah, what else is up in your world? I want to hear the story of the class variables.
STEPH: Well, it is quite a journey. So I hope you're ready. Specifically, I was pairing with Joël, who was working on fixing a test that had been marked as being skipped for a while. We weren't really sure why. We figured maybe because it's flaky. But then, as Joël had restored that test, he realized it was actually failing consistently.
So it was a test that was failing for a reason folks maybe didn't understand, but they decided to cancel or to skip that test. But they didn't actually want to get rid of it because it seemed like a pretty important test based on the description. So Joël saw it and got excited because it seemed very relevant to some of the work he was already doing. So then, he is now investigating why this test is failing consistently.
So in this story, we have four main characters: we have a class, two modules, and a class variable. So enter the class stage left. All right, so this class defines a class variable which I have to say is not something I work with very much in Ruby. So class variables kind of felt a bit novel and diving back into like, oh yeah, these are a thing.
So the class defines a class variable that's called cache and assigns this variable to an instance of a cache. So then this class includes two modules, which we'll call Module A and Module B. And we'll enter them stage right. And both of these modules look to see if cache is already set. And if it's not, they also set the cache class variable.
So with that information, in our test, we don't want to exercise the real cache just because then if other tests are reading from that cache, which is proving to be a source of flakiness for these tests, then they are overriding each other's expectations, and it's causing some of the tests to flake.
So instead, we want to use a fake cache, just like an in-memory cache. So the test and its setup is already overriding. It's setting that class variable to say, hey, I want you to be a fake cache, just be in-memory. However, while executing that test, one of the modules is checking to see if that cache is set, which is being set in our test setup.
So test setup sets the value. We're running the test, but then the module checks to see if it's set, and it's suddenly nil instead of using the cache that we had set. So now it's defaulting back to say, "Oh, it's unset. So let me go back and set it to the real cache," which is exactly what we're trying to avoid.
So then the question became, if we're setting the class variable in our class, why is it being populated in one of the modules but it's not being populated in the other module? So one of them has it set to the in-memory cache, but the other one does not.
So I'm going to gloss over some of the details because this stuff is pretty tangling. But essentially, when the test is running, and it's loading the class, and we are overriding that class variable, it's getting shared with one of the modules. Because as soon as one of the modules sets that class variable, there's a bidirectional link that gets set between the parent, which is the module in this case, and the class itself.
And as soon as that module sets the class variable, then they're going to talk to each other, and they're going to reference the same value. However, this only seems to happen for one of the parents. You can't do this for both. So if you have two parents that are trying to share a class variable with the same class, that doesn't work. So that's a particular bug that we were running into.
I do have some good news because if anybody is very nervous about the situation that I'm describing, I feel you. The good news is that in Ruby 3, they actually warn when this is happening and have introduced an error. So you don't have this inheritance confusion that can come out of the fact that these parent classes are also trying to share a class variable with this child class.
So in Ruby 3, if you are writing a class variable in that class but then you try to overwrite that class variable in the parent of that class or by the module that's being included, then an error is going to be raised. So it's going to warn you if you're creating this bidirectional link between those two class variables and that you shouldn't be overriding the child's ownership of that class variable.
Instead, if you're going to use class variables, which, one, is not my cup of tea, but if you're going to use class variables, it should be defined in the parent class, and then it can be shared downstream in the inheritance versus trying to go upstream and then having your ancestors essentially override some of those class variables.
So all of that is to say we were on a very interesting journey of understanding how class variables work, how the inheritance works, how that bidirectional link is getting established, and then how Ruby 3 comes in to warn us if something funky is happening.
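To make the shape of that bug concrete, here is a minimal sketch; the names are invented rather than the client's code, and the exact error message is paraphrased:

    module ModuleA
      def warm_cache
        # From the module's point of view the class's @@cache is invisible,
        # so this falls back to the "real" cache even after a test override.
        @@cache ||= :real_cache
      end
    end

    class Record
      include ModuleA

      @@cache = :in_memory_cache  # e.g. what a test setup tries to force

      def self.cache
        @@cache
      end
    end

    Record.new.warm_cache  # quietly defines a second @@cache, owned by ModuleA
    Record.cache
    # Older Rubies only warn (with warnings enabled) and one value silently wins.
    # Ruby 3.0+ raises a RuntimeError along the lines of
    # "class variable @@cache of Record is overtaken by ModuleA".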
CHRIS: Oh, that is interesting. And I'm now going to catalog that as a piece of information that my brain will retain for roughly the amount of time that we are recording this podcast and then immediately forget.
STEPH: As you should. [laughs]
CHRIS: It's one of the reasons that I try to avoid inheritance. And I try to avoid class variables as much as possible because of this category of problem, a very subtle bug that you have to try and really hone in. And you have to be very smart to debug this sort of thing. I don't want to be that smart. I want to code in a way that I can be less smart on any given Thursday. That's my goal in life.
I will ask one other question, though. So there's just a cache that this class and pair of modules are hanging around with, and then you want to swap it out for in-memory. This sounds remarkably like the Rails cache. Is this cache distinct, special? Could it not just be backed by Rails.cache, THE cache within the Rails context, which can be backed by Memcached, or Redis, or in-memory when you're in tests, or the NullStore, which I think is the default in development? That's probably how that goes.
Is there a particular reason? Is this a special cache? Is there additional behavior that this cache has beyond the normal thing? Or is it just like, at some point, someone's like, oh, I need a cache. I'm just going to use a class variable, that'll be easy, which it definitely is, but then you run into complexities.
And caches are one of those hard things to get right. So it's one where I would immediately be like, whoa, whoa, I would love to not make up our own cache here. So I'm wondering, is there a distinct reason, or is it just this happened, and here we are?
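For reference, when an app does lean on Rails.cache, the per-environment swap described above is just configuration; a sketch using standard Rails cache stores (the Redis URL environment variable is illustrative):

    # config/environments/test.rb
    config.cache_store = :memory_store

    # config/environments/development.rb
    config.cache_store = :null_store

    # config/environments/production.rb
    config.cache_store = :redis_cache_store, { url: ENV["REDIS_URL"] }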
STEPH: So I think we are using a custom cache that we are pointing to. So it is another service. It's not a Rails cache or an abstraction that we can point to and use. It is a different cache that we are using. And I'm trying to think back to the exact code. But there is a method that essentially checks to say, hey, should I use the real cache? Should I use the in-memory cache?
And that is something that we've explored to find a way to make this more global for the test suite because we really want to control this for all the tests. Because it's very easy to not realize in the test that you should avoid using that shared global cache. And so that way, the tests don't interact with each other but instead always use an individualized cache for each test to make sure that it is self-sufficient and independent. But we haven't gotten that far yet in figuring out how we can take a more global approach with this.
CHRIS: Gotcha. So I don't know the details. I assume there are reasons here. But just to play this out, if we find ourselves saying we have a reason to have a distinct cache, to have a special cache over here, but it's a cache...and caches fundamentally, that word always will raise my attention. It will be like, okay, this is a place that bugs will come and aggregate. And we need a distinct one that has special behavior as an external service, or that is just something like in...
There's a wonderful blog post that Mike Burns wrote at one point that was about...I think it was something like things that will make me look at your pull request in more detail. And I really loved it because it did encapsulate all of these like, yeah, there are good reasons to do everything on this list. But if you do any of them, I will look at your pull request and be like, oh, that's interesting. Why are we doing that, though? Do we have to do that? Are you sure? Are you triple sure we have to do that?
And this is definitely one of those things where caches automatically catch my attention. Even if we're using the built-in cache, I'm like, do we need to? Is that a definite thing? And then all the more so when we're using a custom bespoke one. Again, I assume that there are reasons that there's something special that's going on here. Perhaps the caching behavior is distinct from just it's Redis, and we throw data. And if it falls out the backside, that's fine. Maybe you need entirely different behavior here. But it is something that I would poke at a bunch.
STEPH: Yeah, you're asking a lot of good questions. I will have to go back and look at some of the code because we spent enough time in Ruby specifics that I didn't pay as much attention to the cache. Because right now, as we are working on these tests, we're trying to fix just the test without changing the application code, one, because that feels like a safer space. And if the test is flaky, we're just trying to change the test first.
But some of these tests we're starting to realize I'm not sure we can fix the test without also changing some of the application code, or the way that we do have to fix the test is really an incentive to back up and say maybe now's the time that we look at some of the application code. Because another question that comes to mind is why use a class variable, and does this need to be shared by the class and the modules?
And there's a part of me that suspects that maybe some of this logic was extracted to a module, but then it wasn't cleaned up in the other places. And so that's why we still have a reference. And it's essentially then being shared and set and unset and reset in those different places. So I think you ask some good questions, and I have some more questions of my own when we have time to revisit that portion of the test and application.
As another example of some of the tests that I've been working on, one of the tests that I...because we have a list, we can usually tell some of the tests that are flaky. So one of the ones that I was investigating was a similar issue where there was a shared resource, and someone had tried to mock it out. So they had taken the time to say, hey, I don't actually want to use that real resource that's over there; instead, I want to just return a canned value.
But instead, they'd accidentally stubbed out a class-level method instead of the instance-level method. And so it was running, but it wasn't actually stubbing anything, since the stubbed method wasn't the one getting called. So that was just an oversight for that test. So I fixed that test. But I noticed that we were using allow any instance of, so then I did take the time to go through that file and move away from the use of allow any instance of.
And for folks that are less familiar with allow any instance of, RSpec has some really great docs that talk about how it's very helpful for dealing with legacy code. But essentially, it is a code smell when you're using allow any instance of, because you are saying that my test or my code is so complex that I can't really mock out the specific instances that I want to and then return specific behavior. So instead, I'm having to use this more global approach to say, hey, for any instance of this class, I want you to mock out this method, versus this very specific instance that I know that I'm working with.
But we can include a link in the show notes because there's a nice write-up that talks about some of the reasons that allow any instance of is not recommended. So that's been kind of fun. There's been a little bit of joy to get to refactor away from that and actually stub out a specific instance.
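A rough RSpec illustration of the difference; PaymentGateway is a hypothetical class, not from the codebase in question:

    # The "any instance" version reaches into whatever object happens to get
    # built somewhere inside the code under test:
    allow_any_instance_of(PaymentGateway).to receive(:charge).and_return(:ok)

    # Stubbing a specific instance keeps the test explicit about which
    # collaborator is faked, and makes it harder to accidentally stub the
    # class-level method when you meant the instance-level one:
    gateway = instance_double(PaymentGateway, charge: :ok)
    allow(PaymentGateway).to receive(:new).and_return(gateway)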
Part of the work, too, that I'm noticing as Joël and I are going through these tests is leaving breadcrumbs for other developers as well because they have a very large team. And they're very junior friendly, which is just incredible. I love that so much about this company. And because they do hire a lot of juniors, then it is a tough codebase. It's a fairly old codebase.
So as these juniors are coming in, they're seeing a lot of these patterns. And they're propagating these old patterns that aren't necessarily the best patterns to propagate. But they're doing their best, and then they are reusing what they're seeing. So part of the work as we are revising these tests, my hope is that people will see some of these newer patterns and use those instead of following some of the older patterns.
CHRIS: I can only imagine that you're writing borderline novels in your pull request descriptions and commit messages there. I do wonder, is there an index of those that you're collecting? So there's like, here's the test remediation examples list, and you're slowly adding to them. This was a weird one with a class variable. And this was a weird one that had flakiness due to waiting or asynchronous behavior. And gathering examples of those, but specifically from the codebase.
I could see that being a really useful artifact because I happily traverse through git blame all the time. But I don't know that that's always a thing. And frankly, I have to work for it sometimes. So if there is that list of here are pull requests that specifically did X, Y, and Z, I think that could be super useful.
STEPH: Yeah, that's a great idea. And yes, they have some shared team documentation that speaks to specifically flaky tests because they're aware that this is a problem. They are working together to address this. And they have documentation that states ways to avoid flaky tests. If you encounter a flaky test, here are some of the ways that you can triage to find out what's wrong.
So as Joël and I have been finding good examples, then we've been contributing to that document. And they also have team meetings. So our plan is to attend some of those meetings and be like, "Hey, this is just some of the stuff that we've seen this week, some of the things that we improved and changed," and share the progress that we're making.
Since everyone is aware that there are these developers that are working hard to improve the test suite, but then share that information with the rest of the team so they too can feel...one, they can just see the changes that are taking place. But they too can also benefit and apply those strategies themselves when they see a flaky test.
Oh, but you did just remind me of a thing. So one of the tests that I was going through...I'm very intentionally going through and making the smallest change possible. So I will do the gross, ugly fix whatever it is to get something to pass, and then I will commit it. And then I'll think about okay, well, how can I make this better? So essentially, I have the fix, whether it's pretty or not. And then, after that, I start to have other commits that make it prettier.
And so, I had a pull request that had four commits that told the story that I was very happy about and progressed along in a more positive direction. And I issued that, and I discovered that when Gerrit sees four commits, it splits each of them into its own change request.
And so, instead of having what I thought would be this nice story, now got split across these four change requests. And I thought, well, that's less helpful. So I ended up squashing two of them, but I still kept three of them because they stood alone, and each told a story. But that's something that I've learned about Gerrit.
CHRIS: Always so interesting how our tools shape our work.
STEPH: And it made me think back to the listener who asked the question about ensuring that CI runs for each commit. Well, here you go, Gerrit. [chuckles] Gerrit does it for you. It ensures that every commit gets split into its own change request.
CHRIS: I mean, as you said earlier, not my cup of tea but... [laughs]
STEPH: Yeah, I'm still lukewarm. I'm still discovering Gerrit and how we get along.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
What else is going on in your world?
CHRIS: In my world, we keep adding new users to the system. We keep doing more stuff. These are all wonderful things, the direction you certainly want to be heading. But as we're doing that, I've recognized that we had a lack of process and a lack of formalization of certain things.
And a lot of the noise of the work was just coming to me because I was the person that everybody knew. I can ask a question; Chris will know the answer, et cetera. And then there were things that we needed to keep an eye on. But because it was everyone's job, it was no one's job. So we've introduced the idea of a point person on the engineering team. So this is a role that will rotate each week. I think you and I have worked on a handful of projects that had something similar to this.
There was a team that we worked with that had an ad hoc list, which were just little tasks that needed to be done by developers. So there was one person who would run with that. I've heard it called captain before, the sprint captain. We're not really doing sprints. So for various reasons, that title didn't work for me. But point person is what I went with here.
And so the idea is rather than having product management or anyone else in the organization just individually reaching out to developers, we want to try and choke that off, have a single point of communication. And so just today, I introduced into Slack, a group, but it's a group of one person. So @pointdev is technically the handle for this person. It’s a group in Slack. And each week, we'll rotate who the members of that team are. And technically, you could add multiple, but the idea is this is just one person. So we'll rotate the person.
And what ends up happening is if anyone...say the product manager says, "@pointdev, what's the status on..." blah, blah, blah, that will notify the person who is the point person that week. So that's a nice feature in Slack so that we can condense it down and say rather than asking individuals, ask this alias. We're introducing one layer of abstraction in our communication tools, much like we do in our software.
So I'm drafting now the list of like, here's all the stuff that I think this person...because we're trying to push all of the quote, unquote, "other work" the non-product feature development work into this person's purview for a given week. So it's monitor Sentry for any new errors as they come up, triage them, and figure out what we want to do.
Ideally, and this is perhaps aspirational, I would like to keep inbox zero in Sentry. I know how you feel about that more generally and perhaps even more specifically within the world of errors, but that's my dream. We're going to see how it goes.
STEPH: I don't know if people know I am the opposite of inbox zero. This is the life that I'm living.
CHRIS: What about with errors, though? What about something like Sentry?
STEPH: I want to say that I would be a better human with my email. But I'm going to be honest [laughs] and say that I would probably have the same approach where I am not an inbox zero person. I've come to terms with it. I used to really strive and think I needed to change. But I have reached a point of comfort with this is who I am. There are many like us, so shout out to all y'all.
CHRIS: Oh yeah, by far the more common approach, I think. So specifically with the errors, I struggle a bit with it because what ends up happening is we are implicitly ignoring the errors. And if we're doing that, I would rather just sit around and have a conversation and be like, let's just explicitly ignore them. There's a button in the UI. We can ignore them.
If this is not a real error, we can add it to the list of things that we do not report on. We can ignore that error. We can ignore it for a week and add a card to Trello that has a due date that says, "Hey, we got to work on this." But let's take that implicit indifference to that particular error mode of our application and make it explicit.
Let's draw that line in the sand such that when I see a new error pop up, I'm like, oh, that seems like something I should do something about. I really want high signal-to-noise when I'm seeing errors coming. And so I'm willing to work for that. But it is a trade-off, and it does take effort.
And it's noisy, especially browser extensions, and whatnot, just fighting the page. Facebook showed up one day. I don't know how Facebook got in there. Someone was browsing our website from within Facebook's browser, which I didn't know was a thing, but they had their own thing. And it fires a bunch of events, and Sentry was just like, let me slurp all of those up. Those seem fun. That was noisy. So we had to turn those off, but we explicitly turned them off.
STEPH: I do like the approach that you're taking where it's one person, and then it's a rotating shift because I think that makes it more reasonable for someone's who's like, hey, this is going to be noisy for a week. And then you're going to look through these emails and check all these errors, and then either silence them because you don't think that they're interesting or mute them for now. Or if you're going to convert it into a ticket, set a due date, whatever the triage approach is going to be.
But that feels more achievable versus inbox zero for life is just exhausting. But I feel like if you're doing it rotating week by week, that seems like a nice approach and also easier to keep it at inbox zero because that way, you are keeping up with all the errors. Because I agree; otherwise, what's the point of tracking all the errors if you're just going to ignore them?
CHRIS: Yeah, definitely the rotating, I think, is critical. I think the other thing that's been critical specifically on the error front is we've had now a handful of meetings where we triage the backlog together, the backlog of errors. So like, what all is coming into Sentry? What's going on? And we go through the process of determining is this a real thing? Should we fix this? Should we ignore it?
And we do that together so that it becomes not just one person's intuition about whether or not this is important or not or what the source of it might be but a shared intuition such that now any one of us, when it's our week, can ideally represent the team in that way and be like, never mind, never tell us about this again because it's very easy to silence things in Sentry that you would actually like to know about when they become real. But right now, we have this edge case that is an ignorable version. So trying to get there that's been fun.
But yeah, once again, Sentry, that's one of the things on this person's list. There are ad hoc support tickets for our operations team. So anything that needs to happen on a user's behalf that currently needs a developer to console, let's funnel all of those to this one individual, respond to any new questions. So this is where that Slack handle will be useful.
Check for any stuck jobs in Sidekiq. So is there anything that's been retrying for a while? Because it probably shouldn't be. Maybe one or two retries is cool, but past that, something has gone wrong. And we should either get in there and fix it or just kill that job because it's never going to succeed, which is quite often the case. But go in there, keep an eye on those, and then look for anything.
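A rough sketch of that kind of check using Sidekiq's retry set API; the retry-count threshold here is arbitrary:

    require "sidekiq/api"

    # Walk the retry set and flag jobs that have been retried more than a
    # couple of times; those likely need a human to fix or kill them.
    Sidekiq::RetrySet.new.each do |job|
      retries = job["retry_count"].to_i
      next if retries <= 2

      puts "#{job.klass} has retried #{retries} times (next retry at #{job.at})"
    end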
We're starting to use due dates within Trello, which is currently our project management system. We'll see. Someday we're definitely going to grow out of that. But for now, it's good enough and checking for anything that's overdue or coming up in the next week in terms of due dates and just making sure that we're being responsive to that.
And so, I really like the idea of having this be a named set of things and a singular focus for one individual. Because again, that idea of like, if it's everybody's job, it's nobody's job. Or if it's nobody's job, then it's my job, and I don't want it to exclusively be my job. [chuckles] So I'm trying to make it not exclusively my job and to share the knowledge about it and make sure that these are skills that we all have and ideas and et cetera. But also, I would be fine to answer fewer questions in Slack each day.
STEPH: I have to admit, as soon as you were telling me that you had established this role, I was quietly congratulating you on helping delegate some of these responsibilities to the team. Because like you said, you are then the person that takes on all these tasks.
CHRIS: There's a laziness to that. Like, it's easy for me to just answer the questions. It's harder for me to put up a wall and say, "No, no, we have a process for this." And quite possibly, what's going to happen behind the scenes is that questions are going to come in to whoever is this point person. They're not going to know the answer. They're going to reach out to me, and then that conversation is still going to happen. But even by doing that now, now that person will see that answer, will understand the thinking or the background, the context that I have.
And so it's that weird thing of like, it would be so much easier for me to just answer one question. But to answer all the questions, well, I can't do that. And so I'm working to try and do more of the delegation to try and hand things off when they're in a known state and to identify this sort of stuff so that the team broadly can be stronger and better able to support everyone else in the organization. So that's the dream. We'll see how it goes.
STEPH: Yeah, I love that approach. I'm also thinking how interesting this role is because I'm imagining a mix between someone who is like the front point person at like an ER. So like, things are coming in, and they're in a tragic state and need help and need to be diagnosed.
But at the same time, you mentioned they're going around. They're checking Sidekiq. They're looking at some email errors. So they're also that night shift guard that's walking around with a flashlight just poking in each room. So it seems like a very stressful and low-key role all at the same time, all mixed up into one week. That person probably needs a beer at the end of the week.
CHRIS: There is a version of the story in my head that is...I wouldn't say this feels like a failure mode, but I would rather this not have to exist at all. I would rather things to be calmly humming along and not require a dedicated person each week to deal with the noise. I don't think that's realistic, certainly not as early on as we are in our organization. But I do wonder, is this a crutch? Is this something that we should be paying more attention to?
And I know in teams that you and I have worked with in the past that has been a recognition of like, this is a crutch. But it's a costly crutch. Like, we're taking an entire...in our case, it's not requiring the entirety of a developer's week. They're able to do this pretty easily and then still get a bunch...like, 75% of their time is still feature work. But we're just choking down who's the person that will be responding to questions when they pop up so that fewer individuals are interrupted?
But I have seen organizations where this definitely filled an entire week and spilled out beyond that. And then there was the recognition of that and the addition of another person who comes along and tries to fix stuff along the way as opposed to just responding. And so I want to make sure this isn't a band-aid but is, in fact, a necessary layer that we then try and shore up. You know, we should have fewer errors. That feels true. Okay, cool. Let's fix the bugs in the app.
And these ad hoc things that an admin needs to have done can that be a button in the UI? Can they actually self-serve in those cases? And we're slowly moving towards those. Ideally, fewer jobs get stuck in Sidekiq. And so, my hope is that this isn't a job that gets harder and harder over time. It's a job that potentially, if we're being honest, probably stays about this hard. I don't think it's ever going to be just like, nope, nobody needs to do anything. The app just runs, and it's great. And it never has bugs.
But that is a question in my mind as I start to embrace this thing of like one person is dedicated for a week to this. And if right now it's only 25% of their time, okay, that's probably fine. But if suddenly it's 50% of their time or 75% or 100% of their time for that whole week, that becomes too high of a bar in my mind. And I want to keep a close eye on it and make sure it's not trending in that direction. And I will be one of the people on the rotation. So I'll get to be in the trenches.
STEPH: I appreciate all the thoughtfulness that you're putting into it. And I'm thinking back on a project where we had a similar rotation because we had an issue Slack channel. And so anytime there was an issue, then it would get posted in there. And before, it was going out to everyone, or there was one particular person that was always picking it up and then trying to delegate it to others as they needed to. But then we started a similar rotation.
And one of the key benefits that I found from that is it signaled to the team, hey, this person might get pulled away. They can pick another ticket or two, but we need to give them lower priority tickets because there's a chance that they're going to get pulled away to work on something else. And that's okay, and we're going to plan for it.
Versus without this role in mind, then you had people all taking on high priority tickets, but then someone had to be the one that's like, well, I'm going to punt on my high priority and feel stressed about the fact that I've got this other thing to deal with. But then, I didn't actually do the work that I planned for.
So I feel like you're helping introduce calmness into the week, even if it is a stressful role. But then there's the goal that this becomes less of a stressful role, and if you see it trending in the opposite direction, then that's something to investigate.
But I also feel like triage and communication are such an important part of being a developer that it also feels like very relevant upskilling for the whole team to go through. So there's also the benefit that this approach empowers the rest of the team to experience it, build empathy, look for additional fixes, and then also build these important skills.
Overall, I really applaud your thoughtfulness. And I think it's a really good idea. And it will be interesting to see which direction that this role trends if it gets easier or if it's getting harder over time.
CHRIS: Well, thanks. I appreciate that. And I'll certainly report back as we develop this but hopefully, it stays about where it is. That feels right. And I think I'll probably...that's one of those things that I will monitor. And if I feel it moving in the wrong direction, then step in and try and get it back to this space because this feels like a maintainable reasonable amount.
And we shouldn't be fixing every bug and adding every button to the UI. That's just actually not how it works, unfortunately; we would love to. Well, that's not true. You shouldn't have every button in the UI. That's so many buttons. But broadly, I hope we can maintain roughly this, and having identified it and laid it out now, I'm feeling good about having that structure. So yeah, we'll see how it goes. We'll report back. But again, thank you for the kind words.
With that tour of a bunch of different things, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes as it really helps other people find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeee!!!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Steph and Chris recap their favorite things of 2019 and 2020 and share their 2021 list. Happy Holidays, y'all!
Steph:
“The longer I’m in the software game, the more I want things to be calm” - Steph
Chris:
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Listen to episodes from 2020 and 2019 👇
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Are we taking off the next few weeks?
CHRIS: According to Steph's schedule I think we are.
STEPH: You know, that's Steph and her schedules.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: Well, this will be our last episode for 2021. So that's new collectively in all of our worlds, I think, which is exciting. We'll be taking off the next few weeks for the holidays. But as has become tradition, I think it is time for you and I to review some top 10 lists from last year or two top 5 lists, and then maybe you share some new favorite things. How does that sound?
STEPH: Yeah, I'm excited. I love that we take this time to reflect about what we enjoyed about the past year and share our top things. It's like Oprah's list. You know Oprah has her list of favorite things, and we have our list of favorite things.
CHRIS: It is almost exactly like Oprah.
STEPH: It feels a bit blasphemous to compare our list to Oprah's list but here we are. [laughs]
CHRIS: I tried to give the hyperbolic sarcasm there to be like, and let us be respectful of...but yes.
STEPH: Good. You got it. [laughs] So to prep for sharing our new list of favorite things, do you want to start by going through the list of favorite things from last year?
CHRIS: Sure. And just as a reminder, if anyone does want to listen to the episode and hear a bit more detail about our thinking on these, we covered this in Episode 274. But for me, the 5 items that I covered last year were Tailwind CSS. So the utility-first CSS framework which I continue to love and use on every project that I possibly can.
Remote work, that was a relatively new and novel thing for me at that point. Similarly, I have continued on with that and if anything, leaned into it all the more.
Next up is Svelte. Svelte is a JavaScript framework that I have grown to love even more over the past year. Spoiler alert, that may show up later in the episode.
Next up, we had Postgres, PostgreSQL, the database engine that is wonderful and that I had spent a lot of time with last year. Frankly, I haven't spent as much time with it this year, but it's still something that's near and dear to my heart.
And the last was Inertia.js, a framework that although it's got js in the name, it's both server-side and client-side and binds it together and gives a wonderful experience. I believe I've talked enough about that throughout the rest of this year that perhaps you've heard me mention it in a previous episode, listener.
But yeah, that was my top 5 for 2020. What about you, Steph?
STEPH: All right, so the things that I had from last year are one-on-ones. I don't remember exactly what I said about them, but I am still a fan. I still very much enjoy them. I learned a ton from them either participating or leading them.
Rails, also still a fan. Async communication, yes, love it. It really helps more people be involved in the conversation when it's async communication. Feature flags, also still a fan. And Elixir and Phoenix are on the list also, still a fan, although frankly, I haven't done as much with them.
CHRIS: So, Steph, I have a question for you. Actually in preparing for this episode, I re-listened to Episode 274, which had our top 10 list for 2020. And then I also listened to 273, which was the previous episode which had our retrospective on the list from 2019. So at this point, I've now reviewed all of these lists, which is now 10 items, and 10 items for each of us.
And what was interesting to me, at least from my side, and especially as I was preparing for this year, is stuff's mostly stayed the same. I kind of still like most of the items on the list. And certainly, nothing has changed in a deep way where I'm like, you know, I used to really like this, but I don't like it at all anymore. So I'm wondering, is that the same for you? Is there anything that you've changed your mind on amongst this set of items?
STEPH: Looking at the list, I still really like everything on the list. So there's nothing that I've changed my mind about significantly. I'm realizing as we're creating this list each year, it's likely a list that I'm going to continue to grow and add to instead of subtract from.
Most of the stuff, I guess because we have a full year by the time we get to this point, I feel pretty good that this is something that I like in the world versus something that may be more of a month to month experiment that then I'd change my mind on. So everything on the list still rings true for me. And I have some new stuff that I'm going to add to that list.
CHRIS: Ooh, new stuff, exciting. Yes, this is what we're here for. So, Steph, let's dive in. What do you got?
STEPH: So in preparation for this episode, I started thinking through all the different ideas that I wanted to add to my list and all the topics I'm excited about. And I started to wonder what are the things that we really said? What can data tell us about these episodes versus just trying to think through my feelings of the past 12 months? Because it's very easy that I forget things that were important to me at the moment. So I started wondering, what data could I collect from the different episodes?
And now that we have transcripts, which started back in, I think, around May of this year, I built a small little Ruby program to perform a word frequency analysis and generate a very low-tech version of a word cloud. I wanted to find what are some of the top things that we said? And it came out rather poetic. And I tried to ignore some of the small words, just prepositions, and a, and the, and things like that that were less interesting.
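(For reference, a minimal sketch of the kind of word-frequency script described here, assuming a plain-text transcript file; the file path, stop-word list, and top-10 cutoff are illustrative guesses, not the actual program:)

    # word_frequency.rb -- rough sketch of a transcript word-frequency count.
    # The transcript path and stop-word list are assumptions for illustration.
    STOP_WORDS = %w[a an and the to of in on for is it that this so we i you]

    words = File.read("transcripts/episode-274.txt")
                .downcase
                .scan(/[a-z']+/)                       # word-like tokens only
                .reject { |word| STOP_WORDS.include?(word) }

    words.tally                                        # { "code" => 120, ... }
         .sort_by { |_word, count| -count }
         .first(10)
         .each { |word, count| puts "#{word}: #{count}" }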
So here I've got a couple of different lists, a couple of different facts that we can explore. So here are the top 10 words that we said. So there's code, great, write, feature, question, idea, interesting, love, no, and laughs.
CHRIS: Laughs is in parentheses or brackets to say this is where they're laughing?
STEPH: Exactly.
CHRIS: Wow. A, that feels true. B, that's just delightful. And I'm so glad that you did this. For anyone listening at home, this is a complete surprise to me too. So I'm really enjoying going on this ride. But yeah, that feels like a representative list.
STEPH: There's another poetic one because then I started looking at some of the episodes individually as I was building this out to handle all the episodes. This is over 28 episodes. And so I pulled a specific episode with Joël Quenneville where we talked a lot about debugging.
And so the top words from that episode are debugging, people, think, don't, love, time, bug. And it's fun no matter how you hear that or read that you get something new out of it each time. And now I'm really into this word frequency art or whatever it is that we're going to call it.
CHRIS: That's fantastic that I want a little bumper sticker of that amongst the bumper stickers that I've claimed I want from things we say on the show. I want that one with Joël's face on it right there. That seems like a perfect item.
STEPH: So I also tried to figure out how many times we said it depends. And that one got a little trickier, and I was also surprised. But according to the data, we've said it depends around 10 times. And I feel like that's low.
CHRIS: That feels very low, huh.
STEPH: It does. I agree. That one feels a bit low. And so those were the fun, more poetic like, what are the top things that we said? And then I started looking for more what are the technical things that we talked about, some of the different frameworks or languages? So I started looking specifically for those. So over these 28 episodes, we said Rails 200 times, which is a lot. [laughs]
CHRIS: Good job, Rails. Way to show up on the leaderboard.
STEPH: And then next in that list is data, then some form of test, tests, or testing, which we said around 230 times, then database. Ruby's next on the list at 140, then Sidekiq, retro. Monitoring is a big one. JavaScript, agile, REST. React is at 52. I was intrigued that React was spoken as much because I know I haven't worked in React in a long time. So I'm going to give you credit for that one. Manager, Svelte, and Inertia are each around 40 to 45 times that they were spoken. Then Python, Postgres, Rust, Elixir, Elm, Vim, and tmux.
CHRIS: Wow. I like that list.
STEPH: One other fun data point is that we said the word hard 20 times more than the word easy.
CHRIS: That feels fitting.
STEPH: It does, right?
CHRIS: I love this work, but it's not easy.
STEPH: Yeah, I appreciated that. I was like, that's true.
CHRIS: [chuckles]
STEPH: So that was some fun with words and frequency analysis, and it was neat. So I'm excited to do this for more episodes and to do it per episode because it highlights some interesting themes for the episode.
So pulling just from the data, then I'd say the top things from my list are Rails, data, testing, Ruby, Sidekiq, and retro. Those are the top things. But I'm still going to be creative with it and add to the list the things that I want to include on there.
So the first one, this one is a bit of a repeat, so that's why I'm going to bring it upfront. But it's feature flags and calm deploys. That is something I am still a big fan of and really appreciate. It can lead to some slightly more tedious workflows depending on how diligent you are in feature flagging your work and keeping new work behind that gate so you can turn it on when you want to. Also, the data supports it. We said flag like 67 times over 28 episodes. And I'm betting that was coupled with feature flags. So I feel pretty good about that one.
CHRIS: I think half of them were probably flag football is my guess if I remember what we talked about.
STEPH: We do play a lot of flag football, uh-huh.
CHRIS: It's interesting that you're leading with that. So one of the other items that I pulled out as I was reviewing the previous episodes was a quote that you made that resonated deeply with me in that moment and all the more so now. And everything I think about software probably falls a little bit under this bucket, which is...this is the quote from you, "The longer I'm in the software game, the more I want things to be calm."
And I think my response in the moment, which is why this was primed in my head, was I want a bumper sticker of that. I want it on a t-shirt or get a tattoo of it. [laughs] And I stand by those words because that's a beautiful sentiment and definitely, for me, speaks to a lot of the work that I want to do and how I think about what I put importance on.
STEPH: Thanks. Yeah, I find it makes a really big difference in terms of the quality of the work and then also, the happiness of the team. How about you, what's first on your list?
CHRIS: First on my list this year is going to be...it's a little bit of an abstract concept. So we'll see how well I can define it in a small amount of time. But the phrase in my mind is pushing logic back to the server. Over the past many years, let's call it like a decade or so, I've seen this gradual shift where more and more logic is being implemented client-side. And client-side can mean a bunch of things. It can mean a JavaScript client that gets downloaded and then runs. It has lots of smarts in it and knows about all the business logic but also iOS apps, Android apps, et cetera.
And every context that I've worked on that I felt the pain of now we've got our business logic distributed across all these different systems. I've seen some really interesting approaches to try and bundle up the logic and use it in a shared library. Perhaps in JavaScript, I've even seen some other approaches where this is a bundled C++ library that we somehow embed in every context that we want to run. And that's where the business logic is. But fundamentally, I felt a ton of pain from that.
And I've always had this idea in the back of my head that wherever possible, I like to pull logic back to the server because the server is this safe space with all the knowledge that I want in the world. And I can have secret environment variables, and I can access the database. And I can combine different sets of data very easily. And I can have the logic implemented in a single place. And that's wonderful.
And more and more, I've started to pursue this. Some of my work with GraphQL was an attempt to get this because a REST API is just like, here's a bunch of data. Combine it how you will. Have fun, front end. Whereas the GraphQL API starts to be more about the relationships between the data and the connections. And you can ask more interesting questions of a GraphQL API in my mind and ideally then push some of that logic back to the server because the GraphQL API encodes it in relationships and whatnot.
But probably the thing that has helped me the most on this is Inertia.js which was on my list last year. It remains something that, if anything, I tripled down on my enjoyment of Inertia.js. But it allows me to continue building my logic such that it's on the server-side.
And I don't need to implement a client that knows hey when a user adds an item to their cart, I also need to update that little icon in the top-right corner. I don't even need to think about that because Inertia uses the traditional request-response lifecycle, but then handles it in a smart, forward-thinking possibly animated way. And I'm just very happy with that and all of the explorations that I've had around pushing logic back to the server.
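(For listeners who haven't used Inertia, a rough sketch of the server-driven style being described, using the inertia-rails render call; the controller, component name, and props here are hypothetical:)

    # Hypothetical controller action: the server owns the business logic and
    # hands the page component everything it needs as props on each request.
    class CartsController < ApplicationController
      def show
        cart = current_user.cart

        render inertia: "Carts/Show", props: {
          items: cart.items.as_json(only: [:id, :name, :price]),
          item_count: cart.items.count # no separate client-side state to sync
        }
      end
    end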
And actually, as I explore this even a little bit more, at my company, we're now starting to explore building native mobile apps. And we're trying to figure out what that means for us as I try and cling desperately to this idea of pushing logic back to the server. So that'll be a topic that I would love to chat with you more about in future episodes. But I think I found a way to, as I said, cling to this idea of pushing logic back to the server. So yeah, that is item number 1 for me.
STEPH: I'm very excited for those future conversations. You reminded me of something that I've heard from someone else at thoughtbot. I believe it's Stephen Lindbergh that said this. He was giving a presentation talking about forms. And one of the things he said was, "Stop using client-side form validations." And that's a bit of a blanket statement. And there are always some caveats with those statements. But when he said that, I thought, yeah, that sounds great because you have to validate it on the back end anyways. Let me rephrase that, you should validate it on the back end. A lot of applications don't.
CHRIS: I would go with have to just some opt to not despite the fact that they definitely have to.
STEPH: That's true. I just wanted to fuss at the people who aren't doing it. [laughs]
CHRIS: Steph's getting to fussing.
STEPH: And I just really liked what he said because I understand why people started adding more client-side validations because then they think well, this creates a better experience for the user. We can give them faster feedback.
But you can get to the point that you're actually hindering their experience...like if you've been filling out a form and it's telling you that you're incorrect because you haven't met the specific regex they're looking for, that annoying behavior that you see on forms is often the result of client-side form validations.
Also, if you're at the point that you're using form validations to drive the user to do the next thing, there's a good chance that form is too big, and there's an opportunity to break it up into a smaller workflow. That way, you're not using validations to essentially coerce or force a user into a particular path, and you can use more helpful ways to guide them through that process.
So I'm very excited for our future conversations about pushing more things to the server. And side note, stop using client-side form validations or just reduce it. Dial it down. Don't dial it up, dial it down.
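(As a concrete illustration of leaning on the server for validation, a minimal sketch; the model and fields are hypothetical:)

    # Validation rules live in one place, on the server, and the form simply
    # re-renders with errors instead of relying on client-side regexes.
    class Signup < ApplicationRecord
      validates :email, presence: true,
                        format: { with: URI::MailTo::EMAIL_REGEXP }
      validates :password, length: { minimum: 12 }
    end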
CHRIS: Oh yeah. That is such a great example of this theme. And again, hopefully, we'll chat more about this in future episodes. But yeah, so that's item number 1 for me. What is item number 2 on your list?
STEPH: So this one, I really want to say thanks to you because I feel like you've brought a lot of topics and conversation about this particular idea to the show. And that has really resonated with me and influenced me as I've joined different projects that either have observable systems, which has been really helpful as we jump into the project, debug, and then contribute to that system, or that are lacking that observability, and that just makes work and life so much harder.
So thank you to you and everyone else that has contributed in having conversations about observable systems on the show. Specifically, I'm thinking of the episode with Charity Majors where she talks about observable systems. And so that is number 2 on my list.
CHRIS: Oh yeah. I do love some observability. It's one of those ideas that once you get it in your head, you can't shake it. You can't unsee that you can't see what's going on in your runtime system.
I will say the app that we're building, the core Rails application, we've instrumented it heavily because we're trying to get in early on the observability game. But now we can see everything. And we've yet to really get to that deep understanding of like, that's just noise. We don't need to care about it. So let's silence those. Let's dial these up. These should go piped into Slack and how to sort of triage that.
So right now, it is a bit noisy in our world. I'd rather that than the silence, than the crickets of I don't know, something happened. There is a form validation, but it seems fine. It's happened a lot since the last deploy, but that seems fine. I'm trying to avoid that kind of stuff. But as a result, sort of the rough edges of the early times in observability, but yeah, huge fan of that. Glad that that made it onto your list.
For number 2 for me, this is a recurring theme from last year, but I've doubled if not tripled down on it. So this will be Svelte, Svelte the JavaScript framework that is just so fantastic. The more time I spend with it, the happier I am that I've leaned into it. I took I would say a tiny bit of a gamble in choosing it for the view layer for the application that we're building. It's not as popular. It doesn't have nearly as much community, mindshare, shared libraries, et cetera, et cetera.
But A, because we're working with Inertia, Svelte occupies a smaller portion of our application architecture. So that made me feel more comfortable with that decision. And I liked a lot of the fundamentals that I saw on the Svelte community. And over the past year, I've just seen each of those get reinforced. Svelte wonderfully leads with accessibility as a primary concern.
And one of the things that I see is although there are fewer packages out there in the Svelte ecosystem, the ones that there are very often like, and of course, we thought about accessibility, and screen readers, and keyboard navigation, and all of that. And so you don't even need to worry about that. It's like, thank you. That is wonderful.
Likewise, SvelteKit is a project that came out. I believe it was released, and I think it's 1.0 now or at least it's on its way to 1.0 now. And it's starting to get real usage. And that's a Next.js-like framework that takes your Svelte application and allows you to build it, run it, compile it. You can use it for packages. You can use it for apps. Wonderful stuff in there. And it's a great answer to how do I actually build a Svelte app or a Svelte package?
Likewise, Rich Harris recently moved to Vercel. Vercel is one of the big names in this world of we're building fancy applications on the internet. And so that's a huge vote of confidence for the framework. And now Rich Harris the creator of Svelte will be working on it full time. So it's just a bunch of signals that are pointing at although it's still definitely not nearly as popular as even Vue or certainly not React, Svelte is a wonderful choice. And I have enjoyed every minute that I've worked with it.
STEPH: I like how you're doubling or tripling down on Svelte. I've heard so many wonderful things about it. I feel like I should be a pro at Svelte at this point from everything that you have shared and brought to the show. But I'm still looking for that opportunity to get to test it out. So I'm excited to hear more about it next year.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days, no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
So third on my list, this one is more...it's something that I'm toying around with. I don't really have any concrete answers around how it's going to look, but it's something that I'm interested in exploring further. Earlier this year, I took a month's sabbatical, and that was phenomenal. It felt like this incredible reset, and then I came back more energized and interested in my work. And also, I got to explore other facets of life that I just normally didn't have time for.
So number 3 on my list is the idea of working in seasons where you are focused and work really hard on a project. And then let's say you take a couple of weeks off in between, and then you go on to your next thing. But I like this idea of chunking my work time because I found I'm very much a person that I'm on or I'm off. And it's very hard to create that balance between those two parts of myself.
And this may be a nice way to do it to say, I'm committed. I'm doing this for six months. But then I know I'm going to book a vacation, and I'm going to take a solid two weeks off or maybe even a solid three if that's something that my work and time allows. But I'm very interested in that idea.
I think it came from a conversation with someone else about academia life and how that is an approach they take where they work in those seasons where they work for the academic year, but then they take a summer off, and then they go back to work. And I very much like that idea and that approach to work.
CHRIS: This is such an interesting topic in my mind. I grew up both of my parents were teachers. So for the entirety of my life, I got summer vacation and my friends got summer vacation, and my parents got summer vacation. So clearly, everyone in the world got summer vacation. This is just a true thing about the universe.
And then spoiler alert, I learned the truth; it is different out there. So that took some getting used to. And then I have done an absolutely terrible job of this. This is an idea of like, I believe in this idea, the phrase that you used of living in seasons. It makes so much sense to me and seems like such a useful way to be. But I have at most taken two weeks off at any given point in my working career since I graduated college, and that was for my wedding. And that was it.
And between jobs, one time I left work like 15 minutes early on Friday, and then I started the next job on Monday. That was one of them. And then I did take a week off between my most recent job switch, just a whole week. Well, actually, that's not true because we recorded The Bike Shed in the middle of it, and I took a bunch of meetings to be ready to start. I'm terrible at this. Even though it's an idea that I believe in, that I want, I have never pursued this in a deep way. And it's something that I would really love to do. But yeah, I've not really done it.
So you mentioned academia and so there's the natural cadence to a year. But there are also sabbaticals. That's a thing that exists in the world. It's an idea that's already out there. Once every seven years, you get to take six months off just to go on an adventure. That sounds fantastic. I would like that, please. So I got to make that work in the world somehow, probably not for a couple of weeks, though, because I'm in an early-stage startup at this point. And so I probably got to hang out for a little while and get some stuff done.
STEPH: I like how you pointed out that sabbaticals exist; those are a thing. You also mentioned that a lot of times it's maybe seven years, or five years is what I've seen at companies, before you get a month off. And while that is wonderful and much appreciated, I am interested in finding a way to include sabbaticals, or at least those breaks, more often in my life. Because I know I'm someone who's going to focus in, and I'm going to work hard. And rather than just continue to do that and then one day burn myself out, I want to find ways to have more of a structure: this is when I'm on, this is what I do, it's what I'm interested in, I'm excited about this.
But now that I'm done with this after six months, let me go take a solid two, three weeks off to reset, recharge, find some other hobbies, and then come back to this. And I think that will make for a much longer and happier career. So I haven't worked out the details, but it is something that's on my mind. So that is why it is my number 3. What's your number 3?
CHRIS: My number 3 is perhaps in a similar space. And again, this is another one that was on my list last year, but I've leaned into it all the more, and that's remote, working remotely, working from home, et cetera. I have embraced it all the more this year.
The new company that I've joined we are a remote-first company. And so that is the mode that we're going to be working in. And that was something that I certainly pushed for because I feel like it is meaningful across the board. And if you're intentional about it from the beginning and think about things like async communication, and how do we handle this, that's all the more meaningful.
But also as vaccines and things like that have become available in the world, last year, remote was just the thing that we did. And this year, it was more of a choice and also was offset by the occasional in-person meeting. So the other folks that are in the company currently are co-located around Boston as well.
So we've had a number of days where we'll go downtown meet at a WeWork or some other shared co-working space. And we can have the occasional bit of in-person time. But we try and be very intentional with that. We try and make sure that when we're going to do that we have an agenda, even if that agenda is just connecting and socialization, which I think is deeply important. And that is incredibly hard to do just over Skype or Zoom or any of those tools.
But then the vast majority of the time I get to not have a commute. I get to work out more easily. I can cook dinner more easily. I can go for a longer walk with my dog. All of these things are just options now that are so, so meaningful and allow me to have a slightly calmer cadence to my life which is a thing that I want both in the work and in the life.
So I'm all for remote and perhaps tinged with a little bit of hybrid in person, kind of figure out how to get that right optimization. But yeah, big fan and will be continuing to do it with the caveat, and this is something we talked about the previous time we talked about it. This makes a lot of sense for a certain point in your career. I still wonder about how to make this work for folks that are newer to the industry. Junior developers joining a team being remote feels like it would be very complicated. So at a minimum, needing to be incredibly intentional around that. But also, is that even the right answer in that case? I don't know.
STEPH: I have feelings about that one. But I'm going to punt for now for another episode because I think that's a really great topic to dive into. And yeah, we should talk about that more.
CHRIS: I look forward to that conversation. But yeah, remote, that is my number 3. And with that, I will send it back to you for your number 4.
STEPH: I love that one. I'm a big fan of remote work. All right, for number 4 it's debugging. So I feel like we've had a number of conversations. Joël Quenneville has been on the show to talk about debugging and debugging not just for the art of it and the necessity of it but really building concrete skills around how to debug and then finding ways to share that information with others is really powerful.
And I feel like it's something that a lot of people just pick up on the job as you go, which is great. But it'd be great if we could create shortcuts for people. So then that way, they can have that information sooner rather than just waiting for a painful experience and then happen to pick up new tools for debugging. So debugging is a big one for me.
I also think that's representative of the type of projects that I've been on this year where a lot of them have been more triage-focused and how important debugging skills are in that moment, which I'm sure is also why observable systems is on the list. So for my number 4 is debugging. And we'll link to Joël's episode about debugging because it's delightful.
CHRIS: Debugging, one of the most pointed examples of alchemy in our work is the intersection of art and science and craft and all of that. And yes, debugging, what a fun topic.
But for my number 4, this is a return from two years ago, and this is Vim. I finally feel like Vim is starting to catch up, the promise of the language servers and VS Code, and the way that it works. I guess I've said this every year. I know. I'm aware.
STEPH: I'm laughing because I thought for a moment you're going to be like, I finally feel like it's working for me. [laughter]
CHRIS: I finally learned how to quit Vim. I've just had one instance of Vim open for the last 13 years because I didn't know how to quit it. But that has been fine. And then I finally learned how to quit it. No. Vim is finally catching up.
Neovim just came out with a new version that's got tons of deep integrations and VS Code-like features, thanks to the wonderful work of the VS Code team and the respective language servers from all the different communities. The promise of the editor ecosystem, a rising tide lifts all ships, is coming true, I think.
And even right now, I haven't even jumped to that new Neovim version. But the version of Vim that I'm working on with the current config is great. It works. It does the thing. And that's awesome. And it's only going to get better from here I think.
So 2022 is the year of Vim on the desktop. That is my strong bet. That's a joke about Linux in case anyone doesn't get it. It's not a good one. But it is a joke about Linux. So that's my number 4. Back to you, Steph, for your number 5.
STEPH: [laughs] And this is another count for our laughs in parens for next year's frequency count. Well, I guess it is still this year.
CHRIS: I absolutely love that that made it onto the list of top 10 things just [laughs] laughter off to the side.
STEPH: All right. So for my final, number 5 is don't forget the fun. And I say this because while work can be very interesting and fulfilling, I have found for myself this year that I also really needed some downtime to just play, to just experiment. And initially, sometimes I was worried where I felt like a lot of the work I was doing often wasn't building, but it was more correcting or fixing systems.
And I started to lose some of the joy that I had around coding. And I started to worry about am I losing the interest, the spark that I have for this career? And while I'm very fortunate to enjoy my career, I have become accustomed to the fact that I really like what I do. And so when I felt that starting to fade, it was a concern for me.
But then I started picking up just some little fun things like one of them is Advent of Code which is created by Eric Wastl. And during the month of December, a new programming challenge is released each day, and there's a leaderboard and you can be as competitive as you like. You can use any programming language that you like because then you essentially solve the problems and then provide the answer. And then Advent of Code will let you know whether you have the correct or wrong answer for that exercise.
And that sparked some joy, and it reminded me, oh, I really do enjoy this. I like a lot about this. But I have been so heavily invested in triaging that I was missing some of the fun that comes from just building something. And so that is my number 5 is don't forget the fun.
CHRIS: I'm so glad you added that to the list because this podcast is depressingly serious at times. And I'm glad that we now have this on a list formally so that we can remember to not take things too seriously. But more seriously, [laughter] I do think that's a wonderful item. And we do have the possibility of really loving the work that we do.
I find this work to be very fun. And there are different versions of it. And there are different companies and ways that it can go. But for me, this is something that I love to do that I find so much fun in but can get mired down in the details. And so being intentional and saying, "This should be fun. If it's not, what's going on?" That's at least something to look at. And where can I find the fun? And how can I revisit that? So I really enjoy that that is the final item that you're capping your list off with, in fact.
So for me, the way that I've thought about this list as we've composed it over each of the years is what are the major themes? And for me, probably the biggest theme is that I have joined an early-stage startup, and I've joined on as CTO. So it's a very different role. It's a very different type of interaction. I'm not sure I've ever said the company's name before on this show because I'm a terrible salesman. The company is Sagewell Financial. And so we are trying to do something very ambitious.
And the role that I'm in is a very interesting one. It's composed of pieces that have always been part of my work. There have been bits of mentoring, and hiring, and architecting, but then also doing the individual contributor work and all of those different pieces, and those will all be present but to varying degrees.
And the amount of ownership I have over the thing is very different than the long history of consulting that I've done. And so I'm really excited to lean into that and to explore that and to find out what it feels like to code less because I think that's just kind of a given. It's already started to happen even this early on in the project, and I know it's probably only going to continue, which is an interesting one relative to your "Remember the fun." I find coding very fun. So that'll be an interesting one to see how it plays out.
But I also find all of the other aspects of managing and guiding the technical portion of an organization really interesting. So I'm super excited to continue pushing on that, to go on that adventure. But yeah, it's very different. Or it's every single dial on all of those different measures is just turned up to 11 now is what it is. And I'm like, okay, cool, strap in. Let's go for a ride. This will be fun.
STEPH: I really enjoyed those discussions about how your role has shifted and the different responsibilities that you're taking on as I have often felt that tension between managing and then coding. And I enjoy both, but then making time for both, and then which ones do you grow in? Because I'm still always growing and striving to be a better manager and a team lead. But then I also want to continue to grow and be a better individual contributor. And focusing in those two areas or trying to grow in both directions is hard.
So then I often have to pick one to focus on. Maybe it's for a day, maybe it's for a week, maybe it's for a month. And I'm like, hey, for a month, I want to grow in this particular manager skill. But then that way, I feel like I have this more achievable goal. So all that is to say I really like your number 5. And I'm really looking forward to more conversations about how it's going and all the different things that you learned from being a CTO.
CHRIS: Well, I think on that wonderful note, we should probably wrap up this episode and wrap up this wonderful year of The Bike Shed. As always, Steph, it's been such a pleasure getting to chat with you on these weekly tech talk and nonsense adventures that we go on.
STEPH: Likewise. This has been so much fun. And when I mentioned earlier about having sparks of joy, Bike Shed is always one of those. I love these conversations that we have. It's been a wonderful year.
CHRIS: Cool. Well, I will see you in 2022.
STEPH: On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes as it really helps other people find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Steph started a new project and shares details about the new tools she's using, including working on a remote dev environment. Chris shares a journey with Lograge and Rails flash messages as he strives to capture user-facing errors.
They also discuss "silencing" flaky tests, using Graphviz to visualize data dependencies, and porting Devise views to use Inertia and Svelte. It's also interesting how different their paths have been this year!
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: Tech talk nonsense and songs, that's what people come to The Bike Shed for, variations on the Jurassic Park theme song, you know, normal stuff.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. Let's see. So I've started a new project. So frankly, there's a ton of new stuff in my world. And I've been on the project for about a week and a half now. I started over the holiday, and it's been going really well. Still in that whole early stage with getting to know the application, the codebase, the processes, the team, all the dynamics.
It's a large company. So I'm working with a small group of individuals, but there are about over 100 developers that work at this company. And they do have a lot of documentation, which has been very helpful. But there's a lot to learn in terms of setup and processes, specifically.
So they have provided a laptop that I'm using to access their codebase. So I'm using their laptop. And then, I am also using a dev machine, a remote dev machine, that they have set up for me. So I need to be on their VPN and SSH into that dev machine. So that's novel as well.
CHRIS: Ooh, I'm very intrigued by that bit, not that they gave you a laptop bit but the dev machine. This is in the cloud sort of thing? What is this? I'm very intrigued.
STEPH: I don't know if I have concrete answers for you. But yes, for me to be able to access their codebase, I have to go into the dev machine. And then that's where then I can do my normal development work.
CHRIS: So is this like an EC2 instance or something like that that you're SSH-ing into, and then you can run processes on it? Or is it closer to the GitHub dev containers thing that they just released? Or are you running with your local Vim? Is it a remote Vim? Are you using Vim? Is it VS Code? I have so many questions.
STEPH: [laughs] I think it's more like the first version, although I don't know the backbone of it. I don't know specifically if it's an EC2 instance or exactly how it's being hosted and how I have access to it. But I did have to set everything up on it.
So they started the dev machine up for me. Their DevOps team set up an environment that I could then access, and then I did need to cultivate it to my own habits. So I had to install several things. I had to install Brew and Vim and also tmux and all those configurations that I'd really like to have.
They do have a really nice Confluence document that walks you through how to set up a connection between VS Code and the remote environment. So then that way, you can really just hang out in VS Code all day. And initially, I was like, okay, I could do this. And immediately, I was like, no, I love Vim. I'm going back to it even if I have to spend the 20, 30 minutes setting it up.
I'm so comfortable with Vim and tmux that I stuck to my roots, and I didn't branch out into VS Code. But I think VS Code is one of the more popular tools that they're using. So that way, it feels more local versus having to work in a remote machine. I think I answered some of your questions. I don't think I answered all of them.
CHRIS: Yes. I think you did answer all the questions. But just for clarification, the Vim and tmux and whatnot setup is that you're running SSH, and then on the remote machine, you are using Vim and tmux? Or is it a local Vim that is doing…I think Vim has some remote editing capabilities but not anywhere near what VS Code can do.
STEPH: It's the first setup. So I am SSH-ed in. And then I have Vim and tmux running on that remote machine.
CHRIS: Gotcha. Novel.
STEPH: Yeah, it's a thing. It's working. So that's good. And it feels cozy. I feel like I'm at home. I feel like I can be productive. So that's great as well. Some of the other tools that I'm also new to, so they use Zeus, which is used to then speed up the booting of your application. And you can also use it for speeding up test runs. So very similar to Spring, which I think we've had some discussions about Spring and who loves it and who doesn't. [laughs]
CHRIS: I don't know. I'm not...[chuckles] I feel like I remember Zeus. But Zeus is like three iterations ago of this preloader thing. I'm intrigued by that. I thought Spring had fully supplanted it in the Rails ecosystem but maybe not.
STEPH: So this company has been around for a very long time. So there are a number of tools that I think they're using because that was the tool to use back in the day when they got started. And there just hasn't been a need to move on to one of the newer tools like Spring. So at least that's my current explanation for why we're using Zeus. And also, Zeus works most of the time. I'm frankly still getting comfortable with it. [laughs] I still have gripes about Spring too.
CHRIS: 60% of the time, they work most of the time.
STEPH: [laughs] So, Zeus is another new tool that I'm adding to my tool belt during this engagement. Another new tool that I'm using is Gerrit. And so they use Gerrit…it is used for managing their Git repositories. It is used for code reviews. And being as accustomed and familiar with GitHub as I am, that one has been a little tricky to then navigate and change the whole UI that I'm used to when it comes to pushing up code, reviewing code, asking for feedback on changes.
And at one point, I was reviewing a change request for someone else. And there's a button on there where I was adding comments, but they were in draft mode. And I'm trying to figure out how to get them out of draft mode so that they're actually submitted, and the other person could see it. And I saw a submit button. I was like, cool. So I hit the submit button. And then it said something in red text about ready to be merged into main. [laughs] I was like, oh, no, I mean, maybe, but that's not what I meant to do.
So I had to reach out to that person and be like, "Hey, I'm new to Gerrit. I don't know what I did. I hit a button. I hope everything's fine. Here's my review. Best of luck." [laughs] I think everything is fine. Nothing dramatic came out of it. But I had my own little dramatic moment.
CHRIS: Wow, that is a bunch of new stuff. It's interesting. On the one hand, I totally understand projects get started, and there's a certain set of tools that are current at that point, and so then you're using them. And then, over time, it takes a very active effort to try and keep up with the new current, that new-new as we call it.
But the trade-off there is really interesting because, at any given time, it never feels like the right investment to pursue the new thing to just upgrade for upgrading sake. But then the counterpoint is the cost to someone like you coming onto the project. And it's like, it's a bunch of new stuff. It's kind of old stuff. It's new for me, but it is old, and less documented, and less familiar. And it's also certainly less compatible with other things that are going on, almost certainly.
And so, how to stay on top of those updates is always the thing that's really intriguing to me. I say as someone who started a project recently, and I have not thought about upgrading anything at this point. And we have bundler-audit I want to say is the one thing that we have in there. So if there's a CVE for a gem, then security-wise, we will be upgrading those. But otherwise, I haven't thought about upgrading our Ruby version or anything. And I think we're on 2.6 or something like that, which is a couple back at this point. And so it's something that's in the back of my mind.
I feel like I should have a formal answer to this. Like, company-wide, how do we think about the process of upgrading? And Dependabot and things like that answers some of it, but that doesn't tell me when to upgrade Ruby, I don't think. It could. That would be annoying. I don't want that. But it's one of those many things that depends and is subtle. And you have to decide where you put the trade-offs and whatnot. So just an interesting thing. And to observe you now going into this project building and being like, there's a bunch of new stuff.
STEPH: I think it really takes passion or pain. Those are the two things that then prompt us to upgrade. Either it's pain, and you need to change it to get rid of that, or it's passion. So you're really excited about the next version of Ruby or the next version of Rails. And I think that's fine. I think that's fine that those are often our drivers. But yeah, that is interesting. I hadn't really thought about that in terms of there's often no real strict process around when we upgrade except those are then the natural human catalyst.
CHRIS: I think you're right that those are the catalysts. But I think quite often those cannot be sufficient to push us to do the work. And so what do you do in the absence of that? It's not really painful. And I'm not really passionate about it. But I probably should do it is the 80% of the time middle space that we live in. And so yeah, I don't have an answer to it. I'm more observing the question. But like so many other things, I feel like often we just exist in that awkward middle and got to find a way through, so how like life.
STEPH: I was having a conversation with someone earlier a bit about these life cycles that we live in. Specifically, we were talking about consulting and how changing from project to project is so daunting. Because you go from I'm accustomed to this project, I'm accustomed to the team. And then all of a sudden you jump into this new project and with all these new things it can be really interesting.
But then there's also this feeling of like, wait, I used to be smart, and I knew everything that was going on. And the team knew me, and I knew all the team processes, and I felt good. And now I'm in this totally new space, and I have to relearn, and I have to reprove myself and relearn all the company politics.
And there's always that initial jumping from a sure space over to a very new space that always makes me then question and be like, yeah, I can do this, right? I can do this. And then I have to keep letting that voice build until about two weeks in. And I'm like, oh okay, I'm back in a good spot. I said two weeks; it's probably more like four.
But there's still that grace period of a new project where you're leveling up on all the things and learning the new team. And as daunting as it is; apparently, it's what I like. Apparently, I like that roller coaster ride that comes from jumping from one project to the next. So on that note of a bit of novel insight into myself, what's new in your world?
CHRIS: What is new in my world? Let's see. I think I've got two updates, two anecdotes to share. One, I lost the battle, one I won the battle. So we'll go with the lost battle first because that seems fun. So we have Lograge on this application, which Lograge, for anyone that's not familiar, is a library that helps with producing more structured and more complete log lines from a Rails application.
You can tell it to do JSON log lines, which is useful for many of the tools that will receive your logs. And then with it, you can say grab me the controller name and the params but sanitized and this and that. And so, you aggregate a bunch more data than would traditionally be in the logs. In general, I've just found it to be a much better foundation. I find the logs to be more readable, and more informative, more useful, all those lovely things.
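(A rough sketch of the kind of Lograge setup being described, producing a single JSON log line per request with extra data merged in; the custom keys, and especially the Inertia errors session key, are assumptions rather than the app's real configuration:)

    # config/initializers/lograge.rb
    Rails.application.configure do
      config.lograge.enabled = true
      config.lograge.formatter = Lograge::Formatters::Json.new

      # Merge extra, per-request data into the one structured log line.
      config.lograge.custom_payload do |controller|
        {
          user_id: controller.try(:current_user)&.id,
          # assumed key for surfacing Inertia-style validation errors
          inertia_errors: controller.session[:inertia_errors]
        }
      end
    end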
But slowly, I've been looking at what's the other stuff that I want to have in here? What else would be nice to know? So one example is we use Inertia on this project. And Inertia has a particular way in which errors get mapped back to the front end. And it's an interesting little trick that involves the session, but that's sort of an aside. Basically, this is something that the user will see that I would love to know about. So how many users are hitting their head against the wall?
Because typically, whenever these errors happen, that means this is a flash message or something like that we're going to show to the user. So we were able to add that into our log lines. Now we can see those. We can aggregate on them. We can do counts. We can do alerting and monitoring, all those kinds of fun things. So cool. That was great. That worked well.
I then specifically…I mentioned the flash a second ago, but that's actually not…the Inertia messages will not show up in the flash. They end up in forms inline on certain inputs or whatnot. But we do also use the flash message pretty regularly as a way to communicate to the user success or failure or what have you. And I really wanted to get those into the logs. And I tried very hard, and I failed. I gave up. I threw in the towel. I raised the white flag.
So the nature of the flash, which is something that knew in the back of my mind but I had never really experienced as pointedly as this, is the flash is a magic value within the Rails ecosystem that can be written to and then once read clears itself. That's the nature of how the flash is supposed to work. And it persists across requests. So it's doing some fun stuff there, which I assume is tunneling through the session or maybe putting it into a cookie. I'm not actually sure.
But there's some way that you post to an endpoint, and then you get redirected to the show page. And on the show page, we actually display that flash value. But the flash is set on the controller endpoint that is handling the POST request. So this value spans across two request-response life cycles, which is interesting.
And so the manner in which that works is Rails is managing that on our behalf. We write to it on the one side. And then, when we do the subsequent requests, if there's a value in the flash, we show it to the user, which is why occasionally you'll see those weird things where that flash message shouldn't show up. But it's like a sticky value that was left in the system that didn't get cleared via one thing or another.
But I really wanted to put those into the logs. Like, what are we saying to the user is the thing I want to know. This is that question of like, what's my system doing at runtime? I understand what it's doing. I can read the code and understand what should happen. But what actually happened? Are users seeing this flash message way more than they should? That's a question I want to be able to answer.
And I have lost the battle. I cannot find a way to read the flash value, put it into my loglines, but then also have it persist through. The first attempt I did, I was able to get it into my loglines, but then it didn't show to the user, which is a bad outcome. Because now I've read the value, Rails clears it, cool, that's fine.
There is a flash.keep method. And that I thought would do the thing I wanted, which is like, oh, I want to read this value. I want to tap this value, I want to observe it, I want to peek at it. And I thought this keep method would do the thing that I wanted. It did not. It just caused the flash to be persistent. So now, anywhere I went had the same flash message for forever, which was not the behavior that I was looking for.
I then tried, like, all right, just for exploration purposes, what if I reach inside and read the instance variables of the flash objects? Also did not work. Everything I tried did not work. And it had these fun failure modes that just made me very sad. Thankfully, we had feature specs that told me about this failure mode because I would not have known about it otherwise. This was not obvious to me on first implementation. But yeah, I lost, and I feel sad.
And then I did the thing that we do, which is I searched Google, and there's nothing. I cannot find…This is one of those cases where like, I can't be the first person who wants to know what's in the flash. I can't be breaking new ground here. And yet I couldn't find anything on the internet. So that's where I'm at.
STEPH: That's interesting. Yeah, I'm trying to think…I think I'm one of those people. I don't think I've ever tried to peek into the flash and see what's there ahead of time. And it makes me wonder if it's partially…so we can't peek into the flash. You've exhausted several examples or tries there.
When you're setting the value of the flash, it makes me wonder if there's an order of operations that you could pursue. Before you set the flash, you already know what message you're going to share. So you send that off to the logs first, and then also share it to the flash. So instead of writing the message directly to the flash and then having to check the flash, you just store that value elsewhere and send it to the logs first. Is that a reasonable approach?
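A minimal sketch of what Steph is suggesting, assuming a small hypothetical helper in ApplicationController; the helper name and the exact log format are made up for illustration:

```ruby
class ApplicationController < ActionController::Base
  private

  # Hypothetical helper: remember the message ourselves, log it,
  # and only then hand it to the flash, so we never have to read
  # (and accidentally consume) the flash afterwards.
  def set_flash(type, message)
    Rails.logger.info({ flash_type: type, flash_message: message }.to_json)
    flash[type] = message
  end
end

# Usage in a controller action:
#   set_flash(:notice, "Your changes were saved")
```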
CHRIS: It definitely could work. But that was in the space of this is getting weird enough. I thought about things like that, but I didn't want to do anything weird. And part of the benefit that I get from using Lograge is rather than having multiple lines for each request…so a request came in and rendered this partial and did this thing. It gets constructed such that there's a single logline, which is one big JSON object that contains all of the data about that request.
And I really liked that structure because then everything's correlated like, oh, did we 404, or did we 302? And what was the message that we said to the user? And what were the params? It's all there in one line. I found that to be really useful. So I wanted to do that. I could just separately log it. But then I'm also worried of there is a statefulness there. Because again, the flash is written on one side and read on the…it's like a Hail Mary to ourselves between requests. Look at me with a sports reference.
And so, I didn't want to try anything out of the ordinary. I really just wanted to find a way to just like; I just want to read this value but not like Heisenberg uncertainty principle observing changes in the system. I found myself in that space, and I was like, can't there be a way that I can just flash.peek? And I just want to take a quick look. I don't want to mess with anything. You do your normal thing, flash. Just let me know. And I do not have an answer for it yet.
And for now, this is one of those nice-to-haves, not an absolute requirement. So I wasn't yet in the position of okay, fine, let's do some out-of-the-box ideas here. So I'm still in the in-the-box phase, I would say, but who knows? Maybe down the road, I'll be like, I would really love to know what the flash message was for that request because this user is seeing stuff that we do not understand. And that information would tell us the answer. So we're not there yet. But I was surprised by how thoroughly I was defeated by Rails and the flash message on this adventure.
STEPH: I am equally surprised. I wouldn't have thought that particular achievement would have or is proving to be that hard or, frankly, not doable. So yeah, I'm intrigued to see if anybody has thoughts on it or if you do find a different solution because Lograge is one that I haven't used. But I would be surprised if other people haven't had a similar request of like; I want to be able to store what's in the flash message. Because like you said, that seems super helpful.
CHRIS: Well, certainly, if I do figure anything out, then I will share that with the world. But yes, part of this is putting it out there into the universe. And if the universe happens to send me back an answer, I will happily accept that. But yeah, again, I had two stories, and that was the one where I lost. I'm going to send it back over to you because I'm interested in anything else that's up in your world. And later, I'll tell the story of a victory.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: I have a victory that I can share as well, and I'm excited to hear about yours. So to share a bit more context about the project that I'm on, we are focused very heavily on improving their test suite, not only the time that it takes to run the test suite but predominantly addressing a lot of the flaky tests that they have. Because that is a huge pain point for the team and often leads to the team having to rerun tests.
And so, there are a couple of areas where we're very excited to make some contributions. The first part is that we are just looking at those flaky tests to figure out what is going on and how we can address them. And one of the nice things is one of the tools that they're using, TeamCity, which is what they use to run their automated test suite.
And TeamCity will let you mute tests, so then that way, if you do encounter a flaky test, you can mute it. So then, at least it's not impacting other people. I say this with some asterisks that go along with it because, for people who can't see, Chris is making a very interesting face. I think you have thoughts on this.
And the other thing that they will show is a flip rate for the flaky tests, which is really nice, too, because then you can see which tests are flaky the most. So then that helps us prioritize which ones we want to look into. All right, I'm going to pause so you can respond to that comment I made about muting tests.
CHRIS: I'm intrigued. I talked in a recent episode about adding RSpec::Retry. So the idea of flakiness being a thing that exists and trying to decide how much engineering effort to apply to fixing it. But the idea of muting it and especially muting it in the UI, not in the test suite or not having that be something that's committed, there's something about that that caught my attention, and thus apparently, my eyebrows raised. You saw that. [laughs]
But I don't actually know how I feel about it. This is such a complicated, murky area that I wish I had a stronger set of beliefs around. It was interesting when we talked about the RSpec::Retry thing. I think you rightly pushed back on me, and you were like, that's interesting, maybe don't do that. And I was like, that's a fair point. [laughs] And so now hearing you're in the quagmire of flaky tests, and yeah, it's an interesting space.
STEPH: Well, I think my hard belief is that muting tests is a thing that we shouldn't do. It's going to lead to more problems, and you're not really addressing the issue that you have. It is a temporary solution to a much bigger problem that you have. And so it is a tool that you can use to then buy you some more time.
And so that is the space that this team is in where they have used this particular tool to buy them more time and to be able to keep shipping changes while realizing that they do still need to address these underlying issues. So it is a tricky space to be in where essentially, you've gotten to the point that you do have these muted tests. It is a way to help you keep going forward, but you are going to have to come back to it at some point.
And so that's the space that I'm in right now joining the team: we have been brought in to help some of their engineers specifically address this issue while ideally letting the rest of the team continue to focus on shipping changes while we address the tests. Although I really think there are going to be two angles, which we've talked about, in how we're going to help this particular codebase.
One of them is that we are going to address the flaky tests. But the other one is empowering people so that they feel like they have the time and the knowledge to address a flaky test and also not contribute more flaky tests to the codebase. But I appreciate that you called me on that a bit because we've had those conversations around when we should actually address something versus mute it, and all the interesting trade-offs that come along with that conversation.
So this particular flaky test that we addressed earlier this week is specific to hard coding primary IDs. The short version is that it's bad, don't do it. The longer version is that they were having a test that was failing intermittently because it would pass the first two runs, but then it would start to fail for all future runs.
And the reason it would pass for the first two runs is because when they were setting the ID for a record that the test setup is creating, they were looking for existing records and saying, "Hey, what's your latest ID?" And then I'm going to guess the next ID. I'm going to add one to that to figure out what the next ID should be.
Some additional context, when the tests boot up, there's some data that's being created before the test run. So then that's why they're checking to see, okay, what records already exist? And then let's add one to that. The reason that fails sometimes is because then once the tests have run, the Postgres IDs aren't being reset, so they're using a truncate approach. So then, when the test runs once or twice, that works. But then, at some point, there's a collision between those IDs where they tried to guess the next ID, but then Postgres is also on that same ID, and it ends up failing.
There are also some callbacks. There's some trickery afoot. It took a little while [chuckles] to work through these tests to understand why they're failing. But the short version is that we realized we had to restructure the data in a way that no longer required us to guess what the next primary key should be for a record.
We could actually use Factory Bot to generate that record, and then ask Postgres, okay, what ID did you assign? And we're going to pass that in. And that part was really challenging when you're in a new codebase, and you are learning the domain knowledge and exactly how data should be structured. So that was one challenge of it.
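As a rough before-and-after of the pattern being described, with made-up model and factory names:

```ruby
# Before: guessing the next primary key, which eventually collides
# with the sequence Postgres is actually using.
let(:order) do
  next_id = Order.maximum(:id).to_i + 1
  create(:order, id: next_id)
end

# After: let Postgres assign the id, then read it back off the record.
let(:order)    { create(:order) }
let(:order_id) { order.id }
```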
The other part was that a lot of the data relies on each other. So then figuring out the right hierarchy in which we could create the data. So we didn't have a circular reference at some point. It took some time. And Joël Quenneville, who's on the project with me, used a tool that I found very helpful. It's called Dataviz. He went through and documented the let statements, the data that's being created, and then it generated a nice tree structure that shows you okay; these are your dependencies. This is the test setup that you're using.
And then from there, just by changing a few lines in that particular file that used to generate that Dataviz tree, he would move it around. And we could simulate what we were already mentally trying to construct in our head. So as programmers, we're already thinking, okay, I know this record needs that data. And that data needs that data before I can build this. But this actually turned it into a concrete visualization where we could see it.
And I was really struggling. And he was like, "Hey, I got it into a visual form that we can look at. And there's a circular reference. That's why this keeps happening and why we're not making progress." So then, using that, we were able to then reformat some of the dependencies, look at the graph, see that we didn't have that circular reference anymore. And then we could implement that in code.
And it really helped me to be able to walk through that visual aspect because then I could say, okay, this is all the stuff that I'm trying to mentally hold on to, but instead, I can just look at this and know it's going to work. I don't have a circular reference. It also helped concretely show why the previous efforts were failing and why we kept running into some issues.
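The Dataviz tool itself is Joël's; as a stand-in for the underlying idea, you could model the let dependencies as a plain hash and check for cycles with a few lines of Ruby. The dependency data below is invented purely for illustration:

```ruby
# Each key depends on the values listed; a cycle means the test
# setup can never be built in a valid order.
DEPENDENCIES = {
  order:    [:customer, :product],
  customer: [:account],
  product:  [:account],
  account:  [:order] # <- this edge creates the circular reference
}

def cycle?(node, deps, seen = [])
  return true if seen.include?(node)
  deps.fetch(node, []).any? { |dep| cycle?(dep, deps, seen + [node]) }
end

puts DEPENDENCIES.keys.select { |node| cycle?(node, DEPENDENCIES) }
# => prints every node that participates in (or reaches) a cycle
```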
So I'm really interested now in Dataviz because I found it very helpful in this particular case. And I'm very intrigued to see if I can apply this to more tests that I'm trying to fix and to see if I can start out with here's the current structure. Here's where I'm trying to go. And then essentially build that graph first before I start changing the code around. I would love to have that optimization. And I feel like it would speed up the process.
CHRIS: It was funny as you started to say that I had observed some tweets going out into the world recently. And I was like, this is Joël. This is definitely Joël talking about these things. As an aside, for anyone who doesn't follow Joël Quenneville on Twitter, @joelquen, I would highly recommend it. We can include a link to Joël's Twitter in the show notes. Joël is one of the clearest thinkers and communicators about programming that I have ever worked with.
And in particular, what you're describing of the data visualization is something that I think he does incredibly well. Often he'll make blog posts, but they'll include just simple little visualizations, little images, or diagrams, or flowcharts that just so concretely encapsulate an idea and express it so much better than text ever could.
And so, in so many ways, I look to Joël's writing, both on Twitter, in the blog, in many places. And I just appreciate so much what he puts out there and the manner in which he does it. So I was by no means surprised when you said, "Oh, and I'm working with Joël on this project." I was like, yes, I bet you are. That sounds true, and in particular, some of the conversations about flaky tests and determinism and all of that.
So yeah, the visualization stuff is also particularly interesting in taking a system that it's very hard to hold all of this in our heads. But that visualization, the tree and/or graph thing at play, having that in a picture and being like, oh, look, there's a cycle now. There we go. Can't have those. That's not okay. That's a really interesting solution that's just very cool to hear about and presumably led to a good outcome where you were able to break that cycle. And now you're happy and deterministic in your tests.
STEPH: Yeah, it's one of those approaches where it was helpful afterwards, and I wonder how I can make it helpful beforehand. Because it felt like a confirmation of the pain in the process that we had been through. And I'm eager to see if now I can apply it ahead of time and save myself some of that pain. That's where I get really excited. But yes, it was a successful outcome. And we have fixed that particular flaky test. But I'm very excited to hear about your victory from the week.
CHRIS: It's a shared victory. It was a team victory, just to be clear. But we are working in a system that is using Inertia. Inertia.js is a project that I've talked about a number of times on the show. I'm a huge fan of it. It is the core architecture of how we're building our application.
But as a very brief revisiting of what it is, on the server-side, we have Rails, and Rails is acting in a pretty traditional way. We do not have an API. And on the front end, we have Svelte, which is a JavaScript view layer framework. Inertia sits between them and binds the traditional Rails MVC architecture and the Svelte front end.
So again, there's no API in the traditional sense of this is a REST endpoint, and we hit it, and we get some data, and then the front end holds on to that in a store. None of that is going on. Inertia does a wonderful job of marrying these two concepts and allowing us to use familiar programming techniques on the server-side but then also have a more future-friendly front end.
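To give a flavor of the server side, with the inertia_rails gem a controller action hands props straight to a front-end component rather than serving a JSON API; the component name and props below are illustrative, not taken from this app:

```ruby
class DashboardController < ApplicationController
  def show
    # No separate JSON API: the controller passes props directly to the
    # Svelte "Dashboard" component, and Inertia handles the wiring.
    render inertia: "Dashboard", props: {
      user:     { id: current_user.id, email: current_user.email },
      projects: Project.order(:name).as_json(only: [:id, :name])
    }
  end
end
```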
Animations and transitions and things like that are now totally possible while not throwing away the entirety of our programming model that we've had in Rails server-side applications. That's all well and good. Almost all of the UI in our application is rendered via Inertia and Svelte. That's great. We love it.
The one caveat is Devise. So we have Devise on this project, and Devise comes with a lot of views built-in. And we have both an admin and a user model. So we have sign in and sign up, and confirm registration, and forgot password and all of these different views and flows and things that Devise just gives you out of the box. And being an early-stage startup, it was not a good time to revisit any of that or to try and build it from scratch or any of that. We just wanted to build on the good known trusted foundation that Devise gives us.
But the trade-off there is that now all of our Devise logic lives in this uncanny valley. It's the only stuff that is in ERB views. Our styling, thankfully, we're using Tailwind, and so we are able to have some consistency between the styling.
But recently, we redesigned the flash messages on the client side in our Svelte pages. On the server side, though, they lag a little behind on the Devise side, because the Devise pages are the only ones being rendered truly server-side. They look a little different. That inconsistency, that mismatch between the Devise views and the rest of the application, is a pain that we felt, but one where we consistently said, I don't think it's worth the effort to try and change this.
Finally, this week, we've been doing a lot of work on our user onboarding funnel. The initial signup flow goes through a progressive set of form screens where you move between different pages. And a majority of it is implemented on the Inertia and Svelte side of things. And it's very nice and very fun to work with. But the signup form, the user signup form, is in Devise, and it's a traditional Rails server-rendered POST, and then all the normal stuff happens.
We finally decided to bite the bullet this week and see how painful it would be to port that over to Inertia and Svelte. And spoiler, it was awesome. It was very straightforward, and coming out of it, immediately, the page was largely the same. The server-side code was largely the same.
But now we had things like when you submit this form, if there's a validation error, we don't clear out your passwords because we're staying on that page on the client-side. We're taking advantage of the way Inertia's error flow works. That's a subtlety of how Inertia works. That's probably more detail than we want to get into here, but it's an awesome thing that works and is great.
And so immediately, this page just got better. We got inline errors for each of the fields. We were able to very easily add a library called Mailcheck, which I've talked about on an episode a while back. But this is a thing where if you have a typo in your email address, we can say, "Hey, you have a typo in your email address. And if you click this link where we suggest the alternative, we'll just replace it inline."
That would have been really awkward to wire up in our Devise view. It would have been some jQuery-esque script tag at the bottom of the view page. We don't actually have jQuery at this point, so we wouldn't have jQuery. We certainly could add it, but it would only be for that view. And it would be weird and different, a fundamentally different programming model. It was trivial to do in the Inertia and Svelte world once we had made that port over.
This was always my hope. This was the dream that I had in mind. And it speaks to the architecture of Inertia. And Inertia is a really great abstraction that is very minimally leaky. I won't say it has zero leaks because no abstraction does. But this was my hope is I think the server-side should mostly stay the same. And I think the client-side, we just take an ERB template, turn it into a Svelte template, and we're good to go. And that has largely been the case.
But suddenly, this page is so much more. There are subtle animations as things come in. And there are just lots of nice features that were trivial to add now and that fit with the rest of the programming model that we have throughout it. So that was awesome.
STEPH: That is awesome. I love these styles of updates where there's like, oh, I had a loss this week. But I also had this really great win because that feels just so representative of a typical week. So I love this back and forth.
CHRIS: It's also that sequence is how the week went. So the loss happened earlier in the week, and then the win happened later in the week, which is how I would prefer it because now I'm going into the weekend with a win. Like, cool, I'll take it. Had it gone in the other direction, I would have been like, oh man, Rails beat me. But I guess it's the weekend now. I'll forget about it for a little while.
STEPH: Yeah, that definitely helps to end on a positive note.
CHRIS: But yeah, I don't think too much more to say about that beyond it was both really nice to get the added functionality to get the better, more user-friendly behavior in this view that naturally falls out of this programming model. But also to have that reinforcement of my belief in Inertia as a good architecture.
Not only did we get some really nice stuff out of doing this port, but it was also pretty straightforward because Inertia sits so comfortably between the pieces. And that's a story that I really like. I want more of that in my programming world, where to change this thing requires changing everything in our app. Oh no, this is sad. No, this was a great example of we were able to very minimally change things and get a much better experience out of it. So once again, I am very pro Inertia.js
STEPH: It's interesting to me how different our paths have been this year, where I have been working on applications whose teams have brought on thoughtbot to help out with some of the concerns that they have: either their application is going down, or they have a test suite that they need to improve, or there's a lot of triage that's involved.
And so it makes me very excited to hear that, when you are building stuff, and it's going really well and how awesome that is. Because then I feel like most of my world has definitely been more in the triage space, which is a very interesting and fun space to be. But it brings me a lot of joy to hear about wins from let's build new stuff and hearing it be built from the ground up and how well that's going.
CHRIS: Well, I'm definitely happy to provide that. But also, I want to be realistic and be like, I’m just writing next year's legacy code right now, let's be honest. I'm very happy with where we're at in this moment. But I also know how early I am in the project that I'm working on.
And I'm burdened with the knowledge that I'm certain one decision that I'm making of the many that are being made I will deeply regret a year from now. I just know that that's true, and I can't let it slow me down. I got to just keep making decisions and do stuff. But I know that there's going to be one. I know that a year from now, I'm going to be like, why did we choose that option? But it's sort of the game.
STEPH: [singing] We'll just know that there's something strange and your code won't change. Who are you gonna call? thoughtboters!
CHRIS: Well, yes. I will definitely be calling you when I find myself in the uncertain times of legacy code of my own creation. So I look forward to that, frankly. But that's a problem for a year; I don't know, maybe two years from now. Who knows? But for now, what do you think? Let's wrap up.
STEPH: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it really helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter, and I'm @christoomey.
STEPH: And I'm @SViccari.
CHRIS: Or you can email us at hosts@bikeshed.fm.
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Fellow thoughtboter Edward Loveall joins Steph to cohost and talk about alternative frontends, including his own creation, scribe.rip, an alternative frontend to Medium; what it's like to be a manager/non-IC; and to help answer a listener question: how do you think about empathy in your work?
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari. And this week, Chris is taking a quick break. But while he's away, we have a guest on today's show. Today's guest is fellow thoughtboter, and wonderful friend, and British accent enthusiast Edward Loveall.
EDWARD: Oh, hello, Steph. It is lovely to meet...No; this is not my real accent. Anyway, hi, friends. [chuckles]
STEPH: [laughs] Hello, British Edward. I am so excited to be chatting with you today. Are you going to maintain that accent throughout the whole episode?
EDWARD: No. There's no way I could do that. I need a lot more professional actor training to be able to maintain quality of that level, I think.
STEPH: That's fair. I won't hold you to that standard. I was reflecting on preparation for this chat. I've been thinking about all the fun that we've had together, the time that we have worked together at thoughtbot, all the remote coffee walks that we have gone on together as we've talked through consulting challenges or coding challenges. And I realized that we have never worked on a project together, which is wild to me.
EDWARD: Huh. Yeah, I think you're right. That is wild. Because I've been here three and a half years, and you've been here even longer than me. So in three and a half years of overlap, we've never done that.
STEPH: And yet we've still always found ways to hang out.
EDWARD: We make it a priority, you know.
STEPH: I think we need to...we might have to bribe somebody for us to get on a project together.
EDWARD: I'm pretty sure we know the person to bribe.
STEPH: We do.
EDWARD: We can go talk to our boss and make that happen. One thing we've both done in our career here at thoughtbot, too, is we have gone from individual contributor to being a manager, which is a cool transition.
STEPH: That's a really good point. That is fun that we have embarked on that journey together. I was very much encouraged to become a team lead, and that was very helpful. Because I'm the type of person where I'm not sure I would have put myself up for that role. I'm very thankful that others encouraged me to do so because I really love it. There are certainly challenges with being a team lead. But overall, I have very much enjoyed the role.
Just to provide some context for being a team lead at thoughtbot, because I feel like those management roles tend to differ from company to company as to the level of responsibilities that you have. So for us in particular, it's really focused on leading a team of developers, usually two to three developers, and conducting regular one-on-ones to ensure that they are fulfilled and are successful in their projects and their growth at thoughtbot. And then helping them become senior developers if they're not already and essentially coaching them through difficult development and consulting scenarios.
EDWARD: Yeah, there is still an expectation that you are an individual contributor in some form on client projects. It is not just a management position.
STEPH: Yeah, that's a good point. For me, that context switching is often what makes it challenging but yet also helps me still feel that I can coach somebody and that I can have one-on-ones because I am still in the trenches. I'm still contributing to client projects. And so, it really helps me still stay in touch with the work that's being done and the struggles that people will face. Let me say again I am positive I wouldn't have pursued this path if I lost my IC status. I really like that part of the role. That's really a split. How about you?
EDWARD: Yeah, and I still do. We've been experimenting. So thoughtbot generally does four days a week on many projects. So we do four days a week with our client, and then we do one day a week as investment. And team leads, at least on the team that we are on, have been experimenting with just doing three days a week on a client, one day dedicated towards team lead, and then one day for investment. I like that split so far.
We're still seeing how it goes, still pretty early on in that experiment. But I've enjoyed continuing to be in the trenches, as it were, and working sometimes with the people that report to me so that we can really grow in the same way. There's a lot of context shared there. And that's been really wonderful.
STEPH: Yeah, I have some specific questions I'd love to ask you about that shift in schedule. Because in some of our meetings, there has been discussion about that ability to context switch between I'm only billing three days now instead of my typical four. And I now have more time to focus on team lead priorities, but then that also means I lose a day with client work. And so there's that battle going back and forth between focusing on client work and also focusing on team lead work. So I'm going to leave that as just a teaser because I want to come back to that.
But I'd really love to circle back to earlier in the year when you were thinking about becoming a team lead and correct me if I'm wrong, but I think you were pretty hesitant about it. And you were still deciding if it was something you wanted to do. Do you recall what helped you make up your mind as to which path you wanted to take and why you chose this one?
EDWARD: Yeah, that's a great question. I did also get some encouragement, a pretty light encouragement from a previous co-worker. And that was helpful, but I turned it down initially. Someone asked, "Hey, are you interested in this?" And I said, "Nope, definitely not." And, I don't know, a year-ish later, I then ended up applying.
And I think what happened in the intervening year was that I started to naturally do some of the work of a team lead primarily, checking in with people and talking with them, pairing with them on things more regularly. So I felt as if I was already doing some of the work, not exactly running a one-on-one, not getting people promoted necessarily. But I cared about the people I was working with and wanted to see them grow and be happy and thrive. That realization helped me think, oh yeah, I'm just kind of doing this. And I should maybe apply for this role.
STEPH: Wow, that resonates so much. I've heard that from other folks, too, as they have progressed into team lead or other management roles is it was often they already felt like they had started doing some of the work, or there was some natural inclination to start taking over those activities. And so then it felt right to then actually acquire that title and take on those responsibilities officially.
Well, how's it been going? You've had almost a year now. So you had some of those hesitations at the beginning. How's it been? What do you think of being a team lead?
EDWARD: Yeah, I'm really enjoying it. It is a challenge like you said. But that's every job, right? Every job should be a bit of a stretch. So I did come into it with some natural inclinations of wanting to talk to people and check in with them. But there are all these other pieces that I wasn't good at. One thing that has been really challenging is instead of completing things myself, being that individual contributor, is trying to coach and sponsor people to do something that I would do.
And I think the hardest part about that is they may not be as far along in their career as you are. And so it is hard to watch someone struggle in the way that you used to struggle without saying, "Oh, here, let me just do that for you." And I think what I started to realize is that with the efforts that I'm putting in, I can really be a force multiplier and end up effecting more change than what I could do by myself.
Like, if you think about it, I have four reports right now, and they're all really smart and talented people. But let's just say they were half as good as I was. That is definitely not true but just go with the numbers here for a second. If I could teach them to do what I do, even if they were half as fast as me, because there are four of them, they can get two times the work done.
The math adds up in a way where if I can unblock those people, help them just get to the next one little step, do whatever it is that they need, they're going to do way more than I could by myself. And really wrapping your head around that and the advantages there is so hard but so rewarding once you figure it out and get it going.
STEPH: Do you feel like anybody told you that up front going into taking on some more management responsibilities? Or is that something you learned as you went?
EDWARD: I definitely learned that as I went. I got some great advice from Josh Clayton, who we work with, and he's been a manager for a long time. And that's a lot of how he thinks about it. And he encouraged me to do things like pairing with everybody on the team, or running little workshops to teach and fill in knowledge gaps for people, or asking questions instead of giving answers to help them find their own answer. And that's all been really, really helpful.
STEPH: Yeah, that's one of the things that I have valued very much about our culture, and that I've seen some other companies struggle with: when someone does get elevated into a management role, they still need support. They still need to be coached. And they also need room to make mistakes and grow.
And at thoughtbot, I feel that we have been very supported and where there's someone that I can still get mentoring and coaching from. And I can learn to be a manager on the job versus I'm not just put in a position where I'm going to fail or just put there without the expectation that I still need to grow as a manager and as a person as well. So that has helped me out tremendously as well.
You highlighted the idea of pairing more with others and then asking more questions around providing answers. And as you're learning those skills or as you've acquired those skills for being a team lead at thoughtbot, have you found those skills also transition well to client work?
EDWARD: Yeah, they do. There's a lot of overlap, especially around gaining trust with somebody. I'm gaining trust in one-on-ones, but I'm also gaining trust with my client or helping my client understand something. This gets a little more into the client-side of it. But a lot of times in client work, I'm looking to bridge a gap. I understand something because of my consulting experience, and they want my knowledge and consulting experience. But it's hard to just go in and say, "Do X or do Y."
And in the same way, with somebody who's reporting to me or who we're having a one-on-one, it's not usually very helpful to just say, "Do this, do that." You want to help them understand the why and bridge that knowledge gap to get to where you want them to be and where you think they should be. Those really do go hand in hand, and I have used a lot of the same skills.
Giving feedback also has been a huge thing to share. It's really, really hard to give critical feedback to somebody. It's very easy for them to shut down and not take the feedback, which is the opposite of what you're trying to do. And the same can be true with clients. Like, they've gotten to where they've gotten to because of whatever they've done in the past, and trying to show them why some of the things they're doing are maybe not ideal is really tricky without triggering that fight-or-flight response. So yeah, there's lots and lots of crossover to answer your question. [chuckles]
STEPH: I get so excited when clients that have brought on thoughtboters recognize that we are there temporarily, that we bring an outsider perspective. And they will set up an essentially recurring meeting, maybe weekly, maybe monthly, to say, "Hey, give us feedback. Let us know: what are you seeing? What do you think about the team? What do you think about our processes? What would you like to change?"
And I don't mean just in a retro setting that you're having with the team, but it may be meeting with leadership of that company to give them that feedback directly. And that's awesome. It's rare because, I mean, that takes confidence on their part to be able to say, "Hey, give us all of your feedback, constructive, positive, whatever it may be." But I feel like they get so much value out of doing that where they really get to leverage the fact that they have brought in these external members. And they get to hear from them as to how things are going and insights that they may be missing or not hearing from their people otherwise.
EDWARD: Agreed.
STEPH: Circling back to the manager IC path for a moment, I have a question for you because I often find myself asking this question to me or sometimes other people asking this question. But how do I know which path to follow? How should I explore do I want to be a manager? Do I want to continue and invest in my individual contributor skills and really lean into that path? Have you found any resources that have really helped you or ways that you coach others through that scenario?
EDWARD: I probably don't have a very interesting answer just because I'm going to mostly repeat what I think I said. But I think it's still so relevant and valid, which is, do you find yourself doing some of the work that a manager does? And it doesn't necessarily have to be the thing that I did, which was reaching out to people and checking up on them and seeing how they're doing. It could be that you really, really like running big team meetings or something like that. You just get a kick out of doing that kind of work. Or maybe you really enjoy working less on yourself and more on the group around you. That could also point to more of a technical leader. It doesn't have to be a person leader.
So I think I would look for where you find yourself wanting to effect change and figuring out if that fits into a manager role or not. And I've had people tell me they definitely do not want to be a manager, and they know that for sure and people that are on the fence. And I think that's another useful thing is to ask your manager what they do as the job and see if that's interesting. See if any of those things spark joy for you, as it were.
STEPH: I love the approach of just flat out asking your manager or someone that you see where perhaps you would like their role and saying, "Hey, what's your day like? What do you do? And can I be part of more of your day just to see if I would be interested in this type of work? Essentially, can I shadow some of the meetings that you're in?" I really like that idea.
And I think in the past, I would have been more hesitant about this approach. And it certainly depends on your company's culture. But there's a part of me that's like, just try it out. Like, if someone is encouraging you to go for a management role or to go for maybe it's a stronger individual contributor role, maybe it's being a principal engineer or something else, but if there's someone that's already there encouraging you or if it's just yourself and you are your own cheerleader, then go for it. Try it out. See if you like it. Take some notes. See if what you thought the job was going to be like actually matches reality. Because then, at the end of the day, you can always decide to change your path.
And if you are at a company that supports that type of experimentation, then you can step back to your current role if you decide that you don't like it. Or you might find that there's a really nice mix in there. But I feel like, with time, I'm getting a bit more bold with strategies in terms of just trying things out, even when it comes for technical challenges as well. Like if there's something that you're really nervous about or there's some big technical problem or something that the team is working on, and you're really skittish and nervous about it, just go ahead and say, "I'll do it, or I'd love to work with somebody on it," and then try it out and take some notes, see how it goes.
EDWARD: You could be really sneaky too. You can say to a colleague, "Hey. You want to get lunch?" And like you turn that into a secret one-on-one. Or you offer to run the retro board during retro, or you step up for doing a bunch of pull requests that week or something like that. You can try these little test things without even having to let somebody know or committing to anything publicly or even privately. Just really internally to yourself, you can try to take some of those steps.
STEPH: I like the sneaky success ladder. People won't talk about that one as much. [laughs] That's how I definitely found out that I didn't want to do sales. There was someone that I was talking to that was interested in working with thoughtbot, and Josh Clayton was very supportive of like, "Do you want to come along and be part of the conversation?" I was like, "Yeah, sure." And so I went along, and it was fun. But I definitely walked away like, yep, I don't want to be part of sales. I really like everything else minus this part. [laughs]
EDWARD: Yeah, it's good to know. It's good to know.
STEPH: Circling back just a bit to something you said earlier, you had mentioned that as you were becoming a team lead, you realized that helping others be successful at their job was really then what led to you feeling successful as well and that you could be a force multiplier. And you'd mentioned that a lot of that work comes down to bridging knowledge gaps.
And I'm really curious because this is something that we're always working on at thoughtbot. We are looking to identify what skills people would really like to learn. How can we help people learn those skills? And I'd love to know more. How do you go about this? How are you helping people bridge those knowledge gaps?
EDWARD: Yeah, so that is a doozy of a question. I have a couple of different answers. First is something I talked about before, building trust. And there's a bunch of different ways to do that. And I see trust as the foundation of almost everything in consulting. If you don't have that trust, it's really hard to deliver feedback like we talked about. It's hard to bridge that knowledge gap. Because effectively, nobody knows who you are, and what you're doing, what's going on, why you are coming to talk to them. It's really strange. And we can come back to how to build trust.
But once you've built that trust, I approach bridging that knowledge gap in a couple of different ways. One is asking questions instead of giving answers. The goal behind this is I want them to think about their goals. And that will often help lead them to some answer to bridge that gap that we have. I have some idea. They have another idea. If I can ask the right open-ended question, they will walk themselves across and get to where I want. Now, that doesn't always work.
Another strategy I've found is outlining a bunch of different possible solutions and their pros and cons. That has done two things. One, it helps them understand where I'm coming from, what my goals are in relation to what they're trying to do. And another one is that actually tends to gain a lot of trust. In the meantime, you're showing your expertise. You're showing that you're really considering all their problems.
Because almost every solution has trade-offs, there's very rarely a silver bullet. And so it's really helpful to say, "Well, here's the pros, here's the cons. Here's where I think you should go, but you know your business better than I do. And I've outlined all the things here. So whichever way you want to go forward on this, let's do that. And let me help you get there."
Joël and I, a colleague that we both cherish dearly, we did that on a project recently, and it was really, really successful. We put a lot of work in and helped them get to a really difficult architecture decision. And it could have gone one of, I think, four different ways. And we were sort of vying for one. They were vying for another. And we found a couple more in the middle, and I believe we went more towards the middle. And we were both pretty happy with how that turned out.
STEPH: I really, really like how that approach gives someone so much autonomy, and they're part of that decision. So you're not just saying, "Hey, you need to do this," and then just following through with it. But instead, it's saying, "I think I've heard everything. I think I understand the different problems that we're facing. Here are my suggestions, but you still have more context. What do you think, or which option would you like to pursue? I really like that option."
EDWARD: Yeah, because you're always walking this line as a consultant of, like, they did bring you in for your skills and expertise, in theory. But you really want to level them up so that they can make the right choices because that ultimately is...like, their success is your success as a consultant. That's the job in a lot of ways.
And so yeah, giving them the tools they need to make the right decision is so often the job. And I think that can get lost in the shuffle of, oh no, we have to meet these sprint goals. Or I got to get this ticket done or this bug fixed or something. And stepping back to get them to a better place is another goal that you can get to down the line. It's not to say shipping tickets is bad [laughs] or getting the sprint goals is bad. It's just another facet. Have you had any aha moments in consulting?
STEPH: Oh my gosh, I have had so many aha moments. I think most of them, for good or for worse, are here on The Bike Shed, or at least they've been shared here on The Bike Shed. [laughs]
EDWARD: Yeah, you should write a book of them all.
STEPH: Could we just grab the...I'm lazy. Can we grab the transcripts? We'll just turn that into a book.
EDWARD: [laughs] Yeah, just put it all together, call it The Bike Shed Diaries.
STEPH: Yeah. Oh, I like it. Okay, all right, that'll be next week's task. We'll publish The Bike Shed Diaries. [laughter] Specifically, in terms of aha moments for helping someone bridge knowledge gaps, or even for myself, I will often focus on: what skills do you need today to make your job easier? What challenges are you facing? And also, what skills would you like to have six months from now? So that way, you are meeting the needs and the requirements that you really need today to fulfill your job.
But then also six months out, we're still looking towards the future. Maybe that's also more job requirements related, or maybe it's just for personal growth, or the areas that you're really excited about. You really want to contribute to an Elixir open-source project, or something more specific that contributes to your fulfillment.
So when it comes to knowledge gaps, those are often the questions that I'm asking are, what do you need this week to make your job easier and to make your life easier? And then where would you like to be in terms of what skills would you like to have six months from now or what concepts? It may even be too lofty to say what skills because that could be huge to say that I want a whole new skill to be able to work in a language. So maybe it's something that's more specific of like, I'd really like to understand forms a bit better six months from now, or I'd really like to feel a little more confident with SQL, or maybe you'd like to take a look at Arel, things like that.
And then set those targets and then check in to say, "How's it going? How do you plan to learn these skills? Would you like help learning these skills? What are some resources?" Because I am not always the person that can help someone acquire that knowledge. So in that role, I'm often a facilitator where I will say, "Cool, you want this. You're interested in this particular skill. I don't know that skill. But I do know someone else who's really good at this. So let's get you all connected, and then you can work together on this."
EDWARD: And to dovetail a little bit with that manager individual contributor piece we were talking about before, that's another piece we didn't really talk about, which sounds like sponsoring. It's not just you doing the thing for your report or even coaching them necessarily. It's how can I get my report into a situation where they can exercise that skill or connect them with somebody who can help them with that thing? I'm still working on that one, honestly. That's a really, really difficult one. That's not something that comes naturally.
STEPH: When you say that's the part that's still challenging for you, is it the connecting of one person to someone else to learn a skill? I'm curious to hear more about which part of that is challenging for you.
EDWARD: I think I don't always think of sponsorship as a tool that I can lean on. It just doesn't come to mind as naturally. I think the very natural thing to do is mentor first, which is like, here's what you should do. It's kind of giving somebody a fish. Coaching then is more like teaching them how to fish. And then I don't know if we're going to extend this analogy farther. Sponsoring is like you're going to open up your own fishing teaching school or something. [laughs] And that just doesn't always occur to me.
I don't necessarily think like, oh yeah, like my friend over here could totally teach you about this technical skill that you're trying to learn or set you up to speak at a conference or something like that. It's a much different level of being a manager that I'm just not used to yet. I'm getting better at it. But it doesn't come naturally.
STEPH: Yeah, that's a very powerful form of managing someone as well because then you are helping that person go beyond their current bubble of who is their manager in their team and then helping them shine in other circles. And that's incredible and also something that I am always working on getting better at.
EDWARD: Let's get better at it together.
STEPH: We can do it. Also, when you mentioned opening a fishing school, I definitely pictured fish in a school in front of a chalkboard and someone's writing on that board and little fish in their school seats learning.
EDWARD: [laughs] A little Finding Nemo action.
STEPH: You got it. [laughs] You know your fishing school. You got to learn to stay away from those hooks.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: So pivoting just a bit on a slightly more technical note, you've been working on a side project called scribe.rip R-I-P. And I've heard a bit about it, but I would love to hear more. Could you tell me more about that project that you're working on?
EDWARD: Yeah, sure. So Scribe is what I would call an alternative frontend. And specifically, it is an alternative frontend for medium.com. The goal of the project is to give people a tool to read Medium articles, not on medium.com, which might sound like a strange goal. [laughs] I'm happy to go into a little bit of the why there. But that is the tool. And yeah, the domain is scribe.rip, mostly because that was a cheap domain. [laughs] So I got it and put my project there.
STEPH: I like that phrasing that you're using, alternative frontends because I think when you had first mentioned that, when I'd heard that in other conversations, I was like, oh, what is that? And I didn't know what it meant. But now, when you put it into some context, that makes all sense. I am intrigued. Why would someone be interested in using an alternative frontend versus, say like, there's an article on Medium; I'll just read it there. What might inspire me to want to use Scribe instead?
EDWARD: Definitely. There are a bunch of different reasons. Alternative frontends cover pretty broad ground. But I'd say the most common reason someone might want to use one is privacy, if they're worried that the main service, whatever that might be, let's say Medium in this case, is doing something with their user data that they'd rather it not do. Another is potentially a better experience on that service. If you don't like the way Medium's articles look, you might want to see them in a different way. It can also be a way to vote with your actions, saying that this is the kind of web that I want to see, if you don't like in general what a platform is doing.
And if you think a platform is potentially even harmful, it can be a way to say I don't want to support that platform, but sometimes I find myself needing to interact with it in some way. The alternative frontends can be a tool for that. On the more cynical angle, there's also: you don't want to see ads. And sometimes these are ad-supported platforms. Alternative frontends can get rid of those ads. And so that's another reason too. I'm more conflicted about that one. We can dive into that.
But those are the most common reasons I've seen that people want to use alternative frontends. And to be clear, Scribe is not the only alternative frontend out there. There are frontends for YouTube, for Twitter, for Instagram, for Reddit. There's a huge list of a bunch of them, but those are some popular ones.
STEPH: Oh, that's really cool. I've never used any of those before. We'll be sure to include some links in the show notes so people can check those out. And you listed some really interesting reasons for why folks might want to use an alternative frontend. I'm curious, though, to make this possible, does the service hosting that content have an open API from which you can then pull that content? How is an alternative frontend possible?
EDWARD: It is almost always possible through APIs in some form or another, though the APIs aren't necessarily open. One interesting side effect of many JavaScript-rendered apps is that they often talk to some API in the background. And that can often be used to get the content in a more computer-friendly way. And so, with Medium in particular, they don't really have an open API. So I ended up figuring out the API they use in the background to fetch articles and was able to get the content and display it in a different way.
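For illustration, the general pattern Edward describes might look something like this minimal Ruby sketch, which fetches an article from a hypothetical background JSON endpoint; the URL and field names here are assumptions for the sake of the example, not Medium's actual API:

require "json"
require "net/http"
require "uri"

# Sketch: many JavaScript-rendered sites load article content from a
# background JSON API; an alternative frontend can call that same kind of
# endpoint and render the result with its own templates.
def fetch_article(slug)
  uri = URI("https://example.com/api/articles/#{slug}") # hypothetical endpoint
  response = Net::HTTP.get_response(uri)
  raise "Request failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

  data = JSON.parse(response.body)
  # Field names are assumed for illustration only.
  { title: data["title"], author: data["author"], body: data["content"] }
end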
STEPH: So everything you're describing sounds really interesting. I feel like I do have to ask the question, is it okay that these alternative frontends are taking content and are essentially rendering content but then not using the original service in which the content was published? How do you feel about that aspect?
EDWARD: Yeah, it's a really interesting question. There's a bit of a moral argument here, and I think everybody has to make that call for themselves. I think every platform if it gets large enough, is going to have people that don't want it to exist for some reason. I think, in some ways, providing alternative frontends is a bit of a release valve for that platform. Not to say that the alternative frontend explicitly helps that platform, but I imagine it gives people literally an alternative to then use instead and can make a peaceful, neutral ground in a way. So instead of being forced to use only the official platform, you can now use it at least in a limited fashion outside of that, which may alleviate whatever concerns you have and therefore keep everybody happy.
And I think honestly, in the long run and in practice, most platforms will not particularly notice the impact of these alternative frontends. Overall, we're talking very, very small potatoes. YouTube is not going after Invidious, the alternative frontend for YouTube, because it's probably a drop in the bucket. Nitter is not getting cease and desist from Twitter. Instagram is not sending a cease and desist to Bibliogram. These are some of these alternative frontends. And I think that's just because it's okay. They don't mind. It's so small. And it's giving people what they want in a way that is not harmful enough for it to really matter in the long run for them.
STEPH: Interesting. Because yeah, as you'd mentioned earlier, I think most people are going to continue to use that main service because that's what they see advertised. And it's more well-known, and it's frankly easier to go to. But then there are folks who do want a little bit more control over their experience while still being able to access someone's content. So it is interesting.
You still want to ensure that the person who created that content always gets recognition and ownership of the content that they have. And in this case, that very much still applies. If I wrote an article on, say, Medium, but then I'm using Scribe to be able to read that content, it's still known who wrote the article. But this way, you are perhaps opting out of something else that service is doing, maybe if they have some type of tracking or something that you're not comfortable with. But you still want to be able to appreciate that person's content, even though they're perhaps only able to publish on Medium for right now. Or they're still looking for more ways to publish their content for folks who would like alternative ways to consume their information. Yeah, it's an interesting spot.
EDWARD: Right. And one way that I think Scribe can provide a slightly better experience is by trying to highlight the author more than the platform. The only time it says Scribe on the website is on the homepage. If you go into an actual article, I don't put branding or anything like that. Because I really want the work to speak for the author, not for the platform, and that's really important to me personally. And that might not be important to you, and that's okay.
Maybe you use Scribe because it supports dark mode or something like that, and that's totally fine too. I don't mind at all. There are many things an alternative frontend can provide for people that the official platform doesn't. In some ways, it's augmenting their features, but in some ways, it's just giving people a bit more choice. And I think that's important.
STEPH: I have found since you'd mentioned the side project, that I've started using it more to read content. And I have found it helpful because it really silences all the noise because a lot of services want you to see ads, and they do want you to click on more articles that are related to the thing that you're reading. And so, I do appreciate the simplicity that it brings to the content. So then I can really just focus on that one article that someone has written. Overall, it seems like a really neat project.
EDWARD: Yeah, thanks. I'm glad you enjoyed it.
STEPH: Pivoting just a bit, I would love to go on a slight adventure and answer a listener question with you. What do you think?
EDWARD: Yeah, let's do it.
STEPH: All right. So this listener question focuses on empathy in your work, and this person writes in, "I'm curious how you all think about and notice empathy from yourselves and others around you. Empathy is so helpful and critical for making and maintaining healthy, productive relationships. I've noticed that the way you frame your client engagements, empathy sounds to be at the heart of them. For myself, I've noticed I'm better at it in certain contexts and certain times and with specific personalities, more so than others. More concretely, how do you stay empathetic with your clients and with cross-functional teams like product or design or even yourself? Can you teach or increase your empathy? And if so, what have you found successful in these situations?"
So, Edward, this seems really on topic for some of the things that we were discussing earlier. So I'm going to hand it over to you first and get some of your thoughts.
EDWARD: This is a really great question. There's a lot to unpack. And one question they asked was, can you teach or increase your empathy, and if so, what have you found successful in what situations? I have found that being vulnerable both publicly and being empathetic publicly is a really useful tool.
A lot of teams don't communicate very publicly; it's a lot of stuff in private messages. Being vulnerable publicly in a big team channel can really open the door to letting other people be vulnerable and see what other people are doing, understand what people are feeling. And that's really at the heart of empathy is understanding someone else's point of view.
I've also found that starting small, like, just do it with your close co-worker. Maybe try to effect just change with them. And then once you've gotten them on board, broaden it to two other people, and then two more people, and then two more people, because it's really hard to take that leap of faith and be vulnerable by yourself. So I totally get that.
And also trying to take this on really early in someone's career or someone's tenure at a job. Offer to help new people to your team. Work with them, so they just start off with a very empathetic experience. And that can grow into a more empathetic team as a whole. Encourage team members to update documentation on their first day because they're learning so much in those first few days. Once they've learned it, the only reason they want to document it is because they have empathy for that next person. And so, just like setting that baseline and that boundary, I think is super helpful. What do you think?
STEPH: Yeah, I think those are some great examples. I really love that one way to acquire more empathy is to go on a journey with someone else. So if you have someone new that's joining the team, be their onboarding buddy. Go through that journey with them so you can understand what they're going through, what challenges they are facing. And that will boost the knowledge that you have and will likely also boost then the empathy that you have for people that are new to the team or for future onboarding buddies if you realize that there are some processes that really need to be smoothed out.
I also think it's worth highlighting that I don't think empathy is a single skill. I think it's a number of things. It can be the ability to feel someone else's emotions, so you can understand what someone else is feeling at that moment. It could be reasoning about another person's perspective, or it could be just, frankly, wanting to help. So I think there are a number of ways that we can demonstrate empathy to someone else. And it's going to depend on the situation as to which one of those skills is going to be helpful.
For how you stay empathetic with clients, that one is a really interesting one just because the way we work with clients; we do get to go on that journey with them. We are with them in making decisions around priority and technical decisions and what pain points they are feeling. So I think going, as you described earlier, going on that journey with someone is what helps us stay empathetic with our clients. And I think that's true for cross-functional teams.
So if you are working with someone that's maybe on customer support or on the design team, it could be grabbing lunch with them and saying, "Hey, what's your day like? What challenges are you facing?" Maybe your company has rotations where you're actually part of the customer service team for a day. So you get to respond to tickets and have more of an understanding.
I'm realizing there's a theme here. I feel like a lot of it comes down to stepping into someone else's shoes and seeing the world from their perspective and not just seeing it but experiencing the world from their perspective.
EDWARD: Yeah. And another way to do that...because that can also take a lot of time. It's a hard ask potentially to say, "I'm going to go be a customer service rep for a day," if your job is also, I'm going to be a programmer and ship features or fix bugs. That's hard to do. And I think there are ways to do that, to experience what someone else is experiencing by trying to take on not necessarily the role of the other person but just trying to support the other person in their role.
So, for example, we see teams become really siloed where the product is solely responsible for writing tickets, development is solely responsible for understanding what makes the code work or fixing a bug, and design is only responsible for user interactions. I found it really, really helpful to try to approach design and say, "What's the goal here with this user interaction?" I don't know. I'm not a designer.
And so, how can I ask them and again bridge my own knowledge gap? Because that can really help you get to that point and help them understand maybe what you're going for and say, "I wasn't going to implement it like that because I thought X, Y, and Z." And they go, "Oh, I see what you're saying." And now you're breaking down those barriers…or maybe when you're working with product, they're like, "I see what you're trying to do here. But in my experience, I've seen websites like this. How do you feel about that?"
And it's not to say that you're just trying to steamroll over them. It's that you're trying to share your experience and get on the same page and trying to get them on your page so that you're all making the decision together, not just handing it back and forth across the wall.
STEPH: Yeah, and that was really well said where I think the more that you do collaborate with others and the more that you make decisions with others, the more context you're going to have for why someone else is making a decision, what challenges they're facing. And so again, it comes down to having more information about what that person is going through to then help you be able to be empathetic because I don't think this is a skill you can just turn on. If you don't know anything about somebody, you don't know anything about what they're going through. Being empathetic is going to be incredibly hard.
And in this question, they mentioned that they're better at it in some contexts, at certain times with certain personalities. And I think that makes sense because anyone that's more like you, I think you're going to find it easier to be more empathetic. And anyone that has had similar situations, ones that you can relate to, you're going to naturally be more empathetic to.
Also, timing is important. Maybe it's the end of the day, and you have already used up your empathy bucket, and you have nothing left to give. And that's just something to be aware of. You may have reached that threshold. And with practice, maybe that bucket will get bigger, and you will have more empathy to give throughout the day. But just be aware when you've also hit that threshold, and maybe you don't have any more to give in that moment. But I do think it's very much a skill that you build with a lot of practice.
EDWARD: Yeah, it's absolutely a muscle. You're totally right. You are trying to do it, and the first time you do, it will be very hard. You will be very drained. And you need to recognize that that's okay. You can step away and come back the next day, and it will get a little better. But that's a wonderful point.
STEPH: There is a really nice example that you have captured in a thoughtbot blog post, which we'll be sure to link to in the show notes, that highlights how difficult it can be to communicate tone of voice and how impactful that can be for someone who is reading a message you've sent without understanding that tone.
EDWARD: Yeah, that post was very focused on trying to bring in emotion to a more or less emotionless conversation, which is often text. It's very hard to understand when someone is being sarcastic or angry or bubbly or whatever. Just even silly things like adding emojis can really help in that process of bringing in more emotion and getting that tone across.
And I'd say finally to this person who asked the question that the fact that you're thinking about it is already an empathetic thing. Just the fact that you want to get better at this shows that you're already empathetic, and that's really great to hear.
STEPH: Yeah, I think that's a really great observation, and I think that's a perfect note for us to end on. So thank you so much to the person that shared this question with us. It is a very interesting question. And I applaud you for being so thoughtful about how to be empathetic with everyone around you.
Edward, thank you again for being a guest on the show. For those that are interested in following more of your work or checking out your alternative frontend, where can they find out more about the life of Edward Loveall?
EDWARD: You can find the alternative frontend called Scribe; it’s scribe.rip. You can find me at edwardloveall.com. And I have links to various social media or email if you want to email me or whatever. And yeah, it's been a pleasure. Thanks for having me, Steph.
STEPH: Thanks so much. On that note, shall we wrap up?
EDWARD: Let's wrap up. Ta-ta, Stephanie.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph gives an update about RSpec focus and how she often forgets to remove the focus feature from tests. She figured out two solutions: one using Rubocop, and another, suggested by a Twitter user, using a GitHub Gist. She also suggests that if you're one of those people who misses being in an office environment, you check out soundofcolleagues.com for its selection of ambient office noise.
Chris has been struggling to actually do any coding and is adjusting to doing more product management and shares some strategies that have been helping him.
They answer a listener question about dealing with large pull requests and how it's hard to recognize a good seam to break them up when you are in the thick of one.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: One day, I'll grow up. It's fine. I look forward to that day. But today, I don't think it's that day.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. Well, in some fun news, Utah started his professional training as of this morning, which I'm very excited about. Because we've been working with him to work on being good with walking on a leash, FYI, he's not, [laughs] and also being good about not jumping on people. And essentially, being a really good roommate. And he started training today, and we are using an e-collar, which initially I was really hesitant about because I don't want it to hurt him in any way. But now that I have felt the e-collar myself and we've had a first day with it, it's going super well. I'm very excited for where this is headed.
CHRIS: That's very exciting. When does he start paying rent?
STEPH: Ooh. I'll have to check with him, or I guess I have set those boundaries. That's my job.
CHRIS: I just figured that's a core part of being a good roommate. But maybe we've got baby steps or doggy steps to get there. But that's exciting. I'm glad [laughs] that the first day of training is going well.
STEPH: Yeah, it's going great. And the place that we're going to the trainer they have horses, and mules, and goats. And so now I have a very cute video of him trying to play with a goat, and the goat was having none of it. But it's still all very cute.
In tech-related news, I have an update for when you and I were recently chatting about the RSpec focus and how I mentioned that I often forget to remove the focus feature from tests. And so then that goes up to a PR, and I have to rely on a kind human to let me know, and then I remove it. Or worst-case scenario, it gets merged into the main branch. And for anyone that's not on Twitter, I just wanted to share an update because I also shared something there.
But the resolution for what I was looking for: there's already a rule written into Rubocop, but it's specifically written in the Rubocop RSpec codebase. And with that rule, you can essentially just say, hey, let me know anytime a test is using the focus metadata, and then fail.
And then if you don't want to actually include all of Rubocop's rules in your project, because Rubocop is pretty opinionated, you can still add it along with Rubocop RSpec and say, hey, all other rules are disabled by default, but then enable that specific rule. So then, that way, you will catch all of your focused tests.
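As a rough sketch of what that configuration can look like, assuming rubocop and rubocop-rspec are already in the project's Gemfile, a .rubocop.yml along these lines disables everything by default and enables only the focus check:

# .rubocop.yml (sketch)
require:
  - rubocop-rspec

AllCops:
  DisabledByDefault: true

RSpec/Focus:
  Enabled: true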
There's also another approach that someone on Twitter, Marz Drel, shared with us recently. Marz shared a really nice, simple GitHub Gist that shows you can check an environment variable to say, hey, if we're in CI mode, then add a before hook. And then that before hook will look for any examples that are using that focus metadata, and it's going to raise. And if we're not in CI mode, don't do anything, don't raise, and carry on. And that's just a really nice, simple addition if someone doesn't want to pull Rubocop into their project.
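A minimal sketch of that Gist-style approach in spec/spec_helper.rb might look like the following; the environment variable check and error message are assumptions to adapt to your own CI setup:

# spec/spec_helper.rb (sketch)
RSpec.configure do |config|
  if ENV["CI"]
    # Fail loudly if a focused example sneaks into CI.
    config.before(:example, :focus) do
      raise "Focused spec found; remove `focus` before merging"
    end
  end
end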
CHRIS: Both of those definitely sound like great options. I don't think we have Rubocop on the current project that I'm working on. But I think the RSpec focus thing, the metadata one, seems like it'll work great. More generally, I just want to thank folks out there who listen to the show and then write back in like, "Hey, this is probably what you want."
There was a similar thread that someone shared around the RSpec::Retry stuff that I was talking about recently and the failure mode there and trying to get that into the Junit Reporter. And so they had some suggestions around that. Jason Rudolph on Twitter reached out, sharing just his initial exploration and thoughts on how it might be possible to extend the XML reports that are generated and capture a flaky test in that way. So that's really interesting.
And again, just really love that folks are listening to the things that we say and then even adding on to them and continuing the conversation. So thanks to everybody for sharing those things.
STEPH: Yeah, it's incredibly helpful. And then one other fun thing that I'd love to share, and I found this out from someone else at thoughtbot because they had shared it recently. But it's a neat website called soundofcolleagues.com. And I know you've got your laptop in front of you. So if you'll go visit it, it'll be neat to see as we're talking through it. For anyone else that wants to pull it up, too, we'll include a link in the show notes.
But it's a neat project that someone started where you can bump up the sounds that you would normally hear in an office. So maybe you want to bump up background noise of people or an open window. There's one specifically for printers and a coffee machine, and keyboards are on there as well. [laughs] I have discovered I am partial open window and partial rain, although rain is just always my go-to. I like the sound of rain for when I'm working.
CHRIS: Gentle rain is definitely nice white noise in general. I've seen this for coffee shops, but I haven't seen the particular one. Also, yes, I definitely know how to spell the word colleague on the first of three tries. Definitely didn't have to rely on Google for that one. But yeah, nice site there. I enjoy that.
STEPH: I tried the keyboard option that's on there because I was like, oh yeah, I'm totally going to be into this. This is going to be my jam. I don't think it is because I realized that I'm very biased. I like the sound of my own keyboard. So I had to shush the other one and just listen to the rain and the open window. But that's some of the fun things that are going on in my world today. What's new in your world?
CHRIS: I'm just now spending a moment with the keyboard sound. It's a very muted keyboard. I want a little more clackety.
STEPH: A little more clackety?
CHRIS: I was assuming it would be too much clackety, and that would be the problem. But it sounds more mushy. Maybe we can pipe in some of the sound here [laughs] at this point. Or we can link to these sounds, and everyone can dial up the keyboards to 100. But I, too, am partial to the sounds of my own keyboard.
But what's new in my world? This past week and I think probably even a little bit more of the prior week, I’ve been noticing that I've been struggling to actually do any coding, which has been interesting to observe. And again, trying to observe it, not necessarily judge it, although if that's not the thing that we want to be doing, then try and improve that. But mostly trying to observe what's going on, what is taking my time. A lot of it is product management type work. So I am spending a good amount of time trying to gather the different voices and understand what is the work to be done, and then shape that into the backlog and make sure that that's clear and ready for the team to pick up.
And then, thankfully, the other two developers that are working on the project are fantastically prolific. So they're often very quickly working through the work that has been set up in front of them. And so I'm trying to then be proactive and respond to the code. But there's almost a cycle to it where I'm just staying out in front of them, but they're catching up with everything that's going on. So it's something that I'm trying again to be intentional about, name, share some of that back up with the group. If there are things that I'm doing that I don't uniquely need to be doing, then let's share as much of that knowledge as possible.
But one thing that I will say is the product management, shaping the backlog work is exhausting. I am astonished by just how drained I am at the end of the day. And I'm like, I don't even really feel like I did anything. I didn't write any code, but I am just completely spent. And there really is something to when the work is clear, just doing the work, I can actually find energizing. And it's fun, and I can get in flow state. And sometimes, I'll be drained in a certain way.
But the work of taking a bunch of different slack threads, and communications, and meetings, and synthesizing that down, and then determining what the work needs to look like moving forward, and providing enough clarity but then not over constraining and not providing too much clarity. And there are so many micro-decisions that are being made in there. And I'm just spent at the end of the day, and I have so much...I've always had a lot of respect for product managers and folks that are existing in that interstitial space and trying to make sense of the noise, especially of a growing company, but all the more so this week as I've been feeling some of that myself.
STEPH: I totally agree. I have felt that having a strong product manager really makes or breaks a project for me where even though having technical leadership is really nice, I'd prefer someone that's really strong at the product knowledge and then helping direct where the product is headed. That is incredibly helpful. Like you mentioned, the work is exhausting.
There's someone that joined the thoughtbot team fairly recently, and I was chatting with them about what type of projects they would be interested in working on. And one of their responses was, "I'd love to work on a project with a strong product manager because I have been doing that a fair amount for recent years. And I would love to get back to just focusing on coding." And so I think they enjoyed some of the work, but they just recognize it's exhausting. And I'd really like to just get back to writing code for a while.
CHRIS: Yeah, I'm definitely in that space. And I think there's a ton of value to spending a little bit of time, like having any developer at some point in their career spend a little bit of time managing the backlog, and you will learn a bunch from that. But I'm also in the space of I would love to just turn on some music and code for a while. That sounds fun. There's a lot of work to be done right now. I'd love to just be in there doing the work. But sometimes, out of necessity, the defining of the work is the thing that's important.
And so, I think I've been correctly assessing the most important thing. And that that has consistently for a while now been the defining and responding to the work that's in process as opposed to doing it myself. But, man, I really hope I get to dive back into the code sometime and use my clackety keyboard to its fullest extent.
STEPH: Have you found any particular strategies that really help you with the product management work?
CHRIS: I will say that I think this is a competency. This is a skillset and a career path that...again, I've been at plenty of organizations that I don't think respected the role as much as it should be respected. But it's an incredibly hard role, with multidisciplinary communication at the core of it. And so the thing that I'll say is I don't think I'm great at it. So everything that follows, just to be clear, isn't me saying that I'm great at this, but I have been doing some of it. So here are some thoughts that I have.
I think a lot of it is in reaction to times where I felt like the work was clear. So I have a sense of what it looks like when I can go to the backlog, trust that it is in a roughly solid priority order, pick up a piece of work, and immediately go to work on it. And understand: what are the end-user implications of this piece of work? Where would I start on it, like, how technically? What's a rough approach that I would have? And getting that level of specificity just right, so it's not overconstrained, but it's not underconstrained. So having experienced that on the developer side, I try and then use that to shape some of the guidance that I'm putting into, say, the Trello tickets that I'm writing up here.
We recently introduced Trello epics, which is I want to say like an add-on. And that allows us just the tiniest bit of product management, like one level up. So instead of just having cards and a list that is like, here's the work to be done, we now have an epics list that is separate to it, and it links between a card and its associated epics. So it's like project and action within that project.
And just that little touch of structure there has been really, really useful to help look at, like, okay, what are the big pieces that we're trying to move? And then how do they break down into the smaller pieces? So a tiny, tiny bit of fanciness in our product management tool, but not Jira-like; I'm not going in that direction for as long as I can avoid it. But that little bit of structure helps.
And then thinking about what has been useful to me as I pick up tickets. And then, as always, trying to just always be cognizant of what is the user's experience here? What problem am I trying to solve for them? What is their experience going to be? How will they know how to work with this feature? And just always asking that and then framing the work to be done in the context of that.
STEPH: I like how you're adamant about a little bit of fanciness but not all the way to Jira-like. I also like how you highlighted end-users. All of that, I think, is awesome when developers are able to expand their role to experience all the other facets of building software.
CHRIS: Yeah, definitely. I think that whole list of all of the different facets of where our work interacts with different groups. The more empathy or, the more experience that you can have there, the better that you'll be able to understand how to communicate there, how to express things in terms, et cetera, et cetera. So a huge fan of all of those ideas. I am ready to just get back in the code for a few minutes, though. But for now, for as long as necessary, I'll do some of this work. But I am trying to find my way to other things.
In terms of actual feature work that we're working on, one of the things that we're doing right now is restructuring our onboarding. So when a user comes and signs up to the website and then subsequently has to fill out a handful of other forms, there's actually an external system that we've been working with that houses some of the core data of our application. And they have a hosted application form. So we can send the user over to them, and the user fills out the rest of the application on this other system's site. And then they get redirected back to us. And everything's got nice DNS entries for a particular subdomain and whatnot. So it looks roughly consistent. There's some branding. But it's still someone else's UI, essentially.
And we were feeling enough pain from that experience. We were like; you know what? It's time. We're going to bring this back in-house. We're going to do all the forms ourselves. We're going to do a nice progressive little progress bar. You can see all the steps as you're going through onboarding. We're just going to own that more because that's a core part of the experience that we're building here. So biting the bullet, deciding to do that.
But there's an interesting edge case that we run into, which is we are using Devise for authentication. Totally makes sense. We're in Rails context; there we go. It's the thing to use. But Devise exists in truly the Rails world. So like HTML ERB templates, the controllers have certain expectations as to what's going on. So thus far, we've just let that exist in that world and everything else we're building in Inertia and Svelte. But we're just now starting to feel enough of the pain, and that Devise exists in this other context. And for a while, we just kept saying, "You know what? It's not worth the effort to port it over. It's fine."
Because we're using Tailwind, we have a consistent design language that we can use across them. That said, the components are drifting a little bit. And it's like, oh, this one's got a rounded corner like this, and that one's got this color. And we don't have the disabled style. But it is nice that it's not completely distinct. But we have finally decided it is time. We need to port this thing over because we feel like the onboarding and authentication type flows; they’re actually a big part of the user experience or at least the first run user experience when someone's signing up to our site. So we want to own that a little bit more.
One of the things that I ran into as I was trying to introduce Mailcheck, which is a library that I've talked about, I think in a previous episode...but basically, you can have it observe a field, and if someone types in like, [email protected], it can ask, did you mean gmail.com? And then go from there. And I think there's more subtlety. They can maybe even look up MX records and things like that. But basically, it validates an email address heuristically and offers the nice, very friendly to a user, "Hey, did you mean this instead?" So not a full validation that says, "No, you cannot put in that email address," because maybe you have a weird one that sounds like Gmail but isn't. But that's a little bit trickier to implement both on the Devise side and then in any other place that we have an email input.
And so what we want to do is port over to Inertia and Svelte, and then everything's in our nice, happy context with all our components and all the other work that we're doing. And it really does just highlight how much I've come to enjoy working with Inertia and Svelte. They are fantastic technologies. And now I just want absolutely everything to be in them. So we're finally going to bite the bullet, and I think port those over a little bit after we get the current batch of work done. But soon, soon, that's the goal.
STEPH: I'm having a bit of déjà vu where I feel like there was a project that you were working on that was using Devise, and then removing Devise and replacing it with something else was a challenge. Does that ring a bell?
CHRIS: Yes, that is accurate. So I had a project that I worked on where we had both Devise and Clearance was actually what was going on. There were basically two different applications that existed; one was using Clearance, the later one used Devise. But then we folded those two applications back together. And by virtue of that, I tried to unify the authentication schemes, and it was like, nope, not going to happen. And then we didn't.
STEPH: And then we didn't. [laughs] I like that ending.
CHRIS: Well, sometimes you don't. [laughs]
STEPH: Yeah, I love that ending because it reflects reality. Sometimes that just happens. In fact, I'm going to segue for just a moment because you're reminding me that there's something I don't think I've shared with you yet. On my previous project, there was a particular feature. It was a big feature that someone had picked up and worked on.
And at one point, we were essentially playing hot potato with this feature because we hadn't gotten it to the point that it was merged. There was too much that was happening in that pull request, although then we ended up merging it. But then we found lots of bugs. And it was just one of those features that we couldn't really get across the finish line. There was always something else that was wrong with it or needed to be done or needed to be considered.
And we'd reach that point where Chad Pytel, who is on the project, was like, "We're either going to finish this, or we're going to throw it away." And I felt a little guilty saying this, and I was like, "I vote we throw it away. I have lots of concerns about this. We are essentially reimplementing another complex workflow. But now, we are implementing it pretty differently in another portion of the application. It's going to be hard to manage. The cost of adding this and maintaining this is a really high concern." And so he talked with the rest of the team and came back, and he's like, "Yep, we're going to throw it away." And so then he issued a PR, and we removed it.
And it was one of those moments of like; this isn't great because then we have invested hours into this, and now we are taking it away. But it also felt really good that that's always an option. And that was the better option because it was either we're going to continue sinking more time into this, or we can stop it now. And then we can move on to more important work.
CHRIS: Sunk costs and all that.
STEPH: Yeah. I feel like it's so rare when that really happens because then we just feel dedicated to like, well, we're going to make this valuable to somebody. We're going to keep this. And in this case, we just threw it away. It's very nice.
CHRIS: There's a similar anecdote that I remember. Actually, I think it's happened more than once. But very particularly, we were working on a system. And this was with our friend, Matt Sumner, a friend of the show, as well been on a few times. And Matt was working on the project. And we got to a point where we had two competing implementations of a given workflow, and we were opting to go with the new one.
But there were folks that were saying, "Let's keep the code around for the old one." And Matt was like, "Absolutely not. If we do that, we might go...no, this will be bad. Then we have to maintain that code. We need to burn the ships," as he said. And he actually named the pull request burn the ships where he just removed all the code. And I was like, I like your style, man. You made a decision here. We collectively made a decision. And then this is a classic Matt Sumner move. But he did the thing that we said we were going to do. And he just held that line. And I really appreciated it.
And it's a voice that I have in the back of my head often now, which is just like, no, burn the ships. If we need it, it'll be in Git history. We can recover it. But it's going to need to be handled in the interim. We don't want to have to support that code right now and for however long until we actually decide to remove it from the codebase. So let's get rid of it. And if we really need it, well, then we'll resurrect it, but for now, burn the ships. And I like that.
STEPH: I like that too. I think it's one of those areas where it takes experience to feel that pain too. If you're pretty new to writing code, you're going to think, well, we can keep it around. There's no harm. And so it often has to be that sage, that person who's been around long enough and felt some pain from making that decision in prior centuries or years. And he's like, "No, we're not going to do this." The WE collective of developers who have experienced the pain from this understand that that's not a good choice. And so we're going to burn the ships instead. But it is one of those that if you're newer, you won't think that way. And I think that's totally reasonable that you wouldn't think that immediately.
CHRIS: I think that tacit knowledge that oh, I've gone through this before, and I've experienced the pain, and now let me tell you about that. And let me try and share that with you because there's always the cost-benefit trade-off. Because if that code stays in the codebase, then we know it works because we've kept it around for that whole time. And so there's a nicety to that, but there's a cost, that maintenance cost. And being able to express that well and being able to say, "I've been here, and let me tell you a tale," but do it in a way that doesn't sound overly condescending or explainy or things like that. I think that's a very subtle skill and a very important one, and frankly, really hard one to get right.
I'm not sure I always hit the mark on that where I'm just like, "No, can't do it. It's bad." I think it's very easy to end up in a space where you're just like, "No, it's bad." And they're like, "But why?" And you're like, "Because it's bad. Trust me." It's like, well, I feel like you do need to be able to explain the stories, the experiences that you've had in the past, the anecdotes that you've heard, the blog posts that you've read that have really informed your thinking. But I think that is a big part of what it means to continue on in this profession and be able to do the work and make those subtle trade-offs, and the it depends because, at the end of the day, it all depends.
STEPH: Or you just issue a pull request and title it burn the ships. [laughs]
CHRIS: Burn the ships. Indeed, that is, in fact an option. And actually, while we're on the topic of pull requests, this might be a perfect segue into a listener question that we have.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: As always, thanks to everyone who sends in listener questions. We so appreciate getting them. They help direct the conversation and give us something to chat about. So this question comes in from Bryan Robles. And Bryan writes in about large pull requests. And Bryan writes in with, "My toxic trait is large pull requests. Any tips on when you get into a place where you're fixing or refactoring something, and it ends up cascading to many more changes than you want it to? I sometimes can go back and break it up. But it's hard to recognize a good seam when you're in the thick of it." So, Steph, what do you think? Large pull requests and finding yourself in them after [laughs] certain amounts of time.
STEPH: Yeah, speaking of that knowledge that often comes from experience, this is something that I'm certainly always striving to get better at. I think it does take practice. There are some things that I do that I can share. And I categorize them really into a before, and I guess midway. So there's the before I set sail and set off to deeper waters list that I will think through as I'm starting a new task, and then there's the I'm lost at sea. And then, I need to figure out how I'm going to organize this change.
So in the first category, when I'm first starting off a task, I consider what sort of changes need to be made, and are there any obvious roadblocks? So an obvious roadblock may be updating a model that has a has-one relationship that I need to change to a has-many relationship. Or perhaps there's a part of the application that is untested. And before I make any changes, I need to document that existing behavior. And that really falls neatly within Kent Beck's advice where he said, "First make the change easy (warning: this might be hard) and then make the easy change."
So I try to think upfront what are some of the small, incremental changes that I can make first that will then make the final change easy? And then I separate that mentally into PRs. Or I may separate it into tickets, whatever is going to help me stay organized and communicate how I'm breaking up that work.
And then the other thing that I'll do is I'll consider what's my MVP? So what's my minimum viable pull request? What set of changes include just enough changes to be helpful to users or to other developers? Which, by the way, is also a helpful mindset to have when you're breaking down work into tickets. So, as an example, let's say that I need to fix some bad data that's causing a site to error. So my first step could be to write a task to fix the bad data. And then, step two, prevent bad data from being created. And then probably step three, I need to rerun the task to fix data that was created during step two. But I can think through each of those steps and separate them into different pull requests.
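To make that concrete, here is a rough Rails-flavored sketch of how the first two steps might each ship as their own small pull request; the model, task, and attribute names are made up for illustration:

# PR 1 (sketch): a one-off task that repairs the existing bad data
# lib/tasks/data.rake
namespace :data do
  desc "Downcase user emails that were stored with mixed case"
  task downcase_emails: :environment do
    User.where.not("email = LOWER(email)").find_each do |user|
      user.update!(email: user.email.downcase)
    end
  end
end

# PR 2 (sketch): prevent new bad data from being created
# app/models/user.rb
class User < ApplicationRecord
  before_validation :normalize_email

  private

  def normalize_email
    self.email = email.downcase if email.present?
  end
end

Step three would then be rerunning the task from the first pull request to catch any records created between the two deploys.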
And then there may also be the question of well, how small is too small? Like you're saying, what's a minimum viable pull request? How do I know if I am not delivering value? And that one gets a little trickier and vague. But ultimately, I will think, does it pass CI? Is this change deployable? And then I do have to define what value I'm delivering. And I think that's a common area that folks struggle because we'll think of delivering value as delivering a whole new feature or adding complete test coverage for an untested interface.
But delivering value doesn't have to represent that end goal. It may be that you added one test for an untested interface. And that's still delivering really great value to your team, same for delivering a feature to a user. You may be able to speak with that wonderful product manager and find what's the smallest bit of value that you can deliver instead of the whole feature set? I think the smallest PR I can think of that I've issued is either fixing a typo or removing a focus metadata from an RSpec test. So that's my starting point. That's the before I set sail. Those are some of the things I think about. I have more for the I'm lost at sea. But what are your thoughts?
CHRIS: First, that was a great summary that you gave. So I totally agree with everything that you just said. I think part of the question I would have...So Bryan wrote this in and described this as his toxic trait. So he's identifying this as something that seemingly consistently plagues him.
So I would ask, is there a way that you can introduce something? Like, are there natural breaks in your day? And can you ask the question at those breaks? Like, hey, I've been working on a thing for a little while. Is there a version that I could...like, could I close off a body of work at this moment? When you break for lunch, if you go grab coffee in the morning, when you're leaving at the end of the day, use those natural breakpoints.
I'm not sure exactly what you mean when you say large pull requests. But if those are spanning multiple days, in my mind, if anything starts to span more than a day, I will start to ask that question of myself. And that's a reflex that I built up over time by feeling the pain of large pull requests, putting them up, and feeling apologetic. And then having my colleagues gently, professionally, kindly ask me to break it down into smaller pieces. And me saying, "I really don't want to. All right, fine, fine, fine, I'll do it." And then I do it.
And it's one of those things that I never want to do in the first place, but I'm always happy to have done after the fact. But it is work. And so, if I can get better at pulling that thinking and pulling that question earlier in the process, that I think is really useful. Similarly, I will try to, again, as friendly as I can; if I notice someone mentioning the same body of work at stand up for a few days, I might gently ask, "Hey, is there a way that we can find a shippable version of a portion of that of a subset? Can we put it up behind a feature flag and get something out there just to try and keep the PR small, et cetera?" And so gently nudge in that direction.
And then I think the other side of that is being very okay with one character PRs. Like, that's it. We changed one character. It turns out we need to pluralize that word, or we need one-line changes are great. That's fine. And more pull requests, in my mind, are better than fewer, larger pull requests. And so really embracing that and having that be part of the core conversation and demonstrating that throughout the team is a way to share this idea. So that's perhaps more in the process or person point of view on this as opposed to the technical, but that's part of the consideration that I would have. I am interested, and I'll bounce back to, Steph, what you were saying of now that you're out at sea, what do you do?
STEPH: So I need to react positively to some of the things that you just said because you made me think of two things. One of them is I've never had someone say, "Hey, Steph, that PR is too small. Could you add some more changes to it? Could you do some more work?" I have had people say, "Hey, that PR was hard to review." But even then, sometimes getting that feedback from folks is hard because nobody really wants to say, "I had a hard time reviewing your PR." That's something that, over time, you may become really comfortable saying to someone.
But I think initially, people don't want to say, "Hey, that was hard to review," or "There were a lot of changes in that. Would you break it down?" Because that's a lot of complex emotions and discussion to have there. But yeah, I just figured I'd share that I have never had someone complain that a PR is too small, and I've issued a single character change.
And then I love, love how much you asked the question of what's the problem we're trying to solve? And so there's this ambiguous idea of a large PR. But what does that mean? What are the pain points? What are we actually looking to change about our behavior? And then how is that going to impact or benefit the team or benefit ourselves? And so, going back to the question of how do we measure this? How do I know I'm starting to break up my changes in a helpful way? We may need to circle back to that because I don't have answers to it. But I just really like asking that question.
As for the I'm lost at sea part, or maybe you're not lost at sea, but you've caught too many fish, and the fish warden is going to fuss at you if you bring too many fish back to dock. I don't think this is a real nautical example. But here we are.
CHRIS: Was that the fish warden?
STEPH: Yeah, the fish warden. You know, the fish warden. [laughs]
CHRIS: Sure, I do, yeah. Yeah, I know about that, well-versed in fish law.
STEPH: [laughs] Got to know your fish law. If we're going to talk about pull requests, you got to introduce fish law. But I'm actually going to quote Joël Quenneville, a fellow thoughtboter, because they shared a thoughtful thread on Twitter that talks a lot about breaking up your changes and how to break up your pull requests and your commits. And I'll be sure to include a link in the show notes because it's really worth reading as there's a lot of knowledge in that thread.
But one of the things that Joël says is get comfortable with Git, and it makes a world of difference. In particular, you want to get really good at git add --patch, git reset, and git rebase interactive. And that is so true for me. Once I have gotten really good at using those commands, then I feel like I can break up anything.
Because often when I am helping someone break something up, it's often they want to, but they're like, "I don't know how. And this is going to take so much of my time. It doesn't feel efficient and the right thing to do." And they're probably right. If you don't know how to break it up, then it may take you too long. And maybe it's not worth it at that point.
But if you can ask a friend, and they can help walk you through this process, or if you can learn on your own, that's going to be a game-changer because you will start to think about how can I separate these commits? And I can reorder them, and then issue separate PRs, or just keep them in separate commits, whatever process you're looking to improve.
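In practice, that splitting workflow can look roughly like this on the command line; the commit message and target branch here are hypothetical:

# Stage only the hunks that belong to the first, smaller change
git add --patch

# Commit that slice on its own
git commit -m "Extract email normalization"

# Repeat for the remaining hunks, then reorder or combine commits as needed
git rebase --interactive main

# If the split goes sideways, undo the last commit but keep the changes
git reset HEAD~1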
In fact, there's a really great course on Upcase called Mastering Git written by someone who is co-host of this podcast. And it has a lot of great videos and tutorials that will help you get really good at these Git commands and then will help you split up your commits.
CHRIS: Oh yeah, I did do that. Warning: it's like three and a half hours long. But it is broken up into, I believe, 10 or 11 videos. So you can find just the ones that you want. There's a couple in the middle that I think are particularly useful talking about the object model of Git. Git is weird, unfortunately. And so I spent a bunch of time in that course. Also, thank you for the kind words, Steph. [laughs]
But I spent a bunch of time in that course trying to make Git less weird, or more understandable. If you look under the hood, it starts to make more sense. And if you really want to get comfortable with manipulating Git history, which I think is a really useful skill for this conversation that we're having, that's the only way I've found to do it, rather than just memorizing the steps.
It's always going to feel a little bit foreign. But once you understand the stuff under the hood, that's a really useful thing for being able to manipulate and tease apart a pull request and break it into different things, and port things from one branch to another, and all those fun activities. Yeah, man, that was a bunch of years ago too. I wonder what I look like in it. Huh.
STEPH: I really liked that episode, the one you just mentioned, the Git Object Model. Now that you've mentioned it, I remember watching it, and it's very interesting. So yeah, thank you for making all this helpful content for folks. There's also a blog post that we can include in the show notes that is a really nice overview of using git interactive rebase, squash, and amend, those types of behaviors as well. So we'll be sure to include both so folks can check those out.
And then to round things out, one of the other things that I will do is I will ask a friend. I will ask someone for help. So we've talked about some of these behaviors, or some of these processes that we have are really built up from experience and practice. And you can watch a lot of helpful content, and you can read blog posts. But sometimes, it really just takes time to get good at it. I know, as I'd mentioned earlier, I am always still looking to improve this particular skill because I think it's so valuable.
And one of the ways I do that is I will just phone a friend. And I'll say, "Hey, can we chat for a bit? I would like to show you my changes. I want to hear from you if you see something in here that's valuable that you think can be shipped independently, so that way we can get it delivered faster." Or it may be a change that's just like a test improvement or something. And we can go ahead and get that immediately released to the team, and it will benefit them.
Or you may want to do this at the start of a ticket. If I am new to a project or when I am new to a project, I will often ask someone to break down a ticket with me if I'm feeling a little bit uncertain. Or just say, "Hey, do you see any clean lines of division here? I feel like there's a lot in this ticket. You're more familiar with the codebase. What would you ship? How would you ship this incrementally?" and have someone else walk through the process with you.
CHRIS: Yep, the phone a friend and/or, as always, pairing is a wonderful tool in these sorts of situations. The one other thing that comes to mind for me is part of the question was about sometimes it's difficult to find a clear parting line within a larger body of work, within a larger change. And that can definitely be true.
I think there are certain standouts, like: is this a refactoring that can be shipped separately? Is this a test change that would be useful on its own? Is there a model change that we could break out and have just that go out? So there's a bunch of mechanical questions that we can ask and say, here are categories of things that might fit that bill.
But to flip this to the other side, the question was asked by Bryan very much as an I struggle with this thing. This is my toxic trait is the phrase that he used, which I thought was really interesting. And that can be true. This can be something that if you're consistently and uniquely within the team producing these giant PRs and then folks find that difficult to review, then I think that is absolutely something to work on.
But if this is something that is happening across the team, like other members of the team are also finding that they keep ending up with PRs that are bigger than they expected, taking longer, and being harder to review, there is a question of: is the codebase actually in a shape that makes it harder to do small changes? There's the phrase shotgun surgery, which refers to a codebase that is so entangled and coupled that any change requires modifying ten files just to make one small alteration.
And I think that's a worthwhile question to step back and ask, actually, is it not me? Is it actually the codebase? It could be both certainly. But there is a version of your codebase is coupled in a way that means that any even small, tiny change requires touching so many different places in the code. And if that's true, that's at least worth naming and worth highlighting and maybe talking about in retro and saying, hey, this feels like it's true. So maybe we start to get intentional about refactoring, and breaking out, and starting to add those dividing lines within the code such that hopefully, down the road, small changes can, in fact, be small changes. So that is the one last thing that I would consider here.
Also, anecdotally, this is just a thing that came to mind. As I've worked with strongly-typed languages, systems that have a compiler, and have a type system, and the ability for the compiler to keep an eye on the whole codebase, I've noticed that it's very easy to do this sort of thing where I just start with one small data model change, and then the compiler is like, oh, you got to go fix it here, and here, and here, and here.
And I found that because the compiler is your friend and will just point you to all the places you need to make the change, it is very easy to just keep going because some of that mechanical work is happening on your behalf. And it's a wonderful facet of typed languages and of having a compiler and being able to have that conversation with the compiler.
But I found that for me, it is much easier to end up in this mode where I'm like, oh no, this PR is way too large. When I'm working in a system that has types, that has a compiler, that frankly makes it a little bit easier to chase down all the places you need to make a change. So that's also a consideration. It's not necessarily a good or a bad thing, just something that I've observed that feels like it's adjacent to this conversation. But yeah, I think those are my thoughts.
STEPH: Yeah, those are great points. I've certainly worked on projects where that felt very true where it's a small change, but it would cascade throughout the project. And all the changes were necessary. It wasn't something that I could split into smaller PRs. So checking if it is the codebase that's really making it hard to have small PRs is a really great idea.
CHRIS: Who'd have thunk such a little question could get us rambling for so long? Oh, wait, I would have thunk that.
STEPH: And so far, reflecting on the things that we've talked about so far, I think I've talked a good game of where I'm saying, "Oh, I identify the seams upfront, and then I organize and create different tickets." And that is very much not the case. That's the really ideal outcome. But often, I am in the thick of things where like you just said...and it's this moment of, oh, I've done a lot in this PR. And how can I break this up? And that does take time. And it becomes a conversation of trade-off, which is why those Git skills really come in handy because then it will lower the cost of then splitting things out for others.
But for people that are struggling with creating smaller PRs, I do think it's very fair to ask your team for help. I think it's also fair that if you issued a large pull request and folks have already reviewed it, and it's gotten approved, and someone makes a comment like, "Oh, this would be great as two PRs instead of one," to say, "Awesome, thank you for letting me know. I will take that forward with me, but I'm not going to do it for this PR."
I wouldn't recommend making that a habit. But just know that that is something that you can say to someone to say, "I think this one is good to go at this point. But I will keep that in mind for future PRs. And I may even reach out to you for help if I feel like I'm having trouble splitting up a PR." And bring that person into your progress and use them as an accountability buddy. They can be someone that helps you down that path towards smaller PRs.
CHRIS: Yeah, I definitely agree with that, although it becomes a very subtle line. Saying, "Thank you, but no thank you," in a pull request or to feedback is delicate. It's difficult. That's a whole thing. But I agree there have been times where I have either been the one making that decision or suggesting that or being like, "We probably should have broken this up. But we're far enough along now. Let's get this merged. And then we'll iterate on it after the fact."
One last thing, actually. I thought I was done, but I have one more thing, which is I feel like there's a strong parallel between test-driven development and this question in that, often, I hear folks saying, "I don't know how to write tests upfront. I don't know how to do that. I know after the fact I can write tests, and I can add them after." And that can definitely be true. It can become more obvious after you've written the code how you could then write a test that would constrain that behavior that would interact with the system.
But I think the useful thing that you can do there is take a moment and pause there and say, "Okay, now that I have written the test, what would it look like if I had written this in the first place?" Or if you really want to go for it, throw away the code, try again. Start with the test first and then rebuild it. That's maybe a little much.
But that thing of taking these moments of maybe you don't know upfront how to break the work into smaller pieces, but then you get to the end, and you have that conversation with someone. And they highlight where some parting lines would be, or you figure it out after the fact. Stay there in that moment. Meditate on it a bit and try and internalize that knowledge because that's how moving forward, you might know how to do this in the future. So take those moments, whether it be with TDD or with pull requests, or breaking up a ticket into smaller tickets, anything like that. And spend a moment there and try and internalize that knowledge so that you have it proactively moving forward.
STEPH: You know how Slack has status? I really like the idea of there being a status that's meditating on...and you can fill it in. And the example that you just provided, meditating on splitting up a pull request or meditating on how to write a test first, [laughs] I think that would be delightful.
CHRIS: I, too, think that would be delightful. But with that long, adventurous answer to what seemed like a simple question, and they always do, but here we are, shall we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris finally got his new computer! 🎉 🎉 🎉 He gives his initial review. He's also super excited that GitHub announced a beta for pull requests merge queue, and even more excited that multiple people who listen to this show very kindly pointed that out to him on Twitter!
Steph discovered something that is quite niche, but she's excited to tinker with it more, called CookLang. It's a markup language that's designed for cooking and recipe management, so you can store recipes in text files with no database required, making it easy to have control over recipes versus storing them in a separate application.
Then they answer a listener question about refactoring murky legacy code.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: What's new in my world? Well, we've talked about it before, but it has finally happened. I finally got a new computer.
STEPH: Yay, yay.
CHRIS: Five years in the making. I held out, I waited. The new computer is fantastic. I'm in that transition phase of trying to set everything up and get it all...the particular thing holding me back is actually this recording and some dongles. I need to live that USB-C life now. Everything needs new connections and whatnot, particularly my external monitor.
STEPH: I'm now realizing how old your current laptop is.
CHRIS: [laughs] Did I just date myself? Yes.
STEPH: You did. You just dated it with a USB-C. I thought you were still on the USB-C life.
CHRIS: I'm pretty sure it's a 2016. I'm currently recording on a 2016 MacBook Pro. But yeah, I'm very excited with the new one. The shape of them is weird. I did not expect this because I've seen the 13-inch MacBook Pros that have the touch bar and other things that I didn't really want. But the shape of that laptop was more familiar to me. And this one, I don't know, it's weirder and rounder and bulkier in ways that I didn't expect. And it's heavier than I expected. I got the 14-inch, as an aside. I went with a slightly smaller version assuming that my 16-inch with a giant bezel, because it's from the past, would have a similar amount of screen real estate to a 14-inch with no bezels or with the screen going almost out to the edges.
As an aside, the notch in the top of the computer screen is ridiculous. I've dealt with it on the phone for a while now. I accepted that I live in the land of notches. But somehow, it's way, way worse on the computer like when I take my terminal full screen most of the time, and so stuff just gets lost. I don't know if I've got to deal with this or not; maybe I can just not care. But it is covering things that I want to read. And I'm like, well, this is annoying.
But yeah, beyond the notch, everything else is great. It's a nice form factor. It seems to have great battery life. It's very fast. It goes very fast. It also has...there's more RAM in it. There's more hard drive space and whatnot, so a bunch of the things that I use. Often as we start this recording I'm like, oh no, I wonder if I have enough hard drive space for the recording we're about to do, which I should probably be past that at this point in my life. Well, now I think I am, except I haven't yet ported my recording setup.
Nonetheless, I'm very excited. And particularly, a lot of the development workflow tasks like starting up the dev server of the project that I'm on just moves a lot more quickly. It's a lot snappier, and everything is just speedier. The fans don't really turn on. It doesn't get too hot even when I'm doing challenging things, downloading everything from the internet and compiling from source code. It's like, yeah, cool; I can do that. That seems like a thing that's well within my wheelhouse. And I'm like, cool, computer, good job. Glad to have you on the team.
STEPH: So I learned something about you recently because hey, we were hanging out in person recently since I was in Boston for a week. That was amazing. And I learned that we have a similar thing where we both like to start our machines from scratch, and slowly (or at least correct me as I'm going through this), we bring things over. But it's not an immediate just port everything from my current laptop over to my new laptop. And I'm fascinated because I thought I was the only one with this sickness. But it turns out that you, my friend, also have this in your life.
CHRIS: Well, yes, you are correct. That is the thing that is true. And also, to reiterate, it was really lovely seeing you in the human world as opposed to just in a Skype window as we so often do. But yes, I start fresh every time. But to be clear, it's been more than five years since the last time I did this. So I feel like I can make a bit of an event of it each time I do it. So I'm fine with that. But I do like starting from fresh reinstalling everything rather than trying to copy over an image of the system.
I felt a little bit shamed by the operating system because there's the like, welcome to your new Mac. What's your language? What's your WiFi? What do you want to migrate over? And I'm like, nothing. But there wasn't a button for no, thank you. [chuckles] There were buttons that were like...there were two different options, but neither of them were no. And I'm just staring at it, and I'm like, but I would like to not though. I would like to just start fresh.
STEPH: [laughs]
CHRIS: And I got it from here. I appreciate the effort. Turns out they were hiding it in the bottom left corner, but it wasn't a button. It was like link text, but it was barely emphasized. And on the right were where all the action buttons for every previous step and subsequent step were. So this was like no one would ever want to do this. So we're just going to hide the option over in the corner.
Yes, I very much like starting fresh. I like to get the chance to shed some of the mistakes of the past and only bring forward the things that are bringing me joy in the modern-day, so here we are. A lot of stuff doesn't work, by the way. [laughter] I brought over my dotfiles, and things did not work super great. Or I opened an application, and it's like, oh, that hasn't worked for years, and I'm like, I was living in the past. All right, this is fine. I'll update my workflow. It'll be okay.
STEPH: I like how you said make an event of it because I haven't really found the right words for it, but that's perfect. I have a fresh laptop, and it's an event, and I want to spend that time setting it up and shedding the mistakes of the past. [laughs] That too, that also resonates.
CHRIS: Yep. It's really about the mistakes of the past. I don't want to live there anymore. I want to move on from them. But overall, yeah, it's going well. I think I'll be able to work with it. I'm actually unfortunately limited in that I can't connect it to my monitor right now or to the keyboard that I have and whatnot. So I can't quite integrate it into my home office setup. I can use the laptop itself, and it's working fine. I've got my current project running on it.
So I was able to install Ruby and figure out how do I trick it into using the Homebrew things versus the M1 things versus which architecture and yadda yadda yadda? That actually went better than I expected. I thought it was going to have more issues there, but stuff just kind of worked. And I had to find one weird Stack Overflow and copied and pasted as one does when you're setting up new computers or all the time as a programmer. And then yeah, we're off to the races. But yeah, unfortunately, right now, the limitation is a physical cable. I need a couple of new cables. So I'm excited to get those in the near term.
STEPH: Yeah, that'll be nice to have it set up and feel like you can fully transition to the new setup.
CHRIS: But in slightly other news, so computer is great, very happy about that. Perhaps I’m even more excited that GitHub has announced a beta for pull requests merge queue, and even more so excited that multiple people who listen to this show very kindly pointed that out to me on Twitter. And I was like, this is the dream. I just rant about stuff on a podcast, and then people tell me when the world has solved the problem that I complained about. This is so great.
It's a limited beta, though, so you go on an invite list and whatnot. So I'm on that list right now. And I'm trying to just mention it as much as possible, especially near any friends I know that happen to work at GitHub; just be like, "Hey, if you happen to know how to find that list...and I will give wonderful feedback. I will be an active member of this beta." But I would really love to get to try out that feature. So I'm excited.
We'll include show note links to the blog post announcing this new feature. So I'm really excited that it is built into the platform and will sort of be there. And I'm hopeful that GitHub has done a great job. This is actually a really interesting feature in my mind. It's one of those things like, oh, it's pretty straightforward. You just make a queue, and then you merge things. It's like, oh yeah, but what about all the ways it can go wrong? It's a great I think iceberg feature where the simple, happy path is wonderful.
And then there are so many different failure modes like, oh, what do you do when the PR fails to rebase? Or should you proactively rebuild the sequence of them? Do you stack them up and start preemptively rebuilding the other PRs that are next in line in the queue after that? But what if then it fails? And et cetera, et cetera. There's lots of fun stuff that can go wrong. So I trust that GitHub will do a wonderful implementation. I would love to get to try that, but currently, I've yet to get in.
STEPH: Have they shared how to get access to the limited beta? Is it just random selection, or can you specifically request? It sounds like you can't, based on what you're saying, but I'm curious.
CHRIS: There is a waitlist. Particularly, they had me put the repo that I was interested in, so like, I would like to be in the beta. And this repo, in particular, is the one that I would like to put into the beta. So I've submitted that, but now I'm just in a list. I don't know where in the list I am. I have no visibility into that. So again, I'm just saying things out into the void and seeing if the universe then reflects anything back at me, and by that, I mean people that work at GitHub. Hi, friends.
STEPH: I totally believe that if you speak things into the universe, the universe will give that back to you, or that's probably not true. So here's hoping that you get into the beta list.
CHRIS: Here's hoping. One other tiny thing I have been using Afterparty a bunch lately, which is a gem that you have recommended to me, I believe, in a previous episode not that long ago. But we started doing feature flags. Feature flags are great. We're using Flipper. It's wonderful. But Flipper stores...the way we've configured it, we're storing them in the database. And so, when we get to the point where we want to fully release a feature, we dial-up so that everyone has access to it. So at this point, the feature flag system is still running, but everyone is getting access to that feature. And so then there's a code change that removes any references to the feature flag, any checks for it.
And subsequently, we want to then clean up after ourselves. Afterparty has been fantastic. I'm really happy with the way that that works and that we can just include the Afterparty data migration associated with that, delete the record from the database, and then everything consistently works together. This does poke at the edge, though, of how Heroku does deployments and the fact that there is that latency where it basically restarts with the new code, needs to run all the migration steps, et cetera, et cetera. And then it's running on the new code, which is great. That's what I want in this case. But it does have a cost, and I would love to figure out true blue-green deployments sometime in my usage of Heroku, or rather zero-downtime deployments, which is actually the way to phrase it.
Using Afterparty for this, I think, is leveraging the idea that it's not a big deal. It'll be fine. Because if we had zero downtime deployments, I think temporarily, we would be in a space where feature flag has been deleted; therefore, no one is in the feature. Therefore, no one would see it for that weird second. I'm safe from that. But it's this weird trade-off that I have in my mind of I want blue-green deployments, but I'm appreciating that I don't have them right now and that there is a consistent...and I think that's the reason that Heroku makes the choice that they do. But, man, everything's complicated. Why can't stuff just be simple?
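A rough sketch of the cleanup Chris describes, using the after_party and Flipper gems; the flag name and file name are illustrative assumptions, and the generated task template's bookkeeping is summarized in a comment rather than reproduced:

    # While the flag is live, code paths check it with something like
    # Flipper.enabled?(:new_billing_page, current_user), and the rollout is
    # dialed up for everyone with Flipper.enable(:new_billing_page).

    # lib/tasks/deployment/20211101120000_remove_new_billing_page_flag.rake
    namespace :after_party do
      desc "Deployment task: delete the new_billing_page feature flag record"
      task remove_new_billing_page_flag: :environment do
        puts "Running deploy task 'remove_new_billing_page_flag'"

        # The code no longer references the flag, so remove it from Flipper's store.
        Flipper.remove(:new_billing_page)

        # The generated template also records this task as completed so it only
        # runs once per environment; that bookkeeping line is omitted here.
      end
    end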
STEPH: Everything is complicated. So I'm thinking back through to now my previous client's setup since I'm about to transition to a new project. And we have AWS. We have rolling deploys, so we don't have that specific downtime. We are using Afterparty. So I think based on everything that you're saying that yes, there's a slight period of where we are rolling out a feature flag or potentially dropping it, and that could put some users into a weird state if they're active and then the code suddenly changes. I'm thinking through out loud and singing it because apparently, that is what I do. We haven't run into any issues with that. But I am now trying to think through of when someone might end up in a weird state because of that.
CHRIS: You almost certainly won't run into any issues because we're talking about a matter of seconds where the code is in an inconsistent state. But the same thing applies to migrations like database migrations where if you're doing rolling deployments and the database migration hasn't happened yet, but there's new code that's running that expects that database state, then you're in this subtly inconsistent mode.
And I think if you really want to adopt this idea of rolling deployments or zero downtime deployments, you have to separate data changes from the code that uses that data change such that both versions of the code when the migration goes out are fine. And then later, you deploy the code that uses the migration, but then that's like a whole bunch of more steps. And you got to think about it, and you have to probably put in some pull request review step to check on it. And so again, it's just complicated.
STEPH: So that's one of the reasons we did change our feature flag implementation: we realized it was a pain. Because anytime that we wanted to drop feature flags, we first issued a PR that removed any references in the code to that feature flag, let that deploy. And then we'd issue a separate pull request that went out in a different deploy that went and dropped that column. So that way, we didn't have any situations where maybe ActiveRecord has cached that column, and then there's code that's looking for it, but then it's not actually there. And then you run into that funkiness and complicated behavior.
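Sketched with made-up names, that two-step removal looks something like the following; on reasonably recent Rails versions, ignored_columns is the usual way to handle the column-caching concern in the first step:

    # PR and deploy #1: delete every reference to the flag and tell ActiveRecord
    # to ignore the column, so cached column information can't break requests
    # while old and new code briefly overlap.
    class User < ApplicationRecord
      self.ignored_columns += ["beta_dashboard_enabled"]
    end

    # PR and deploy #2, after the first is fully out: drop the column itself.
    class RemoveBetaDashboardEnabledFromUsers < ActiveRecord::Migration[6.1]
      def change
        remove_column :users, :beta_dashboard_enabled, :boolean
      end
    end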
So we were at a point where removing a feature flag required two PRs and two deploys, and that was awful. So we switched up how we were handling feature flags specifically so we could avoid that and started storing them in JSON files because we took the more bespoke approach to feature flags versus using Flipper.
CHRIS: Bespoke artisanal feature flags.
STEPH: I don't know that I'd recommend it. It's working. It's been a journey, and there are things working well about it. But it's definitely still in that space where it's like, I'm a little uncomfortable of like, how far should we go in this bespoke world and at what point we should reconsider and use something like Flipper. I would say stay tuned, but I'm not on that project anymore. So I'll just never know. [laughs] Sounds like a devious laugh. I will know probably from some friends that are on the project, but I won't be there for it. [laughs]
CHRIS: Cool. Well, yeah, I think that sums up things in my world. What's new in your world?
STEPH: I have discovered something that's quite niche, but I'm excited to tinker with it more. It's called CookLang. And it's a markup language that's designed for cooking and recipe management. So you can store recipes in text files, and there's no database that's required. So it's easy to have control over your recipes versus storing them in a separate application, which is currently how I store my recipes. Which for the record, I'm happy to pay for software. I am very appreciative of when people make my life easier. And so I very much enjoy that. But I also still love the it's mine, and I can just have it stored somewhere in the cloud, and I'll never lose it. And I don't have to worry about renewing memberships to keep access to something. So there's a part of me that is very drawn to the idea that I can just have everything in the text file. I can store it anywhere that I want.
And then, for this particular CookLang markup language, it lets you define certain qualities of your recipe. So you can define the ingredients, the quantity, cookware, timers. It'll help you create shopping lists, and you can set metadata. So maybe you want to include the total prep time or the type of meal, or the number of people that the recipe serves. And they also have an iOS app that is in beta. So speaking of beta, this one is closed at the moment because they've had a pretty overwhelmingly positive response or interest. So they have shut it down for now in terms of accepting new people to the beta; at least, that's my current understanding.
But the iOS app does look like it'll be really nice. It's going to read from your files, I think stored in iCloud. But I'm just excited for all of this. It looks very interesting. And it looks like something that's just fun to play with. So I haven't moved anything over to start really investing in creating these .cook files that you use for CookLang. But it all seems very cute, very niche, and something I'm totally into.
CHRIS: I have not seen this, but it looks absolutely fantastic. I'm just reading the brief syntax snippet that they have at the top and the way that they're inferring semantic meaning from plain text recipes. And then you can generate a shopping list, super cool. You can just be like, I'm going to cook these things from my recipes. And then they'll tell you the shopping list, and it's grouped.
And this feels like what software should be in my mind. This doesn't feel like we're going too far with it, which is very easy to do. But it's like, no, no, we've annotated just a little bit of semantic meaning on top of something. And then with that, we can do wonderful things. Look at all the good stuff that falls out of it. We can have an iOS app. We can generate shopping lists. It's great. But it's also minimal and contained, and not locked in a proprietary format. And this is fantastic.
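For a taste of the syntax being described (reconstructed from the CookLang spec, so treat the details as approximate): ingredients are tagged with @, cookware with #, timers with ~, and metadata lines start with >>.

    >> servings: 2

    Crack the @eggs{2} into a #bowl, add the @milk{100%ml}, and whisk.
    Cook in a #frying pan{} over medium heat for ~{3%minutes}.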
STEPH: There's also the nice part where they've already started highlighting that people can just store their recipes on GitHub. And then you can just fork recipes, or you can import everybody else's recipes. And I love that. I want more open-source recipes, although frankly, there's already a ton of freely shared work and recipes that people have put on the internet. But I love this because then I could skip all the ads that go with it. And I can just grab the stuff that I care about.
CHRIS: You mean you don't want to hear the person's life story before every single recipe. You don't want to hear about the walk that they took along the river when they were thinking about seasonal gourds, and then they decided to make a particular -- [laughs]
STEPH: [chuckles] I would normally...previous Stephanie would have been snarky about that in regards to all of that backstory that's given for a recipe. But then I saw somewhere maybe a friend was telling me about someone who had asked, "Why do y'all do this? Why are you putting so much information? Why can't I just have the recipe?" And they're like, "Well, frankly, Google's going to rank us a lot higher if we give you this whole backstory and if we look like there's more content to this page than just a recipe." And I'm like, dang. Okay, so it's Google that I have to be snarky about because they're doing their best in terms of trying to get you the recipe that you care about and make it very searchable.
CHRIS: Oh, yes. Absolutely. It's both a self-imposed problem. But also, I don't think Google's doing anything wrong here either. Google's part of their algorithm is they penalize duplicate content. And so there's what they view as the canonical source, the thing that's telling the truth. And then recipes look very much the same because they are kind of the same most of the time. This is how you make bread. This is also how you make bread. Two people now have almost identically the same content on the internet. And Google will penalize one that it determines to be copying essentially, which I think is a good thing broadly.
I've seen so many Stack Overflow...I want to say clones, but they're not even that. They're just literally taking the content and then republishing it entirely new. And that is a very bad thing that happens constantly and takes away from creators and whatnot. And so, in a way, Google's trying to do the right thing here, I think. But it leads to really just so much summary about recipes. It's one of those weird...is this a tragedy of the commons? I don't know. I've never quite understood that one. But I think it might be.
STEPH: Yeah, I don't know. That's my off-mic answer. [laughs]
CHRIS: That's fine. Someone on the internet will tell me whether or not this is a tragedy of the commons. But either way, I do not blame the recipe authors. I do not blame Google? You'll note the question in my voice. But I think I stand by that. I'm not sure who to blame. Maybe it's us. But here we are. [laughs]
STEPH: Yes, I like that final takeaway of let's not blame the people that are trying to give us wonderful, helpful content. Circling back a bit to some of the code that is powering CookLang or some of the interesting bits about CookLang, for anyone that's interested, you can also visit the GitHub repos they have. I believe everything is open source. There's a particular repo called spec. And you can view the issues, and you can see all the conversations around how should we annotate an ingredient? Or should we have comments in the language? And how should we handle this type of metadata?
And I find that interesting because I haven't really participated in any of the language crafting forums for Ruby or some of the other languages that I use. So it's neat to watch how some of this is being done in public to say, hey, we're just making this new markup language, and we're not really sure how we want to annotate everything. So what do people think? What's working for you? What's not? And I'm enjoying that conversation and reading along.
CHRIS: It really is such an interesting microcosm of an example of a language, an ecosystem, a community, a specification, and all of the stuff that is true of all of the other languages that we use more generally. But I'm looking down here, and there's the parser implementation. There's a Swift canonical parser. But then Rust, somebody is rewriting this in Rust because, of course, they are. But you know what? It's going to parse your recipe so fast, and it's going to be great. [laughter] But it does have the shape of everything that I've seen from other programming language conversations. And how do we evolve the language, and what syntax do we accept versus what do we not? And this is so interesting.
STEPH: Yeah, it's really neat. I'm really into it, and I'm really enjoying it thus far.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: So, changing gears just a bit, we have a listener question that frankly is a doozy of a listener question. It's a bit long, but I feel like all of the content in here is important. And I feel like most people are going to be able to relate to this scenario that the question is describing. So this question comes from Michael Kopinsky and Ben Rosenbach from Philadelphia. And they write in, "There's one area of our codebase which has a lot of skeletons." I'm already empathizing. "The UI is error-prone jQuery soup, and the UX doesn't entirely make sense. Test coverage is close to non-existent. And the expected behavior isn't clear and has lots of edge cases.
We've thought over the years of how we can redo it, but it's never a priority. Redoing it would be a huge lift, and...it works usually. We get bug reports occasionally, which inevitably get closed as won't fix, workaround provided. But a few months ago, we received a bug report, and I decided to start cleaning things up. I opened a merge request with a set of refactors extracting some behavior into a service object, making explicit some things that were implicit, and adding tests." Side note, that's some hero work. I appreciate that right there. "It's gone through a few cycles of code review and pairing, each time suggesting additional cleanups to do, requesting additional test coverage, and that sort of thing. But the more we do, the more we discover that needs to be done.
So at this point, the merge request has been open for about eight weeks. I've paired with two other developers for probably 12 hours. The user is impatient, and we're tired of this darn thing and want it to go away. But it feels so bad. It really is severely lacking in test coverage. And the expected behaviors still aren't clearly defined. And it just doesn't meet the normal quality standards that we try to hold ourselves to. We could sit for another 8 to 40 hours and write more tests or documentation, but we're so drained from this whole thing.
So overall, we're really struggling to balance two things. We want to get things out the door, have short-lived PRs, and focus on incremental improvement. However, we still don't have defined requirements of how the feature is supposed to work across all permutations. And we can't write proper tests until expected behavior is defined more clearly. It feels like that's a baseline requirement before being able to merge or refactor. We'd love to hear any thoughts. Sincerely, Michael and Ben." Ooh, friend, there is so much we can dive into here. Would you like me to get started, or would you like to start?
CHRIS: Yeah, I've got a handful of thoughts because, as you highlighted, this is all of the stuff. This is the murky pile of complexity that writing code and delivering value in an organization is made up of. I'm going to start at a random place and then just start saying things because I think an important starting line is that there is not a singular answer to this. We're not going to say, oh, if you just did X, then it would be fine. Obviously, this is a number of different complexities and situations, both interpersonal and human as well as code-related, et cetera, et cetera.
So one of the things that actually came in the latter part of what you were reading there is we do not have defined requirements as to how this feature is supposed to work across all permutations of how it can be used. And that one is really interesting to me because right now, I wonder if there's something that can be done there. So often, you have code that does some stuff but also could do other stuff. And it's not clear if it's actually intended to do that other stuff or if that's just something that falls out of overly permissive like, we'll take any data you throw at us, and then we'll do some stuff. Typically, we don't actually want that. We want a more constrained system that's doing something. So that particular part was really interesting. Some of the conversations about testing were really interesting.
I think one of the things I would do is I would really ask, is this worth the effort? This code is kind of doing its thing. It's been around for a while. It's business-critical, but nobody really understands it. It's like, is there a version of just leaving it at rest? Probably not. I typically would not do that. But it is a question that I think is worth asking because it can be really hard to make these changes on an under-defined system written in legacy technologies and whatnot, especially ones that we don't necessarily have as much...like, I haven't worked in jQuery for a while. So I wouldn't be quite as good at that. I wouldn't be able to understand that code as intuitively. That might be similar for this team.
So is there a version of just letting it lie? That is the first question I would ask. Presuming that's not the case, then I think I start to look at what is the attack that I would take to approach this. And I would definitely start with, hopefully, some small but meaningful introductions of test coverage but as a standalone thing. It's not part of the big change of a pull request. It's merely trying to lock in the code as it exists.
Again, I think I've referenced this talk perhaps more than any other, but Katrina Owen's Therapeutic Refactoring is a really wonderful story and talk and really talks through this idea in a great way, but also it's just a fun watch. But in it, Katrina talks about her approach to a similar thing where she was like, there's some code, and nobody really knows what it does. So I'm going to try and lock it down. And the first thing that she does is wrap test coverage around this code. So at least whatever it's doing now, we've constrained it a little bit. And actually, via the testing that she does, she ends up using that as a mechanism to sort of figure out, what does this system do? And she finds a way to exercise the system and determine what its behavior actually is because nobody really knows that at this point.
So I would start with the testing. And then I would ask the question of we don't really know what it does across all permutations. Is there a way to actually constrain that? And say, let's actually lock some stuff down. Let's add a tiny bit of code that looks at the inputs being requested or coming into this code path or whatever it is but actually explicitly delineates what we think is the expected inputs and rejects or at least logs things that are surprising and that are like, we don't think it should do any of this. And then suddenly, if you see that showing up in the logs, you're like, I guess it does need to do that. Now we have a better idea of what is the actual target that we're trying to hit.
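As a minimal sketch of those two ideas in a Rails app with RSpec (every name and number below is hypothetical), a characterization test plus an input-logging guard might look like this:

    # app/services/legacy_order_total.rb -- a stand-in for the legacy code.
    class LegacyOrderTotal
      KNOWN_REGIONS = %w[US CA].freeze

      def initialize(quantity:, region:)
        # Constrain the inputs: log, rather than crash, when something surprising
        # arrives, so the logs reveal what the code actually needs to support.
        unless KNOWN_REGIONS.include?(region)
          Rails.logger.warn("LegacyOrderTotal: unexpected region #{region.inspect}")
        end

        @quantity = quantity
        @region = region
      end

      def call
        # Existing legacy math, deliberately left untouched for now.
        @quantity * 14.0 + (@region == "US" ? 0.50 : 1.25)
      end
    end

    # spec/services/legacy_order_total_spec.rb -- a characterization test that
    # locks in what the code does today, not what we wish it did.
    RSpec.describe LegacyOrderTotal do
      it "matches the current behavior for a known input" do
        total = LegacyOrderTotal.new(quantity: 3, region: "US").call

        expect(total).to eq(42.50) # value captured by running the code as-is
      end
    end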
But really, both of those in my mind are ways to try and constrain the problem to try and add a little bit of support before tackling the hard work of the refactor because, just to be clear, you're probably going to break it in the refactor. I've broken every single system like this that I've tried to refactor without question. And there's a certain amount of organizational trust that you need to have. And it just gets to be a very complex time. And so anything that I can do to provide some guide rails, some support, something to help me in that time is something that I'm going to start with before I even go near the refactor itself. So those are some thoughts scattered around throughout the question. What do you think, Steph? What comes top of mind for you as you hear this question?
STEPH: I think those are great thoughts. And to expand on what you're talking about determining the expected behavior, that's also one of the bits that jumped out to me. And I love that you asked the question, can we leave it at rest? It sounds like yes, but there are some user concerns that come through or some bug reports. And then the team has to collectively say, "Yes, we are going to put someone through a difficult time to either do maybe a gross fix or implementation." And then we're making this area of the codebase feel worse and more accepting of different flows and adding to the number of flows that we have. Or we document it as won't fix and then provide a workaround.
So I really like that you said, "Can we leave it at rest?" Because I think that helps a team make a decision together as to like, what is the understanding of this part of the codebase functionality? How important is it to the business? If we decide that it's not that important to the business, then it's frankly not worth anybody's time working on it even if it feels really bad and you know there's this gross code over here, and you really want to go work on it. But if it's not important to the business, then it just shouldn't be done. And it needs to be left alone until the business decides this is important to our functionality. We do want to prioritize this work, and we're going to work on it together. And I still think there are a couple of nuanced strategies even from there.
So if you're looking for the more incremental improvements of we can't totally let this lie...we do have to improve it. But we're also not willing to invest in an overhaul situation. So some of the incremental improvements would be, as you'd mentioned, adding some test coverage to it to at least start to lock in the behavior that's already there and establish a baseline of understanding and then document that behavior. I've also found that really helps for context switching. So if you go in and you just document one part of that system as something that it does, then it frees you up. You can add something. You can issue a smaller merge request. You can get that merged because you're just adding test coverage. So it's not going to break anything. You're not making code changes. But then you can nearly let that go. And you can jump to something else that the team or the business has decided that's more important.
Some other areas that I've used in the past for working in these types of legacy or just bug-prone areas is to look for flows that are no longer in use. So if there's something that stands out to me as like, I'm pretty sure users aren't using this, or this code just looks unused. Maybe it's running a tool like Unused, which we can link to in the show notes, and see if there's truly something that's unused that you can just immediately remove. Or maybe it's logging a message to Rollbar or something similar. So that way, you know this is actually being used, but you can have it log whenever that's being run. Let that live for a month and then go back and check to see if that flow is ever executed.
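A tiny, hypothetical example of that logging approach; the method name and identifiers are made up, and Rails.logger works just as well if Rollbar isn't in the stack:

    def export_to_legacy_format
      # We suspect nobody reaches this flow anymore. Log it, let it run in
      # production for a month, then check the tracker before deleting it.
      Rollbar.info("export_to_legacy_format was called", user_id: current_user&.id)

      # ...existing behavior stays unchanged while we gather data...
    end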
As for the larger overhaul approach, that really has to be a team effort. So this has to be one of those moments of like the company decides this is important. And there may be smaller wins you can look for, but I do think it's going to require at least a developer or the product manager to be invested and ideally a designer as well. And then some small steps there may be looking for reducing responsibility of that feature. So maybe it's this really verbose long form that's complicated. And maybe you can collect some of that information somewhere else in the app. So you can start to chip away at some of the responsibilities in that portion of the application.
Or maybe you have a mini design sprint. You may realize this is a very important part of your application. We should really have this be a better experience for users. And let's really take a hard look at what this does and start to document. And that's the bigger like we're going to rebuild this, or the whole team is going to focus on this and improve this portion of the application. So that was in response to some of the lovely things that you said that you made me think of.
The other thing that really jumped out to me is this merge request that's sitting there, because this person, Michael or Ben, they've already done some really hard work where they have decided to add test coverage. They've cleaned up some things. And I can certainly relate to that: you issue a pull request and, because you've started cleaning up a space, people are like, "Oh, what about that cobweb? And then you missed a spot. What about over here?" And so, how do you make progress when you're in that state? And I would say move the goal line and merge the pull request.
So your goal line right now sounds like it's trying to address everything, which doesn't feel humanly possible. So instead, move the goal line. Figure out, okay, I just want to add these couple of tests, or I just want to add this refactor to a function or something that documents this part of the system. And if people are nervous about merging the current merge request because it has gotten so big and people have made so many small requests, then break it up into smaller, more focused PRs. So then it makes it easier for people to say, "Yes, that does improve the state of this codebase. Please merge it."
And I will often prefix my PRs with that. I'll say, "Dear reviewer, this PR is intended to improve the state of the world. It does not improve everything because I can't, but this is the goal of this." So that way, people go in with the mindset of they'll realize they're still going to see some cruft. But this is still leading us towards a better place. And unless there are strong concerns about the changes made in that specific PR, any other requests need to be noted and then can be captured somewhere else, but we're not going to address it right now in this PR.
CHRIS: I agree so very much with all of the things you just said. For anyone at home who cannot see us on the Skype call that we're on, I was just aggressively nodding my head the entire time Steph was saying all of that. [chuckles] You touched on I think all of the additional points that I would want to make specific to this merge request is out there in the world and whatnot.
The one other tiny bit that I would add to it is I have definitely been the person, and then I've also been on the other side of it where people get attached to the code that they've written. "But I've already written the merge request." And you're like, "Yeah, but can you break it up?" And they're like, "Yeah, but it already is a thing. And I did a bunch of work. And now I'm emotionally invested in this pile of work getting across the finish line and breaking it apart..." We can get caught up in our code.
And this is a really great example of this thing just got away from you. This is too big of a refactor. It's too much of a risk. And so again, I have struggled with this personally where I'm like, I'm so proud of this pile of work that I just made. And people are like, "Yeah, but that's a big pile of work there." I'm like, "You're right. You're right. I should break this up. I will break this up." But I can feel that resistance in myself to doing that, what feels like extra work, what feels like almost undoing of work. It's like, "No, no, no. I think it's ready." I'll be like, "I don't know, I'm not convinced. Can we break it apart?" And that's almost always the right decision. But it can be painful.
And so, just knowing that and having that in the back of your head as a thing that my brain will tell me the opposite. My brain will be like, no, no, go with it. It's great. We're proud of this. But being open to the idea that breaking things up is good, smaller pieces are good, et cetera, et cetera, can be a useful psychological aspect of this conversation because, of course, there's one more facet. There are so many facets to this question.
STEPH: I like that anecdote about how you haven't regretted breaking things up because I also relate to that where there's that initial like, I've already done this much work. And now you're asking me to do more work. And it's going to have the same outcome where it's still going to be the same code. It's still going to be the same set of changes, but you're asking me to break it up into smaller changes. And I just feel exhausted from this work already. And I don't want to have to break this up. So I understand and relate to that initial resistance.
But I also really like the idea that all of these changes and everything going into the codebase is managed by a team. So we really want there to be a level of comfort from the team that this is what's going in. And as the author of the changes, I am far more comfortable with everything that's about to get merged in because I've spent so much time with it. So for me, whenever I do feel that initial resistance, I have to ask myself, well, what's the value? Will this make it easier for folks to review? Maybe it will show some areas that I didn't add test coverage because it felt like it was already getting so large. And so I started to negotiate with myself as to like, well, maybe I don't need to test that because this is already getting so big. And I don't want to add more to it.
So I really think through what's the value of if I break it up? And will it make the team more confident in this change? And ultimately if it does, then to me, that's always a resounding yes. So it's just going through what's the value versus what's my initial oh, this is a waste of my time to oh no, this is a good use of my time because it really benefits everyone else.
Circling back to something else that was said in the question was the message that we're just so drained from the whole thing. And I really don't want to take that for granted because that's really important. And it sounds like someone has taken their own time and hopefully company time to work on this. But it's not a team initiative. So this does feel like one of those areas where it feels like the team needs to prioritize whether this gets worked on or not. And it's okay if the team says, "Yes, this is important, but six months from now because we need a break from it." But as long as it's something that the team worked on together to say this is important, or it's not, I feel like that's an important part. And also to recognize that we don't have to fix all of this right now. That's really important for the company to buy into it.
I have two small things that I want to add, sort of the what not to do because we've been talking about some things that we would do. And so one thing that I definitely wouldn't do is avoid having one person going off and trying to fix this part of the codebase. Don't have one person, either a designer, developer, just a single soul who's like, I can fix this, and tackle it, and define all the things, and improve it. That's going to go poorly. Don't do that.
And then also, I would avoid refactoring just to refactor, which I think goes a little bit against Therapeutic Refactoring, which is a talk that I absolutely love. But if we're really invested in improving this part of the codebase, then I think it is very helpful and important to have a fixed, user-deliverable goal driving your refactor because that's also going to help shape and influence your PRs and encourage the team to take the time to review those PRs and get them merged.
So there's a nice coupling there where you want something driving that goal. Granted, it could be different for different teams. Therapeutic Refactoring may work wonderfully. But I've just seen so many people sink a lot of time into something, get very frustrated, then nothing gets shipped. So I would favor leaning more until we have this very focused thing that we're going to deliver, whether it's adding a test or two tests. Have a goal in mind when you start that refactor.
CHRIS: Again, I find myself nodding aggressively with everything that you're saying. And in addition, I really like the subtlety that you're bringing to one of my favorite talks. I've mentioned many times that this is one of my favorite talks. But I think you've added a really interesting like, yeah, but also, in the way that everything depends, it also depends. And I think you framed that really well that refactoring for refactoring sake can be fun and therapeutic, and cathartic for an individual. But we have to make sure it's part of the overall work that we're doing.
And you've now, I think, really well captured the human side of this work that is captured in what is such a complicated, nuanced question that I think also does such a good job of just highlighting the work. This is what it looks like. This is a particular almost perfect storm of the work, but that happens. But each little piece of this that we've talked about is this is the day-to-day of software development interestingly. I always thought it was just going to be writing for loops. I haven't written a for loop in years. I don't even know how to write a for loop. But this stuff is what I do every day. [laughter]
STEPH: The most challenging part for me in software I realized is when a problem has escalated to the point where it's like, I can't just write code to fix this. I need to go to the team. I need to get buy-in. And that's where I start to realize this is where software to me really gets hard because then I need other people to contribute and to get their opinions. And it turns out way better for it. But yes, that is definitely the intersection for me where I can write code up to a certain point to then I really need people to help make really great software and fix some of the underlying issues that we're facing.
On that note, Michael and Ben, thank you so much for writing this in to us. I'm very interested to hear how this turns out for your team. And thank you so much. I wish you the best of luck. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph talks about starting a new project and identifying "focused" tests while Chris shares his latest strategy for managing flaky tests. They also ponder the squishy "it depends" side of software and respond to a listener question about testing all commits in a pull request.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
CHRIS: My new computer is due on the fourth. I'm so close.
STEPH: On the fourth?
CHRIS: On the fourth.
STEPH: That's so exciting.
CHRIS: And I'm very excited. But no, I don't want to upgrade any software on this computer anymore. Never again shall I update a piece of software on this computer.
STEPH: [laughs]
CHRIS: This is its final state. And then I will take its soul and move it into the new computer, and we'll go from there. [chuckles]
STEPH: Take its soul. [laughs]
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we learn along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. Let's see. It's been kind of a busy week. It's been a busy family week. Utah, my dog, hasn't been feeling well as you know because you and I have chatted off-mic about that a bit. So he is still recovering from something, I don't know what. He's still on most days his normal captain chaos self, but then other days, he's not feeling well. So I'm just keeping a close eye on him. And then I also got some other family illnesses going on. So it has been a busy family week for sure.
On the more technical project side, I am wrapping up my current project. So I have one more week, and then I will shift into a new project, which I'm very excited about. And you and I have chatted about this several times. So there's always just that interesting phase where you're trying to wrap up and hand things off and then accomplish last-minute wishlist items for a project before then you start with a new one. So I am currently in that phase.
CHRIS: How long were you on this project for?
STEPH: It'll be a total of I think eight months.
CHRIS: Eight months, that's healthy. That's a bunch. It's always interesting to be on a project for that long but then not longer. There were plenty of three and four-month projects that I did. And you can definitely get a large body of work done. You can look back at it and proudly stare at the code that you have written. But that length of time is always interesting to me because you end up really...for me, when I've had projects that went that long but then not longer, I always found that to be an interesting breaking point. How are you feeling moving on from it? Are you ready for something new? Are you sad to be moving on? Do you feel attached to things?
STEPH: It's always a mix. I'm definitely attached to the team, and then there are always lots of things that I'd still love to work on with that team. But then, I am also excited to start something new. That's why I love this role of consulting because then I get to hop around and see new projects and challenges and work with new people. I'm thinking seven to eight months might be a sweet spot for me in terms of the length of a project. Because I find that first month with a project, I'm really still ramping up, I'm getting comfortable, I'm getting in the groove, and I'm contributing within a short amount of time. But I still feel like that first month, I'm getting really comfortable with this new environment that I'm in. And so then I have that first month.
And then, at six months, I have more of heads-down time. And I get to really focus and work with a team. And then there's that transition period, and it's nice to know when that's coming up for several weeks, so then I have a couple of weeks to then start working on that transition phase. So eight months might be perfect because then it's like a month for onboarding, ramping up, getting comfortable. And then six months of focus, and then another month of just focusing on what needs to be transitioned so then I can transition off the team.
CHRIS: All right. Well, now we've defined it - eight months is the perfect length of a project.
STEPH: That's one of the things I like about the Boost team is because we typically have longer engagements. So that was one of the reasons when we were splitting up the teams in thoughtbot that I chose the Boost team because I was like, yeah, I like the six-month-plus project.
Speaking of that wishlist, there are little things that I've wanted to make improvements on but haven't really had time to do. There's one that's currently on my mind that I figured I'd share with you in case you have thoughts on it. I am a big proponent of using the RSpec focus filter when running tests. That way, I can just prefix a context, it, or describe block with f and then run the suite as usual. But RSpec will only run the tests that I've prefixed with that focus command, and I love it.
But we are running into some challenges with it because right now, there's nothing that catches that in a pull request. So if you commit that focus filter on some of your tests, and then that gets pushed up, if someone doesn't notice it while reviewing your pull request, then that gets merged into main. And all of the tests are still green, but it's only a subset of the tests that are actually running. And so it's been on my mind that I'd love something that's going to notice that, that's going to catch it, something that is not just us humans doing our best but something that's automated that's going to notice it for us. And I have some thoughts. But I'm curious, have you run into something like this? Do you have a way that you avoid things like that from sneaking into the main branch?
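For readers following along, the setup Steph is describing looks roughly like this. This is a minimal sketch, assuming the usual RSpec configuration; the User model and example bodies are made up for illustration.

```ruby
# spec/spec_helper.rb
RSpec.configure do |config|
  # Prefixing an example or group with `f` (fit, fdescribe, fcontext)
  # tags it with :focus. With this setting, RSpec runs only the focused
  # examples when any exist and the full suite otherwise.
  config.filter_run_when_matching :focus
end

# spec/models/user_spec.rb
RSpec.describe User do
  fit "runs because it is focused" do
    expect(User.new(name: "Steph").name).to eq("Steph")
  end

  it "is skipped while another example is focused" do
    expect(User.new(name: "Chris").name).to eq("Chris")
  end
end
```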
CHRIS: Interestingly, I have not run into this particular problem with RSpec, and that's because of the way that I run RSpec tests. I almost never use the focus functionality where you actually change the code file to say, instead of it, it's now fit, to focus that example.
I tend to lean into the functionality where you can pass RSpec the file name, a colon, and then the line number. And RSpec will automatically figure out which spec, context block, or entire file to run. And also, I have Vim stuff that allows me to do that very easily from the file. It's very rare that I would want to run more than one file.
So basically, with that, I have all of the flexibility I need. And it doesn't require any changes to the file. So that's almost always how I'm working in that mode. I really love that. And it makes me so sad when I go to JavaScript test runners because they don't have that.
That said, I've definitely felt a very similar thing with ESLint and ESLint yelling at me for having a console.log. And I'm like, ESLint, I'm working here. I got to debug some stuff, so if you could just calm down for a minute. And what I would like is a differentiation between these are checks that should only run in CI but definitely need to run in CI. And so I think an equivalent would be there's probably a RuboCop rule that says disallow fit or disallow any of the focus versions for RSpec. But I only want those to run in CI.
And this has been a pain point that I felt a bunch of times. And it's never been painful enough that I put in the effort to fix it. But I really dislike particularly that version of I'm in my editor, and I almost always want there to be no warnings within the editor. I love that TypeScript or ESLint, or other things can run within the editor and tell me what's going on. But I want them to be contextually aware. And that's the dream; I've yet to get there.
STEPH: I like the idea of ESLint having a work mode where you're like, back off, I am in work mode right now. [chuckles] I understand that I won't commit this.
CHRIS: I'm working here. [laughter]
STEPH: And I like the idea of a RuboCop. So that's where my mind went initially: well, maybe there's a custom cop, or maybe there's an existing one, and I just haven't noticed it yet. So I'm adding a rule that says, hey, if you do see an fcontext, fdescribe, fit, something like that, please fail. Please let us know, so we don't merge this in. So that's on my wishlist, not my to-don't list. That one is on my to-do list.
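There is an existing cop for this: the rubocop-rspec gem ships RSpec/Focus, which flags fit, fdescribe, fcontext, and focus: true metadata. For teams that would rather not add a dependency, a tiny standalone check run on CI can do the same job. A hypothetical sketch (the file path and the pattern are assumptions, not something from the episode):

```ruby
#!/usr/bin/env ruby
# ci/forbid_focused_specs.rb: fail the build if a focus prefix was committed.
FOCUS_PATTERN = /\b(fit|fdescribe|fcontext|fexample|fspecify)\b|focus:\s*true/

offenders = Dir.glob("spec/**/*_spec.rb").select do |path|
  File.read(path).match?(FOCUS_PATTERN)
end

if offenders.any?
  warn "Focused specs found; remove the focus before merging:"
  offenders.each { |path| warn "  #{path}" }
  exit 1
end
```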
CHRIS: I'm also intrigued, though, because the particular failure mode that you're describing is you take what is an entire spec suite, and instead, you focus down to one context block within a given file. So previously, there were 700 specs that ran, and now there are 12. And that's actually something that I would love for Circle or whatever platform you're running your tests on to be like, hey, just as a note, you had been slowly creeping up and had hit a high watermark of roughly 700 specs. And then today, we're down to 12. So either you did some aggressive grooming, or something's wrong. But a heuristic analysis of like, I know sometimes people delete specs, and that's a thing that's okay but probably not this many. So maybe something went wrong there.
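That high-watermark heuristic could even be approximated without platform support. A purely hypothetical sketch, assuming the suite was run with RSpec's JSON formatter (--format json --out tmp/rspec_results.json) and that the baseline file is persisted between builds by the CI cache; the 50% threshold is arbitrary:

```ruby
# ci/spec_count_guard.rb: complain loudly if the number of examples craters.
require "json"

RESULTS_PATH  = "tmp/rspec_results.json"      # written by --format json --out
BASELINE_PATH = "tmp/spec_count_baseline.txt" # assumed to persist between builds

count    = JSON.parse(File.read(RESULTS_PATH)).fetch("summary").fetch("example_count")
baseline = File.exist?(BASELINE_PATH) ? File.read(BASELINE_PATH).to_i : count

if count < baseline * 0.5
  warn "Example count dropped from #{baseline} to #{count}."
  warn "Aggressive grooming, or did a focus filter sneak in?"
  exit 1
end

File.write(BASELINE_PATH, [baseline, count].max.to_s)
```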
STEPH: I feel like we're turning CI into this friend at the bar that's like, "Hey, you've had a couple of drinks. I just wanted to check in with you to make sure that you're good." [laughs]
CHRIS: Yes.
STEPH: "You've had 100 tests that were running and now only 50. Hey, friend, how are you? What's going on?"
CHRIS: "This doesn't sound like you. You're normally a little more level-headed." [laughs] And that's the CI that is my friend that keeps me honest. It's like, "Wait, you promised never to overspend anymore, and yet you're overspending." I'm like, "Thank you, CI. You're right; I did say I want the test to pass."
STEPH: [laughs] I love it. I'll keep you posted if I figure something out; if I either turn CI into that friend, that lets me know when my behavior has changed in a concerning way, and an intervention is needed. Or, more likely, I will see if there's a RuboCop or some other process that I can apply that will check for this, which I imagine will be fast. I mean, we're very mindful about ensuring our test suite doesn't slow down as we're running it. But I'm just thinking about this out loud. If we add that additional cop, I imagine that will be fast. So I don't think that's too much of an overhead to add to our CI process.
CHRIS: If you've already got RuboCop in there, I'm guessing the incremental cost of one additional cop is very small. But yeah, it is interesting. That general thing of I want CI to go fast; I definitely feel that feel. And we're slowly creeping up on the project I'm working on. I think we're somewhere between five and six minutes, and we've gotten there pretty quickly; not that long ago, it was only three minutes.
We're adding a lot of feature specs, and so they are definitely accruing slowdowns in our CI. And they're worth it, I think, because they're so valuable. And they test the whole integration of everything, but it's a thing that I'm very closely watching. And I have a long list of things that I might pursue when I decide it's time for CI to get a haircut, as it were.
STEPH: I have a very hot tip for a way to speed up your test, and that is to check if any of your tests have a very long sleep in them. That came up recently [chuckles] this week where someone was working in a test and found some relic that had been added a while back that then wasn't caught. And I think it was a sleep 30. And they were like, "Hey, I just sped up our test by 30 seconds." I was like, ooh, we should grep now to see if there's anything else like that. [laughs]
CHRIS: Oh, I love the sentence we should grep now. [laughter] The correct response to this is to grep immediately. I thought you were going to go with the pro tip of you can just focus down to one context block. And then the specs will run so much faster because you're ignoring most of them, but we don't want to do that. The sleep, though, that's a pro tip. And that does feel like a thing that there could be a cop for, like, never sleep more than...frankly, let's try not to sleep at all, but also not add a sleep to our specs. We can sleep in life; it's important, but anyway. [chuckles]
STEPH: [laughs] That was the second hot tip, and you got it.
CHRIS: Lots of hot tips. Well, I'm going to put this in the category of good idea, terrible idea. I won't call it a hot tip. It's a thing we're trying. So much as we have tried to build a spec suite that is consistent and deterministic and tells us only the truth, feature specs, even in our best efforts, still end up flaking from time to time. We'll have feature specs that fail, and then eventually, on a subsequent rerun, they will pass. And I am of the mindset that A, we should try and look into those and see if there is a real cause to it. But sometimes, just the machinery of feature specs, there's so much going on there. We've got the additional overhead of we're running it within a JavaScript context. There's just so much there that...let me say what I did, and then we can talk more about the context.
So there's a gem called RSpec::Retry. It comes from the wonderful folks over at NoRedInk, a well-known Elm shop for anyone out there in the Elm world. But RSpec::Retry does basically what it says in the name. If the spec fails, you can annotate specs. In our case, we've only enabled this for the feature specs. And you can tell it to retry, and you can say, "Retry up to this many times," and et cetera, et cetera. So I have enabled this for our feature specs. And I've only enabled it on CI. That's an important distinction. This does not run locally.
So if you run a feature spec and it fails locally, that's a good chance for us to intervene and look at whether or not there's some flakiness there. But on CI, I particularly don't want the case where we have a pull request, everything's great, and we merge that pull request, and then the subsequent rebuild, which again, as a note, I would rather that Circle not rebuild it because we've already built that one. But that is another topic that I have talked about in the past, and we'll probably talk about it again in the future.
But setting that aside, Circle will rebuild on the main branch when we merge in, and sometimes we'll see failures there. And that's where it's most painful. Like, this is now the deploy queue. This is trying to get this out into whatever environment we're deploying to. And it is very sad when that fails. And I have to go in and manually say, hey, rebuild. I know that this works because it just worked in the pull request, and it's the same commit hash. So I know deterministically for reasons that this should work. And then it does work on a rebuild.
So we introduced RSpec::Retry. We have wrapped it around our feature specs. And so now I believe we have three possible retries. So if it fails once, it'll try it again, and then it'll try it a third time. So far, we've seen that each time it has had to step in, it passes on the subsequent run. But I don't know; there was some very gentle pushback, or concerns, let's call them, when I introduced this pull request, from another developer on the team, saying, "I don't know, though, I feel like this is something that we should solve at the root layer. The failures are a symptom of flaky tests, or inconsistency, et cetera, and so I'd rather not do this." And I said, "Yeah, I know. But I'm going to merge it," and then I merged it.
We had a better conversation about that. I didn't just broadly overrule. But I said, "I get it, but I don't see the obvious place to shore this up. I don't see where we're doing weird inconsistent things in our code. This is just, I think, inherent complexity of feature specs." So I did it, but yeah, good idea, terrible idea. What do you think, Steph? Maybe terrible is too strong of a word. Good idea, mediocre idea.
STEPH: I like the original branding. I like the good idea, terrible idea. Although you're right, that terrible is a very strong branding. So I am biased right now, so I'm going to lead in answering your question by stating that because our current project has that problem as well where we have these flaky tests. And it's one of those that, yes, we need to look at them. And we have fixed a large number of them, but there are still more of them.
And it becomes a question of are we actually doing something wrong here that we need to fix? Or, like you said, is it just the nature of these feature specs? Some of them are going to occasionally fail. What reasonable improvements can we make to address this at the root cause? I hadn't heard of RSpec::Retry, so I'm interested enough that I want to check it out. Because when you add that, you annotate a test. When a test fails, does it run the entire build, or will it rerun just that test? Do you happen to know?
CHRIS: Just the test. So it's configured in an around block on the feature specs. And so you tell it like, for any feature spec, it's like config.include RSpec::Retry for feature specs or whatever. So it's just going to rerun the one feature spec that failed when and if that happens. So it's very, very precise in that sense, whereas when we have a failure merging into the main branch, I have to rebuild the whole thing. So that's five or six minutes plus whatever latency for me to notice it, et cetera, whereas this is two more seconds in our CI runtime. So that's great. But again, the question is, am I hiding? Am I dealing with the symptoms and not the root cause, et cetera?
STEPH: Is there a report that's provided at the end that does show these are the tests that failed and we had to rerun them?
CHRIS: I believe no-ish. You can configure it to output, but it's just going to be outputting to standard out, I believe. So along with the sea of green dots, you'll see that it had to retry this one. So it is visible, but it's not aggregated. And the particular thing is there's the JUnit reporter that we're using, so the common XML format for this reports how long our tests took to run and which ones passed and failed.
So Circle, as a particular example, has platform-level insights for that kind of stuff. And they can tell you these are your tests that fail most commonly. These are the tests that take the longest to run, et cetera. I would love to get the retries integrated into that output so they surface to Circle, and Circle could then surface them to us. But right now, I don't believe that's happening. So it is truly I will not see it unless I actively go search for it. To be truly honest, I'm probably not doing that.
STEPH: Yeah, that's a good, fair, honest answer. You mentioned earlier that if you want a test to retry, you have to annotate the test. Does that mean that you get to highlight specific tests that you're marking those to say, "Hey, I know that these are flaky. I'm okay with that. Please retry them." Or does it apply to all of them?
CHRIS: I think there are different ways that you can configure it. You could go the granular route of we know this is a flaky spec, so we're going to only put the retry logic around it. And that would be a normal RSpec annotation, sort of tagging the spec, I think, is the terminology there. But we've configured it globally for all feature specs. So in a spec support file, we just say config.include RSpec::Retry where the type is a feature. And so every feature spec now has the possibility to retry. If they pass on the first try, which is the hope most of the time, then they will not be retried. But if they don't, if they fail, then they'll be retried up to three times, or up to two additional times, I think is the total.
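For anyone who wants to try it, the shape of that configuration, based on the rspec-retry README, is roughly the following. The CI-only guard and the retry count reflect the choices described in this conversation rather than gem defaults.

```ruby
# spec/support/rspec_retry.rb
require "rspec/retry"

RSpec.configure do |config|
  config.verbose_retry = true                # log each retry to standard out
  config.display_try_failure_messages = true # show why the earlier tries failed

  config.around(:each, type: :feature) do |example|
    # Give feature specs up to three tries total, but only on CI; locally,
    # a flaky failure should stay visible so it can be investigated.
    example.run_with_retry retry: ENV["CI"] ? 3 : 1
  end
end
```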
STEPH: Okay, cool. That's helpful. So then I think I have my answer. I really think it's a good idea to automate retrying tests that we have identified that are flaky. We've tried to address the root, and our resolution was this is fine. This happens sometimes. We don't have a great way to improve this, and we want to keep the test. So we're going to highlight that this test we want to retry. And then I'm going to say it’s not a great idea to turn it on for all of them just because then I have that same fear about you're now hiding any flaky tests that get introduced into the system. And nobody reasonably is going to go and read through to see which tests are going to get retried, so that part makes me nervous.
CHRIS: I like it. I think it's a balanced and reasonable set of good and terrible idea. Ooh, it's perfect. I don't think we've had a balanced answer on that yet.
STEPH: I don't think so.
CHRIS: This is a new outcome for this segment. I agree. Ideally, in my mind, it would be getting into that XML format, the output from the tests, so that we now have this artifact, we can see which ones are flaky and eventually apply effort there. What you're saying feels totally right of we should be more particular and granular. But at the same time, the failure mode and the thing that I'm trying, I want to keep deploys going. And I only want to stop deploys if something's really broken.
And if a spec retries, then I'm fine with it is where I've landed, particularly because we haven't had any real solutions where there was anything weird in our code. Like, there's just flakiness sometimes. As I say it, I feel like I'm just giving up. [laughs] And I can hear this tone of stuff's just hard sometimes, and so I've taken the easy way out. And I guess that's where I'm at right now. But I think what you're saying is a good, balanced answer here. I like it. I don't know if I'm going to do anything about it, but...[laughter]
STEPH: Well, going back to when I was saying that I'm biased, our team is feeling this pain because we have flaky tests. And we're creating tickets, and we're trying to do all the right things. We create a ticket. We have that. So it's public. So people know it's been acknowledged. If someone's working on it, we let the team know; hey, I'm working on this. So we're not duplicating efforts. And so, we are trying to address all of them. But then some of them don't feel like a great investment of our time trying to improve.
So that's what I really do like about the RSpec::Retry is then you can still have a resolution. Because right now, your resolution is either to fix it or to change the code so that maybe you can test it in a different way. There's not really a good medium step there. And so the retry feels like an additional good outcome to add to your tool bag, to say, hey, I've triaged this, and it feels reasonable that we want to retry this.
But then there's also that concern of we don't want to hide all of these flaky tests from ourselves in case we have done it and there is an opportunity for us to improve it. So I think that's what I do really like about it because right now, for us, when a test fails, we have to rerun the entire build, and that's painful. So if tests are taking about 20 minutes right now, then one spec fails, and then you have to wait another 20 minutes.
CHRIS: I would have turned this on years ago with a 20-minute build time. [chuckles]
STEPH: [laughs] Yeah, you're not wrong. But also, I didn't actually know about RSpec::Retry until today. So that may be something that we introduce into our application or something that I bring up to the team to see if it's something that we want to add. But it is interesting, that initial sort of ooh kind of feeling that the team will give you when introducing it, because it feels bad. It feels wrong to be like, hey, we're just going to let these flaky tests live on, and we're going to automate retrying them to at least speed us up. And it's just a very interesting conversation around where we want to invest our time and between the risk and payoff.
And I had a similar experience this week where I had that conversation, but this one was more with myself where I was working through a particular issue where we have a state in the application where something weird was done in the past that led us to a weird state. And so someone raised a very good question where it's like, well, if what you're saying is technically an impossible state, we should make it impossible, like at the database layer. And I love that phrase. And yet, there was a part of me that was like, yes, but also doing that is not a trivial investment. And we're here because of a very weird thing that happened before.
It felt like one of those interesting questions: do we want to pursue the more aggressive route, like, let's make this impossible for the future? Or do we want to address it for now and see if it comes back up, and then we can invest more time in it? And I had a hard time walking myself through that because my initial response was, well, yeah, totally, we should make it impossible. But then I walked through all the steps that it would take to make that happen, and it was not very trivial.
And so it was one of those; it felt like the change that we ended up with was still an improvement. It was going to prevent users from seeing an error. It was still going to communicate that this state is an odd state for the application to be in. But it didn't go as far as to then add in all of the safety measures. And I felt good about it. But I had to convince myself to feel good about it.
CHRIS: What you're describing there, the whole thought sequence, really feels like the encapsulation of it depends. And that being part of the journey of learning how to do software development and what it means. And you actually shared a wonderful video with me yesterday; it was Cassidy Williams at GitHub Universe. And it was her talking to her younger self, and just it depends, and it was so true. So we will include a link to that in the show notes because that was a wonderful thing for you to share. And it really does encapsulate this thing. And from the outside, before I started doing software development, I was like, it's cool. I'm going to learn how to sling code and fix the stuff and hack, and it'll be great, and obvious, and correct, and knowable. And now I'm like, oh man, squishy nonsense. That's all it is.
STEPH: [laughs]
CHRIS: Fun squishy, and I like it. It's so good. But it depends. Exactly that one where you're like, I know that there's a way to get to correctness here but is it worth the effort? And looping back to...I'm surprised at the stance that I've taken where I'm just like, yeah, I'm putting in RSpec::Retry. This feels like the right thing. I feel good about this decision. And so I've tried to poke at it a tiny bit.
And I think what matters to me deeply in a list of priorities is number one correctness. I care deeply that our system behaves correctly as intended and that we are able to verify that. I want to know if the system is not behaving correctly. And that's what we've talked about, like, if the test suite is green, I want to be able to deploy. I want to feel confident in that.
Flaky specs exist in this interesting space where if there is a real underlying issue, if we've architected our system in a way that causes this flakiness and that a user may ever experience that, then that is a broken system. That is an incorrect system, and I want to resolve that. But that's not the case with what we're experiencing. We're happy with the architecture of our system. And when we're resolving it, we're not even really resolving them. We're just rerunning manually at this point. We're just like, oh, that spec flaked. And there's nothing to do here because sometimes that just happens. So we're re-running manually. And so my belief is if I see all green, if the specs all pass, I know that I can deploy to production.
And so if occasionally a spec is going to flake and retrying it will make it pass (and I know that pass doesn't mean oh, this time it happened to pass; it's that is the correct outcome) and we have a false negative before, then I'm happy to instrument the system in a way that hides that from me because, at this point, it does feel like noise. I'm not doing anything else with the failures when we were looking at them more pointedly. I'm not resolving those flaky specs. There are no changes that we've made to the underlying system. And they don't represent a failure mode or an incorrectness that an end-user might see. So I honestly want to paper over and hide it from myself. And that's why I've chosen this. But you can see I need to defend my actions here because I feel weird. I feel a little off about this. But as I talk through it, that is the hierarchy. I care about correctness.
And then, the next thing I care about is maintaining the deployment pipeline. I want that to be as quick and as efficient as possible. And I've talked a bunch about explorations into the world of observability and trying to figure out how to do continuous deployment because I think that really encourages overall better engineering outcomes. And so first is correctness. Second is velocity. And flaky specs impact velocity heavily, but they don't actually impact correctness in the particular mode that we're experiencing them here. They definitely can. But in this case, as I look at the code, I'm like, nah, that was just noise in the system. That was just too much complexity stacked up in trying to run a feature spec that simulates a browser and a user clicking in JavaScript and all this stuff and the things. But again, [laughs] here I am. I am very defensive about this apparently.
STEPH: Well, I can certainly relate because I was defending my answer to myself earlier. And it is really interesting what you're pointing out. I like how you appreciate correctness and then velocity, that those are the two things that you're going after. And flaky tests often don't highlight an incorrect system. It is highlighting that maybe our code or our tests are not as performant as we would like them to be, but the behavior is correct. So I think that's a really important thing to recognize.
The part where I get squishy is where we have encountered on this project some flaky tests that did highlight that we had incorrect behavior, and there's only been maybe one or two. It was rare that it happened, but it at least has happened once or twice where it highlighted something to us that when tests were run...I think there's a whole lot of context. I won't get into it. But essentially, when tests were being run in a particular way that made them look like a flaky test, it was actually telling us something truthful about the system, that something was behaving in a way that we didn't want it to behave. So that's why I still like that triage that you have to go through.
But I also agree that if you're trying to get out at a deploy, you don't want to have to deal with flaky tests. There's a time to eat your vegetables, and I don't know if it's when you've got a deploy that needs to go out. That might not be the right time to be like, oh, we've got a flaky test. We should really address this. It's like, yes; you should note to yourself, hey, have a couple of vegetables tomorrow, make a ticket, and address that flaky test but not right now. That's not the time. So I think you've struck a good balance. But I also do like the idea of annotating specific tests instead of just retrying all of them, so you don't hide anything from yourself.
CHRIS: Yeah. And now that I'm saying it and now that I'm circling back around, what I'm saying is true of everything we've done so far. But it is possible that now this new mode that the system behaves in where it will essentially hide flaky specs on CI means that any new flaky regressions, as it were, will be hidden from us.
And thus far, almost all or I think all of the flakiness that we've seen has basically been related to timeouts. So a different way to solve this would potentially be to up the Capybara wait time. So there are occasionally times where the system's churning through, and the various layers of the feature specs just take a little bit longer. And so they miss...I forget what it is, but it's like two seconds right now or something like that. And I can just bump that up and say it's 10 seconds. And that's a mode that if eventually, the system ends in the state that we want, I'm happy to wait a little longer to see that, and that's fine.
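For reference, that knob is Capybara's default wait time, which does default to 2 seconds; a minimal sketch of both the global and the scoped version:

```ruby
# spec/support/capybara.rb
# Give asynchronous UI updates more time before matchers give up.
Capybara.default_max_wait_time = 10 # seconds; Capybara's default is 2

# Or, inside an individual spec, raise it only around a known-slow step:
# Capybara.using_wait_time(30) do
#   expect(page).to have_content("Report generated")
# end
```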
But to name some of the ways that flaky tests can actually highlight truly incorrect things: race conditions are a pretty common one, where this behaves fine most of the time, but whether the background job happens to succeed before the subsequent request determines what you'll see when you go to the page. That's a thing that a real user may experience, and in fact, it might even be more likely in production because production has differential performance characteristics on your background jobs versus your actual application. And so that's the sort of thing that would definitely be worth keeping in mind.
Additionally, if there are order issues within your spec suite, if the randomize...I think actually RSpec::Retry wouldn't fix this, though, because it's going to retry within the same order. So that's a case that I think would still be highlighted. It would fail three times and then move on. But those we should definitely deal with. That's a test-related thing. But the first one, race conditions, that's totally a thing. They come up all the time. And I think I've potentially hidden that from myself now. And so, I might need to walk back what I said earlier because I feel like it's been true thus far that that has not been the failure mode, but it could be moving forward. And so I really want to find out if we've got flaky specs. I don't know; I feel like I've said enough about this. So I'm going to stop saying anything new. [laughs] Do you have any other thoughts on this topic?
STEPH: Our emotions are a pendulum. We swing hard one way, and then we have to wait till we come back and settle in the middle. But there's that initial passion play where you're really frustrated by something, and then you swing, and you settle back towards something that's a little more neutral.
CHRIS: I don't trust anyone who pretends like their opinions never change. It doesn't feel like a good way to be.
STEPH: Oh, I hope that...Do people say that? I hope that's not true. I hope we are all changing our opinions as we get more information.
CHRIS: Me too.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: Well, shifting only ever so slightly because it turns out it's a very related question, but we have a listener question. As always, thank you so much to everyone who sends in listener questions. We really appreciate them. And today's question comes from Mikhail, and he writes in, "Regarding the discussion in Episode 311 on requiring commits merged to be tested, I have a question on how you view multi commit PRs. Do you think all the commits in a PR should be tested or only the last one? If you test all commits in a PR, do you have any good tips on setups for that? Would you want all commits to pass all tests? For one, it helps a lot when using Git bisect. It is also a question of keeping the history clean and understandable.
As a background on the project I currently work on, we have the opinion that all commits should be tested and working. We have now decided on single-commit PRs only since this is the only way that we can currently get the setup working reasonably on our CI. I would like to sometimes make PRs with more than one commit since I want to make commits as small as possible. In order to do that, we would have to find a way to make sure all commits in the PR are tested. There seem to be some hacky ways to accomplish this, but there is not much talk about it. Also, we are strict in requiring a linear history in all our projects. Kind regards, Mikhail." So, Steph, what do you think?
STEPH: I remember reading this question when it came in. And I have an experience this week that is relevant to this mainly because I had seen this question, and I was thinking about it. And off the cuff, I haven't really thought about this. I haven't been very concerned about ensuring every single commit passes because I want to ensure that, ultimately, the final commit that I have is going in.
But I also rarely have more than one commit in a PR. So that's often my default mode. There are a couple of times that I'll have two, maybe three commits, but I think that's pretty rare for me. I'll typically have just one commit. So I haven't thought about this heavily. And it's not something that frankly I've been concerned about or that I've run into issues with.
From their perspective about using Git bisect, I could see how that could be troublesome, like if you're looking at a commit and you realize there's a particular commit that's already merged and that fails. The other area that I could think of where this could be problematic is if you're trying to roll back to a specific commit. And if you accidentally roll back to a commit that is technically broken, but you didn't know that because it was not the final commit as was getting tested on CI, that could happen. I haven't seen that happen. I haven't experienced it. So while that does seem like a legitimate concern, it's also one that I frankly just haven't had.
But because I read this question from this person earlier this week, I actually thought about it when I was crafting a PR that had several commits in it, which is kind of unusual for me since I'm usually one or two commits in a PR. But for this one, I had several because we use standard RB in our project to handle all the formatting. And right now, we have one of those standard to-do files because we added it to the project. But there are still a number of manual fixes that need to be applied.
So we just have this list of files that still need to be formatted. And as someone touches that file, we will format it, and then we'll take it out of that to-do list. So then standard RB will include it as it's linting all of our files. And I decided to do that for all of our spec files. Because I was like, well, this was the safest chunk of files to format that will require the least amount of review from folks. So I just want to address all of them in one go. But I separated the more interesting changes into different commits just to make others aware of, like, hey, this is something standard RB wants.
And it was interesting enough that I thought I would point it out. So my first commit removed all the files from that to-do list, but then my other commits are the ones that made actual changes to some of those files that needed to be corrected. So technically, one or two of my middle commits didn't pass the standard RB linting. But because CI was only running that final commit, it didn't notice that.
And I thought about this question, and so I intentionally went back and made sure each of those commits were correct at that point in time. And I feel good about that. But I still don't feel the need to add more process around ensuring each commit is going to be green. I think I would lean more in favor of let's keep our PR small to one or two commits. But I don't know; it’s something I haven't really run into. It's an interesting question. How about you? What are your experiences, or what are your thoughts on this, Chris?
CHRIS: When this question came through, I thought it was such an interesting example of considering the cost of process changes. And to once again reference one of our favorite blog posts by German Velasco, the Say No to More Process post, which we will, of course, link in the show notes. This is such a great example of there was likely a small amount of pain that was felt at one point where someone tried to run git bisect. They ran into a troublesome commit, and they were like, oh no, this happened. We need to add processes, add automation, add control to make sure this never happens again.
Personally, I run git bisect very rarely. When I do, it's always a heroic moment just to get it started and to even know which is the good and which is the bad. It's always a thing anyway. So it would be sad if I ran into one of these commits. But I think this is a pretty rare outcome. I think in the particular case that you're talking about, there's probably a way to actually tease that apart. I think it sounds like you fixed those commits knowing this, maybe because you just put it in your head. But the idea that the process that this team is working on has been changed such that they only now allow single commit PRs feels like too much process in my mind. I think I'm probably 80%, maybe 90% of the time; it’s only a single commit in a PR for me.
But occasionally, I really value having the ability to break it out into discrete steps, like these are all logically grouped in one changeset that I want to send through. But they're discrete steps that I want to break apart so that the team can more easily review it, so that we have granular separation, and I can highlight this as a reference. That's often something that I'll do: I want this commit to stand alone because I want it to be referenced later on. I don't want to just fold it into the broader context in which it happened, but it's pretty rare. And so to say that we can't do that feels like we're adding process where it may not be worth it, where the cost of that process change is too high relative to the value that we're getting, which is speculatively being able to run git bisect and not hit something problematic in the future.
There's also the more purist, dogmatic view of well, all commits should be passing, of course. Yeah, I totally agree with that. But what's it worth to you? How much are you willing to spend to achieve that goal? I care deeply about the correctness of my system but only the current correctness. I don't care about historical correctness as much, some. I think I'm diminishing this more than I mean to. But really back to that core question of yes, this thing has value, but is it worth the cost that we have to pay in terms of process, in terms of automation and maintenance of that automation over time, et cetera or whatever the outcome is? Is it worth that cost? And in this case, for me, this would not be worth the cost. And I would not want to adopt a workflow that says we can only ever have single commit PRs, or all commits must be run on CI or any of those variants.
STEPH: This is an interesting situation where I very much agree with everything you're saying. But I actually feel like what Mikhail wants in this world, I want it too. I think it's correct in the way that I do want all the commits to pass, and I do want to know that. And since I do fall into the default, like you mentioned, where 80%, 90% of my PRs are one commit, I just already have that. And the fact that they're enforcing that with their team is interesting. And I'm trying to think through why that feels cumbersome to enforce.
And I'm with you where I'll maybe have a refactor commit or something that goes before. And it's like, well, what's wrong with splitting that out into a separate PR? What's the pain point of that? And I think the pain point is the fact that one, you have two PRs that are stacked on each other. So you have the first one that you need to get reviewed, and then the second one; there’s that bit of having to hop between the two if there's some shared context that someone can't just easily review in one pull request.
But then there's also, as we just mentioned, there's CI that has to run. And so now it's running on both of them, even though maybe that's a good thing because it's running on both commits. I like the idea that every commit is tested, and every commit is green. But I actually feel like it's some of our other processes that make it cumbersome and hard to get there. And if CI did run on every commit, I think it would be ideal, but then we are increasing our CI time by running it on every commit.
And then it comes down to essentially what you said, what's the risk? So if we do merge in a commit that doesn't work or has something that's failing about it but then the next commit after that fixes it, what's the risk that we're going to roll back to that one specific commit that was broken? If that's a high risk for you and your team, then adding this process is probably the really wise thing to do because you want to make sure the app doesn't go down for users. That's incredibly important. If that's not a high risk for your team, then I wouldn't add the process.
CHRIS: Yeah, I totally agree. And to clarify my stance: for me, this change, this process change would not be worth the trade-off. I love the idea. I love the goal of it. But it is not worth the process change, and that's partly because I haven't particularly felt the pain. CI is not an inexhaustible resource, I have learned. I'm actually somewhat proud that our very small team working on this project just recently ran out of our CI budget, and Circle was like, "Hey, we got to charge you more." And I was like, "Cool, do that." But there is a cost, both in terms of clock time for each PR running and all of those things. We have to consider all of these different things.
And hopefully, we did a useful job of framing the conversation, because as always, it depends, but it depends on what. And in this case, there's a good outcome that we want to get to, but there's an associated cost. And for any individual team, how you weigh the positive of the outcome versus how you weigh the cost will alter the decision that you make. But that's I think, critically, the thing that we have to consider.
I've also noticed I've seen this conversation play out within teams where one individual may acutely feel the pain, and therefore they're anchored in that side. And the cost is irrelevant to them because they're like, I feel this pain so acutely, but other people on the team aren't working in that part of the codebase or aren't dealing with bug triage in the same way that that other developer is. And so, even within a team, there may be different levels of how you measure that. And being able to have meaningful conversations around that and productively come to a group decision and own that and move forward with that is the hard work but the important work that we have to do.
STEPH: Yeah. I think that's a great summary; it depends. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Sponsored By:
Chris regains several of his developer merit badges and embarks on a perilous CSRF (Cross-Site Request Forgery) adventure. Steph shares highlights from Plucky, a management training course, including ways we can "click" and "break apart" from our current role, and how to have hard conversations.
They also discuss how software development processes change at different team sizes, processes that break down as teams grow, and processes that are resilient at any team size.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy.
Become a Sponsor of The Bike Shed!
Transcript:
STEPH: Boom. I'm recording. Magic is happening. [singing] What's this? What's this? It's a Bike Shed episode. What's this? What's this?
CHRIS: You did that on the mic. [laughter] So you just started recording too, so it's not like you're like, "Oh, I forgot I was recording."
STEPH: Oh, I didn't have a finishing line that rhymes with shed.
CHRIS: Head, dead, bread, spread.
STEPH: [singing] Is TDD dead? I don't know. [laughs]
CHRIS: Cool. I liked it.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: What's new? I had a fun experience over the past week or two of regaining some of my developer merit badges, which is always enjoyable. So one was I had to configure AWS, specifically S3 and IAM such that I could upload files to an S3 bucket, which seems like one of those things that a developer should be able to do, and it's just not that hard. And, man, I failed so many times, and I stared at the screen. And the ARNs I think that's another acronym that I had to try and figure out what it means and fight against. Anyway, I got there. So that's one merit badge earned. I really hope [laughs] I correctly and securely configured access to an S3 bucket such that we could upload files in our Rails app. Cool, neat.
Moving on, the next merit badge that I went for was restoring the sea of green dots. Our RSpec output had gathered some noise. There was a whole bunch of noise across a variety of things. There were some dev tools that were dumping some stuff in there. And there was something related to apparition, which is the...I want to say it's the Capybara feature spec driver that we're using now, which sits on top of ChromeDriver or something like that. I don't really understand the details, but it was complaining about something. And I found a fix, and then I fixed it and whatnot. But it was one of those. I did this on a Saturday because I was just like, you know what? This will be cathartic and healing. And then I got to the sea of green dots, and I was so happy to get to it.
STEPH: This is me...I'm giving you a round of applause.
CHRIS: Well, thank you. Arguable whether it delivered any real value to users, but again, this was Saturday effort, so I was allowed to indulge my fastidious caretaker of the code role.
STEPH: Sorry, before we move on to more serious things, can we pause to talk about developer merit badges? I really, really want cute felt badges that we can...I mean, I can't design them. I don't have the talent. But I think between us and other folks, we could design amazing merit badges, and then people could collect those. I'm very much in love with that idea.
CHRIS: I love the idea. I am now certain that if we were to really pursue this, we would fall into the deepest of bike sheds as we try and define, well, what are all the merit badges? And what are the different levels?
STEPH: [laughs]
CHRIS: And how many do you need to collect before you can get to what are the different...There are just so many different taxonomies that we could introduce, and, oh man, I could spend a couple of weeks on that.
STEPH: [laughs] It has a very strong Pokémon vibe too of you got to catch them all.
CHRIS: Absolutely.
STEPH: Okay. All right. We won't digress into bikeshedding merit badges, but I'm still very, very interested in that idea.
CHRIS: Indeed. If anyone out there in the listener space wants to just make these, that would be great. This is the way that I avoid bikeshedding now is I just say I'm not allowed to make these decisions or even think about it. But if these happened into the world, I would be happy about that.
STEPH: Oh, I just remembered we do have something similar at thoughtbot. They're not physical badges that you can hold, but I think we've talked about turning them into physical badges. But we have our internal tool hub that we use to track our schedules. And at one of the fun Ralphapalooza events that we had, a team came up with the idea of introducing badges in the tool hub, so then you could award people badges. You could give people badges. And it's very cute. So they could probably help us with the taxonomy. They've probably already figured out a number of badges we could get started with.
CHRIS: And of course, this is where my brain went initially to like, oh, what would the taxonomy be? But I think that's how this goes bad. And if we just keep it in the this is cute and fun, and what are all the possible merit badges, but they're all equal, and the points are made up anyway, and then it's just a fun thing, then I'm like, I'm super into this. Let's do that. Have you used a regular expression to parse HTML? Congratulations, you get a merit badge. Have you not used regular expressions to parse HTML? You get a different merit badge. [chuckles]
STEPH: [laughs] I feel very positive that I could be chief of cute and fun. I could manage that department.
CHRIS: Yes, that feels like definitely a role that you could really excel at. But shifting around ever so slightly, I did run into a fun bug this week. And it was a mystery tour of, I'm going to say, sadness and then eventual learning and understanding, and I think we've come to a better place. But I want to tell a story, take us on a quick tour of the adventure that I went through.
So we recently saw a handful of exceptions come through in our exception monitoring service and then piped into Slack, where we see those around CSRF token expiry. So this occasionally happens in a Rails app. The CSRF token that was on the page gets rotated. And therefore, when someone...if they have an older version of the page open and they try and submit a form or something like that, then CSRF protection is going to kick in. And you do get some false negatives there or some cases where like, nope, this is actually a fine user, this is not hacking, this is nothing bad. It's just that that user had a tab open or something like that.
I'll be honest; I want to understand better the timeline of expiry and how Rails expires those and whatnot. But it's one of those things; it's deep enough in Rails that I trust that they are doing a very reasonable thing. And I think the failures that we're seeing are part of the game. And so, mostly, we wanted to add nicer handling around that. So thankfully, Inertia actually has a really wonderful page in their docs about handling Cross-Site Request Forgery token expiration, this whole thing. This is a particular failure mode that your app might have. And so it's nice to be able to provide a nicer user experience.
And so what we ended up doing is if we catch that exception, we have a rescue_from in our application controller that will instead of having this be a 500 and just a full, like, something went wrong error page, we instead respond in an Inertia-like way to basically show a flash message that says, "This page has expired. Please refresh the page to continue." And if the user just refreshes the page, then they will get a new CSRF token. And from there, everything is going to be fine. So it's not ideal. But it is, I think, both secure and now a nicer user experience.
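The pattern being described is roughly the following; this is a sketch of that shape rather than the app's actual code, and the handler name is made up. The flash copy is the message quoted above.

```ruby
class ApplicationController < ActionController::Base
  rescue_from ActionController::InvalidAuthenticityToken, with: :handle_expired_csrf_token

  private

  # Instead of a 500 error page, send the user back to where they were with
  # a flash message; refreshing the page picks up a fresh CSRF token.
  def handle_expired_csrf_token
    redirect_back fallback_location: root_path,
      notice: "This page has expired. Please refresh the page to continue."
  end
end
```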
STEPH: Yeah, that sounds really nice. When they refresh the page, do they lose all that form data? I'm curious how painful of a flow that is for the user.
CHRIS: Currently, yes. Inertia actually has a really nice feature for remembering form data. If you've ever been on GitHub and you're filling in a box, and then you go away to a different tab, and you come back, and it's still there, and you're happy about that, it's that sort of thing. So we could configure that. At this point, we don't have...most of our forms are pretty small. So this is not something that we opted to do proactive management around. But that is definitely something that we could add but not something that's default or anything like that.
STEPH: Cool. Yeah, that makes sense. I was just curious because yeah, either small form doesn't really matter, or also, this may be just a small enough error that only a handful of people are experiencing it that it's also just not that big of a deal.
CHRIS: Yes, this definitely should be an edge case. And we've also recently been working on functionality to log folks out after a period of inactivity, which would also, I think, obviate this in a different way. So all total, this shouldn't be a big deal. And this was basically a quick, little snippet of code that we thought we could just drop in, and everything would be great because it shouldn't happen much.
But then I was testing out a different feature on staging, and everything I tried to do was popping up this little alert flash message that was like, "Hey, your page is expired." And I was like, that seems bad. And then I realized literally every action, any non-GET request, was getting this response that the CSRF token didn't match. And I was like, well, this seems bad. Luckily, it was only on staging and hadn't made it to production.
But it had made it to staging, which meant it had gotten through CI, which was very concerning because we have a pretty robust set of feature specs at this point. We built up a bunch of fakes for all of the external data systems that we're interacting with. And we're really putting the app through its paces and trying to do so in a very production-like way. And so I was like, this is such a deep fundamental breakage. I don't know what's going on here. And so I started to investigate.
And it turns out that in a recent commit, I had started using Axios, which is a little wrapper around the Fetch API. They may not actually use the Fetch API under the hood, but it allows you to have a nicer interface to make XHRs. And we implicitly had that in our package already by virtue of Inertia. Inertia uses it under the hood, but I wanted to make it explicit because now I was using it directly. So I figured that's cool. I will yarn add Axios, and then I will continue on with my day. And I worked on my feature and everything was great. And then I pushed it up into a pull request, and everything was great, and CI passed. And I got it onto staging, and everything was very sad.
So then I started on the adventure of like, what is going on here? It turns out that somewhere between version 0.21.1 of Axios and 0.23.0, which there's a bunch of things about those version numbers that make me uncomfortable but here we are, somehow the behavior where you can configure the XSRF header name, which is what they're calling it on their side, the configuration stopped working. And so our override that says this is what our CSRF or XSRF token should be called when it's sent back up to the server in a header, that override was getting lost. And so they were falling back to their default name, Axios was. And, therefore, Rails was like, "There's no CSRF token here. So this is going to be a no for me. I'm going to reject all of the requests."
So the fix was relatively easy to roll back and to pin the version of Axios to the previous version that we had been using. I didn't actually intend to upgrade it. I just intended to make it an explicit dependency. But by doing that, I accidentally upgraded it. I don't love that there was this pretty deep breakage in that. I haven't done the good work of trying to open an issue. I still want to scan through and see if there is an open issue or a conversation around this before I start making any noise. But I think if I don't find anything, this is the sort of thing that should be reported because I can't imagine I'm the only one running into this.
Likewise, I was very sad that my test suite did not find this. Turns out in Rails, CSRF protection is just turned off in test mode, which may be overall makes sense. But for feature specs, in particular, I definitely want to have it. And so, it was nice that I was able to find the relevant configuration. And we introduced an RSpec configuration that says, "If it's a feature spec, save off the existing configuration and enable CSRF. And then after the spec, go back to whatever the previous was."
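A minimal sketch of that kind of RSpec hook, assuming the Rails default of turning forgery protection off in the test environment; the file location and structure here are illustrative:

```ruby
# spec/support/csrf.rb (illustrative location)
RSpec.configure do |config|
  config.around(:each, type: :feature) do |example|
    # Rails disables forgery protection in test mode by default; enable it
    # for feature specs so they exercise the real CSRF round trip, then
    # restore whatever the previous setting was.
    original = ActionController::Base.allow_forgery_protection
    ActionController::Base.allow_forgery_protection = true
    begin
      example.run
    ensure
      ActionController::Base.allow_forgery_protection = original
    end
  end
end
```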
So now all feature specs run with CSRF. And I did make sure to push up that as a singular change to CI, and CI was very unhappy with me. Many, many feature specs failed, which was good. That was what we were going for. They failed for the right reason because things were fundamentally broken. And then, I was able to update the package.json and the yarn.lock, pin the version, fix everything.
But man, there was this period of like, oh man, the app is broken in such a fundamental way. Users just can't do stuff anymore. They can view anything, but they couldn't change any data. And it just snuck through CI. And that feeling is the worst feeling. We had, at this point, built up a lot of trust in our test suite. It was really telling us when stuff was wrong, and if it was green, I felt very good merging. And suddenly, this just really shook me to my core on that front.
STEPH: I love these journeys that you take us on. I mean, they're painful for you, and I am sorry to hear that. But I love these journeys that you take us on. [chuckles]
CHRIS: I usually only take us on them when I've figured out the answer. And I'm like, all right, here's where we're at. It was rough for a little while, but now we are happy. And thankfully, we have that one configuration of saying, hey, Rails, also, please include this as part of our production-like configuration for test mode. So I feel better that moving forward, this breakage won't happen again.
STEPH: We should add that as another merit badge for telling a bug story. All right, I'm taking off my hat of chief of fun and cuteness. So this may not be terribly relevant to all the things that you just shared. But I am curious where you mentioned that with Axios because you'd specified the name of the token, and then that overriding behavior is what then broke. And so then that's what led to this whole adventure that you went on. I'm curious, why did y'all customize the name of that token?
CHRIS: A, this is a great question. B, I'm not super sure. C, I think the reason is because we were trying to align to Rails. So we have a little middleware on the Rails side that will serialize the CSRF token into a cookie. And then that cookie value gets read by Axios and sent back up as a header on the request. So this is the way that with Inertia CSRF just kind of works and is good. And it's different than Rails' normal. We put a hidden input into any form. And so Rails holistically knows about both sides of that, and everything works fine. But now I have to manually round trip the CSRF token.
And Axios's default configuration is a header named X-XSRF-TOKEN, and we needed X-CSRF-TOKEN because that's what Rails is looking for. I probably could have configured it the other way on the Rails side. But one way or another, I had to get Rails and Axios to come to an agreement, to meet at a table, and to agree to collectively protect the app. And so I had to mediate that discussion, and that's what ended us here.
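The Rails side of that round trip might look something like the snippet below, which is one common pattern for this kind of cookie-based handoff; the cookie name here is an assumption, and on the JavaScript side Axios would be configured to read that cookie and send it back under the header name Rails expects (X-CSRF-Token):

```ruby
class ApplicationController < ActionController::Base
  after_action :set_csrf_cookie

  private

  # Serialize the CSRF token into a cookie so the client-side code can read
  # it and echo it back as a request header on non-GET requests.
  def set_csrf_cookie
    cookies["XSRF-TOKEN"] = form_authenticity_token
  end
end
```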
STEPH: A meeting of the minds. [chuckles] Cool, cool, cool. Yeah, that makes sense. I was just curious because then that would have changed the whole journey. But yeah, that is super interesting. And I definitely resonate with the idea of when you've really invested in your test suite, and you trust it that then when it doesn't catch something that obviously breaks the application, then that feels like something worth prioritizing and digging into and then figuring out how to bring back that parity.
I don't know that I've turned on enable CSRF for feature spec. So I'm also very interested in looking at that configuration and considering if I need that for any of my future client projects if that's something that I need to remember for the future because that's very niche but good to know about.
CHRIS: I feel like this only really comes up if you're working in the...it's called the odd middle ground that Inertia ends up occupying. If you're in a traditional Rails app that is generating HTML server-side, forms are generated. They got the CSRF token inlined there in a hidden input. And then when you post that form, it's coming back up. The names automatically are going to match. You don't need to worry about it. And it's probably fine to not have it included in test mode.
And if you're at the other end of the spectrum and you've got API interaction, and that's the way you're doing everything, then you have a different auth mechanism and cookies, and whatnot just don't apply in the same way. And so it won't really matter on that side but for a different reason. And it's only because we're in this interesting middle ground, which, again, I really love. And it's the thing that I love about Inertia. But this is a rare case where it's like, oh, we do have to bring the two sides to meet in the middle. And this is a case where, unfortunately, due to a very subtle breakage on a minor release of...a package that we're using silently broke so, yeah.
But yeah, thankfully, everything is back to working. And again, we've been able to enhance the test suite in that little way that I feel confident again because this won't sneak in another time. We have coverage around this. We're good to go. So while I was very scared when this initially happened, I feel better now. I'm happy to go into the weekend feeling better about this. But that's my story. What's new in your world?
STEPH: So I feel like I've been having one of those weeks where I have less code adventures. In fact, it's one of those days where I went to thoughtbot's daily sync...because we often have our client daily syncs, but then we still have a thoughtbot sync as well. And I went to the group, and I was like, I get to write code today. It's going to be a great day. All the other things I'm doing are also interesting, but I get particularly excited when I get some maker's time and get to write some code.
So I feel like I've had less coding adventures recently and more hiring and process-related adventures. And specifically, I just completed the Plucky Manager Training, which is a program that's founded and led by Jen Dary, who was recently on thoughtbot's podcast, The Giant Robots Smashing Into Other Giant Robots. I'll be sure to include a link in the show notes for anyone that's interested.
CHRIS: I believe this was the third time she was on. It's at least the second, possibly the third. And all of them are great listens, just as an aside, so we should include links to all of them.
STEPH: Yes, I think she's one of the rare guests that has been on the show three times. And I think I've only listened to the first couple minutes of that episode. But I think they talk about the fact that this is her third episode, which is really, really cool. And I'm still frankly synthesizing all the information and the ideas that I've collected from the course.
But I do have a few quick takes that I'm interested in sharing with you. So the first one is my cohort...we were the Panda Cohort, so go, Pandas. And some of the things that we talked about...and I think that this may have been the first day. So it was three days, and it was three hours for those three days. And they're spread out over a couple of weeks, which is really nice because then you show up for those three hours of the class, but then you leave with some ideas and some things to experiment with. You get a week to then try out an experiment and then come back to class next time and talk about this is how it went; it went wonderful, or it went terrible. And you get to share that with others and work through it.
And in the first class, we talked about coaching versus managing, which I found just a helpful definition to review. So managing is more direct, and telling someone what to do while coaching is encouraging someone to determine their own path and find their own solution. And I find that as a team lead at thoughtbot, I'm very often more in that coaching space than I am in that managing space. I think it's frankly pretty rare that I actually need to put on a manager's hat. And I often feel like I'm wearing my coaching hat instead.
And some of the other things we talked about one of them is what is work? Which is a fun question to ask. And Jen had an analogy for this speaking about imagine that you have a plastic Easter egg. So it's got two sides, and side one is all the skills and desires and things that you're fulfilled by. And side two is a company that needs those skills. And it's great when those line up and click together, like when you take a job or get a promotion. Have you ever played...do you know what I'm talking about? Those little plastic Easter eggs. Have you ever played with those as a kid?
CHRIS: Yes, certainly.
STEPH: [laughs] I realize I just launched into that analogy. [chuckles] And then Jen goes on to say that it's totally normal for those sides to unclick. So maybe the company changes direction, the company is acquired. You've fallen out of love with something that you do about your job, or you have kids, and that has changed the things that you are fulfilled by and what you're looking for. And that's not necessarily bad. So it can be like, hey, you are working on x now, and you're not fulfilled by that anymore. But then another company comes along and says, "Hey, we're working on this, and you are fulfilled by that." So then another click happens.
And essentially, it's a nice analogy to represent someone's career path and the ways that we are going to shift and re-prioritize what we're interested in. But it's also a really nice way to help it feel less personal because both sides are allowed to change. The company can change. You, as an employee, can change. And then you can look for that next click that is going to match up with a company that meets your skills and things that help you feel fulfilled.
One of the other topics that we talked about are hard conversations, which I love that we dug into this one because that's certainly one that I struggle with or...I mean, we all get that feeling if you have to confront someone if you have to have that uncomfortable discussion with someone. It is a very hard thing to do. And so we had some very honest conversations around what is a hard conversation? What does that represent? And essentially, they represent that there is stalled progress and something can be improved.
So Jen likens a hard conversation to a tool. It's something that you can use to then help something move forward again if something feels stalled or if there's something that needs to change. And during those hard conversations, you may not get to the resolution that you're looking for. So you may be looking for a specific outcome. But you also have another person that needs time to respond and to take in everything that you have said and process that information.
So when you have a hard conversation, you may actually only move forward an inch. So if you had a lofty goal of we're going to talk and then we're going to have this hard conversation, and we're going to get to this space...But instead, you actually just make incremental progress. Like, okay, at least this person is now aware of this concern. That might be your win for the hard conversation versus actually tackling; how are we going to address it? I just want them to be aware of this concern.
And it's a very vulnerable conversation, and they often take time before you can get to that ideal resolution. But essentially, the idea is get in the game, start the conversation, and then have follow-up conversations for that hard conversation. And I really appreciated that framing because I often will think of hard conversations of oh, we have to have this hard conversation and get to this specific outcome. But if you shift the goal line to be like, no, I really just need to at least make this person aware of a concern, that makes it a lot more approachable. And then also probably yields more fruitful outcomes because that gives the other person time to think about what you've shared to also come to the table with their own ideas and then work together to then get to that ideal resolution.
CHRIS: I like that framing a lot. I can definitely see the case where you, as someone who has recognized something that needs to change (perhaps you're a manager), you've now thought about that a good bit; you've observed it, but for the individual that you're bringing that to, this may be novel. This may be a surprise for them. And so if you come into that interaction both about to share this information but then also trying to resolve it and trying to get to I need you to internalize it, and I need you to fundamentally change your behavior as a result of this conversation we're going to have, that's quite possibly not a realistic outcome. And if you're trying for that, it might inherently lead to just a bad outcome because that individual is not in a position to do that. But they are potentially ready to hear it. And so you can just achieve step one and then later have step two. So I like that a lot.
STEPH: Yeah, in general, I found the course incredibly helpful, very insightful. It was also really nice to hear from other managers that are facing similar problems or perhaps novel problems and then getting to weigh in and help each other. So it's a wonderful course. I'll be sure to include a link in the show notes for anyone that is interested. And I'll probably come back with some more insights from the class because it's really...we just wrapped up. So I'm sure I still have some ideas that will percolate over time, and I want to come back and share those with the group.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Pivoting just a bit, we have a listener question that I'm excited to dive into. This question comes from the one and only, the Edward Loveall, fellow thoughtboter. And Edward wrote in, "How does the process of software development change at different team sizes? What's a process that breaks down soon after the team starts growing? What's a process that is resilient at all sizes? And by process, I mean anything that involves other people including organizing tasks, code review, deployment, or anything else that isn't you alone writing code in a vacuum."
I'm really excited about this question because I think there's a lot here. And there's actually one part that I'm struggling with a bit, so I'm curious to see what you think, Chris, about it. But I'm going to start off with saying that I think there are a number of management processes that definitely break down as a team grows. But in the spirit of Edward's question, I'm going to focus more on the software development process and how those might need to change and what starts to break as your team grows.
So starting off with processes that break after the team starts growing, this one, frankly, what really starts to break is not a process specifically, but it's the lack of process that really starts to become visible and painful. So, how do we track work? Before, maybe the product manager or someone would just send you a message and say, "Hey, can you work on this?" or "Hey, can you fix this thing?" And how does code need to be reviewed before being merged? Does it need to be reviewed? Are people just merging as they get stuff done? How are deploys performed? Oh, we have a super urgent production fix that needs to go out, and the only person that knows how to deploy is out sick today? Cool. That's the type of process that I think that really breaks down, or at least you start to notice when the team starts to grow. What are your thoughts?
CHRIS: I definitely feel that first one very strongly. We're feeling it right now on the team, which is still very small. There are only three developers working on the project, and then we have a product manager. And each week, we're slowly iterating, and tweaking, and honing, and trying to introduce just enough process in terms of how we define the work to be done, communicate the status of it, all of that fun stuff.
We started with Trello. And we just had a board with some columns, and then we had more columns, and then we got rid of a few of them. And then we recently added a Power-Up to the Trello board, which allows for epics. So there are cards which are epics which tie to sub cards. And I'm staring at it, and I'm like, how long until we're Jira? How long can I hold out here and not be Jira?
But it does feel like we're slowly iterating towards a more useful process for this team rather than process for process' sake, which I feel like is a really useful distinction. There's also a question of like, what can be known or what can be adequately measured and whatnot versus what can't be? So we've talked many a time on the show about estimation and velocity and trying to track that and the pitfalls inherent with that. And so there's, in my mind, two different camps. There's the process we want to avoid. And again, to reference German Velasco's wonderful blog post, Say No To More Process.
And I really feel like there is a tendency often when things go wrong to then try and paper over that with process. Oh, this team didn't use the design system. So we need to write ESLint rules to make sure you can't import from the directories that aren't the thing. And it's like, we can do that, and I've definitely done that. And I will do that again in the future. But I always have the lens of do we need this? Is it worth the trade-off, the cost, the overhead, the complexity that it's bringing in?
But definitely, organizing and communicating tasks is one of the ones that becomes really difficult. The more people that are working on something, the more you need probably more than one person staying out in front of them and trying to define the next bit of work that needs to be done after that.
Code review feels like it probably should stay similar, with the exception that I lose the ability to review all code at some point. Right now, I'm trying to review every single PR that goes through or close to it. At some point, I'm just going to have to give up on that. But for now, that's my goal. But fundamentally, code review, I think, will hopefully take the same shape.
Deployment, similarly, like, I've talked about the merge queue thing. I want to get a little bit of process in there but not too much. There is definitely some necessity for change. But I definitely want to resist the urge to change everything and to just say, like, slowly over time, we're going to have to be a big Byzantine organization with lots of rules and standard operating procedures and all of that.
I've heard anecdotally, and I don't know if this is true, so maybe someone out there on the internet can correct me if I'm wrong, but my understanding is that at Google, they’re pretty tight in terms of what languages and frameworks can be used and what processes, and workflows, and build tools and all of that whereas Facebook, as a counterpoint, is relatively lax. Obviously, React is used very heavily on the core web application. But there's some flexibility in terms of different languages and frameworks and things for sub-projects or small individual teams having a little bit more autonomy. And I think that's a really interesting thing of are you one large, cohesive, organized company or do you try to act like a bunch of small disparate but roughly connected teams that share good ideas but can work independently? And that changes how I would think about this question.
STEPH: I really like how you're describing the addition of process. It sounds like a just-in-time process. So as you're learning that something needs to be added, then that's when you look for answers. And then you sprinkle on a bit of process that everyone agrees that feels very helpful within also the right to review and see if that still makes sense for the team.
There's one additional area where I think the lack of process really shines through in addition to the number of ways that you've mentioned is also onboarding. So if you have a very small team and you are onboarding, it's likely that...Chris, you can let me know if I'm wrong, but when someone's joining the team, there's probably a good chance that they get to pair with you at some point, or they even get welcomed by you to the team. And then, they get an overview of the product and the codebase. And there's probably this really nice session where they get to ask you questions, and then they have that onboarding session. Does that sound about right?
CHRIS: Yes. But I would go so far as to say it's not just a day or a session, but it's probably a couple of days. So yes, and.
STEPH: That's even better. And with some of the smaller teams that I've seen, that onboarding process is where they are pairing with that lead person on the team. And that's going well until suddenly that lead person can't pair with everybody. And nobody has really thought about how to streamline that onboarding or how to coach or teach someone else to be a really good onboarding pair.
And I have strong feelings about this area because we often focus so much on hiring, but then we drop the ball when it comes to onboarding that new, wonderful colleague that we've worked so hard to recruit. And at the end of that day, someone's going to reach out to them and say, "Hey, how was your first day?" And it makes a big difference for that person's retention as to how those first couple of days go.
So I think onboarding is another really important part that when you're a smaller team, you probably don't need much process because you have more of that personable onboarding experience. But as the team grows, there needs to be more of a process to help other teammates join the team.
CHRIS: It's interesting. I think I totally agree with you that over time, there is a necessity to be more intentional and to have a little bit more structure in the process. And I don't think you're saying this, but I just want to make sure we are saying the thing that I think we believe, which is that shouldn't replace the human that helps you onboard.
Like, I still like the idea that everybody gets a pair for some amount of time when they start at a new company. And you're working together on a feature, or you're working together on bug fixes. You're shipping to production as soon as possible. But you're not doing that based on some guides in a wiki. You're doing that with another human that's helping you. There should also be guides, and a wiki, and documentation, and formalization as the organization grows but not in place of having another person that you get to talk to.
STEPH: We're just going to send you a little yellow rubber duck and then with a little Post-It note that says, "Good luck [laughs] with your onboarding process." Definitely. I agree with everything you said. It does not replace that human element where there's someone that's helping you onboard. I just see that onboarding is one of those things that gets forgotten, or we often point someone to a README which I do think is great because then it is battle-testing our README. But then there still needs to be someone that is readily there to say, "Hey, how's it going? What are you struggling with? Can I pair with you?" There still has to be that human element that is helping guide you through the process.
And I think smaller teams may forget that they actually need to assign somebody to you to make sure that you have someone that you know. Like, hey, this is who I can reach out to with all my questions. Because they're probably not going to be comfortable posting in the company channel at that point or a larger communication to say, "Hey, I'm stuck on something."
CHRIS: There's one other area that comes to mind, or I guess it's more of an anecdote that I have heard, but it speaks back to GitHub's early, early days. And they were somewhat famous for being very flat in terms of the organization and very self-organized, and everybody's figuring it out, and you're working on the thing that's most important in your mind. And for a long time, this was a celebrated facet of the company and a thing that they talked about rather publicly.
And then I think there was this collective recognition, and maybe they reached a tipping point where that just didn't work anymore. Or maybe it actually hadn't been working for a bit, and there was just the collective realization of that. But it was interesting to watch from the outside as GitHub added more formalization, more structure, more managers, and hierarchy, and career ladders, and things of that nature. And I think there's a way to do all of those things in a complicated, overloaded, heavy way.
But I think a different version of it is...like, you were using the word coaching earlier. Having formal structures within your organization to encourage people on their career path, to help them grow, to have structure around that, I think is a really difficult thing to get right. But I think it is critical, and I think just not having it can't be the answer past a certain probably pretty small size. So that is an interesting one where I think you do need to introduce some process and formalization around how you think about the group of people and how they work together within your organization.
STEPH: I agree. I think where some folks may see a lack of hierarchy, others feel a lack of support. And adding levels of management should really be focused on the outcome that we're helping people feel supported. So even getting feedback as you're adding those different levels of management, like, hey, did we make your life better? Did we make your life worse? I think that's a great question for management to ask as they're exploring a less flat structure.
CHRIS: So, Steph, I have a question for you now on a variant of this topic. In general, we seem to be fans of having a codebase. Probably a Rails app that’s got a database behind it, and that's where you put the data. Everybody commits to that same repository. It's all kind of one collected thing. And often, organizations grow to a certain size, and they're like, this is untenable. We cannot have this many people working on this same codebase. So we shall do the logical thing, which is we will break it up into small pieces. And those pieces will communicate over HTTP, and it will be great because then our teams can be separate from each other and can manage their little piece of the world. What do you think about that? Is there truth there? Is it not true at all? What do you think?
STEPH: All right, so your team is getting too big, and to the point that you feel like you need to split it out so then you can have small teams, and they can all work independently on different parts and services of the codebase. I don't love the idea. I'm trying to think through because I feel like there's a lot of nuance here. But I don't love the idea that that's the driving force as to why are we making the change?
And that is often a question that comes to mind whenever we are making a big change, either architecture or process-related is like, what's driving this? And then how are we going to measure it? And if we are driving it just because we have a large team, let's talk more. Why are people blocked? Why can't people work together? What's preventing people from being able to contribute to the same codebase? Are people blocked for a long time because they're having to wait on someone else to complete that work? I have a lot of questions that I don't know if I can fully answer your question. But my instinct is to say let's not break up the architecture just because our team grew in size.
CHRIS: Yeah, I think I definitely agree with that. There's probably a breaking point where it's just too many individuals, and there'll be too much contention. But I think resisting that or at least naming that as like, okay, that's what we're saying but is that really what's true? Or are we actually feeling that this system is so deeply coupled that there's no way to change some small piece of the code without impacting other parts of it?
Like, is the CSS completely untenable because we're just using global class names, and it's leaking everywhere? Okay, do we need a different solution there? And then it's actually fine. We don't need to have different services that have their own different style sheets. We just need a different approach to CSS. That's a particularly easy one to go for because there's inherently a global namespace there. But the same thing is true in a lot of different contexts. So services are a way to break things apart and enforce those boundaries. But if inherently coupling is your problem, then you're just going to be coupled over HTTP, and I think it's going to be difficult.
There's a wonderful blog post by Josh Clayton, which I think does a better job than I'm doing in this moment of highlighting some of the questions I would want to ask. The blog post is titled Services are Not a Silver Bullet. And so Josh goes through and enumerates a bunch of the different versions of the story that he's heard throughout the years of well, we need to go to services because x, because our test suite is slow because pull requests are constantly having merge conflicts and whatnot, because the code is very deeply coupled and any change here affects everything else. And a fix over here broke something over there. This is no good. And so he does a really good job of presenting alternatives or at least questions that you can ask to say, like, is this the problem, or is this a symptom? And we need to address the more underlying cause.
And so I think there is a point where you just can't have 1,000 people trying to commit to the same Rails codebase. That feels like it's maybe too big. But it takes a while to get to 1,000 people. And there will be times where extracting a service makes sense or integrating with an external service that exists. Like, I've talked about Stripe before as my canonical like, yeah, it's actually deeply intertwined with the data model, but they're just dealing with such a distinct complexity set over there. And they have such expertise on that that I'm happy to accept the overhead of the fact that that service lives outside of my core application, and I need to deal with synchronizing state and all of that. I will take on that complexity, but it's not worth it for everything, and it's not a silver bullet. Again, to reference the name of Josh’s blog post there, Services are Not a Silver Bullet.
And so, coming back to Edward's original question, I would say that having a monolithic codebase works for a really long time, but there is probably a breaking point somewhere well along, but fight it for as long as you can. I think.
STEPH: I really like how you touched on coupling because it really helps ask those questions to get to the heart of what are the pain points that you are feeling? And it is less of a decision that is based on people and process but more if you're going to split out a portion of your architecture. It is in response to an actual business need and a business value versus some other pain points that you're trying to fix.
A particular example might be like maybe you have a portion of your application that really just needs to spend a lot of time crunching data. And it's really not as specific to your application; it's something that can happen on its own. And then it's beneficial to move that outside so it can scale and relate it to the work that it needs to perform versus keeping it in-house with the application.
I do want to circle back to another question that Edward included which is what's a process that is resilient at all sizes? And the ones that really come to mind for me...and these are a bit amorphous intentionally because it will look different for each company. But three areas that are very resilient at all sizes, whether you are 1 to 2 employees versus you've got hundreds or thousands it's communication, testing, and accountability.
So communication, where are we headed, and how do we know what we're working on? For testing, it's how do we test our changes? Do we write tests? Do we use QA? Do we have a staging environment? What does that look like? What's our parity between staging and production? And then how do we know what's in progress, and how do we know when it's done? Those are three core areas that, regardless of your team size, I think are very crucial to the team's success. What do you think? What are some of the processes that are resilient at all sizes?
CHRIS: I actually really like the list that you just provided. That is a wonderful trifecta, and I think it will take you very far, so probably not much to add from me. But I guess on that note, should we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph talks about binging a few Things Worth Learning podcast episodes and particularly enjoyed an episode that featured one of thoughtbot's design directors, Sameera Kapila. Sam shared her expertise about management and inclusion, and Steph shares her favorite parts.
Chris shares the story of a surprising error and the resulting journey through database transactions and Sidekiq that eventually resolved the issue. He also shares some follow up on the broken build and the merging process changes they introduced (spoiler, the process changes have been rolled back).
Transcript:
STEPH: Oh man, I'm about to stop eating my pop-tart. I'll put it away. It's within distance. I'm going to eat it.
CHRIS: Your high-fat content unfrosted pop-tart.
STEPH: You know, surprise Sunday twist: it has icing on it.
CHRIS: Steph, who even are you?
STEPH: [laughs]
CHRIS: There are a few canonical anchor facts that one knows about other people, and when one of those...
STEPH: I like to keep everyone, including myself, on their toes.
CHRIS: Or you've just secretly accepted that the icing adds another textural flavor adventure component. It's just better with icing.
STEPH: All right, all right, all right. There's a complicated answer to this. And the complicated [chuckles] answer to this is that the more organic ingredients that I recognize when reading about pop-tarts are by a particular company, and they all have frosting on them. And the more generic pop-tarts that don't have frosting on them, I don't know how to pronounce a lot of those ingredients. So I'm like, no, but okay, I still eat them. But I prefer the ingredients I can pronounce. So I either go with the ingredients I can't pronounce or have a little bit of frosting on my pop-tart. And I'm going with the non-cancer route for today.
CHRIS: For today, in this moment, and accepting the frosting. Okay, all right. Well, that is complicated. [laughs] It's tricky out there.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey, Chris. So the weather, I'm going to talk about the weather for a little bit. [chuckles] It's been almost non-stop rain for the past several days, which is fine. I'm sure it's great for plant life. But it's really hard on my dog Utah because then we can't go outside for our normal walks and playtime. Although he is my four-legged water baby because he absolutely loves water, and puddles, and playing in the rain. So he's very fine with going outside and playing for a long time. But then I have to essentially give him a full-on bath before I want to bring him back in.
So not wanting to have to give him a bath each time, in the spirit of improvising, we started finding more indoor games to play. And I've started teaching him to play hide and seek. And he's not great at it mainly because he will only stay until I'm out of eyesight, and then he will come and find me. And so I have to be really, really fast at finding a hiding spot to like dash around a corner or hide behind the door. But I think he enjoys it because he will find me and then he seems very excited. And we go back, and we play again. And so I just have to work on teaching him to wait a bit longer so I can find better hiding spots.
CHRIS: When you said that, at first, I was like, how did you teach him to hide? But I realize he's only playing the seek part of the game, and you're only playing the hide part of the game.
STEPH: [laughs]
CHRIS: I'm just so used to you exchange roles back and forth. First, you hide, then you seek, and then you switch it up. That would be a lot to get your dog to be like, now I'm going to secretly hide.
STEPH: [laughs] I'd be very impressed. Yes, we have very distinct roles in this game. I am the one that always counts and hides. But he's a very good seeker. So that's been fun. We just got to work on getting a little better at it.
But on a more tech-related note, one of the design directors at thoughtbot, Sameera Kapila, who also goes by Sam, was a guest on the podcast Things Worth Learning, which is hosted by Matt Stauffer. And Matt is also the host of The Five-Minute Geek Show and The Laravel Podcast. And in the show Things Worth Learning, Matt meets with individuals that are excited to share something that they're deeply passionate about; maybe it's tech, maybe it's not. And I've binged a couple of those episodes.
And I really like how you can choose between the podcast format or the YouTube format. So then you can really watch the conversation unfold, which I know you and I a couple of times have thought it would be fun if people could see us because there are so many facial emotions and gestures that go along with conversations. So it was really delightful.
And speaking of delightful, Sam shared her expertise about management and inclusion. And I definitely recommend listening to the episode because I can't share everything that Sam shared. But a couple of the topics that Sam mentioned that I really enjoyed and would love to chat about, so the first one is about helping someone, in this case, someone that you manage that comes to you with a concern.
So there's often a presumption that just because someone comes to you with a concern or an issue that they've experienced at work, that they're the ones that will also want to work to address that concern, and that's often not true. It can be true; maybe that person wants to be involved. But they're often coming to you in the leadership or management role to say, "Hey, I've had this issue," and they really want help with that instead of walking away with homework for it. Because then that trains people to essentially be in this mindset of well, if I bring up this concern, then I'm going to be the one that has to address it, even if I'm the one that's most negatively impacted by this. And addressing this concern could be actively harmful to me.
And she shared a really great real-world example from her own experience where her and another co-worker had noticed a concern about the hiring process. And her and that co-worker got together, and they talked about the concerns. They even rehearsed for the meeting because they were trained by the tech industry to say, "Hey, if you bring up a concern, you're going to be responsible for addressing and then resolving that concern."
And so they had that meeting with the person in leadership. And they were pretty nervous about how it was going to go. And that person in leadership said to them, "Thank you both so much for sharing that. That must have been such a burden. And this is my responsibility to fix. And here are what my next steps are." And that was amazing because it allowed Sam and the other person to go back to client work. And they also received follow-up conversations about how that issue was being addressed. So there was even that feedback loop as to how things were going to change.
And I have a personal example that...I really resonated with the example that Sam provided because I remember there are different teams that I've been a part of, where often I was one of the few women engineers on the team. And so we often have conversations about how do we get more women engineers into the company? And they're wonderful conversations.
But there's a part of me that always felt resentful about, like, why am I here? Why am I the one fixing this? I understand I have some more insight and expertise, and experience in this area. But I was also frustrated by the fact that I was the one that was in that meeting often with other women, and it felt like our responsibility to fix this. And I used to feel bad about feeling resentful towards that. Because I was like, shouldn't I want to help other people? And I do. But Sam's example really helped remind me and clarify that yes, just because there's a concern doesn't necessarily mean you should be the one to address it. And it really takes everybody involved, or it takes leadership to step up and address that concern.
CHRIS: Oh, that's really interesting the way Sam is framing that and describing the situation of not having any problem that you bring in be now your work to solve. Like, oh, I found the issue, and now we've got to go do this. But the idea that you can bring something to light and then be able to walk away from it.
And the particular thing that you were saying that if your interaction is always that when you reference something when you bring in a concern that then your manager works with you to figure out how you can solve it, then you get this mental block of like, well, do I even want to say anything? Because I don't want to try and deal with big, amorphous unclear issues. So maybe I just won't even say anything.
And so this as a way to make sure that there's room for all of the conversation is a really interesting framing that I hadn't really thought about, frankly, but it's very interesting. I haven't seen this interview either. So I'm definitely excited to give this a look because Sam is wonderful. And the topic that you're describing here sounds fantastic as well.
STEPH: Yeah. There was an important moment for me where...one of my managers is Matt Sumner, who's been on the show. And when Matt was my manager, at one point, we were having a one on one, and we would often go for walks for our one on one. And I mentioned something about "I have this concern, or I have this problem, but I don't really know how to fix it. So I'm not sure I'm ready to talk about it." And Matt, in his delightful way, was like, "We can still talk about it. You don't have to have an answer or a solution." I'm like, "Yeah, but I feel like I should be able to fix it. Like, if you have a concern, or if you have something that you want to gripe about, then you should come to the table with solutions for it." And Matt was like, "No, you don't need to do that at all. We can totally gripe about stuff or talk about concerns and then either figure out the solutions together or go to other people for ideas."
And that was really important to me because, like you'd mentioned, otherwise, it felt like this mental block where then it feels like you can't air out some of the things that you're worried about or have concerns about because then you think you're the only one responsible. And you may not be able to come up with the best solution. You may need other people to then help you strategize and come up with ideas. And I just love, love, love that part of Sam's discussion.
And oh, there was one other part about the conversation. Well, there are lots of parts that were amazing. But another one in particular that blew my mind is about Comic Sans, the font, the font that everyone loves to hate. [chuckles] And I learned that it's one of the most legible fonts for kids. And it's one of the more accessible fonts for people with dyslexia. And it's actually recommended...I think there are still more academic studies that need to be done to really classify fonts that are best for people that have dyslexia.
But Comic Sans is recommended by The British Dyslexia Association and the Dyslexia Association of Ireland. And there are some other really great posts that talk about the benefits of using a font like Comic Sans because the typeface has long ascenders and descenders and generous letter spacing and asymmetrical lowercase b and d to then help distinguish those letters. And I just thought that was so cool. This font that everybody wants to rip apart because it seems whimsical, unprofessional gets overused. There are lots of reasons, I suppose. [laughs] But there's a really big benefit to it, and it can help others. And I just found that very whimsical in itself.
CHRIS: I love the idea that there are multiple levels of knowing about Comic Sans. First, you're just like, I don't even know the name, but it's that comic book-looking font. And then obviously, the next step is to be like Comic Sans? How could you ever use that? It's an atrocity. And then it's like, but actually, Comic Sans has some things going for it. And it is a really interesting consideration and something that you wouldn't necessarily think of. But then once you learn it, you're like, okay. Man, I wonder how many other things in the world have this interesting shape to them? Hmm.
STEPH: Do you know the history behind Comic Sans?
CHRIS: I do not.
STEPH: I read about it fairly recently, but I'm probably going to botch some of the details. But I believe it was designed or created by Vincent Connare. And it was created for Microsoft. And Vincent was working on a project where I think there was a dog that was essentially going to have these bubbles that would then show you different parts of the application and walk you through the different features. And the dog had a very comic book feel to the character.
And so then Vincent designed a font to go along with that comic book character, this dog and came up with Comic Sans. I don't think the dog actually launched with that particular font. But since the font was still developed, it was released as part of the available fonts. And there we go, there is the birth of Comic Sans. And then it just received so much love and ire all throughout history. [chuckles]
CHRIS: There's something that you said there that I want to loop back on when you were talking about chatting with Matt Sumner and saying, "Here's this thing, but I don't know how to solve it. So I don't even want to bring it up." I really liked the framing that you gave and the fact that Matt was like, "No, no, we can still talk about it. We can at least explore this thing, have a conversation." I think that's really wonderful.
There's a very similar thing that I experience a lot when doing code review, particularly when I'm in more of a leadership role within a team, which is I often want to highlight something that feels a little bit off to me in the code, but I may not have a specific solution. Like, I may see a variable name, or I may see a controller action that feels like it's the wrong shape or something. And I'll often name it but explicitly say, "I actually don't have a better idea here. So feel free to continue on with this, but I want to name it. So in case that sparks something in you, if you were also feeling some incongruousness, maybe it's worth you spending another minute to think about it, but I want to make sure my comment isn't blocking or otherwise making you feel uncomfortable."
If I just come to you and I'm like, "This feels wrong," and that's all I say, that to me is unacceptable code review. Because now I want all of my code review feedback to be very actionable, it’s either here's the thing that I feel strongly I think we should definitely change this. If you disagree, let's have a conversation. But yeah, this one definitely needs to change. Here's the thing that, like, I don't know, maybe we could break this into two lines and split it up. But if you don't like that, that's fine. Do whatever. And so then it's I've given the person my thoughts but given them clarity and a free rein to do whatever they want with that information.
And then there are ones where I'm like, I don't even know what I think we should do here, but I think something. But if you don't have any ideas...like, I don't have any ideas specifically. If you don't have any ideas, it's fine. We'll continue on with this and maybe revisit it down the road. But I want to make sure each of those different tiers is actionable for the other person, and I'm not just giving them homework or something to be sad about because that would be bad code review.
STEPH: I'm just imagining a PR comment that says, "I don't know what we should do here. But I don't think this is it," [laughs] and that just creating sadness. That's so interesting to me because I have flip-flopped with that opinion in regards to there are times that I very much resonate and do what you just said where I will point out to someone where I'm like, "I'm not sure why, but I just have concerns about this. And I don't know if you also ran into anything that was weird about this and would like to talk about it. I don't have any really great ideas, so I think this is good for now. And we should keep moving forward, so we're not blocked on it," but just wanted to, as you mentioned, highlight it in case it sparks something for the other person or for someone else that's reviewing the code.
And then there are other times where I'll look at something, and I'm like, "Yeah, it's not great. There's something that feels brittle or potentially maybe hard to maintain or things like that. But I don't have a better idea." And I don't comment on it because I'm like, I don't want to distract that person or block them. And I do think it's good enough, and I don't have anything to add to the conversation, so I just leave it out. So it's interesting to me where is that line of when I feel like it's important enough to comment to then potentially spark some conversation versus just letting it go so then I don't add any distraction to their work?
CHRIS: I think it's when the spidey-sense gets past 47%. It's a very specific number. I do the same thing where there's something, and I'm like, you know what? I can't even clearly express what about this makes me feel something off, and so I won't even comment on it, and I agree. And then there are things that trip past some magical line in the sand. And I'm like, you know what? I think I'm going to say something here, but I don't even have a recommendation. And then there's a whole spectrum of the nature of code review and, again, 47% being the specific number.
STEPH: There's actually a thoughtbot blog post that correlates nicely to that concept of spidey sense. It's written by Mike Burns, and it's titled How to Skim a Pull Request. But essentially, grabbing from one of the lines here is where Mike presents an unexplained, incomplete, and arbitrarily grouped list of keywords that will cause us thoughtboters to read your code with more care and suspicion. [laughs] That feels perfectly aligned with that idea of spidey sense, spidey-sense 101. I'll be sure to include a link in the show notes. Or, you know, 40%.
CHRIS: I think it was 47%. It's a very precise number. [chuckles]
STEPH: Very precise nonsensical number. Got it. [laughs]
CHRIS: If I'm making up fake statistics, I'm not going to have them round to an even 10. [laughter]
STEPH: Makes it seem more legit somehow.
CHRIS: Exactly.
STEPH: But those are really the novelties that I wanted to chat about.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: What's new in your world?
CHRIS: I have some follow-up on a recent topic that we talked about. So we had a kerfuffle, which I described, where we had a branch that got merged, and with the rebase, some stuff got out of hand. And so we introduced some process, the protected branch configuration within GitHub that required the branches to be up-to-date before they can be merged and CI to be passing. And everybody was happy. It was like, this is great.
Turns out it was never turned on. That's actually why, at first, I was like, man, this is really straightforward; there's been no annoyance here. And then I got to the point where it was like, this seems weird, because we just merged a lot of things in rapid succession.
I went and checked, and it turns out what I thought was the name of the branch protection rule in GitHub's UI is, in fact, a regular expression pattern. It might not be a full regular expression but like a wildcard pattern for the branch name to match to, and so it's specific.
I created this rule, and in small, gray text underneath, it said, "This applies to zero branches." I missed that the first time but then the second time going back, I was like, oh, I actually wanted it to apply to more than zero branches. So I went back in and changed that. It's a great example of very subtle UI that just slipped past me.
STEPH: I was going to say in your defense, the very subtle gray font to say, "This applies to zero," feels tricky.
CHRIS: That...also, going through the work of creating this thing and if that results in zero branches that would match, maybe that's the thing to emphasize on creation. I would love that. Because in my case, I was trying very specifically to target an existing branch. There is the ability to say, "Oh, any bugfix-* named branch," if you're using branch naming strategies like that, you can use this for that sort of thing. So it may be that currently, there are no branches with that name. But in my case, I was just like, please, main, anytime anything is happening on main, that is what we want to do. I just needed to put the word main there. But anyway, once I actually turned it on, insufferable, absolutely not, cannot survive in this world.
We have a relatively small team. There are three of us, and not everyone is even full-time, and my time is pulled in a lot of different directions. So I'm actually not pushing as much code as I might otherwise. Even with that, nope, absolutely not. Our CI is, like, I don't know, five-ish minutes per run. Turns out, especially Monday mornings, we have a volley of things that will have been reviewed and trickled in through Friday afternoon. And then there's a bunch of work we want to land Monday morning. And then, just at any point, it turns out, yes, this was untenable. So we have turned it off.
I would like to revisit this down the road and introduce the MergeQueue functionality, so the idea of being able to say, "Yeah, you just name when you want something to go in, and then the system will manage the annoying finicky work there." But for now, I had to give up on my dream of everything running on CI, on a feature branch, before it gets merged.
STEPH: Ooph, that phrase, "I had to give up on my dream," that breaks my heart for you. [laughs]
CHRIS: I may be going a little bit fanciful with my language but, like, a little.
STEPH: [laughs]
CHRIS: I liked this thing. I want to exist in that world. But it is not feasible given the current state of the world. And that will only get worse over time, is my expectation. So I get to revisit this when I have the time to more thoroughly figure a thing out. But for now, I don't know, merge whatever; it will be fun.
STEPH: There's a small part of me that feels a little reassured that it was a terrible time, although I hate that it was a terrible time. But I have felt that pain on so many other projects where I am constantly waiting, and I'm constantly checking to be like, can I merge? Can I merge? Can I merge? And then I can merge, but then someone beats me to it. And I'm like, oh, then I got to restart. And I got to wait, and I'm constantly checking. So that feels like it helps validate my experience. [chuckles]
I am excited for that MergeQueue. I would be super excited to try that out and hear about how it goes just because that seems more like the dream where you can just say, hey, I want this PR to go whenever it can go. Just take care of it. I want it to be rebased, whatever the flow is, and have it be merged, so I don't ever have to check on it again.
CHRIS: But once we configured this, there was a new thing that appeared in the GitHub UI, which was auto-merge. And so that was a button where I could say like, "Hey, merge this whenever CI passes," which was a nice upgrade, but it didn't have the additional logic of and rebase as necessary. Or the more subtle logic of like, you don't actually want to rebase where you have five different branches that are all trying to merge, and they keep rebasing. You want to have the idea of a queue, and so you get in line. And you rebase when it's your turn, and then you run the CI. And you try and be as smart as possible about that.
If anyone at GitHub is listening, I would love if you all threw this into your platform, and then you could ping Slack if anything went wrong. But otherwise, there are, like I said, existing tools. At some point, I will probably, I don't know, over a long weekend or something like that, sit down with a large cup of coffee and explore these. But today is not that day.
STEPH: I'm excited to hear about that day.
CHRIS: So that is a tale of woe and sadness. But luckily, I get to balance it out with a tale of happiness and good outcomes. So that's good. The happiness and good outcome story does start with trouble, as they always do. So we had a bug that occurred in the application where something was supposed to have happened. And then there was an email that needed to go out to tell the user that this thing had happened. And the bug popped up within AppSignal and said something was nil that shouldn't have been nil.
Particularly, we're using a gem called Time For a Boolean, which is by Caleb Hearth. And he's a former thoughtboter and maintains this wonderful gem that instead of having a Boolean for like, is this thing approved, or is it paid? Or is it processed? You use a timestamp. And then this gem gives you nice Boolean-like methods on top of that timestamp. Because it turns out, very often just having the Boolean of like, this was paid, it turns out you really want to know when it was paid. That would be a really useful piece of information. And so, while you're still in Postgres land, it's nice to be able to reach for this and have the affordances of the Boolean-like interface but also have the timestamp where available.
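For anyone picturing the pattern, here is a minimal hand-rolled sketch of the timestamp-as-boolean idea (the gem packages this up more nicely; the model and column names here are hypothetical):

    class Order < ApplicationRecord
      # processed_at is a nullable timestamp column instead of a boolean flag.
      def processed?
        processed_at.present?
      end

      def processed!
        touch(:processed_at)
      end
    end

    order = Order.first
    order.processed!    # records *when* it happened
    order.processed?    # => true, the Boolean-like reading on top of the timestamp
    order.processed_at  # the extra information a plain boolean column would lose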
So anyway, the email was trying to process, but that timestamp...let's pretend that paid is the one that matters here, so paid_at was nil, which was very concerning. Because this was the email that's like, hey, that thing was processed. Or let's say it was processed, actually, because that's closer to what it was. Hey, this thing was processed, and here's an email notification to tell you that. But the processed timestamp was nil. I was like, oh no. Oh no. And so when I saw this pop up, I was like, this is very bad. Everything is very bad. Oh goodness.
Turns out what had happened was...because I very quickly chased after this, looked in the background job queue, looked in Sidekiq's UI, and the job was gone. So it had been processed. I was like, wait a minute, how? How did this fix itself? Like, that's not the kind of bug that resolves itself, except, in this case, it was. This was an interaction that I'd run into many times before. Sidekiq was immediately processing the job. But the job was being enqueued from within the context of a database transaction. And the database transaction had not been committed yet. But Sidekiq was already off to the races trying to process.
So the record that was being worked on, the database record, had local changes within the context of that transaction, but that hadn't been committed. Sidekiq then reads that record from the database, but it's now out of sync because that tiny bit of Sidekiq is apparently very fast off to the races immediately. And so there's just this tiny little bit of time that can occur. And this is also a fun one where this isn't going to happen every time. It's only going to happen sometimes. Like, if the queue had a couple of other things in it, Sidekiq probably would have not gotten to this until the database transaction had fully closed.
So the failure mode here is super annoying. But the solution is pretty easy. You just have to make sure that you enqueue outside of the database transaction. But I'm going to be honest, that's difficult to always do right.
STEPH: That's a gnarly bug or something to investigate that I don't think I have run into before. Could you talk a little bit more about enqueueing the job outside the database transaction?
CHRIS: Sure. And I think I've talked about this on a previous episode a while back because I have run into this one a few times. But I think it is sufficiently rare; like, you need almost a perfect storm because the database transaction is going to close very quickly. Sidekiq needs to be all that much more speedy in picking up the job in order for this to happen.
But basically, the idea is within some processing logic that we have in our system; we find a record, we do some work. And then we need to update that record to assign this timestamp or whatever it is. And then we also want to inform the user, so we're going to enqueue a job to send the email notification. But for all of the database work, we are wrapping it in a transaction because we want it to either succeed or fail atomically. So there are three different records that we need to update. We want all of them to be updated or none of them to be updated. So, therefore, we wrap it in a transaction.
And the way we had written this was to also enqueue the job from within the transaction. That wasn't something we were actively intentionally doing because those are different systems. It doesn't really mean anything. But we were still within the block of ApplicationRecord.transaction do. We're now inside of that block. We're doing all of the record updates. And then the last piece of work that we want to think about is enqueueing the job to send the email.
The problem is if we're still within that database transaction if it's yet to be committed, then when Sidekiq picks up that job to run it, it will see the prior state of the world. And it's only if the Sidekiq job waits a little bit that then the database transaction will have been committed. The record is now updated and available to be read by Sidekiq in the correct updated state.
And so there's this tiny little bit of inconsistency that can happen. It's basically because Sidekiq is going out to Redis, which is a distinct system. It doesn't have any knowledge of the database transaction at play. That's why I sometimes consider using a Postgres-backed background job system because then the job can actually be part of the database transaction.
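A minimal sketch of the shape being described, with hypothetical names throughout, where the enqueue happens while the transaction is still open:

    class ProcessPayment
      def call(payment)
        ApplicationRecord.transaction do
          payment.update!(processed_at: Time.current)

          # Still inside the open transaction: Sidekiq can pick this job up
          # immediately, re-read the payment from the database, and see the
          # old, uncommitted state (processed_at still nil).
          PaymentMailer.with(payment: payment).processed.deliver_later
        end
      end
    end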
STEPH: Cool. That's helpful. That makes a lot of sense the way you explained the whole you're actually enqueueing the job from inside that transaction. I'm curious, that prompts another question. In the case where you mentioned you're using a transaction because you want to make sure that if something fails to update, nothing does, so everything gets updated together, in the event that something does fail to update because you were previously enqueueing that job from the transaction, does that mean that the update could have failed but that email would still have gone out?
CHRIS: That does not. And the reason for that is because we're within dry-monad world. And so dry-monad will implicitly capture the ActiveRecord rollback, which I think is an exception that gets raised or somehow...But basically, if that database transaction fails for any reason and ends up getting rolled back, then dry-monads will not continue processing through the rest of the sequential operation. And so, therefore, even if we move the enqueuing of the email outside of the database transaction, the sequential nature of that processing and the dry-monad stuff that we have in play will handle that. And I think that would more generally be true because I think Rails raises an exception on rollback. Not certain there. But I know in our case, we're fine on that. And we have actually explicitly checked for that sort of thing.
STEPH: So I meant a slightly different question because that makes sense to me everything that you just said where if it's outside of the transaction, then that sequential order won't fire because of that ActiveRecord rollback error. But when you have the enqueuing inside of the transaction because then that's going to be inside of the sequential order, maybe before the rollback error gets raised. Does that make sense?
CHRIS: Yes. I think what you're asking is basically like, do we make sure to not send the job if the rest of the stuff didn't succeed?
STEPH: I'm just wondering from a transaction perspective, actually. If you have a transaction wrapped block and then you have in there, like, update this record, send email, end block, let's say update...well, I guess it's going to raise because you've probably got, like, an update bang. Okay, so then yeah, you won't get to the next line. Got it. Got it. Got it. I just had to walk myself through that because I forgot that you probably...I have to visualize [laughs] as to what that code probably looks like. All right, that answered my question.
CHRIS: Okay. So back up to the top level then, this is the problem that we have. And looking through the codebase, we actually have it in a bunch of different places. So the solution in any one of those cases is to just take the line of code where we're saying enqueue UserMailer.deliver_later take that line of code, move it outside of the database transaction, and make sure it only happens if the database transaction succeeds. That's very easy to do in one case.
But my concern was this is a very easy failure mode to end up in. And this is a very easy incorrect version of the code to write. As far as I can tell, we never want to write the code where this is happening inside of the transaction because it has this failure mode. But how do we enforce that? That was the thing that came to mind. So I immediately did a quick look of like, is there a RuboCop thing I can do here or something?
And I actually found something even more specific, which was so exciting to find. It's a gem called Isolator. And its job is to detect non-atomic interactions within database transactions. And so it's fantastic. I was like, wait, really? Is this going to do the thing? And so I just installed the gem, configured it where I wanted, and then ran the test suite. And it showed me every place throughout the app right now where we were doing this pattern of behavior like enqueueing work from within a database transaction, which was great.
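The setup is tiny; a sketch of the Gemfile addition (the group choice here is a judgment call on my part, not something the gem requires):

    # Gemfile
    group :development, :test do
      gem "isolator"
    end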
STEPH: Ooh, that's really nifty. I kind of want to install that and just run it on my current client's codebase and see what I find.
CHRIS: This feels like something like strong migrations where it's like, yeah, this is great. I kind of want to have this as part of my core toolset now. This one feels even perhaps slightly more so because sometimes I look at strong migrations, and I'm like, no, no, no, strong migrations, I get why you would say that, but for reasons, this is actually fine. And they have configurations within it to say, like, no, this is okay. Isolator feels like it's always telling me something I want to know. So this, very quickly, I'm like, I think this might be part of my toolset moving forward on every single app forever.
And actually, there's another gem that I used. It's made by the same team. So this is from the folks over at Evil Martians, which is another Rails consultancy out there in the world. And the Isolator gem is one thing that they've produced. And then I think the same author of it, who is an Evil Martians employee, created the after_commit_everywhere gem.
So after_commit is one of Rails' ActiveRecord callbacks. But in this case, it allows you to use it everywhere, as the name implies. And so rather than actually having to take that line of code out of the database transaction block, which is naturally where we would write it because that's how we think about the code and how we want to express it, you can just use this after_commit method, wrap the call in that, so it's after_commit, and then a block. So either braces or do..end. That enqueueing of the email now just gets wrapped in that. And so what that does is it says, "Defer this until after the transaction commits. If the transaction does not commit, if we roll it back, then don't run it."
And what was nice is the actual code change when I finally submitted all of this was add the gem to the gem file. And then everywhere that we're doing the wrong thing, which running the test suite told me, I just went in, and I wrapped that line in after_commit and a block. And it was such a nice, clean...like, I didn't have to move the code around or actually shift the lines, which was my first attempt at this. I was able to just annotate each of those lines and say, "You're special, you're special, you're special," And then I'm done. And again, the first gem told me every case where I needed to do that. It's like, well, this is a wonderful little outcome here.
STEPH: That's really nice, yeah, how you can make the changes and then, like you said, re-run the test or re-run that gem, and it lets you know what else still needs to be updated. I'm intrigued where you mentioned you didn't have to move any lines, though. Maybe I just need to look at the gem and see it, but I'm still envisioning that you have your transaction do block. And then you're doing some things; you're updating records, and then you have your end. And then after that, it's when you want to enqueue the email. And with this after_commit, you actually added that method call inside of the transaction but then wrapped the call to Sidekiq to send the email inside of that block.
CHRIS: Correct. Yeah. So it's basically like saying, "Here's almost an anonymous function." If you think about a Ruby block in that nomenclature, you're saying, like, here's some work to do when and if the transaction succeeds. And so it meant that I was able to keep the code in the way that we as humans would talk about it but deal with the murky details, and edge cases of database transactions, and Sidekiq, and whatnot. Sort of just handle it by saying like...it almost feels like an annotation or a decoration or something like that. But it was this, in my mind, almost like a perfect melding of I don't want to think about this. Oh, cool. Okay, here's a quick, easy way to deal with it but to not have to fundamentally change how I write the code.
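Continuing the hypothetical sketch from earlier, the fix reads roughly like this, assuming the after_commit_everywhere gem's mixin:

    class ProcessPayment
      include AfterCommitEverywhere

      def call(payment)
        ApplicationRecord.transaction do
          payment.update!(processed_at: Time.current)

          # The line stays where a human would naturally write it, but the
          # block is deferred until the surrounding transaction commits and
          # is dropped entirely if the transaction rolls back.
          after_commit do
            PaymentMailer.with(payment: payment).processed.deliver_later
          end
        end
      end
    end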
STEPH: Interesting. So I like all the things you're saying. I'll be honest, I'm not totally sold, and I'm trying to think of why. I think the benefits...one, as you mentioned, it's something you don't have to think about or at least signals to others that hey, maybe you should think about this to the extent that you use after_commit. And so that way, you don't have these asynchronous events taking place inside the transaction. So I like that visibility and communication to the rest of the team. Putting it inside of the transaction feels interesting. I don't know why; I feel a little weird about this. [laughs] I'm bringing my true self.
CHRIS: That's fair. So if we're being honest, I solved this first by finding the Isolator gem. Well, I solved it first by just doing it manually. I went through the app, and I found all the places. And I was like, you know what? I'm worried that the next person authoring code like this, it's so easy to fall into this trap. Like, this is such a subtle little thing that our brains are not thinking about. And so I had first fixed it, and so I had a diff that involved moving lots of lines of code, every instance of this moved from being in the database transaction out of it. And that was fine. I was fine with that as a solution. But it was a little bit noisy because I was moving a bunch of lines.
So then I brought in the Isolator gem. I actually reset that, and I went back to before I had made the fix, ran the test just to make sure Isolator was actually finding every instance. They did; that was great. So I was like, all right, cool. This is better because now I have this thing that will tell anyone when this happens. So I'm very happy about that. Because frankly, this is some hard-earned knowledge that I had to read Sidekiq and remember how database transactions work and convince myself of what was going on here and finally come to what I believe the solution is.
And now Isolator is just like, cool, that's encapsulated. And it gives a very nice failure message in the test suite. So it's like, excellent. I really like this. But still looking at it, the diff, the amount of code that I had to change, it's like, well, naturally, this is how we want to write this code, but for reasons, we can't. And it's appeasing the computer more than it's appeasing the reader or the author of the code.
And so then I happen to be reading through the Isolator gem's README, and they mention the after_commit_everywhere gem. And I was like, oh, that's interesting. So one more time, I reset. And then I really tried fixing it with after_commit. And the look of the diff there felt nice to me because the lines got a little more on them, but they didn't move. And so it's like, this is how we naturally would have authored it, and now it works correctly. And I liked that.
But I understand your hesitation because you're like, but the thing is, it's wrong. And so you've made the wrong not wrong anymore, but you didn't...and so I get your hesitation. I still like the fancy version.
STEPH: Yeah, I think you just helped me figure out my grumpiness with it or why I'm not totally sold on it. And it was in regards to adding a dependency to avoid a noisy diff is the oversimplified version that I was processing or the reason that I was a bit grumpy about adding this other gem for that. But then you also just brought a lot of other really good reasons.
One thing that you said that I do really like is adding tools that help us author code in a more natural style, the way that we want to highlight this process, and how this application does work, and how this business logic flows. So given in that light, that makes me feel better about it. But yeah, I think that was my initial grumpiness. I was like, it’ll be a noisy diff. It's okay.
CHRIS: I think I definitely share your hesitation, or you're like, hmm, that's an interesting reason to bring more code into the application. But at the same time, I think the counterpoint that comes to mind for me is we're using Ruby because of its expressiveness; at least, that's why I'm using Ruby. I really want the code that I write to be as close as possible to the thing that I would say to another human about like, oh okay, when a user signs up for the application, we need to create a record in our system, and then we need to send them an email. And then we need to do this other thing. And so, the closer that our code is to those words that I would use to describe to another human, the happier I am.
And I will put in some pretty significant effort to hold that line as long as the code can also be correct. And so, the Isolator gem here does a great job of enforcing that correctness. And then after_commit allows me to still maintain that expressiveness and not have to think about the murky details as much or not have to reshape my code to match the murky realities of different persistence engines.
But I do agree. I think it's a good thing to look at and ask, like, is it worth it? Are you sure? And in this case, I will say, "Yeah, I think so," but with that amount of certainty in my voice, [chuckles] which is not a ton.
STEPH: I think this is going back to my days of working with dependency bot PRs where every time there was an upgrade for a gem, I always ask, what do you do here? [chuckles] Do we need to upgrade you? Can we just remove you from the codebase? So I'm fairly...I don't know, resistant is a strong word. I'm skeptical of when we're adding stuff in, and I just want to question the value that it's adding.
But I want to circle back to something that you said, and that is hard-earned knowledge. And that part I understand so much where when you have gone through a fair amount of work to uncover an issue, and then you want to make sure that others don't have to go through that. This is a really nice way to highlight: hey, there's something that's tricky about computers and software here, and we need to watch out for that. And I want to help you look out for that. Versus this is just inherent information where this needs to happen outside or after that transaction. And so that makes a really nice entry point where someone can look to say, "Why did we add this gem?" And then there's a commit message that goes with it that explains this is why we use this after_commit gem because we're specifically looking to avoid this type of bug. And I love that.
CHRIS: Yeah, I think more lines of git commit message than diff on this one. So yeah, I wrote a short novel describing all of the features, describing the different pieces that are coming together. And then it's actually a +28 -6 diff. So it's a very small code change. But yeah, lots of story captured there.
STEPH: And if you had just moved the lines, you could still have that commit message. But it's not likely that someone's going to look up that git commit change or that message that went along with it because they're not going to know to blame that one. But if they look at that particular addition of after_commit, they're more likely to find that historical context. So long story short, I think you have walked me through my initial grumpiness and provided some really good ways to avoid that really tricky failure mode for other developers.
CHRIS: Well, thank you. I'm getting Steph's seal of approval starting from grumpy places. [laughs] I feel good. All right.
STEPH: I'll have some special Stephanie's approval stickers designed and printed for you.
CHRIS: I hope you're not joking because I very much want a yellow heart that says, "Steph-approved."
STEPH: [laughs]
CHRIS: And I can put it on PRs, and I can put it on the wall. [laughs]
STEPH: Well, now I have to find a sticker designer and make a...well, it's just a yellow heart. I can probably handle this. I'm going to use Comic Sans. That will be the approved part. [laughs] Yellow hearts and Comic Sans for everybody.
CHRIS: Well, with that absolutely fantastic call back to earlier parts of the episode, shall we wrap up?
STEPH: Let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris evaluates the pros and cons between using Sidekiq or Active Job with Sidekiq. He sees exceptions everywhere.
Steph talks about an SSL error that she encountered recently. It's officially spooky season, y'all!
Transcript:
CHRIS: Additional radiation just makes Spider-Man more powerful.
STEPH: [laughs] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: Fall is in the air. It's one of those, like, came out of nowhere. I knew it was coming. I knew it was going to happen. But now it's time for pumpkin beer and pumpkin spice lattes, and exclusively watching the movie Hocus Pocus for the next month or so or some variation of those themes. But unrelated to that, I did a thing that I do once, let's call it every year or so, where I had to make the evaluation between Sidekiq or Active Job with Sidekiq, as the actual implementation of the background job engine that is running. And I just keep running through this same cycle.
To highlight it, Active Job is the background job system within Rails. It is a nice abstraction that allows you to connect to any of a number of them, so I think Delayed Job is one. Sidekiq is one. Resque is probably another. I'm sure there's a bunch of others. But historically, I've almost always used Sidekiq. Every project I've worked on has used Sidekiq. But the question is do you use Active Job with the adapter set to Sidekiq and then you're sort of living in both worlds, or do you lean in entirely and you use Sidekiq? And so that would mean that your jobs are defined to include Sidekiq::Worker because that's the actual thing that provides the magic as opposed to inheriting from Application Job. And then do you accept all of the trade-offs therein? And every time I go back and forth. And I'm like, well, but I want this feature, but I don't want that feature. But I want these things. So I've made a decision, but I want to talk ever so briefly through the decision points that were part of it. Have you done this back and forth? Are you familiar with the annoying choice that exists here?
STEPH: It's been a while since I've had the opportunity to make that choice. I'm usually joining projects where that decision has already been made. So I can't think of a recent time that I've thought through it. And my current project is using that combination of where we are using Active Job and Sidekiq.
CHRIS: So I think there's even a middle ground there where that was the configuration that I'd set up on the project that I'm working on. But you can exist in both worlds. And you can selectively opt for certain background jobs to be fully Sidekiq. And if you do that, then instead of saying "perform_later," you say "perform_async." And there are a couple of other configurations. It gives you access to the full Sidekiq API. And you can do things like hey, Sidekiq, here's the maximum number of retries or a handful of other things. But then you have to trade away a bunch of the niceties that Active Job gives.
So as an example, one thing that Active Job provides that's really nice is the use of GlobalID. So GlobalID is a feature that they added to Rails a while back. And it's a way to uniquely identify a given record within your system such that when you say perform_later, you can say, InvitationMailer.perform_later and then pass it a user record so like an instance of a user model. And what will happen in the background is that gets serialized, but instead of serializing the whole user object because we don't actually want that, it will do the GlobalID magic. And so it'll turn into, I think it's GID:// so almost like a URL. But then it'll be, I think, your application name/model name down the road. And the Perform method actually gets invoked via the background system. Then you will just get handed that user record back, but it's not the same instance of the user record. It sort of freezes and thaws it. It's really nice. It's a wonderful little feature. Sidekiq wants nothing to do with that.
STEPH: I'm so glad that you highlighted that feature because that was on my mind; I think this week where I was reviewing...somebody had made the comment where they were concerned about passing a record to a job and saying how that wouldn't play nicely with Sidekiq. And in the back of my mind, I'm like, yeah, that's right. But then I was also I'm pretty sure this got addressed, though. And I couldn't recall specifically if it was a Sidekiq enhancement or if it was a Rails enhancement. So you just cleared something up for me that I had not had time to confirm myself. So thanks.
CHRIS: Well, to be clear, this works if you are using Active Job with Sidekiq as the adapter, but not if you are using a true Sidekiq worker. So if you opt-out of the Active Job flow, then you have to say, "Perform_async," and if you pass it a record, that's not going to work out particularly nicely.
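Roughly the contrast being drawn, with hypothetical job and model names:

    user = User.first

    # Active Job: pass the record itself; GlobalID serializes it, and perform
    # gets handed a fresh instance of the same record when the job runs.
    class InvitationJob < ApplicationJob
      def perform(user)
        # user arrives re-fetched via its GlobalID
      end
    end
    InvitationJob.perform_later(user)

    # A bare Sidekiq worker: stick to primitives and re-find the record yourself.
    class InvitationWorker
      include Sidekiq::Worker

      def perform(user_id)
        User.find(user_id)
      end
    end
    InvitationWorker.perform_async(user.id)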
The other similar thing is that Sidekiq does not allow the use of keyword args, which, I'm going to be honest, I really like keyword arguments, especially for background jobs or shuttling data through your system. And there's almost a lazy evaluation. I want some nicety to make sure that when I am putting something into a background job that I'm actually using the correct call signature, essentially passing the correct data in the correct shape. Am I passing a record, or am I passing the ID? Am I passing a list of options or a single option? Those sort of trade-offs that are really easy to subtly get wrong.
I came around on this one because I realized although Active Job does support keyword arguments, the way it does that is it just has a JSON serialization format for them. So a keyword argument turns into a positional array with an associated hash that allows for the lookup or whatever. Basically, again, they handle the details. You get to use keyword args, which is great, with the exception that when you're actually calling perform_later, that method perform_later is a method missing type magic method. So it does not actually check the keyword arguments at that point. You're basically just passing an options hash as opposed to true keyword arguments that would error because they don't match up. And so when I figured that out, I was like, oh, never mind. This doesn't actually do the thing that I care about. It's a little bit nicer in terms of the signature of the method when you're defining your background job itself, but it doesn't actually do any logical checking. It doesn't give me any safety or robustness within my system. So I don't care about that.
I did find a project called sidekiq-symbols, which does some things under the hood to change how Sidekiq serializes and deserializes jobs, which I think gives largely the same behavior as Active Job. So I can now define my Sidekiq jobs with keyword arguments. Things will work. I can't use GlobalID. That's still out. But that's fine. I can do a little helper method that basically does the same thing as GlobalID or at least a close approximation. But sidekiq-symbols lets me have keyword arg-like signatures in my methods, basically. But again, it doesn't actually do any checking when I'm enqueueing a job, and I am sad about that.
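To make the gap concrete, here is a hypothetical Active Job job with keyword arguments; the definition looks nice, but nothing validates the keywords at enqueue time:

    class ReportJob < ApplicationJob
      def perform(account_id:, format: "csv")
        # ...build and deliver the report...
      end
    end

    ReportJob.perform_later(account_id: 42)  # fine
    ReportJob.perform_later(acount_id: 42)   # typo enqueues happily; it only
                                             # raises later, when perform runs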
STEPH: Yeah, that's another interesting distinction. And I'm unsurprisingly with you that I would favor having keyword args and having that additional safety in place. Okay, so I've been keeping track. And so far, it sounds like we have two points because I'm doing a little scorecard here between Active Job and Sidekiq. And we have two points in favor of Active Job because they offer a GlobalID, which then allows us to pass in a record, and then it takes care of the serialization for us. And then also, keyword args, which I agree with you that's a really nice feature to have in place as well. So I'm curious, so it sounded like you're leaning towards Active Job, but I don't want to spoil the ending.
CHRIS: Yes, I could see why that's what you would be taking away from the conversation thus far. So again, just to reiterate, Active Job and Sidekiq with this sidekiq-symbols extension they both support keyword args, kind of. They support defining your job with keyword args and then enqueueing a job passing something that looks like keyword args. But it ends up...nobody's actually checking anything, so it's mostly like a syntactic nicety as opposed to any sort of correctness, which is still nicer, but it's not the thing that I actually want. Either way, nobody supports it, so it is not available to me. Therefore, it is not a consideration point.
The GlobalID thing is nice, but it is really, again, it's a nicety more than anything. I have gone, and I'm leaning in the direction of full Sidekiq and Sidekiq everywhere as opposed to Active Job in most cases, but then Sidekiq when we need it. And that's because Sidekiq just has a lot more power and a lot more functionality. So, in particular, Sidekiq has a feature which allows you to say...it's a block that you put at the top of your Sidekiq job that says retries exhausted or something. I think Sidekiq retries exhausted is the actual full name of that at that point, which is really unfortunate in my mind, but anyway, I'll deal. At that point, you know that Sidekiq has exhausted all of the retries, and you can treat it as failed.
I'm going to be honest, I went on a quest to find a way to say, hey, I'm going to put some work into the background. It's really important for me to know if this work succeeds or if it fails. It's very easy to know if it succeeds because that just happens in-line in the method. But we can have an exception raised at basically any point; Sidekiq does a great job of catching those, of retrying, of having fundamental mechanisms there. But this is the best that I can get for this job failed. And so Active Job, as far as I can tell, does not have anything for this in order to say, yep, we are done. We are not going to keep working on this. This work has failed. It is dead.
Dead is, actually, I think, the more correct term for where we're at, because failed is a temporary state, and then you retry after a failure. Whereas dead is, this has gone through all of its retries, and it will never be run again. Therefore, we should treat this as not having run. And in my case, the thing that I want to do is inform the user that this operation that we were trying to do on their behalf has not succeeded, will not succeed. And please reach out or otherwise deal with the fact that we were unable to do the thing that they asked us to do. That feels like a really important thing for me to be able to do, to be able to communicate back to my users.
This is one of those situations where I'm looking at the available options, and I'm like, I feel like I can't be the only one who wants to know when something goes wrong. This feels like a thing that's important. But this is the best example that I've found, the Sidekiq retries exhausted block. And unfortunately, when I'm using it, it gets yielded the Sidekiq JSON blob deserialized, so it's like Ruby hash. But it's still like this blob of data. It's not the same data that gets passed into perform. And so, as a result, when I want to look up the record that was associated with it, I have to do this nested dig into the available hash of data. And it just feels like this is not a well-paved path. This is not something that is a deeply thought about or recommended use case. But again, I don't feel like I'm doing something weird here. Am I doing something weird, Steph, wanting to tell my users when I was unable to do the thing they asked me to do? [chuckles]
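For reference, the hook being described looks roughly like this, with hypothetical worker and mailer names; the block receives the raw job hash rather than the original arguments, which is where the digging comes in:

    class ProcessPaymentWorker
      include Sidekiq::Worker
      sidekiq_options retry: 10

      # Runs once Sidekiq has given up on the job entirely.
      sidekiq_retries_exhausted do |job, exception|
        payment_id = job["args"].first
        PaymentMailer.with(payment_id: payment_id).processing_failed.deliver_later
      end

      def perform(payment_id)
        # ...work that may raise and be retried...
      end
    end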
STEPH: That feels like a very rhetorical question. [laughs]
CHRIS: It does. I apologize. I'm leading the witness. But in your sincere heart of hearts, what do you think?
STEPH: No, that certainly doesn't sound weird. I'm actually thinking back to some of the jobs that cause me stress in regards to knowing when they failed and then having that communication of knowing that we've exhausted all the retries. And, of course, knowing when those retries are exhausted is incredibly helpful.
I am intrigued, though, because you're highlighting that Active Job doesn't have the same option around setting the retry. And I'm trying to recall exactly how it's set. But I feel like I have set the retry count for Active Job. And maybe, as you mentioned before, that's because it's an abstraction, or I'm not sure if Active Job actually has that native support. So I feel a little confused there where I think my default instinct would have been Active Job does have that retry capability. But it sounds like you've discovered otherwise.
CHRIS: I'm not actually sure what Active Job's core retry logic or option looks like. So fundamentally, as far as I understand it, Active Job is an abstraction. And under the hood, you're always connecting an adapter. So it's either going to be Sidekiq, or Resque, or Delayed Job, or other. And each of those systems, whichever system you have as the adapter, is the one that's actually going to be managing retries. And so I know Sidekiq happens to have as a default 25 retries. And that spans, I think, a two-week exponential backoff. And Sidekiq has some very robust logic that they have implemented as the way retries exist within Sidekiq. I'm not sure what that would look like if you're trying to express it abstractly because it is slightly different.
I know there was some good work that was done on Sidekiq to allow the Sidekiq options, that's a method at the top level of the job, even if it's an Active Job job, to express the retries. So that may be what you've seen, or there may be truly an abstraction that exists within Active Job, and then each adapter needs to know how to handle retries. But frankly, as to what can Sidekiq do that Active Job can't? There's a whole bunch of stuff around limiting when you would retry, limiting enqueuing a job if there already exists one, when and how those records get locked. There's a whole bunch of stuff.
Sidekiq has a lot of power under the hood. And so if we want to be leaning into that, that's why I'm leaning towards let's just be Sidekiq all the time. Let's become Sidekiq experts. Let's accept that as a deep architectural decision within the app as opposed to just relying on the abstraction. Because fundamentally, if we're just using Active Job, we're not going to have access to the full power of Sidekiq or whatever the underlying system is, so sort of that decision that I'm making, but I don't know specifically around the retries.
STEPH: Okay, thanks. That's really helpful. It's been a while since I've had to make this decision. I'm really enjoying you sharing your adventure because I'm trying to think what's the risk? If you don't use Active Job, what are the trade-offs? And you'd mentioned some of them around the GlobalID and keyword args, which are some niceties. But overall, if you don't go with the abstraction, if you lean into Sidekiq, the risk is then you want to migrate to a different enqueuing service. And something that we talk about is mitigating that risk, so then you can swap it out. That's also something I have never done or encountered where we've had to make that change. And it feels like a very low risk in my mind.
CHRIS: Sidekiq feels like the thing you would migrate to, not a thing you would migrate from. It feels like it is the most powerful. And if anything, I expect at some point we'll be upgrading to Sidekiq pro or enterprise or whatever the higher versions that you pay for, but you get more features there. So in that sense, that is the calculation. That's the risk trade-off in my mind is that we're leaning into this technology and coupling ourselves more closely to it.
But I don't see that as one that will reassess in the same way that people talk about Active Record and it being an ORM. And it's like, oh, we're abstracting the database underneath, and I'm like, no, I'm not. I'm always using Postgres. Please do not take Postgres. I'm not going to switch over to MySQL next week. That's totally fine if you start on MySQL. It's unlikely you're going to port over to Postgres. We may port to an entirely…like it's a Cassandra column store with a Kafka queue, I don't know, something weird down the road. But it's not going to be swapping out Postgres for MySQL or vice versa. Like you said, that's probably not a change that's going to happen. But that I think is the consideration.
The other consideration I have in my mind is Active Job is the abstraction that exists within Rails. And so I can treat it as the lowest common denominator, and folks joining the project, it's nice to have that familiarity. So perform_later is the method on the Active Job jobs, and it has a certain shape to it. People may be familiar with that. Mailers will automatically use Active Job just implicitly under the hood. And so there's a familiarity, a discoverability. It's just kind of up the middle choice. And so if I can stick with that, I think there's a nicety there. But in this case, I think I'm choosing I would like the power and consistency on the Sidekiq side, and so I'm leaning into that.
STEPH: Yeah, that makes a lot of sense to me. And I liked the other example you provided around things that were not likely to swap out and Postgres, MySQL, your database being one of them. And in favor of an example that I do have for something that...I do enjoy wrapping. It's not something that I adhere to strictly, but I do enjoy it when I have the space to make this choice. So I do enjoy wrapping HTTP clients, not just because then I can swap it out for a different HTTP client, which frankly, that's also rare that I do that. Once I choose an HTTP client, I'm probably pretty happy, and I don't need to swap it out.
But I really like being able to extend the API specifically if they don't handle error responses in a way that I would like to or if they raise, and then I want to change the API to have a more thoughtful interface and where I don't have to rescue those errors. But instead, I can interact with this object that then represents an error state. So that was just one example that came to mind for things that I do enjoy having an abstraction around and not just so I can swap it out because that feels like a very low risk, but more frankly, so I can extend the API.
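A sketch of the kind of wrapper being described, with hypothetical names, and Faraday standing in for whichever HTTP client is being hidden; callers get a result object back instead of rescuing the client's exceptions:

    require "faraday"

    class PaymentsApi
      class Result
        attr_reader :body, :error

        def initialize(body: nil, error: nil)
          @body = body
          @error = error
        end

        def success?
          error.nil?
        end
      end

      def fetch_invoice(id)
        response = connection.get("/invoices/#{id}")
        return Result.new(error: "HTTP #{response.status}") unless response.success?

        Result.new(body: response.body)
      rescue Faraday::Error => e
        # The client's exceptions stop at this boundary; callers just ask the
        # result whether things worked.
        Result.new(error: e.message)
      end

      private

      def connection
        @connection ||= Faraday.new(url: "https://payments.example.com")
      end
    end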
CHRIS: I definitely share the...I almost always wrap APIs, or I try and hide whatever the implementation detail is, whether it be HTTParty, or Faraday, or whatever it is that I'm using, and trying to hide that deeply within the system. And then I have whatever API client that we define. And that's what we're interacting with. It's interesting that you bring up errors and exceptions there because that's the one other thing that has caused me this...what I'm describing now seems perhaps like, oh, here's just a list of pros and cons, a simple decision was made, and there we are.
This represents some real soul searching on my part, if we will. And one of the last things that I ran into that was just so frustrating is that Sidekiq is explicitly built around the idea of exceptions; Sidekiq retries if there is an exception raised in the job, otherwise, it treats it as success, and that's it. That is the entirety of it. That is the story. But if you raise an exception in a job, then you can't test that job because now it's raising an exception. You can't test retries or this retry exhausted block that I'm trying to lean into. I'm like, I want to put that in a feature spec and say, oh, this job goes in the background, but it's in a failure state, and therefore, the user sees the failure message. Sorry, I can't do that because the only way to actually fail a job is via an exception.
And I've actually gone to some lengths in this application to try to introduce more structured data flow. I've talked a bunch about the command objects and the dry-monads and all those things. And I've really loved them where I've gotten to use them. But then I run into one of these edge cases where Sidekiq is like, no, no, no, you can't do that. And so now I have parts of my system that very purposefully return data as opposed to raising an exception. And I just have to turn around and directly raise that failure as an exception, and it just feels less expressive.
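The compromise ends up looking something like this sketch, with hypothetical names: the domain code returns a result, and only the thin worker wrapper turns a failure back into a raise so Sidekiq's retry machinery kicks in:

    class SyncPaymentWorker
      include Sidekiq::Worker

      SyncFailedError = Class.new(StandardError)

      def perform(payment_id)
        result = SyncPayment.new.call(payment_id)  # returns Success(...) or Failure(...)
        return if result.success?

        # Sidekiq only understands exceptions, so the Failure is re-raised at
        # the very edge of the system to trigger retries.
        raise SyncFailedError, result.failure.to_s
      end
    end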
I actually just ran into the identical thing with Pundit. They have a little bit better control over it; I can choose whether or not I want the raising version or not. But I see exceptions everywhere, and I want a little more discrete data flow. [chuckles] That is my dream. So anyway, I chose Sidekiq is the summary here. And slowly, we're going to migrate entirely to Sidekiq. And I'm going to be totally fine with it. And I'm done griping now.
STEPH: This is your own little October Halloween movie, that I see exceptions everywhere.
CHRIS: They're so spooky.
STEPH: [laughs] That's cool about Pundit. I'm not sure I knew that, that you get to essentially turn on or off that exception flow behavior. On one hand, I'm like, that's nice. You get the option. On the other hand, I'm like, well, let's just not do it. Let's just never raise on people. But at least they give people options; that seems really cool.
CHRIS: They do give the option. I think you can choose different strategies there. And also, if we're being honest, I'm newer to Pundit. And I used a different thing, which was to get the Policy Object and ask it a question. I wanted to ask, is this enabled or not? Can a user do this or not? That should not raise an exception. I'm just asking a question. We're just being real chill about this. I just want to know some information. Let's flow some data through our system. We don't need exceptions for that.
STEPH: Why are you yelling at me? I just have a question. [laughs]
CHRIS: Yeah. I figured out how to be easy on that front. Sidekiq apparently has no be easy mode, but that's fine. You know what? We're going to make it work, and it's going to be fine. But it is interesting deciding which of these facets of the system that I'm building do I really care about? Which are the ones where I'm like, whatever, just pick something, and we'll move forward, it's not a big deal? Versus, we're actually going to be doing a lot of work in the background. This is the thing that I care about deeply. I want to know about failure and success. I want to really understand that and have a robust answer to what our architecture looks like there.
Similarly, Pundit for authorization. I believe that authorization will be a critical aspect of our system. It's typically a pretty important thing. But for us, I think we're going to have different types of users who can log in and see different subsets of data and having a consistent and concrete way that we have chosen to implement that we are able to test, that we're able to verify. I think that's another core competency within the app. But you only get to have so many of those. You can only be really good at a couple of things. And so I'm in that place where I'm like, which are our top five when I say are the things that I care a lot about? And then which are the things where I'm like, I don't know, whatever, just run with it?
STEPH: Just a little bit ago, I came so close to singing because you said the I want to know phrase again. And that, I'm realizing, [laughs] is a trigger for me and a song where I want to sing. I held it back this time.
CHRIS: It's smart. You got to learn anytime you sing on mic that is part of the permanent record.
STEPH: Edward Loveall at thoughtbot, since I sang in a recent episode, did the delightful thing where then he grabbed that clip of where you talk a little bit, and then I sing and then encouraged everyone to go listen to it. And in which I responded, like, I would highly recommend that you save your ears and don't listen to it. But yes, singing on the mic is a thing. I do it from time to time. I can't hold it back.
CHRIS: We all do. But since it doesn't seem that you're going to sing in this moment, I think I can probably wrap up my Odyssey of choosing between Sidekiq and Active Job. I hope those details were useful to anyone other than me. It was an adventure, so I figured I'd share it. But yeah, that about wraps it up on my side.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: So, I would love to talk about an SSL error that I encountered recently. So one of the important processes in our application is sending data to another system. And while sending data to that other system, we started seeing the following error that read "Certificate verify failed." And then in parens, it states, "Unable to get local issuer certificate." So upon seeing that error, I initially thought, okay, something is wrong with their SSL certificate or their SSL configuration. And that's not something that I have control over and can fix. So we should reach out and let them know to take a look at their SSL config.
But it turns out that their team already knew about the issue. They had recently updated or renewed their SSL cert, and they saw our messages were no longer being processed, and they were reaching out to us for help. So at that point, I'm still pretty sure that it's related to something on their end, and it's not something that I can really fix on our end. But we can help them troubleshoot. Maybe there's a workaround that we can add to still get messages processing while they're looking into their SSL config. It seemed like they still just needed help. So it was something that was still worth diving into.
So going back to the first error, I want to talk a little bit about it because I realized that I understand SSL just enough, just the surface to get by as a developer. But then, every time that I run into a specific error with it, then I really have to refresh my understanding as to what could be wrong, so then I can troubleshoot more effectively.
So for anyone that could use a refresher on that certificate verification process, when your browser or your server is connecting to a site that uses SSL, then your browser server, whichever one you're using, is going to download that site certificate and verify a couple of things. So it's going to check does the certificate contain the domain name of the website? So essentially, you gave us a certificate. Is this your certificate? Does it match the site that we're connecting to? Is this cert issued by a trusted certificate authority? So did someone that we trust give you this certificate? And is the cert still valid, or has it expired? So that part is pretty straightforward.
The second part, "Unable to get local issuer certificate," so that's the part I was less certain about. And I took this to mean that they had passed two of those three checks that their cert included the site's name, and it had not expired. But for some reason, we aren't able to determine if their cert was issued by someone that we should trust.
So following that journey, my next question was, so what are they giving us? So this is a tool that I don't get to use very often, but I reached for OpenSSL and, specifically, the s_client command, which connects to a specified domain and prints all certificates in the certificate chain. You may already know this, but the certificate chain is basically a fancy way of saying, show me all the certificates necessary to prove your site certificate was authorized by a trusted certificate authority.
CHRIS: I did not know that.
STEPH: Okay, I honestly didn't either. [laughs]
CHRIS: I liked that you thought I would, though. So thank you, but no. [chuckles]
STEPH: Yeah, it's one of those areas of SSL where I know just enough. But that was something that was new to me. I thought there was a site certificate, and I didn't realize that there is this chain of certificates that has to be honored.
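The command Steph reached for is roughly `openssl s_client -connect partner.example.com:443 -showcerts` (domain hypothetical). A Ruby sketch that does the same inspection of whatever chain the server actually sends:

```ruby
require "socket"
require "openssl"

host = "partner.example.com" # hypothetical domain

tcp = TCPSocket.new(host, 443)
ctx = OpenSSL::SSL::SSLContext.new
ctx.verify_mode = OpenSSL::SSL::VERIFY_NONE # we're only inspecting, not trusting
ssl = OpenSSL::SSL::SSLSocket.new(tcp, ctx)
ssl.hostname = host # SNI, so the server picks the right certificate
ssl.connect

# Print every certificate the server sent us.
ssl.peer_cert_chain.each_with_index do |cert, i|
  puts "#{i}: subject: #{cert.subject}"
  puts "   issuer:  #{cert.issuer}"
end
# A complete chain walks from the site certificate up toward a trusted root;
# in the situation described here, the intermediate issuer was missing.

ssl.close
tcp.close
```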
So going back and looking through that output of the certificate chain, that's what highlighted to me that their server was giving us their certificate and saying, hey, you should trust our site certificate. It's legit because it was authorized by, let's say, XYZ certificate. And so if it were a proper certificate chain, then they would give us that XYZ cert. And essentially, we can use this chain of certificates to get back to a trusted authority that everybody knows we can trust. However, they weren't actually giving us that referenced certificate; they were giving us something else. So essentially, they were saying, "Hey, look at our certificate and look at this very trustworthy reference that we have." But they're actually failing to give us that reference.
So to bring it all home, we can download that intermediate certificate that they reference; that is something that is publicly accessible. That's why we're able to then verify each certificate that's provided in that chain. We could go and download that intermediate certificate from that certificate authority. We could combine that with their site-specific certificate, include that in our request to their system, and then complete the certificate chain. And boom, we're back in business. But it was quite a journey.
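One way to do that workaround from a Ruby client, a sketch assuming a hypothetical endpoint and a PEM file containing the intermediate certificate downloaded from the certificate authority:

```ruby
require "net/http"
require "openssl"
require "json"

store = OpenSSL::X509::Store.new
store.set_default_paths                                  # the usual trusted roots
store.add_file("config/certs/partner_intermediate.pem")  # plus the cert they forgot to send

uri  = URI("https://partner.example.com/messages") # hypothetical endpoint
http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl     = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER # verification stays on
http.cert_store  = store                     # now the chain can be completed locally

payload  = { message: "hello" }
response = http.post(uri.path, payload.to_json, "Content-Type" => "application/json")
```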
CHRIS: That is quite the journey. And yeah, I definitely knew very little of that, although everything you're saying makes sense. And I have a bunch of cubbyholes in my brain for SSL knowledge. And the words you said all fit into the spaces that I have in my brain, but I didn't know a bunch of those pieces. So thank you for sharing that.
SSL and cryptography, more generally or password hashing or things like that, occupy this special place in my brain where I'm both really interested in them. And I will occasionally research them. If I see a blog article, I'll be like, oh yeah, I want to read more about this password hashing. And what's a Salt? And what's a Pepper? And what are we doing there? And what is BCrypt versus SCrypt? What are all these things? This is cool. And almost the arms race on the two sides of how do we demonstrate trust in a secure manner on the internet?
But at the same time, I am not allowed to do anything with this information. I outsource this as much as humanly possible because it's one of those things that you just should not do yourself and SSL perhaps even more so. So I have configured aspects of my password hashing. But I 100% just lean on the fact that Let's Encrypt exists in the world. And prior to that, it was a little more work. But frankly, earlier on in my career, I wasn't dealing with the SSL parts of things. But I'm so grateful to Let's Encrypt as a project that exists.
And now, on almost every platform that I work with, there's just a checkbox for please do the SSL work for me, make it good, make it work, and then I will be happy. And I'm so glad that that organization exists and really pushed the envelope also. I forget what it was, but it was only like three years ago where SSL was not actually nearly as common as it is now. And now it is pervasive and everywhere. And all of the sites have it, and so that is a wonderful thing. But I don't actually know much. I know that I should have it. I must have it. I should force it. That's true. So I push that out…
STEPH: Hello.
CHRIS: Are you trying to get me to sing? [chuckles]
STEPH: [laughs] No, but I did want to know if you get the reference, the Salt-N-Pepa.
CHRIS: Push It Real Good the song? Yeah, okay.
STEPH: Yeah, you got it. [chuckles]
CHRIS: I will just say the lyrics. I shall not sing the lyrics. I would say that, though, that yes, yes, they do that.
STEPH: Thank you for acknowledging my very terrible reference. Circling back just a little bit too in regards to...I'm with you; this is a world that is not one that I am very deeply technical in and something that I learned a fair amount while troubleshooting this particular SSL error. And it was very interesting. But there's also that concern where it's like, that was interesting. And we worked around the issue, but this also feels very fragile.
So we still haven't fixed it on their end where they are sending the wrong certificate. So then that's why we had to do more investigative work, and then download the certificate that they meant to send us, and then send back a complete certificate chain so that we don't have this error anymore. But should they change anything about their certificate, should they renew anything like that, then suddenly, we're going to break again. And then, the next developer is going to have to go through the same journey. And this wasn't a light journey. This was a good half-day journey to figure out what was going on and to spend the time, and then to also get that fix out to production. So it's a meaningful task that I don't want anyone else to have to go through.
But we are relying on someone else updating their configuration. So, on one hand, we're in a good spot until they are able to update. But on the other hand, I wrote a heck of a commit message for the next person just describing like, friend, just grab some coffee if we're going to chat. It's a very small code change, but you need to know the scoop. So should you need to replicate this because they've changed something, or if this happens…because we work with a number of systems that we send data to. So if someone else should run into a similar issue, they will understand some of the troubleshooting techniques that I used and be able to look up that chain and find out if there's a missing cert or something else they need to provide. So it feels like a win, but I'm also nervous for future selves, future developers.
So there's another approach that I haven't mentioned yet, but it was often a top recommendation for when dealing with SSL errors. And specifically, it was turning off SSL verification. And I saw that, and I was like, well, that won't work. I'm definitely sending sensitive, important data. And I need to verify that who I'm sending this to is really the person that I want to send this data to. So that was not an option for me. But it made me very nervous how often that was an approach that people would recommend and be like, oh, it's okay, just turn off SSL. You'll be fine. Like, don't worry about it.
CHRIS: I feel like this so perfectly fits into the...some of our work is finding the information and connecting the pieces together and making it work. But some of it is that heuristic sense, that voice in the back of your head that is like, wait, I'm sorry, what? You want me to just turn off the security perimeter and hope that the velociraptors won't come in? That doesn't seem like it's going to end well. I get that that's an easy option that we have available to us right now and will solve the immediate problem but then let's play this out. There are four or five Jurassic Park movies now that tell the story of that. So let's be careful.
STEPH: It always ends super well, though, right? Like, it's totally fine. [laughs]
CHRIS: [laughs] Exclusively. Although it's funny that you mentioned OpenSSL no verify because just this past week, I used that very same configuration. I think it was okay in my case; I’m pretty sure. But it is interesting because when I saw it, I was like, oh no, can't do that. Certainly not that. Don't turn off the security feature. That's the wrong way to deal with the issue.
But in the particular case that I'm working with, I'm using Redis, Heroku Redis, in particular, in a Heroku configuration. And the nature of how Heroku configures the Redis instances and the connectivity to our app into our dyno...I forget why. I read an article. They wrote it; Heroku wrote it. I trust them; they’re good. I've outsourced my trust to people that I do trust. The trust chain actually maps really well to the certificate trust chain. I trust that Heroku has taken security deeply seriously. And for some reason, their configuration of Redis requires that I turn on OpenSSL no verify mode. So I'm using this now both in Sidekiq, and then we're using our Redis instance for our Rails cache as well.
So in both cases, I said, "It's fine. Don't worry about it." I used the Don't worry about it configuration. And I didn't love it but I think it's okay. And partly, I'm trying to say this into the internet radio right now just in case anyone's listening who's like, no, no, no, you can't do that. That's bad. So I'm willing to be deeply wrong on the internet in favor of someone telling me and then I get to get out in front of it. But I think it's fine. Pretty sure it's fine. It should be fine.
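For context, the configuration Chris is describing looks roughly like this; it's a sketch of the pattern Heroku documents for Heroku Redis with Sidekiq and the Rails cache, not a copy of his actual config:

```ruby
# config/initializers/sidekiq.rb
require "openssl"

redis_options = {
  url: ENV["REDIS_URL"],
  ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } # Heroku Redis presents a self-signed cert
}

Sidekiq.configure_server { |config| config.redis = redis_options }
Sidekiq.configure_client { |config| config.redis = redis_options }

# config/environments/production.rb
config.cache_store = :redis_cache_store, {
  url: ENV["REDIS_URL"],
  ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE }
}
```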
STEPH: I love love love that you gave a very visual example of velociraptors, and then you're like, oh, but I turned it off. [laughs] So I'm going to start sending you a velociraptor gif each day.
CHRIS: I hope you do. I hope the internet holds you accountable to that.
STEPH: [laughs]
CHRIS: And I really look forward to [laughs] moving forward because that's a great way to start the day. Well, it doesn't need to start the day, but I look forward to them.
STEPH: [laughs] I am really intrigued because I'm with you. Like you said, there are certain entities that are in our trust chain where it's like, hey, you are running this for us, and so I do have faith and trust in you that you wouldn't steer me wrong and provide a bad recommendation. Someone on Stack Overflow telling me to turn off SSL verify, uh, that's not my trust chain. Heroku or someone else telling me, I'm going to take it a little more seriously. And so I'm also interested in hearing from...what'd you say? You're speaking into the internet phone. [laughs] What'd you say?
CHRIS: I think I said internet radio. But yeah, in a way. I mean, we're recording over Skype right now. So in a manner of speaking, we're on the internet phone to make our internet radio show.
STEPH: [laughs] Oh goodness, the internet radio. I'm also intrigued to hear if other people are like, oh, no, no, no. Yeah, that sounds like an interesting scenario. Because I would think you'd still want your connection to...you said it's for Redis. So you still want that connection to be verified. But then if Redis itself can't have a specific...yeah, we're testing the boundaries of my SSL knowledge here as to how the heck you would even establish that SSL connection or the verification process.
CHRIS: Me too. And it also exists in an interesting space where Heroku is rather clear in their documentation about this. And it was a surprising claim when I saw it. And so, I don't expect them to be flippant about a thing that is important. Like, if they're like, "No, no, no, it is okay. You can turn off the security thing, don't worry." I trust that they're not just like, oh, we didn't think about it too much. But we figured why not? It's not a big deal. I'm sure that they have thought about it deeply because it is an important thing.
And so in a weird way, my trust of them and the severity of what this thing represents, I'm like, oh yeah, I super trust that because you're not going to get a major thing wrong. You might get a minor, small, subtle thing wrong. But this is a pretty major configuration change. As I say it, I'm now getting more worried. I'm now like, I feel fine about this. This doesn't seem like a problem at all. But then I keep saying stuff, and I'm like, oh no. That's why I love having a podcast; I find out things about myself as I talk into a microphone to you.
STEPH: We come here to share our deep, dark developer secrets.
CHRIS: Spooky developer therapy.
STEPH: But just to clarify, even though you've turned off the SSL verify, you're still connecting over SSL.
CHRIS: Yes, I believe that's the case. And if I'm remembering, I think the nature of how this works is they're using a self-signed certificate because of shared infrastructure or something, something that made sense when I read it. But it was the idea that they are doing a self-signed certificate. Therefore, to what you were talking about earlier, there isn't the certificate authority in the chain of those because it's self-signed. And so, they are not a trusted certificate authority. Therefore, that certificate that they have generated would not be trusted. But it does still allow for the SSL handshake and then communication to happen over SSL. It's just that fundamental question of trust. I'm saying, in this case, for reasons, it's okay. Trust me that I trust them. We're good. Which, again, I don't feel great about, but I think yes, it is still SSL, but it is a self-signed certificate. So we have to make this configuration change.
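To make the "self-signed" part concrete, here is a small sketch: a certificate whose issuer is itself, which the default trust store will refuse even though it can still drive an SSL handshake. All the names are hypothetical.

```ruby
require "openssl"

key  = OpenSSL::PKey::RSA.new(2048)
name = OpenSSL::X509::Name.parse("/CN=redis.internal") # hypothetical common name

cert = OpenSSL::X509::Certificate.new
cert.version    = 2
cert.serial     = 1
cert.subject    = name
cert.issuer     = name            # issuer == subject is what "self-signed" means
cert.public_key = key.public_key
cert.not_before = Time.now
cert.not_after  = Time.now + 365 * 24 * 60 * 60
cert.sign(key, OpenSSL::Digest.new("SHA256"))

store = OpenSSL::X509::Store.new
store.set_default_paths
store.verify(cert)  # => false: no trusted authority vouches for it
store.error_string  # => something like "self signed certificate"
```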
STEPH: Yeah, all of that makes sense. And it certainly sounds like you have been very thoughtful about that change and put in some investigative work. So on that note, I have a very unrelated bad joke for you.
CHRIS: I'm very excited.
STEPH: All right, here we go. All right, so what do you call an alligator wearing a vest?
CHRIS: I don't know. What do you call an alligator wearing a vest?
STEPH: An investigator.
[laughter]
On that note, shall we wrap up?
CHRIS: Oh, let's wrap up. We should also include a link in the show notes to the episode where you told the joke about the elephant hiding in the trees because that's one of my favorite jokes. You slayed me with that one. [laughs] But on that note, yes, let us wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Longtime listener and friend of the show, Gio Lodi, released a book y'all should check out, and Chris and Steph ruminate on a listener question about tension around marketing in open source.
Transcript:
CHRIS: Our golden roads.
STEPH: All right. I am also golden.
CHRIS: [vocalization]
STEPH: Oh, I haven't listened to that episode where I just broke out in song in the middle. Oh, you're about to add the [vocalization] [chuckles].
CHRIS: I don't know why, though. Oh, golden roads, Golden Arches.
STEPH: Golden Arches, yeah.
CHRIS: Man, I did not know that my brain was doing that, but my brain definitely connected those without telling me about it.
STEPH: [laughs]
CHRIS: It's weird. People talk often about the theory that phones are listening, and then you get targeted ads based on what you said. But I'm almost certain it's actually the algorithms have figured out how to do the same intuitive leaps that your brain does. And so you'll smell something and not make the nine steps in between, but your brain will start singing a song from your childhood. And you're like, what is going on? Oh, right, because when I was watching Jurassic Park that one time, we were eating this type of chicken, and therefore when I smell paprika, Jurassic Park theme song. I got it, of course.
STEPH: [laughs]
CHRIS: And I think that's actually what's happening with the phones. That's my guess is that you went to a site, and the phones are like, cool, I got it, adjacent to that is this other thing, totally. Because I don't think the phones are listening. Occasionally, I think the phones are listening, but mostly, I don't think the phones are listening.
STEPH: I definitely think the phones are listening.
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey. So we have a bit of exciting news where we received an email from Gio Lodi, who is a listener of The Bike Shed. And Gio sent an email sharing with us some really exciting news that they have published a book on Test-Driven Development in Swift. And they acknowledge us in the acknowledgments of the book. Specifically, the acknowledgment says, "I also want to thank Chris Toomey and Steph Viccari, who keep sharing ideas on testing week after week on The Bike Shed Podcast." And that's just incredible. I'm so blown away, and I feel officially very famous.
CHRIS: This is how you know you're famous when you're in the acknowledgments of a book. But yeah, Gio is a longtime listener and friend of the show. He's written in many times and given us great tips, and pointers, and questions, and things. And I’ve so appreciated Gio’s voice in the community. And it's so wonderful, frankly, to hear that he has gotten value out of the show and us talking about testing. Because I always feel like I'm just regurgitating things that I've heard other people saying about testing and maybe one or two hard-learned truths that I've found. But it's really wonderful. And thank you so much, Gio. And best of luck for anyone out there who is doing Swift development and cares about testing or test-driven development, which I really think everybody should. Go check out that book.
STEPH: I must admit my Swift skills are incredibly rusty, really non-existent at this point. It's been so long since I've been in that world. But I went ahead and purchased a copy just because I think it's really cool. And I suspect there are a lot of testing conversations that, regardless of the specific code examples, still translate. At least, that's the goal that you and I have when we're having these testing conversations. Even if they're not specific to a language, we can still talk about testing paradigms and strategies. So I purchased a copy. I'm really looking forward to reading it.
And just to change things up a bit, we're going to start off with a listener question today. So this listener question comes from someone very close to the show. It comes from Thom Obarski. Hi, Thom. And Thom wrote in, "So I heard on a recent podcast I was editing some tension around marketing and open source. Specifically, a little perturbed at ReactJS that not only were people still dependent on a handful of big companies for their frameworks, but they also seem to be implying that the cachet of Facebook and having developer mindshare was not allowing smaller but potentially better solutions to shine through. In your opinion, how much does marketing play in the success of an open-source project framework rather than actually being the best tool for the job?" So a really thoughtful question. Thanks, Thom. Chris, I'm going to kick it over to you. What are your thoughts about this question?
CHRIS: Yeah, this is a super interesting one. And thank you so much, Thom, although I'm not sure that you're listening at this point. But we'll send you a note that we are replying to your question. And when I saw this one come through, it was interesting. I really love the kernel of the discussion here, but it is, again, very difficult to tease apart the bits. I think that the way the question was framed is like, oh, there's this bad thing that it's this big company that has this big name, and they're getting by on that. But really, there are these other great frameworks that exist, and they should get more of the mindshare.
And honestly, I'm not sure. I think marketing is a critically important aspect of the work that we do both in open source and, frankly, everywhere. And I'm going to clarify what I mean by that because I think it can take different shapes. But in terms of open-source, Facebook has poured a ton of energy and effort and, frankly, work into React as a framework. And they're also battle testing it on facebook.com, a giant website that gets tons of traffic, that sees various use cases, that has all permissions in there. They're really putting it through the wringer in that way.
And so there is a ton of value just in terms of this large organization working on and using this framework in the same way that GitHub and using Rails is a thing that is deeply valuable to us as a community. So I think having a large organization associated with something can actually be deeply valuable in terms of what it produces as an outcome for us as consumers of that open-source framework.
I think the other idea of sort of the meritocracy of the better framework should win out is, I don't know, it's like a Field of Dreams. Like, if you build it, they will come. It turns out I don't believe that that's actually true. And I think selling is a critical part of everything. And so if I think back to DHH's original video from so many years ago of like, I'm going to make a blog in 15 minutes; look at how much I'm not doing. That was a fantastic sales pitch for this new framework. And he was able to gain a ton of attention by virtue of making this really great sales pitch that sold on the merits of it. But that was marketing. He did the work of marketing there.
And I actually think about it in terms of a pull request. So I'm in a small organization. We're in a private repo. There's still marketing. There's still sales to be done there. I have to communicate to someone else the changes that I'm making, why it's valuable to the system, why they should support this change, this code coming into the codebase. And so I think that sort of communication is as critical to the whole conversation. And so the same thing happens at the level of open source.
I would love for the best framework to always win, but we also need large communities with Stack Overflow answers and community-supported plugins and things like that. And so it's a really difficult thing to treat marketing as just other, this different, separate thing when, in fact, I think they're all intertwined. And marketing is critically important, and having a giant organization behind something can actually have negative aspects. But I think overall; it really is useful in a lot of cases. Those are some initial thoughts. What do you think, Steph?
STEPH: Yeah, those are some great initial thoughts. I really agree with what you said. And I also like how you brought in the comparison of pull requests and how sales is still part of our job as developers, maybe not in the more traditional sense but in the way that we are marketing and communicating with the team. And circling back to what you were saying earlier about a bit how this is phrased, I think I typically agree that there's nothing nefarious that's afoot in regards to just because a larger company is sponsoring an open-source project or they are the ones responsible for it, I don't think there's anything necessarily bad about that.
And I agree with the other points that you made where it is helpful that these teams have essentially cultivated a framework or a project that is working for their team, that is helping their company, and then they have decided to open source it. And then, they have the time and energy that they can continue to invest in that project. And it is battle-tested because they are using it for their own projects as well. So it seems pretty natural that a lot of us then would gravitate towards these larger, more heavily supported projects and frameworks. Because then that's going to make our job easier and also give us more trust that we can turn to them when we do need help or have issues.
Or, like you mentioned, when we need to look up documentation, we know that that's going to be there versus some of the other smaller projects. They may also be wonderful projects. But if they are someone that's doing this in their spare time just on the weekends and yet I'm looking for something that I need to be incredibly reliable, then it probably makes sense for me to go with something that is supported by a team that's getting essentially paid to work on that project, at least that they're backed by a larger company. Versus if I'm going with a smaller project where someone is doing some wonderful work, but realistically, they're also doing it more on the weekends or in their spare time. So boiling it down, it’s similar to what you just said where marketing plays a very big part in open source, and the projects and frameworks that we adopt, and the things that we use. And I don't think that's necessarily a bad thing.
CHRIS: Yeah. I think, if anything, it's possibly a double-edged sword. Part of the question was around does React get to benefit just by the cachet of Facebook? But Facebook, as a larger organization sometimes that's a positive thing. Sometimes there's ire that is directed at Facebook as an organization.
And as a similar example, my experience with Google and Microsoft as large organizations, particularly backing open-source efforts, has almost sort of swapped over time, where originally, Microsoft there was almost nothing of Microsoft's open-source efforts that I was using. And I saw them as this very different shape of a company that I probably wouldn't be that interested in. And then they have deeply invested in things like GitHub, and VS Code, and TypeScript, and tons of projects that suddenly I'm like, oh, actually, a lot of what I use in the world is coming from Microsoft. That's really interesting.
And at the same time, Google has kind of gone in the opposite direction for me. And I've seen some of their movements switch from like, oh Google the underdog to now they're such a large company. And so the idea that the cachet, as the question phrase, of a company is just this uniformly positive thing and that it's perhaps an unfair benefit I don't see that as actually true.
But actually, as a more pointed example of this, I recently chose Svelte over React, and that was a conscious choice. And I went back and forth on it a few times, if we're being honest, because Svelte is a much smaller community. It does not have the large organizational backing that React or other frameworks do. And there was a certain marketing effort that was necessary to raise it into my visibility and then for me to be convinced that there is enough there, that there is a team that will maintain it, and that there are reasons to choose that and continue with it. And I've been very happy with it as a choice.
But I was very conscious in that choice that I'm choosing something that doesn't have that large organizational backing. Because there's a nicety there of like, I trust that Facebook will probably keep investing in React because it is the fundamental technology of the front end of their platform. So yeah, it's not going to go anywhere. But I made the choice of going with Svelte. So it's an example of where the large organization didn't win out in my particular case. So I think marketing is a part of the work, a part of the conversation. It's part of communication. And so I am less negative on it, I think, than the question perhaps was framed, but as always, it depends.
STEPH: Yeah, I'm trying to think of a scenario where I would be concerned about the fact that I'm using open source that's backed by a specific large company or corporation. And the main scenario I can think of is what happens when you conflict or if you have values that conflict with a company that is sponsoring that project? So if you are using an open-source project, but then the main community or the company that then works on that project does something that you really disagree with, then what do you do? How do you feel about that situation? Do you continue to use that open-source project? Do you try to use a different open-source project?
And I had that conversation frankly with myself recently, thinking through what to do in that situation and how to view it. And I realize this may not be how everybody views it, and it's not appropriate for all situations. But I do typically look at open-source projects as more than who they are backed by, but the community that's actively working on that project and who it benefits. So even if there is one particular group that is doing something that I don't agree with, that doesn't necessarily mean that wholesale I no longer want to be a part of this community. It just means that I still want to be a part, but I still want to share my concerns that I think a part of our community is going in a direction that I don't agree with or I don't think is a good direction.
That's, I guess, how I reason with myself; even if an open-source project is backed by someone that I don't agree with, either one, you can walk away. That seems very complicated, depending on your dependencies. Or two, you find ways to then push back on those values if you feel that the community is headed in a direction that you don't agree with. And that all depends on how comfortable you are and how much power you feel like you have in that situation to express your opinion. So it's a complicated space.
CHRIS: Yeah, that is a super subtle edge case of all of this. And I think I aligned with what you said of trying to view an open-source project as more generally the community that's behind it as opposed to even if there's a strong, singular organization behind it. But that said, that's definitely a part of it. And again, it's a double-edged sword. It's not just, oh, giant company; this is great. That giant company now has to consider this.
And I think in the case of Facebook and React, that is a wonderful hiring channel for them. Now all the people that use React anywhere are like, "Oh man, I could go work at Facebook on React? That's exciting." That's a thing that's a marketing tool from a hiring perspective for them. But it cuts both ways because suddenly, if the mindshare moves in a different direction, or if Facebook as an organization does something complicated, then React as a community can start to shift away. Maybe you don't move the current project off of it, but perhaps you don't start the next one with it. And so, there are trade-offs and considerations in all directions. And again, it depends.
STEPH: Yeah. I think overall, the thing that doesn't depend is marketing matters. It is a real part of the ecosystem, and it will influence our decisions. And so, just circling back to Thom's question, I think it does play a vital role in the choices that we make.
CHRIS: Way to stick the landing.
STEPH: Thanks.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Changing topics just a bit, what's new in your world?
CHRIS: Well, we had what I would call a mini perfect storm this week. We broke the build but in a pretty solid way. And it was a little bit difficult to get it back under control. And it has pushed me ever so slightly forward in my desire to have a fully optimized CI and deploy pipeline. Mostly, I mean that in terms of ratcheting. I'm not actually going to do anything beyond a very small set of configurations.
But to describe the context, we use pull requests because that's the way that we communicate. We do code reviews, all that fun stuff. And so there was a particular branch that had a good amount of changes, and then something got merged. And this other pull request was approved. And that person then clicked the rebase and merge button. I have configured the repository so that merge commits are not allowed because I'm not interested in that malarkey in our history, but rebase and merge is allowed. I like that; that makes sense.
In this particular case, we ran into the very small, subtle edge case of if you click the rebase and merge button, GitHub is now producing a new commit that did not exist before, a new version of the code. So they're taking your changes, and they are rebasing them onto the current main branch. And then they're attempting to merge that in. And A, that was allowed. B, the CI configuration did not require that to be in a passing state. And so basically, in doing that rebase and merge, it produced an artifact in the build that made it fail. And then attempting to unwind that was very complicated.
So basically, the rebase produced...there were duplicate changes within a given file. So Git didn't see it as a conflict because the change was made in two different parts of the file, but those were conflicting changes. So Git was like, this seems like it's fine. I can merge this, no problem. But it turns out from a functional perspective; it did not work. The build failed. And so now our main branch was failing and then trying to unwind that it just was surprisingly difficult to unwind that. And it really highlighted the importance of keeping the main branch green, keeping the build always passing. And so, I configured a few things in response to this. There is a branch protection rule that you can enable.
And let me actually pull up the specific configuration that I set up. So I now have enabled require status checks to pass before merging, which, if we're being honest, I thought that was the default. It turns out it was not the default. So we are now requiring status checks to pass before merging. I'm fully aware of the awkward, painful like, oh no, the build is failing but also, we have a bug. We need to deploy this. We must get something merged in.
So hopefully, if and when that situation presents itself, I will turn this off or somehow otherwise work around it. But for now, I would prefer to have this as a yeah; this is definitely a configuration we want. So require status checks to pass before merging and then require branches to be up to date before merging. So the button that does the rebase and merge, I don't want that to actually do a rebase on GitHub. I want the branch to already be up to date. Basically, I only ever want fast-forward merges on our main branch. So all code should be ahead of main, and we are simply updating what main points at. We are not creating new code. That code has run on CI, that version of the code specifically. We are fully rebased and up to date on top of main, and that's how we're going.
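Those two settings live in the GitHub UI under branch protection rules, but they can also be scripted. A rough sketch using the Octokit gem, assuming a token with admin rights; the repository name and CI check context are placeholders, and the exact option handling may vary between Octokit versions:

```ruby
require "octokit"

client = Octokit::Client.new(access_token: ENV["GITHUB_TOKEN"])

client.protect_branch(
  "our-org/our-app", # hypothetical repository
  "main",
  required_status_checks: {
    strict: true,                     # "require branches to be up to date before merging"
    contexts: ["ci/circleci: build"]  # hypothetical check name; it must pass before merging
  },
  enforce_admins: false,
  required_pull_request_reviews: nil,
  restrictions: nil
)
```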
STEPH: All of that is super interesting. I have a question about the workflow. I want to make sure I'm understanding it correctly. So let's say that I have issued a PR, and then someone else has merged into the main branch. So now my PR is behind main, and I don't have that latest commit. With the new configuration, can I still use the rebase and merge, or will I need to rebase locally and then push up my branch before I can merge into main, at least using the GitHub UI?
CHRIS: I believe that you would be forced to rebase locally, force push, and then CI would rebuild, and that's what it is. So I think that's what require branches to be up to date before merging means. So that's my hope. That is the intention here. I do realize that's complicated. So this requirement, which I like, because again, I really want the idea that no, no, no, we, the developers, are in charge of that final state. That final state should always run as part of a build of CI on our pull request/branch before going into main. So no code should be new. There should be no new commits that have never been tested before going into main. That's my strong belief. I want that world. I realize that's...I don't know. Maybe I'm getting pedantic, or I'm a micromanager of the Git history or whatever. I'm fine with any of those insults that people want to lob at me. That's fine. But that's what I feel.
That said, this is a nuisance. I'm fully aware of that. And so imagine the situation where we got a couple of different things that have been in flight. People have been working on different...say there are three pull requests that are all coming to completion at the same time. Then you start to go to merge something, and you realize, oh no, somebody else just merged. So you rebase, and then you wait for CI to build. And just as the CI is completing, somebody else merges something, and you're like, ah, come on. And so then you have to one more time rebase, push, wait for the build to be green. So I get that that is not an ideal situation.
Right now, our team is three developers. So there are a few enough of us that I feel like this is okay. We can manage this via human intervention and just deal with the occasional weight. But in the back of my mind, of course, I want to find a better solution to this. So what I've been exploring…there's a handful of different utilities that I'm looking at, but they are basically merged queues as an idea. So there are three that I'm looking at, or maybe just two, but there's mergify.io, which is a hosted solution that does this sort of thing. And then Shopify has a merge queue implementation that they're running.
So the idea with this is when you as a developer are ready to merge something, you add a label to it. And when you add that label, there's some GitHub Action or otherwise some workflow in the background that sees that this has happened and now adds it to a merge queue. So it knows all of the different things that might want to be merged. And this is especially important as the team grows so that you don't get that contention. You can just say, "Yes, I would like my changes to go out into production." And so, when you label it, it then goes into this merge queue. And the background system is now going to take care of any necessary rebases. It's going to sequence them, so it's not just constantly churning all of the branches. It's waiting because it knows the order that they're ideally going to go out in.
If CI fails for any of them because rebasing suddenly, you're in an inconsistent state; if your build fails, then it will kick you out of the merge queue. It will let you know. So it will send you a notification in some manner and say, "Hey, hey, hey, you got to come look at this again. You've been kicked out of the merge queue. You're not going to production." But ideally, it adds that layer of automation to, frankly, this nuisance of having to keep things up to date and always wanting code to be run on CI and on a pull request before it gets into main. Then the ideal version is when it does actually merge your code, it pings you in Slack or something like that to say, "Hey, your changes just went out to production." Because the other thing I'm hoping for is a continuous deployment.
STEPH: The idea of a merge queue sounds really interesting. I've never worked with a process like that. And one of the benefits I can see is if I know I'm ready for something to go, like if I'm waiting on a green build and I'm like, hey, as soon as this is green, I'd really like for it to get merged. Then currently, I'm checking in on it, so I will restart the build. And then, every so often, I'm going back to say, "Okay, are you green? Are you green? Can I merge?" But if I have a merge queue, I can say, "Hey, merge queue, when this is green, please go and merge it for me." If I'm understanding the behavior correctly, that sounds really nifty.
CHRIS: I think that's a distinct but useful aspect of this: the idea that when you as a developer decide this PR is ready to go, you don't need to wait for either the current build or any subsequent builds. If there are rebases that need to happen, you basically say, "I think this code's good to go. We've gotten the necessary approvals. We've got the buy-in from the team on this code." So cool, I now mark it as good. And you can walk away from it, and you will be notified either if it fails to get merged or if it successfully gets merged and deployed. So yes, that dream of like, you don't have to sit there watching the pot boil anymore.
STEPH: Yeah, that sounds nice. I do have to ask you a question. And this is related to one of the blog posts that you and I love deeply and reference fairly frequently. And it's the one that's written by German Velasco about Say No to More Process, and Say Yes to Trust. And I'm wondering, based on the pain that you felt from this new commit, going into main and breaking the main build, how do you feel about that balance of we spent time investigating this issue, and it may or may not happen again, and we're also looking into these new processes to avoid this from happening? I'm curious what your thought process is there because it seems like it's a fair amount of work to invest in the new process, but maybe that's justified based on the pain that you felt from having to fix the build previously.
CHRIS: Oh, I love the question. I love the subtle pushback here. I love this frame of mind. I really love that blog post. German writes incredible blog posts. And this is one that I just keep coming back to. In this particular case, when this situation occurred, we had a very brief...well, it wasn't even that brief because actually unwinding the situation was surprisingly painful, and we had some changes that we really wanted to get out, but now the build was broken. And so that churn and slowdown of our build pipeline and of our ability to actually get changes out to production was enough pain that we're like, okay, cool.
And then the other thing is we actually all were in agreement that this is the way we want things to work anyway, that idea that things should be rebased and tested on CI as part of a pull request. And then we're essentially only doing fast-forward merges on the main branch, or we're fast forward merging main into this new change. That's the workflow that we wanted. So this configuration was really just adding a little bit of software control to the thing that we wanted. So it was an existing process in our minds. This is the thing we were trying to do. It's just kind of hard to keep up with, frankly. But it turns out GitHub can manage it for us and enforce the process that we wanted. So it wasn't a new process per se. It was new automation to help us hold ourselves to the process that we had chosen.
And again, it's minimally painful for the team given the size that we're at now, but I am looking out to the future. And to be clear, this is one of the many things that fall on the list of; man, I would love to have some time to do this, but this is obviously not a priority right now. So I'm not allowed to do this. This is explicitly on the not allowed to touch list, but someday. I'm very excited about this because this does fundamentally introduce some additional work in the pipeline, and I don't want that.
Like you said, is this process worth it for the very small set of times that it's going to have a bad outcome? But in my mind, the better version, that down the road version where we have a merge queue, is actually a better version overall, even with just a tiny team of three developers that are maybe never even conflicting in our merges, except for this one standout time that happens once every three months or whatever. This is still nicer. I want to just be able to label a pull request and walk away and have it do the thing that we have decided as a team that we want. So that's the dream.
STEPH: Oh, I love that phrasing, to label a pull request and be able to walk away. Going back to our marketing, that really sells that merge queue to me. [laughs]
Mid-roll Ad
And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API.
With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city.
The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby.
CHRIS: To be clear, and this is to borrow on some of Charity Majors' comments around continuous deployment and whatnot, is a developer should stay very close to the code if they are merging it. Because if we're doing continuous deployment, that's going to go out to production. If anything's going to happen, I want that individual to be aware. So ideally, there's another set of optimizations that I need to make on top of this. So we've got the merge queue, and that'll be great. Really excited about that.
But if we're going to lean into this, I want to optimize our CI pipeline and our deployment pipeline as much as possible such that even in the worst case where there's three different builds that are fighting for contention and trying to get out, the longest any developer might go between labeling a pull request and saying, "This is good to go," and it getting out to production, again, even if they're contending with other PRs, is say 10, 15 minutes, something like that.
I want Slack to notify them and them to then re-engage and keep an eye on things, see if any errors pop up, anything like that that they might need to respond to. Because they're the one that's got the context on the code at that point, and that context is decaying. The minute you've just merged a pull request and you're walking away from that code, the next day, you're like, what did I work on? I don't remember that at all. That code doesn't exist anymore in my brain. And so, staying close to that context is incredibly important.
So there's a handful of optimizations that I've looked at in terms of the CircleCI build. I've talked about my not rebuilding when it actually gets fast-forward merged because we've already done that build on the pull request. I'm being somewhat pointed in saying this has to build on a pull request. So if it did just build on a pull request, let's not rebuild it on main because it's identically the same commit. CircleCI, I'm looking at you. Give me a config button for that, please. I would really love that config button.
But there are a couple of other things that I've looked at. There's RSpec::Retry from NoRedInk, which will allow for some retry semantics. Because it will be really frustrating if your build breaks and you fall out of the merge queue. So let's try a little bit of retry logic on there, particularly around feature specs, because that's where this might happen.
There's Knapsack Pro which is a really interesting thing that I've looked at, which does parallelization of your RSpec test suite. But it does it in a different way than say Circle does. It actually runs a build queue, and each test gets sent over, and they have build agents on their side. And it's an interesting approach. I'm intrigued. I think it could use some nice speed-ups. There's esbuild on the Heroku side so that our assets build so much more quickly. There are lots of things. I want to make it very fast. But again, this is on the not allowed to do it list. [laughs]
STEPH: I love how most of the world has a to-do list, and you have this not-allowed to-do list that you're adding items to. And I'm really curious what all is on the not allowed to touch list or not allowed to-do list. [laughs]
CHRIS: I think this might be inherent to being a developer is like when I see a problem, I want to fix it. I want to optimize it. I want to tweak it. I want to make it so that that never happens again. But plenty of things...coming back to German's post of Say No to More Process, some things shouldn't be fixed, or the cost of fixing is so much higher than the cost of just letting it happen again and dealing with it manually at that moment.
And so I think my inherent nature as a developer there's a voice in my head that is like, fix everything that's broken. And I'm like, sorry. Sorry, brain, I do not have that kind of time. And so I have to be really choosy about where the time goes. And this extends to the team as well. We need to be intentional around what we're building. Actually, there's a feeling that I've been feeling more acutely than ever, but it's the idea of this trade-off or optimization between speed and getting features out into the world and laying the right fundamentals. We're still very early on in this project, and I want to make sure we're thinking about things intentionally.
I've been on so many projects where it's many years after it started and when I ask someone, "Hey, why do your background jobs work that way? That's a little weird." And they're like, "Yeah, that was just a thing that happened, and then it never changed. And then, we copied it and duplicated, and that pattern just got reinforced deeply within the app. And at this point, it would cost too much to change." I've seen that thing play out so many times at so many different organizations that I'm overwhelmed with that knowledge in the back of my head. And I'm like, okay, I got to get it just right.
But I can't take the time that is necessary to get it, quote, unquote, "Just right." I do not have that kind of time. I got to ship some features. And this tension is sort of the name of the game. It's the thing I've been doing for my entire career. But now, given the role that I have with a very early-stage startup, I've never felt it more acutely. I've never had to be equally as concerned with both sides of that. Both matter all the more now than they ever have before, and so I'm kind of existing in that space.
STEPH: I really like that phrasing of that space because that deeply resonates with me as well. And that not allowed to-do list I have a similar list. For me, it's just called a wishlist. And so it's a wishlist that I will revisit every so often, but honestly, most things on there don't get done. And then I'll clear it out every so often when I feel it's not likely that I'm going to get to it. And then I'll just start fresh. So I also have a similar this is what I would like to do if I had the time.
And I agree that there's this inclination to automate as well. As soon as we have to do something that felt painful once, then we feel like, oh, we should automate it. And that's a conversation that I often have with myself is at what point is the cost of automation worthwhile versus should we just do this manually until we get to that point? So I love those nuanced conversations around when is the right time to invest further in this, and what is the impact? And what is the cost of it? And what are the trade-offs? And making that decision isn't always clear. And so I think that's why I really enjoy these conversations because it's not a clear rubric as to like, this is when you invest, and this is when you don't.
But I do feel like being a consultant has helped me hone those skills because I am jumping around to different teams, and I'm recognizing they didn't do this thing. Maybe they didn't address this or invest in it, and it's working for them. There are some oddities. Like you said, maybe I'll ask, "Why is this? It seems a little funky. What's the history?" And they'll be like, "Yeah, it was built in a hurry, but it works. And so there hasn't been any churn. We don't have any issues with it, so we have just left it." And that has helped reinforce the idea that just because something could be improved doesn't mean it's worthwhile to improve it.
Circling back to your original quest where you are looking to improve the process for merging and ensuring that CI stays green, I do like that you highlighted the fact that we do need to just be able to override settings. So that's something that has happened recently this week for me and my client work where we have had PRs that didn't have a green build because we have some flaky tests that we are actively working on. But we recognize that they're flaky, and we don't want that to block us. I'm still shipping work. So I really appreciate the consideration where we want to optimize so that everyone has an easy merging experience. We know things are green. It's trustworthy. But then we also have the ability to still say, "No, I am confident that I know what I'm doing here, and I want to merge it anyways, but thank you for the warning."
CHRIS: And the constant pendulum swing of over-correcting in various directions, I've experienced that. And as you said, in the back of my mind, I'm like, oh, I know that for this setting I'm going to need a way to turn it off. So I want to make sure that, most importantly, I'm not the only one on the team who can turn that off because the day that I am away on vacation and the build is broken, and we have a critical bug that we need to fix, somebody else needs to be able to do that. So that's sort of the story in my head.
At the same time, though, I've worked on so many teams where they're like, oh yeah, the build has been broken for seven weeks. We have a ticket in the backlog to fix that. And it's like, no, the build has to not be broken for that long. And so I agree with what you were saying of consulting has so usefully helped me hone where I fall on these various spectrums. But I do worry that I'm just constantly over-correcting in one direction or the other. I'm never actually at an optimum. I am just constantly whatever the most recent thing was, which is really impacting my thinking on this. And I try to not do that, but it's hard.
STEPH: Oh yeah. I'm totally biased towards my most recent experiences, and whatever has caused me the most pain or success recently. I'm definitely skewed in that direction.
CHRIS: Yeah, I definitely have the recency bias, and I try to have a holistic view of all of the things I've seen. There's actually a particular one that I don't want to pat myself on the back for because it's not a good thing. But currently, our test suite, when it runs, there's just a bunch of noise. There's a bunch of other stuff that gets printed out, like a bunch of it. And I'm reminded of a tweet from Kevin Newton, a friend of the show, and I just pulled it up here. "Oh, the lengths I will go to avoid warnings in my terminal, especially in the middle of my green dots. Don't touch my dots." It's a beautiful beauty. He actually has a handful about the green dots. And I feel this feel.
When I run my test suite, I just want a sea of green dots. That's all I want to see. But right now, our test suite is just noise. It's so much noise. And I am very proud of...I feel like this is a growth moment for me where I've been like, you know what? That is not the thing to fix today. We can deal with some noise amongst the green dots for now. Someday, I'm just going to lose it, and I'm going to fix it, and it's going to come back to green dots. [chuckles]
STEPH: That sounds like such a wonderful children's book or Dr. Seuss. Oh, the importance of green dots or, oh, the places green dots will take you.
CHRIS: Don't touch my dots. [laughter]
STEPH: Okay. Maybe a slightly aggressive Dr. Seuss, but I still really like it.
CHRIS: A little more, yeah.
STEPH: On that note of our love of green dots, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris talks feature flags featuring Flipper (Say that 3x fast!), and Steph talks reducing stress by a) having a work shutdown ritual and b) the fact that thoughtbot is experimenting with half-day Fridays. (Fri-yay?)
Transcript:
STEPH: Hey, do you know that we could have an in-person recording at the end of October?
CHRIS: I do. Yes, I'm planning. That is in the back of my head. I guess I hadn't said that to you yet. But I'm glad that we have separately had the same conversation, and we've got to figure that out, although I don't know how to do noise cancellation and whatnot in the room. [laughs] How do we...we'll have to figure it out. Like, put a blanket in between us but so that we can see across it, but it absorbs sound in the middle. It's weird. I don't know how to do stuff. Just thinking out loud here.
STEPH: We'll just be in the same place but still different rooms. So it'll feel no different.
[laughter]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, what's new in your world?
CHRIS: Feature flags. Feature flags are an old favorite, but they have become new again in the application that I'm working on. We had a new feature that we were building out. But we assumed correctly that it would be nice to be able to break it apart into smaller pieces and sort of deliver it incrementally but not necessarily want to expose that to our end users. And so, we opted with that ticket to bring along the feature flag system.
So we've introduced Flipper, in particular, which is a wonderful gem; it does the job. We're using the ActiveRecord adapter. All that kind of makes sense, happy about that. And so now we have feature flags. But it was one of those mindset shifts where the minute we got feature flags, I was like, yes, okay, everything behind a feature flag. And we've been leaning into that more and more, and it really is so nice and so freeing, and I'm absolutely loving it so far.
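For readers who haven't used Flipper, here is a minimal sketch of what wiring it up with the ActiveRecord adapter tends to look like; the initializer path and feature name are illustrative, not from the episode.

    # config/initializers/flipper.rb
    require "flipper"
    require "flipper/adapters/active_record"

    Flipper.configure do |config|
      config.default do
        # Store feature state in the database via the ActiveRecord adapter
        Flipper.new(Flipper::Adapters::ActiveRecord.new)
      end
    end

    # A check then reads roughly like:
    #   Flipper.enabled?(:new_billing_page, current_user)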
STEPH: I'm intrigued. You said, "Everything behind a feature flag." Like, is it really everything or? Yeah, tell me more.
CHRIS: Not everything. But at this point, we're still very early on in this application, so there are fundamental facets of the platform, different areas of what users can do. And so the actual stuff that works and is wired up is pretty minimal, but we want to have a little more surface area built out in the app for demo purposes, for conversations that are happening, et cetera.
And so, we built out a bunch of new pages to represent functionality. And so there are sidebar links, and then the actual page itself, and routing, and all of the things that are associated with that, and so all of those have come in. I think there are five new top-level nav sections of the platform that are all introduced behind a feature flag right now. And then there's some new functionality within existing pages that we've put behind feature flags. So it's not truly every line of code, but it's basically the entry point to all new major features we're putting behind a feature flag.
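As a rough illustration of gating an entry point the way Chris describes, here is a hypothetical controller-level guard; the controller and flag names are made up. The sidebar link would be wrapped in the same Flipper.enabled? check so the nav item only appears when the flag is on.

    # app/controllers/reports_controller.rb
    class ReportsController < ApplicationController
      before_action :require_reporting_flag!

      def index
        # the new page lives here, invisible until the flag is on
      end

      private

      def require_reporting_flag!
        # Pretend the route doesn't exist unless the flag is enabled for this user
        head :not_found unless Flipper.enabled?(:reporting_dashboard, current_user)
      end
    end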
STEPH: Okay, cool. I'm curious. How are you finding that in terms of does it feel manageable? Do you feel like anybody can go into the UI and then turn on feature flags for demos and feel confident that they know what they're turning on and off?
CHRIS: We haven't gotten to that self-serve place. At this point, the dev team is managing the feature flags. So on production, we have an internal group configured within Flipper. So we can say, "Ship this feature for all internal users so that we can do testing." So there is a handful of us that all have accounts on production. And then on staging, we have a couple of representative users that we've been just turning everything on for so that we know via staging we can act as that user and then see the application with all of the bells and whistles.
Down the road, I think we're going to get more intentional with it, particularly the idea of a demo account. That's something that we want to lean into. And for that user, we'll probably be turning on certain subsets of the feature flags. I think we'll get a little more granular in how we think about that. For now, we're not as detailed in it, but I think that is something that we want to expand as we move forward.
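A sketch of the kind of group-based setup Chris is describing, using Flipper's registered groups; the group name, flag name, and demo_user variable are hypothetical.

    # config/initializers/flipper.rb
    # Group blocks receive the actor passed to Flipper.enabled?; actors must respond to flipper_id
    Flipper.register(:internal) do |actor|
      actor.respond_to?(:internal?) && actor.internal?
    end

    # From a console or admin task:
    Flipper.enable_group(:reporting_dashboard, :internal)  # all internal users
    Flipper.enable_actor(:reporting_dashboard, demo_user)  # one representative user on staging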
STEPH: Nice. Yeah, I was curious because feature flags came up in our recent retro with the client team because we've gotten to a point where our feature flags feel complex enough that it's becoming challenging and not just from the complexity of the feature flags but also from the UI perspective. Where it feels challenging for users to understand how to turn a feature on, exactly what that impacts, and making sure that then they're not changing developer-focused feature flags, so those are the feature flags that we're using to ship a change but then not turn it on until we're ready. It is user-facing, but it's something that should be managed more by developers as to when we turn it on or off. So I was curious to hear that's going for you because that's something that we are looking into.
And funnily enough, you asked me recently, "Why aren't y'all using Flipper?" And I didn't have a great answer for you. And that question came up again where we looked at each other, and we're like, okay, we know there was a really good reason we didn't use Flipper when we first had this discussion. But none of us can remember, or at least the people in that conversation couldn't remember. So now we're asking ourselves the question of we've made it this far. Is it time to bring in Flipper or another service? Because we're getting to the point that we're starting to build too much of our own feature flag system.
CHRIS: So did you uncover an answer, or are you all just agreeing that the question makes sense?
STEPH: Agreeing that the question makes sense. [laughs]
CHRIS: That's the first step on a long journey to switching from internal tooling to somebody else managing that for you.
STEPH: Yeah, because none of us could remember exactly. But it was funny because I was like, am I just forgetting something here when you asked me that? So I felt validated that others were like, "Oh yeah, I remember that conversation. But I too can't recall why we didn't want to use Flipper in the moment or a similar service."
CHRIS: I'll definitely be interested to hear if you do end up trying to migrate off to another system or find a different approach there or if you do stick with the current configuration that you have. Because those projects they're the sort of sneaky ones that it's like, oh, we've been actually relying on this for a while. It's a core part of our infrastructure, and how we do the work, and the process, and how we deploy. That's a lot. And so, to switch that out in-flight becomes really difficult. It's one of those things where the longer it goes on, the harder it is to make that change. But at some point, you sometimes make the decision to make it. So I will be very interested to hear if you do make that decision and then, if so, what that changeover process looks like.
STEPH: Yeah, totally. I'll be sure to keep you up to date as we make any progress or decisions around feature flags.
CHRIS: But yeah, your questions around management and communication of it that is a thing that's in the back of my mind. We're still early enough in our usage of it, and just broadly, how we're working, we haven't really felt that pain yet, but I expect it's coming very soon. And in particular, we have functionality now that is merged and is part of the codebase but isn't fully deployed or fully released rather. That's probably the correct word. We have not fully released this functionality, and we don't have a system right now for tracking that.
So I'm thinking right now we're using Trello for product management. I'm thinking we want another column that is not entirely done but is tracking the feature flags that are currently in flight and just use that as a place to gather communication. Do we feel like this is ready? Let's dial this up to 50%, or let's enable it for this beta group or whatever it is to sort of be able to communicate that. And then ideally, also as a way to track these are the ones that are active right now. You know what? We feel like this one's ready. So do the code change so that we no longer use the feature flag, and then we can actually turn it off. Currently, I feel like I can defer that for a little while, but it is something that's in the back of my mind.
And then, of course, I nerd sniped myself, and I was like, all right, how do I grep the codebase for all the feature flags that we're using? Okay. There are a couple of different patterns as to how we're using…You know what? I think I actually need an AST-based parser here, and I need to use the Visitor...You know what? Never mind. Stop it. Stop it. [laughs] It was one of those where I was like...I was doing this not during actual work hours. It was just a question in my mind, and then I started to poke at it. I was like, oh, this could be fun. And then I was like, no, no, no, stop it. You need to go read a book or something. Calm down.
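For anyone curious, the low-tech version of that search (before reaching for an AST) can be a plain text scan; a hypothetical Ruby one-off, assuming flag checks all go through Flipper.enabled?:

    # Print every line in app/ that checks a feature flag
    Dir.glob("app/**/*.{rb,erb}").each do |path|
      File.foreach(path).with_index(1) do |line, number|
        puts "#{path}:#{number}: #{line.strip}" if line.include?("Flipper.enabled?")
      end
    end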
STEPH: As part of the optimization around our feature flag system that we've created, we've added a few enhancements, which I think is also one of the reasons we're starting to question how far we want to go in this direction. One of them is we want a very easy way to track what's turned on and what's turned off for an environment. So we have a task that will easily check, or it prints out a really nice list of these are all your flags, and this is the state that they're in. And by using the system that we have, we have one file that represents...well, you mentioned migration because we're migrating from the old system to this new one. So it's still a little bit in that space of where we haven't fully moved over. So now, moving over to a third thing like Flipper will be even more interesting because of that.
But the current system, we have a file that lists all the feature flags and a really nice description that goes with it, which I know is supported by Flipper and other services as well. But having that one file does make it nice where you can just scan through there and see what's in use. I really think it's the UI and the challenges that the users are facing and understanding what a feature flag does, and which ones they should turn off, and which ones they shouldn't touch that that's the point where we started questioning okay, we need to improve the UI.
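Steph's homegrown task isn't shown here, but a comparable "what's turned on?" task written against Flipper might look roughly like this; the task name and formatting are illustrative.

    # lib/tasks/feature_flags.rake
    namespace :feature_flags do
      desc "Print each known feature and its current state"
      task status: :environment do
        Flipper.features.sort_by(&:name).each do |feature|
          # state is :on, :off, or :conditional (e.g., on only for a group or actor)
          puts format("%-40s %s", feature.name, feature.state)
        end
      end
    end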
But to improve the UI, do we really want to fully embrace our current system and make those improvements, or is now the time that we should consider moving to something else? Because Flipper already has a really nice UI. I think there is a free tier and a paid tier with Flipper, and the paid tier ships with a UI.
CHRIS: There's definitely a distinct thing, Flipper Cloud, which is their hosted enterprise-y solution, and that's the paid offering. But with Flipper, just the core gem, there's also Flipper web, I want to say is what it is, or Flipper UI. And I think it's an engine that you mount within your Rails app and that displays a UI so that you can manage things, add groups and teams. So we're definitely using that. I've got my eye on Flipper Cloud, but I have some fundamental questions around it; I like to keep my data in the system, and so this is an external other thing. And what's the synchronization? I haven't really even looked into it like that. But I love that Flipper exists within our application.
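For reference, the engine mount Chris is describing is roughly a one-liner in the router; in practice you'd put it behind authentication, and the path here is arbitrary.

    # config/routes.rb
    require "flipper/ui"

    Rails.application.routes.draw do
      mount Flipper::UI.app(Flipper) => "/flipper"
    end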
One of the niceties that Flipper Cloud does have is an audit history, which I think is interesting just to understand over time who changed what for what reasons? It's got the ability to roll back and maintain versions and whatnot. So there are some things in it that definitely look very interesting to me. But for now, the open-source, free version of Flipper plus Flipper UI has been plenty for us.
STEPH: That's cool. I didn't know about the audit feature.
CHRIS: Yeah. It definitely feels like one of those niceties to have for a more enterprise offering. So I could see myself talking me into it at some point but not quite yet. On that note though, so feature flags we introduced a week and a half, something like that, ago, and we've been leaning into them more and more. But as part of that, or in the back of my mind, I've wanted to go to continuous deployment.
So we had our first official retro this week. The project is growing up. We're becoming a lot of things. We used retro to talk about continuous deployment, all of these things that feel very real. Just to highlight it, retro is super important. And the fact that we haven't had one until now is mainly because up till now, it’s been primarily myself and another developer. So we've been having essentially one-on-ones but not a more formal retro that involves others.
At this point, we now have myself and two other developers that are working on the project, as well as someone who's stepped into the role of product manager. So we now have communication collaboration. How are we doing the work? How are we shipping features and communicating about bugs and all of that? So now felt like the right time to start having that more formal process. So now, every two weeks, we're going to have a retro, and hopefully, through that, retro will do the magic that retro can do at its best which is help us get better at all the things that we're doing.
But yeah, one of the core things in this particular one was talking about moving to continuous deployment. And so I am super excited to get there because I think, much like test-driven development, it's one of those situations where continuous deployment puts a lot of pressure on the development process. Everything that is being merged needs to be ready to go out into production. And honestly, I love that as a constraint because that will change how you build things. It means that you need to be a little more cautious. You can put something behind a feature flag to protect it. You decouple the idea of merging and deploying from releasing. And I like that distinction.
I think that's a really meaningful distinction because it makes you think about what's the entry point to this feature within the codebase? And it's, I think, actually really nice to have fewer and more intentional entry points into various bits of functionality such that if you actually want to shut it off in production, you can do that. That's more straightforward. I think it encourages an intentional coupling, maybe not a perfect decoupling but an intentional coupling within the system.
So I'm very excited to explore it. I think feature flags are going to be critical for it, and I think also observability, and monitoring, and logging, and all those things. We need to get really good at them so that if anything does go wrong when we just merge and deploy, we want to know if anything goes wrong as quickly as possible. But overall, I'm super excited about all of the other niceties that fall out of it.
STEPH: [singing] I wanna know what's turned on, and I want you to show me. Is that the song you're singing to Flipper? [laughs]
CHRIS: [laughs]
STEPH: Sorry, friends. I just had to go there.
CHRIS: That was just in your head. You had that, and you needed to get it out. I appreciate it. [laughter] Again, I got Flipper UI, so that's not the question I'm asking. I think that's the question you have in your heart.
STEPH: [laughs]
Mid-roll Ad
And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API.
With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city.
The tech stack of the main Orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby.
STEPH: That's funny about continuous deployment adding pressure to the development process because you're absolutely right. But I see it as such a positive improvement that I don't really think about the pressure that it's adding. And I just think, yes, this is awesome, and I want this to happen, even if there are steps that we have to take in that direction.
It dawned on me that what you said is very true, but I've just never really thought about it from that perspective about the pressure. Because I think the thing that does add more pressure for me is figuring out what can I deploy, or do I need to cherry-pick commits? What does that look like? And going through that whole cycle and stress is more stressful to me than figuring out how do we get to continuous deployments and making sure that everything is in a safe space to be deployed?
CHRIS: That's the dream. I'm going to see if I can live it. I'll let you know how it goes. But yeah, that's a bit of what's up in my world. What else is going on in your world other than some lovely singing?
STEPH: Oh, there's always lots of singing. It's been an interesting week. It's been a mix of some hiring work. Specifically, we are helping our client team build their development team. So we have been helping them implement a hiring process. And then also going through technical interviews and then going through different stages of that interview process. And that's been really nice. I haven't done that specifically for a client team where I helped them build a hiring pipeline from scratch and then also conduct those interviews.
And one thing that stood out to me is that rotations are really important to me and specifically that we don't ask for volunteers. So as we were having candidates come through and then they were ready to schedule an interview, then we are reaching out to the rest of the development team and saying, "Hey, we have this person. They're going to be scheduled at this time. Who's available? Who's interested? I'm looking for volunteers." And that puts pressure on people, especially someone that may be more empathetic to feel the need to volunteer. So then you can end up having more people volunteer than others.
So we've established a rotation to make sure that doesn't happen, and people are assigned as it becomes their next turn to conduct an interview. So that's been a lot of fun to refine that process and essentially make it easier. So the rest of the development team doesn't have to think about the hiring. But it still has an easy way of just saying, "Hey," and tapping someone to say, "Hey, it's your turn to run an interview."
The other thing I've been working on is figuring out how to measure an experiment. So we at thoughtbot are running an experiment where we're looking to address some of the concerns around sustainability and people feeling burned out. And so we have introduced half-day Fridays, more specifically 3.5-hour Fridays, as our half-day Fridays, just to help everybody be certain about what a half-day looks like.
And then also, you can choose your half-day. Everybody works different schedules. We're across different time zones, so just to make sure it's really clear for folks and that they understand that they don't need to work more than those hours, and then they should have that additional downtime. And that's been amazing. This is the second Friday of the experiment, and we're doing this for nine Fridays straight.
And one of the questions that came up was, well, how do we know we did a good thing? How do we know that we helped people in terms of sustainability or addressing some of the feelings that they're having around burnout? And so I've collaborated with a couple of other thoughtboters to think through a way to measure it. It turns out helping someone measure their wellness is incredibly complex. And so we went for a fairly simple approach where we're using an anonymous survey with a number of questions.
And those questions aren't really meant to stand up to scientific scrutiny but more to figure out how the team is feeling at the time that they fill out the survey and then also to understand how the reduced weekly hours have impacted their schedule. And are people working extra hours to then accommodate the fact that we now have these half Fridays? So do you feel pressured that because you can't work a full day on Friday that you are now working an extra hour or two Monday through Thursday to accommodate that time off? So that survey just went out today.
And one of the really interesting parts (I just haven't had to create content for a survey in a while.) was making sure that I'm not introducing leading questions or phrasing things in a very positive or negative light since that is a bias that then people will pick up on. So instead of saying, "I find it easy to focus at work," and then having like a multiple choice of true, always, never, that kind of thing, instead rephrasing the question to be, "Are you able to focus during work hours?" And then you have a scale there.
Or instead of asking someone how much energy they have, maybe it's something like, "Do you experience fatigue during the day?" Or instead of asking someone, "Are you stressed at work?" because that can have a more negative connotation. It may lead someone to feel more negatively as they are assessing that question. Then you can say, "How do you feel when you're at work?" And then you can provide those answers of I'm stressed, slightly stressed, neutral, slightly relaxed, and relaxed.
So it generated some interesting conversations around the importance of how we phrase questions and how we collect feedback. And I really enjoyed that process, and I'm really looking forward to seeing what folks have to say. And we're going to have three surveys total. So we have one that's early on in the experiment since we're only two Fridays in. We'll have one middle experiment survey go out, and then we'll have one at the end once we're done. And then hopefully, everybody's responses will then help us understand how the experiment went and then make a decision going forward.
I'll be honest; I’m really hoping that this becomes a trend and something that we stick with. It is a professional goal of mine to slowly reduce the hours that I work each week or quickly; it doesn't have to be slowly. But I really like the four-day workweek. It's something that I haven't done, but I've been reading about it a fair amount lately. I feel like I've been seeing more studies conducted recently becoming published, and it's just very interesting to me.
I had some similar concerns of how am I still going to be productive? My to-do list hasn't changed, but my hours are changing. So how am I still going to get everything done? And does it make sense for me to still get paid the same amount of money if I'm only working four out of the five days? And I had lots of questions around that, and the studies have been very enlightening and very positive in the outcome of a reduced workweek, not just for the individuals but for the companies as well.
CHRIS: It's such an interesting space and exploration. The way that you're framing the survey sounds really great. It sounds like you're trying to be really intentional around the questions that you're asking and not being leading and whatnot. That said, it is one of the historically hard problems trying to quantify this and trying to actually boil it down.
And there are so many different axes even that you're measuring on. Is it just increased employee happiness? Is it retention that you're talking about? Is it overall revenue? There are so many different things, and it's very tricky. I'm super interested to hear the results when you get those. So you're doing what sounds like more of a qualitative study like, how are you feeling? As opposed to a more quantitative sort of thing, is that right?
STEPH: Yes, it's more in the realm of how are you feeling? And are you working extra hours, or are you truly taking the time off?
CHRIS: Yeah, I think it's really hard to take something like this and try and get it into the quantitative space, even though like, oh yeah, if we could have a number, if it used to be two and now it's four, fantastic. We've doubled whatever that measure is. I don't know what the unit would be on this arbitrary number I made up. But again, that's the hard thing and probably not feasible at all. And so it makes sense the approach that you're taking. But it's super difficult. So I'm very interested to hear how that goes.
More generally, the four-day workweek thing is such a nice idea. We should do that more. I'm trying to think how long I did that. So during the period that I was working freelance, I think there were probably at least five months where I did just a true four-day workweek. Fridays were my own. It was fantastic. Granted, I recorded the podcast with you. But that day was mine to shape as I wanted.
And I found it was a really nice decompression period having that for a number of weeks in a row. And just getting to take care of personal stuff that I hadn't been and just having that extra little bit of space and time. And it really was wonderful. Now I'm working full five days a week, and my Fridays aren't even investment days, so I don't know what I'm doing over here.
But I agree. I really like that idea, and I think it's a wonderful thing. And it's, I don't know, sort of the promise of this whole capitalism adventure we're supposed to go on, increasing productivity. And wasn't this the promise the whole time, everybody, so I am intrigued to see it being explored more, to see it being discussed. And what you're talking about of it's not just good for the employees, but it's also great for the companies. You're getting people that are more engaged on the days that they're working, which feels very true to me.
Like, on a great day, I can do some amazing work. On a terrible day, I can do mediocre to bad work. It is totally possible for me to do something that is actively detrimental. Like, I introduce a bug that is going to impact a bunch of customers. And the remediation of that is going to take many more hours. That is totally a realistic thing. I think we often think of productivity in terms of are you at zero or some amount more than zero? But there is definitely another side of that. And so the cost of being not at your best is extremely high in my mind. And so anything we can do to improve that.
STEPH: There's a recent study from a non-profit company called Autonomy that published some research called Going Public: Iceland's Journey to a Shorter Working Week. It's very interesting. And a number of people in my social circle have shared it. And that's one of the reasons that I came across it. And they commented in there that one of the reasons...I hope I'm getting this right, but we'll link to it in case I've gotten it a bit wrong.
But one of the reasons that Iceland was interested or open to this idea of moving workers to a shorter workweek is because they were struggling with productivity and where people were working a lot of hours, but it still felt like their productivity was dropping. So then Autonomy ran this study to help figure out are there ways to improve productivity? Will shortening a workweek actually lead to higher productivity?
And there was a statement in there that I really liked where it talks about how the more hours we work, the more we're actually lowering our per-hour productivity, which rings so true for me. Because I am one of those individuals where I'm very stubborn, and so if I'm stuck on something, I will put so many hours into trying to figure it out. But at some point, I have to just walk away, and if I do, I will solve it that much faster. But if I just try to use hours as my way to chip away at a problem, then that's not going to solve it. And my ability to solve that problem takes exponentially more time than if I had just walked away and then come back to the problem fresh and engaged.
In some of the case studies, I admired the way that they tackled the problem. They would essentially pay the company. So the company could reduce the hours for certain employees so then they could run the experiment. So if they reduced employees to, say, 32 hours, but the company didn't actually want to stop working at 32 hours and they wanted to keep going, then they brought in other people to work the remaining eight hours. Then as part of that study, they would pay the company to help them stay at their current level of productivity or current level of hours. This way, they could conduct the study. And I thought that was a really neat idea.
I do have lots of questions still around the approach itself because it is how do you reduce your to-do list, essentially? So just because you dropped to a four-day workweek. So essentially, you have to just say less stuff gets done. Or, as these case studies promise, they're saying you're actually going to be more productive. So you will still continue to get a lot of your work done. I'm curious about that. I'd like to track my own productivity and see if I feel similarly.
And then also, who is this for? Is this for everybody? Does everybody get to move to a four-day workweek? Is this for certain companies? Is it for certain jobs? Ideally, this is for everybody because there are so many health benefits to this, but I'm just intrigued as to who this is for, who it impacts, how can we make it available for everyone? And is the dream real that I can work four days a week and still feel as productive, if not more productive, and healthier, and happier as I do when working five days a week?
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: I remember there was an extended period where working remote was this unique benefit that some organizations had. They had adopted that mode. They were async, and remote, and all of these wonderful things. And it became this really interesting selling point for those companies. Now the pandemic obviously pushed public opinion and everything on that in a pretty significant way such that it's a much more common thing. And so, as a result, I think it's less of a differentiator now. It used to be a way to help with recruiting.
I wonder if there are organizations that are willing to take this, try it out, see that they are still close to as productive. But if it means that hiring is twice as easy, that is absolutely...especially if it is able to double your ability to hire, that is incredibly valuable or retention similarly. If you can increase retention or if you can make it easier to hire, the value of that is so, so high.
And it's interesting in my mind because there's sort of a gold rush on that. That's only true for as long as a four-day workweek is a unique benefit of working at the organization. If this is actually the direction that everything's going and eventually everyone's going to settle to that, then if you wait too long to get there, then you're going to miss all the benefits. You're going to miss that particular benefit of it.
And so I do wonder, would it be advantageous to organizations...I'm thinking about this now. Maybe this is the thing I have to do. But would it be advantageous to be that organization as early on as possible and try to get ahead of the curve and use that to hire more easily, retain more easily? Now that I say it all out loud, I'm sold. All right. I got to do this.
STEPH: Yeah, I think that's a great comparison of where people are going to start to look for those types of benefits. And so, if you are one of the early adopters and you have the four-day workweek or a reduced workweek in general, then people will gravitate towards that benefit. And it's something that people can use to really help with hiring and retention. And yeah, I love it.
You are CTO. So you have influence within your company that you could push for the four-day workweek if you think that's what you want to do. And I would be really intrigued to hear how that goes and how you feel if you...well, you've done it before where you've worked four days a week. So applying that to your current situation, how does that feel?
CHRIS: Now you're actually holding me accountable to the things that I randomly said in passing. But it's interesting. So we're so early stage, and there's so much small work to do. There's all…oh, got to set up a website. We've got to do this. We've got to build that integration. There's just kind of scrambling to be done.
And so there's a certain version in my mind that maybe we're in a period of time where additional hours are actually useful. There's a cost to them. Let's be clear about that. And so how long that will remain true, I'm not sure. I could see a point perhaps down the road where we achieve a little bit closer to steady-state maybe, who knows? It depends on how fast growth is and et cetera, a lot of other things.
So I'm not sure that I would actually lead with this experiment myself, given where the organization is at right now. But I could see an organization that's at a little bit more of a steady-state, that's growing more incrementally, that is trying to think really hard about things like hiring and retention. If those were bigger questions in my mind, then I think I would be considering this more pointedly. But for now, I'm like, I kind of just got to do a bunch of stuff. And so my brain is telling me a different story, but it is interesting. I want to interrogate that and be like, brain, why is that the story you have there, huh? Huh?
STEPH: I really appreciate what you're saying, though, because that makes sense to me. I understand when you are in that earlier stage, there's enough to do that that feels correct. Versus that added benefit of having a reduced workweek does benefit or could benefit larger companies who are looking to hire more heavily, or they're also concerned about retention or just helping their people address feelings of burnout. So I really appreciate that perspective because that also rings true.
So along this whole conversation around wellness and how we can help people work more sustainable hours, there's a particular book that I've read that I've been really excited to share and chat with you about. It's called Burnout: The Secret to Unlocking the Stress Cycle. It's written by two sisters, Emily and Amelia Nagoski. And they really talk through the impact that stress has on us and then ways to work through that.
And specifically, they talk about completing the stress cycle. And I found this incredibly useful for me because I have had weeks where I have just worked hard Monday through Friday. I've gotten to the end of my day Friday, and I'm like, great, I'm done. I've made it. I can just relax. And I walk away from work, and I can't relax. And I'm just like, I feel sick. I feel not good.
Like, I thought I would walk away from work, and I would just suddenly feel this halo of relaxation, and everything would be wonderful. But instead, I just feel a bit ill, and I've never understood that until I was reading their book about completing the stress cycle. Have you ever had moments like that?
CHRIS: It has definitely happened to me at various points, yes.
STEPH: That makes me feel better because I haven't really chatted about this with someone. So until I read this book and I was like, oh, maybe this is a thing, and it's not just me, and this is something that people are experiencing. So to speak more about completing the stress cycle, they really highlight that stress and feelings, capital F feelings, can cause physiological symptoms. And so it's not just something that we are mentally processing, but we are physically processing the stress that we feel.
And there's a really big difference between stressors and stress. So a stressor could be something like an unmeetable deadline. It could be family. It could be money concerns. It could be your morning commute, anything that increases your stress level. And during that, there's a very physical process that happens to your body anytime there's a perceived threat. And it's really helpful to us because it's frankly what triggers our fight, flight, or freeze response. And our bodies receive a rush of adrenaline and cortisol, which essentially, if we're using that flight response, that's going to help us run. And a number of the processes in our system will essentially go into a state of hibernation because everything in our body is very focused on helping us run or do the thing that we think is going to save our life in that moment.
The problem is our body doesn't know the difference between what's more of a mental threat versus what is a truly physical threat. So this is the difference between your stress and your stressors. So in more of a physical threat, if there's a lion that you are running from, that is the stressor, but then the stress is everything that you still feel after you have run from that lion.
So you encounter a lion, you run. You make it back to your group of people where you are safe, and you celebrate, and you dance, and you hug. And that is completing the stress cycle because you are essentially processing all of that stress. And you are telling your body in a body-focused language that I am safe now, and everything is fine. So you can move back, and anything that was in a hibernation state, all of that dump of adrenaline and cortisol can be worked out of your system, and everything can go back to a normal state.
Most of us aren't encountering lions, but we do encounter jerks in meetings or really stressful commutes. And whenever we have survived that meeting, or we've gotten through our commute to the other side, we don't have that moment of celebration where we really let our body know that hey, we've made it through that moment of stress, and we are away from that stressor, and we can actually process everything.
So if you're interested in this, the book's really great. It talks about ways that you can process that stress and how important it is to do so. Otherwise, it will literally build up in your system, and it can make you sick. And it will manifest in ways that will let us know that we haven't dealt with that stress.
And one of the top methods that they recommend is exercise and movement. That's a really great way to let your body know that you are no longer in an unsafe state, and your body can start to relax. There's also a lot of other great ways. Art is a really big one. It could be hugging someone. It could be calling someone that you love. There are a number of ways that you can process it. But I hadn't recognized how important it is that once you have removed yourself from a stressor, that doesn't necessarily just mean you're done, and you can relax. You actually have to go through that physical process, and then you can relax.
So I started incorporating that more into my day that when I'm done with work, I always find something to do, and it's typically to go for a walk, or it's go for a run. And I have found that now I really haven't felt that ill-feeling where I'm trying to relax, but I just feel sick. Saying that out loud, I feel like I'm a mess on Fridays. [chuckles]
CHRIS: I feel like you're human. It was interesting when you asked the question at the beginning. You were like, "Is this a thing that other people experience?" And my answer was certainly, yes; I have experienced this. I think there's something about me that I think is useful where I don't think I'm special at all on any axis whatsoever. And so whenever there's something that's going on, I'm like, I assume that this is just normal human behavior, which is useful because most of the time it is.
And this is the sort of thing where if I'm having a negative experience, I will look to the external world to be like, I'm sure other people have experienced this, and let me pull that in. And I've found that really useful for myself to just be like, I’m not special. There's nothing particularly special about me. So let me go look from the entirety of the internet where people have almost certainly talked about this. And I've not read the book that you're describing here, but it does sound like it does a great job of describing this.
There is a blog post that I found that has stayed in the back of my mind and informed a little bit of my day-to-day approach to this sort of thing which is a blog post by Cal Newport, who I think at this point we've mentioned him a handful of times on the show. But the title of the post is Drastically Reduce Stress with a Work Shutdown Ritual. And it's this very interesting little post where he talks about at the end of your day; you want to close the book on it. I think this is especially pointed now that many of us are working from home. For me, this is a new thing. And so, I've been very intentional with trying to put walks at the beginning and end of my day.
But in this particular blog post, he describes a routine that he does where he tidies things up and makes his list for the next day. And then he has a particular phrase that he says, which is "schedule shut down, complete." And it's a sort of nonsense phrase. It doesn't even quite make sense grammatically, but it's his phrase that he internalized, and somehow this became his almost mantra for the end of the day.
And now when he does it, that's like his all right, okay, turned off the brain, and now I can walk away. I know that I've said the phrase, and I only say the phrase when I have properly set things up. And so it's this weird structure that he's built in his mind. But it totally works to quiet those voices that are like, yeah, but what about…Do we think about…Do we complete…And he's got now this magic phrase that he can say. And so I've really loved that.
For myself, I haven't gotten quite to that level, but I've definitely built the here's how I wind down at the end of the day. Here's what I do with lists and what I do so that I can ideally walk away comfortably. Again, this is one of those situations where I sound like I know what I'm doing or have my act together. This is aspirational me.
Day-to-day me is a hot mess like everybody else. [laughs] And this is just what I...when I do this, I feel better. Most of the time, I don't do this because I forget it, or because I'm busy, or because I'm stressed, [chuckles], and so I don't do the thing that reduces stress, you know, human stuff. But I really enjoyed that post.
STEPH: I haven't heard that one. I like a lot of Cal Newport's work, but I haven't read that particular blog post. Yeah, I think the idea of completing the stress cycle has helped me tremendously because by giving it a name like completing the stress cycle has been really helpful for me because working out is important to me. It's something that I enjoy, but it's also one of those things that's easy to get bumped. It is part of my wellness routine. And so, if I'm really busy, then I will bump it from the list. And then it's something that then doesn't get addressed.
But recognizing that this is also important to my productivity, not to just this general idea of wellness, has really helped me recenter how important this is and to make sure that I recognize hey, it's been a stressful day. I need to get up and move. That is a very important part of my day. It is not just part of an exercise routine, but this is something that I need to do to close out my day to then make sure I have a great day tomorrow.
So bringing it back, it's been a week that's been filled with a lot of discussions around burnout and then ways that we can measure it and then also address it. And I've really enjoyed reading this book. So I'll be sure to drop a link in the show notes. On that note, shall we wrap up?
CHRIS: Schedule shutdown, complete. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeee!!!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Steph talks about a new GitHub feature and Twitter account (@RubyCards) she's really excited about and Chris talks about his new job as a CTO of a startup and shifting away from writing code regularly.
Transcript:
CHRIS: Oh God, my computer is so stupid slow. I need a new computer.
STEPH: Come on, little computer, you can do it. You know you could just buy a new one. You don't have to wait for the fancy-schmancy M1.
CHRIS: I want to wait for the fancy. I want it so bad.
STEPH: [laughs]
CHRIS: Do you know how long I've had this computer? And if I can hold out one more month, I want the fancy stuff. I've waited this long. Why would I give in now when I'm right on the cusp of victory?
STEPH: One more month. I'm going to send you...as a kid, did you ever make those construction…
CHRIS: Oh yeah.
STEPH: They look like chain links bow construction paper. So we would make those for a countdown to special days. I'm going to send you one that's all crumpled and folded in the mail. It would be delightful. And you'll be able to snip off a little chain each day as your countdown to your new fancy-schmancy. [laughs]
CHRIS: I love it.
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new in your world?
STEPH: Hey. Well, I just got back from vacation. So getting back to work is what's new in my world. And vacation is nice. I miss it already. But it's also nice to be back, and see everybody, and see what they've been up to.
CHRIS: I've heard wonderful things about vacation.
STEPH: Yeah. Have you had one recently? I know you've been quite busy.
CHRIS: I have. I think it's hard to tell, especially because everything just kind of blends together these days. But I think I took off a few days recently. I haven't had an extended vacation since much earlier on in the summer, I think. And so I think I'm due for one of those sometime in the not too distant future. But it's one of those things where you got to plan it. And you got to think ahead, and I haven't been doing that of late really with anything. So kind of living for the moment, but that's not how you take a vacation. So I got to rethink some strategies here. [chuckles]
STEPH: Yeah, I've been trying to schedule more vacation time just further out. Because then if I don't want to take it, like if I decide that I don't want the staycation or I don't need the day off, then I can just change my mind, and that's pretty easy to do. But I'm like you; if I don't plan it, then I don't feel like I have the energy to plan a vacation, and then it just doesn't happen. So I know that's one thing that I've been doing.
I've also been mentoring or coaching others, just checking in with them to say, "Hey, when's your next vacation? Have you scheduled any days off? Do you want to schedule a day off next month?" And saying that to other people has also been a very helpful reminder to me to do so.
CHRIS: Oh, I like that a lot as a recurring one-on-one question of, so what can you tell me about vacation? What do you got in the works there? Because that's the most important thing, [chuckles] which it kind of is. It's the way that we keep doing the work that we do.
STEPH: And I think so many people just haven't been taking a vacation. I mean, in 2020, we were all locked in and going through a pandemic, so then a lot of people weren't taking those breaks. And so part of it is just reminding people that even if you can't go somewhere, still please take some downtime and just know that you can step away from work and should step away from work.
But for us, we did go somewhere. So we went out to Seattle, which I've never...I've been out to the West Coast, but it's more like I've been out to L.A., Santa Monica. But this time, we went to the Northwest region. We went to Seattle, and we explored and did a lot of hiking and camping around the Northern Cascades and then Mount Rainier. And both of those are amazing. And I've never flown with camping gear, but that went really well. It worked out nice. We had an Airbnb every so often just for showers and having a roof over your head. That's really nice. But for most of the trip, we did a lot of camping and hiking.
CHRIS: That sounds like an awesome trip.
STEPH: Yeah, it was really cool. I'd love to go back to the Olympic National Park because there are just so many national parks that are around Seattle and in Washington that we couldn't begin to do it all. But Olympic National Park is still on my list. And I'm really grateful to have also seen the Northern Cascades and Mount Rainier.
But switching gears a bit, I have something that I'm really excited to share with you because I don't think you've seen it yet. I'm excited to find out if you have. But it's a new GitHub feature that came out, I think about a month ago, but there doesn't seem to have been much fanfare from GitHub about announcing this new feature. And I happened to find out through Twitter because someone else found it, and then they were really excited. And so now I think it's really gaining some more traction. But it still seems like one of those sneaky feature releases, but it's really cool.
So GitHub has added the ability to open up a web-based editor that allows you to view the source code for a repo, view it in syntax, highlighting, make a code change, and commit the change. And it's free for everybody. And there's a couple of ways to get there, but I'll pause there. Have you seen this yet? Have you interacted with it?
CHRIS: I think I've seen it and poked around ever so gently with it. I want to say this is GitHub Codespaces. Is that the name of this feature?
STEPH: Yep. That's it?
CHRIS: Yes. I poked around with it just a tiny bit, and I'm very excited about it. But it's very much in the like, huh, okay, cool; I’ll look at that someday down the road and figure out what I want to do with it. But have you actually dug into it particularly deeply?
STEPH: I used it to make a change for a personal project, just because I wanted to see the whole flow. So I went to a personal project, and there are two ways that you can open it up for anyone that hasn't seen this yet. So you can either press the period button that's on your keyboard, and that will open it up, or you can just alter the URL. So instead of github.com, replace that .com with .dev, and then that will also open up the editor in the browser.
And so I made a change to a personal project, and it worked really well, and it commits the change to main. And it was nice. It was easy. In my case, I was just making a change to make a change. I think I actually went to an older project where I was still using the underscore target to force links to open in a new tab when users clicked them, and I was like, perfect. This is a good thing to just change. And I could do it from my iPad. I didn't have to be at my computer. And it was really nifty. I was very impressed with it.
And they also mentioned that it's very easy to integrate your own VS Code settings and environment. I'm not a heavy VS Code user, so I haven't tried that. But I've heard really positive things about how easy it is to sync your settings between your local VS Code and then GitHub's editor. But overall, it was really easy to use.
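To make the two entry points concrete, using a well-known public repo as an example: press the period key while viewing a repository on github.com, or swap the domain in the URL.

    https://github.com/rails/rails  ->  https://github.dev/rails/rails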
CHRIS: That's super cool. My very limited understanding of it is like GitHub has had the ability to edit files and things like that for a while. But it was very much like a simple web editor where it's a big text box that happens to contain the code. And they've added some stuff for like browsing with syntax highlighting and even some context-aware show usage and things like that. But as far as I understand it, this is like a whole VS Code instance in the cloud that is running it.
And then I think what you're saying about you can have your VS Code settings in there, but even your project settings and the ability to run the tests, I'm not sure where the edges of it are. But my understanding with Codespaces it's like this is how your team can develop. Everyone gets one of these Codespaces. You're developing in the cloud. But it does VS Code remote sync type stuff. I'm very intrigued to see where it goes and that idea of...obviously, I like Vim. That's the thing that's probably known and true about me. So I will probably be one of the later adopters of this.
But the idea of being able to bottle up the development environment for your projects and have those settings, and the ability to run the test and all of that packaged up as part of the repository, and then allow people to run with that, especially in the cloud, and be able to carry that with them as they move around, that's really intriguing. And the idea of having this very easy on-ramp, especially for open-source projects and things like that. If you want people to be able to contribute easily but with the linting, and the configuration, and the settings, and all the stuff, well, now you can have that packaged up. And that is very interesting to me.
So I'm super intrigued to see where it goes. Again, I will probably be one of the later adopters of this platform for reasons. But I am super interested, and I continue to like...the work with VS Code is so interesting in the way it keeps expanding out and the language server stuff and now the Codespaces stuff. And it's super interesting developments across the board.
STEPH: Yeah, I'm with you. I don't actually see this replacing my current development that I do day-to-day, but it's more generally nice to have access. So if I needed to make a change and I don't have my laptop or if it's just something small and I don't want to have to go through…I guess essentially, if I don't have my laptop, but I wanted to make a change, then I could do this realistically from something that doesn't have my full local dev setup.
I don't know if you have the ability to run tests. I didn't explore far enough to see whether you can actually have access to run those types of commands or processes. I did see some additional notes while reading through GitHub's documentation about this new editor. And they included some notes that talk about how the editor runs entirely in your browser's sandbox. So it doesn't actually clone the repo, but instead, it loads your code by invoking the services' APIs directly from the browser. So then your work is saved in the browser's local storage until you commit it, and then you can persist your changes by committing them back to the repo.
And because there's no associated compute, you won't be able to build and run your code or use the integrated terminal. Ah, I think that actually answers the question about running tests. So only a subset of extensions that can run in the web will appear in the extensions panel and can be installed. So this does impose certain limitations for particular programming languages and for full functionality, including things that we may need, like running tests.
CHRIS: Interesting. That now puts it back more on the uncanny valley for me where it's like, oh, it's just VS Code, except it can't do a bunch of the stuff. So yeah, I'll probably be hanging out in Vim for a while. But again, I'm super interested to see where they can push this and what the browser platform allows, and then how they're able to leverage that and so on and so forth.
STEPH: There is one flow that I was testing out because I was reading someone else mentioned that not only can you use this for looking at source code and then changing that source code but also for a pull request. And so I went to a pull request and changed the URL to dev. And I do have the ability to make changes, but I'm not quite sure if I could commit my changes and if that would go to the branch or how that would work. It wasn't obvious to me how I could save my changes. But it was obvious to me that I could make changes. [laughs] So that part feels weird to me, and I will have to test that out. But I'm going to wait until I have my own PR before I start fooling around [laughs] so I don't ruin somebody else's PR.
CHRIS: Ideally, the worst case is you just push a commit to a branch, and commits are reversible. You can throw them away. You can reset, and you can do all sorts of stuff. But I agree with you that maybe I'll do this on my home turf first before I start messing around with somebody's PR.
STEPH: That way, someone doesn't reach out to me and say, "Steph, what is this commit that I have on my PR?" And I'm like, "Oh, I'm just testing." [laughs] But that's something that I was excited to talk about and share with you. What's new in your world?
CHRIS: Well, what's new in my world? I think we've talked about this a little bit, but to give a little bit of context on what's new in my world, I joined a startup. I am now engineer number one. I'm also CTO, a very fancy title, but again, I'm the team of one, so count it as you will. But we do have some consultants working with us. So there is a small team that I am managing, and very quickly, I found myself shifting away from the code or having to balance that trade-off of maker versus manager time. Like, how much of the time am I actually coding and shipping features versus managing and communicating, and trying to figure out the work to be done and triaging the backlog? And all of those sorts of things.
I've also just been coding less, and I think that's a trend that will almost certainly continue, and I'm intrigued by that. And that's a thing that I want to poke at just a little bit. And then I've also noticed that my work has become much more reactive than it used to be, where there are lots of things in Slack. And there's stuff that I'm kind of the only person that can do certain things because I have certain access levels and yadda yadda. And I want to make sure other folks aren't blocked. So I'm trying to be as responsive as possible in those moments. But I'm also struggling with that trade-off between reactive and proactive.
My ideal version of the work, I think, is to gather all of the information, all of the different permutations, and all the features we want. And then I think about them holistically, and then I respond once solidly as opposed to little one-off interactions and things like that. So there are just a lot of subtle differences. And I think there are trends that will continue. And so I'm trying to just take a step back, observe them from a distance and say, "How do I feel about these?"
But probably most interesting to me is the moving away from code. Have you noticed that at all in your work? Or is that something you've thought about, something you'd be interested in, opposed to? How do you feel about that space in the coding world?
STEPH: That is a wonderful question. It's one that I have wrestled with for a while because I really love my current position. I love being a team lead because I feel like there's this wonderful balance between where I get to code a lot of the time, but then I also get to learn how to be a manager, and help those around me, and provide some coaching or mentoring or just help people find the resources that they need essentially. And I really like that balance. That feels like the right balance to me, where I still get to grow in both areas.
But then, as you'd mentioned, it still feels like one tries to take over the other with time. Like, you find that your responsibilities are growing as CTO of the company. And so you feel more responsible for doing more of the managerial tasks or unblocking others and taking on that role, and then that reduces your time for coding.
And I often find myself in that space where I think it's just how I'm wired. I'm very interested and empathetic towards how people are doing and how they're feeling. So I'm always looking for ways to support others and to help unblock them and make sure that they're having a very positive experience with our project. And so then that may mean I'm coding less because then I'm more focused on that. But then, it's still also a very valid part of my job to code. So finding the right balance between those is frankly hard.
To answer your other question, I don't think I want to give that up. I've considered for myself if I'm going to head towards more of a manager path, and I'm going to reserve the right to change my mind. But currently, I still like maintaining most of my individual contributor status with a dash of management sprinkled in there and then some responsibilities for making sure that the team is doing well and that people are enjoying their work.
Along that line, as I've been having conversations with others around, tell me more about your job as a manager: what does that look like? What responsibilities do you have? How much coding do you still get to do? There have been a couple of books recommended to me that really help someone figure out: are you interested in management? Is that a place that you see yourself going? They give an honest look at what it means to be a manager, the fact that a lot of your fulfilling work isn't necessarily work that you get to produce, but it's actually helping someone else produce that work and then getting to see them succeed. That is your new fulfillment, or a big part of it.
So you are losing that closeness of being a maker, but instead, you are empowering someone else to be the maker, and then that becomes your win. And that becomes an indication of your success. Versus as an individual contributor, it's really easy to see our wins in a different light: how many tickets have we addressed? How many PRs have we reviewed? That type of work. So there is an interesting dichotomy there, and I can't remember the books off the top of my head, but I will find them and I'll add a link to them in the show notes.
CHRIS: Yeah, definitely interested to see the book recommendations. And generally, yeah, everything you're saying makes sense to me. I think I'm somewhat on the adventure right now. I very much intentionally chose this, and I want to lean into it and explore this facet of the work and doing more of the management and leading a team. But I have to accept that that comes with letting go of some of the individual contributor parts. And I was coding a bit over the weekend. I was just rediscovering the flow of that. And I was like, oh yeah, I really like this. Huh, that's interesting. What am I going to do with that? But I think, again, it's an exploration. And there are facets of both sides that I really like.
And I've spent a lot of time deeper in the individual contributor side. And I've explored the manager side somewhat but not quite as much. And so this is very much about that I want to push on those edges and try and find what feels true to me. So the moving away from code and then moving more into management, I think I like that overall. Although I know there's the small amount in the back of my head that I'm like, I know there's a cost there. That is a trade-off. And so do I find more time in my evenings and weekends to do personal coding projects and things like that just to have that enjoyable work for myself?
The maker versus manager stuff is interesting, though, where my day is now split up into smaller pieces. And even if I'm not coding, there's still writing up docs, or there are things that still require structured blocks of time. And my day is now just sprinkled with other things. And so trying to find that heads down of I want to just do the work right now, and I want to think hard about something is just fundamentally harder to do with more meetings and things speckled throughout the day. So that's one that I think I just don't like overall. But it's sort of a trade-off inherent to the situation.
So I think there's also a version of trying to be intentional about that and saying, you know what? I need some heads-down time. And so Tuesday and Thursday afternoons those are going to be mine. I'm going to wall those off on my calendar and try and protect that time so that whatever necessary heads-down work that I need to do this week fits into those blocks of time and then fit the rest of things around that. But I think I have to make that intentional choice to do that.
Mid-roll Ad
And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API.
With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city.
The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby.
STEPH: Your mention of having more meetings really resonates with me. And it also made me think of a recent episode of a new TV show I just started watching. Have you seen the TV show called Schmigadoon!?
CHRIS: I have indeed.
STEPH: Okay. We need to have a whole conversation about Schmigadoon! in an upcoming episode. I'm very excited about this show. It's delightful. [laughs] There's a particular line that Keegan-Michael Key says that I just love so much where he says that he became a surgeon because he wanted to help people without talking to people. And I was like, oh, that's a developer. [laughs] I'm the same way. And I really enjoyed that. Although I do like talking to people but still, it just made me think about when you're talking about more meetings and then increasing the amount of talking that needs to be done as you progress into more of a management role.
Also, circling back, I really like what you said earlier about you're noticing the changes that are happening. You're letting those changes happen, and then you're reflecting on how you feel about it. I really like that approach. Do you think that's working well for you? Does it feel too loose because then you don't feel in control enough of those changes? Or do you actually feel like that's a really good way to explore a new role and then find out if you like those changes?
CHRIS: Now that you are restating it back to me, I'm like, oh yeah, I guess that is a good way to do things. But to clarify, I'm not doing nothing with it. I am trying to proactively, where I can, structure my days and do things like that or recognize that right now, I'm probably not the right person to be moving code along. And so I'm saying okay, that is true. And I'm actively choosing to not pick up the bigger pieces of work or to pair with someone else so that they can then run with it but not having me being the person that owns it. So it's not completely letting it happen, but it is almost like meditation to invoke that idea of I'm observing that I'm having these thoughts, and I'm just going to let them go. And it's more about the thinking and the response to it.
So I'm trying to name the thing and be like, oh, this is interesting that this is happening. And I'm noticing an immediate visceral reaction to it where it's like, you're taking away my coding? And I'm like, well, hey, it's not them, it's you; you chose to do this. But let's just spend a minute there. That's okay. How do we feel about this? And so it's trying to not have it be a purely reactive response to it but have it be a more intentional, more thoughtful, and more observing, and then giving it a little bit of time to ruminate and then see a little bit more what I think.
And also, some of it is purposefully pushing myself out of my comfort zone. I think I'm happy, and I do a reasonable job when I'm the person moving the code along. But I also have really enjoyed being at the edge of an engineering team and working with sales or working with other groups and facilitating the work that's happening. And so, if I explore that a little bit more, what's that going to look like for me?
So this period of my career, I'm very intentionally trying to do stuff that I'm like, well, this is a little bit different for me, or this is stretching a little bit, but that is the goal. And I hope good things will come out of it across the board. But it may be that I find like, you know what? Actually, I really miss coding, and I need to find a way to restructure that. And I have seen examples of individuals who are even in CEO positions that are like, no, no, no, I still make some time to code.
Like Amir, the founder of Todoist, talks regularly about the fact that he is a CEO who still codes. And that organization has a very particular approach to work. And they're very much about async remote, et cetera. So having these blocks of times and being intentional about how they work. So it's not surprising that he's been able to do that and a purposeful thing that he's structured. I don't think that will make sense for me immediately. But I could see a version down the road where I'm like, this is who I am. I need to get this thing back. But for now, I'm purposefully letting it happen and seeing how I feel from there.
Also, as I'm saying all of this, it sounds like I'm totally on top of this and really thinking it through. I'm like, no, no, no, this is in the moment. I'm noticing some stuff and being like, oh, okay, well, that's interesting. And some of it I intentionally chose. Again, intentionally chose to get out of my comfort zone. So I think I'm just actively out of my comfort zone right now and saying things about it. And then I think I'm telling the story of how I want to respond to it moving forward but not necessarily perfectly achieving that goal immediately.
STEPH: I think that's a nice representation of essentially how you and I have processed things. We've highlighted before that you and I...it's funny, I just made the joke about not talking to people, but it's how I actually process stuff. And the best is when I'm talking out loud to somebody else. And so it totally makes sense that as you were noticing this and reflecting on it, that then this is another way that you are then processing those changes and reflecting on it and thinking through is this a good change? Is it something that I'm going to enjoy? Or am I really going to miss my street coding creds? I need to get back to the editor.
CHRIS: I just need that precious flow state that comes from drinking some Mountain Dew and coding for hours.
STEPH: Do you drink Mountain Dew?
CHRIS: No, I gave it up years ago.
STEPH: [laughs]
CHRIS: I don't drink soda broadly. But if I'm going to drink soda, it's going to be Mountain Dew because if we're going to do it, let's do this thing. I'm pretty sure that stuff is like thermonuclear, but that's fine.
STEPH: [laughs] That's funny. I know we've had this conversation before also around Pop-Tarts where you're like, hey, if I'm going to have a Pop-Tart, I'm going to have the sugariest (Is that a word - sugariest?) Pop-Tart possible.
CHRIS: To be clear, that means it has icing on it because some people in the world, namely you, would prefer the ones without icing. Although we recently learned that the ones without icing have a higher fat and calorie content, so I don't know. The world's murky. I wish it were all just clear, and we could just work with it. But it turns out even Pop-Tarts icing versus not is not a simple question.
STEPH: It's a very simple question. You just need to be on the right side, which is the non-frosted side. [laughs] I can simplify this for you because fat is delicious. Fat trumps sugar; that’s my stance. That's my hot take.
CHRIS: I'm saying both, a little from column A, a little from column B. You got yourself a stew.
STEPH: [laughs] You got a fat sugar stew.
CHRIS: Yeah. That was in Arrested Development. All right, we're veering way off course now. [laughter] To bring it back, what you were highlighting of I'm definitely someone who thinks through stuff by talking out loud, and so it's been wonderful. I've learned so much about myself while talking to you on this podcast. I'll say something, and I'll be like, wait, I actually believe that thing I just said. This is fantastic. Now I can move forward with the knowledge that I've just gained for myself by talking about it on a podcast. So highly recommended: everybody should get a podcast.
STEPH: Plus one. I also have a very real, maybe silly, follow-up question for you as we are, like you just said, exploring the things that we believe or not. My question for you is, as part of the transition to management and moving away from coding, is there some fear in the back of your mind where you're like, if I stop coding, I'm going to lose this skill?
CHRIS: Honestly, no. And I feel kind of bad saying that because I feel like I should say, "Yeah, I feel like it'll fade away and whatnot." But I think I have an aptitude and an interest towards this work. And if I were to ignore it for two years, then frankly, I also know myself. And I'm still going to keep an eye on everything for a while. So I think I'll be aware of what's going on and maybe just haven't spent as much time with it.
But I think if I need to two years from now, I'm like, all right, I got to rebuild my coding muscle. I'll skip a couple of JavaScript frameworks, which will be nice, and I'll be on to the 15th iteration that's new now. But I hope that I could revisit that not trivially, not with no effort. It's the wonderful nature of coding. It's one of the things that I love about it so much is that there are blog posts and YouTube tutorials. And it's so individually discoverable that I'm not really worried about that aspect.
My concern, if anything, isn't so much that I'm going to lose my skills or not be able to code anymore; it’s that I really enjoy coding. It's a practice that I find very enjoyable. A workweek is enjoyable when it contains big blocks of me putting on my headphones, listening to music, and digging into a problem, and then coding and producing a solution. And those tiny little feedback loops of test-driven development or running something and then going to the browser and clicking around like that, there's a directness there that has always really worked well for me.
And so the more I'm abstracted away from that sort of thing, and the more of my work is I'm helping a team, and I'm directing strategy, or whatever it is, that just feels so indirect. And so I'm very interested to find out how I respond to that sort of thing. I've definitely enjoyed it in the past, and so that's why I'm intentionally leaning into it.
But I know that I'm giving up a part of the work that I really love, and giving up is too strong of a word as well. I'm going to find what shape makes sense moving forward. And I expect I'll still be pairing with the other developers on the team and helping to define architecture and things like that. So it's not like it's 100% gone. But for now, I think the world where most of my week was spent coding is no longer the case. And so just naming that and being intentional about it. And yeah, that's the game.
STEPH: Cool. Yeah, that makes a lot of sense. I was mainly interested in that question because that is a question that I've asked myself from time to time that I think I do have that worry that if I step away from coding for too long, then it won't be easy to jump back into. And I've talked myself out of that many times because I don't think it's true for all the reasons that you just said. But it is something that I have considered as like, well, if I take this leap of faith into this other direction, how easy is it for me to get back if I decide to change my mind and go back to being more of an individual contributor?
And one other thing that weighs on me as I'm splitting my time between two areas that I really want to grow…So I'm constantly trying to grow as a developer. I'm also trying to grow as a manager, and I don't want to do a bad job at either. I want to do a great job at both, and that's frankly not always possible. And at times, I have to make trade-offs with myself around okay, I'm going to focus a little heavier this day or this week on being a really great manager or focus a little bit more on being a developer and to pick and choose those topics. And then that sometimes means doing like B+ work in one area, and that's really hard for me. I'm an A-work person. So even downgrading to a B+ level of effort is challenging. But I have found that that's a really great space to be because then I'm doing well in both areas, not perfect, but doing well enough. And often, that's really what counts is that we're doing well enough and still pursuing growth in the areas that are important to us.
CHRIS: Yeah, I think that intentional switching back and forth between them is the space that I'm in. I expect my work will remain very technical, and I hope that that's true. And I think, to a certain extent, I get to shape it and determine that. And so how much of it is strategy and planning and things like that? Versus how much of it is helping the team with architecture and defining processes as to how we code, and what are our standards, and what are our languages and frameworks and all of that? I expect I'm still going to be involved in the latter. And again, I think, to a certain extent, I get to choose that.
So I am actually interested to see the shape that both naturally the organization needs out of the role that I'm in. But also, what sort of back pressure I can apply and be like, but this is how I want it to be. Is there room for that, or is there not? And it's all an experiment, and we're going to find out. But personally, for me, I'm going to keep reading Twitter and blog posts every day, and I'm probably going to code on the weekends and things.
So the idea of my coding muscle atrophying, I don't know, that one doesn't feel true. But we'll see what I have to say a year from now or after what that looks like. But I expect...this has been true of me for so long, even when I had an entirely different career that I was just reading blogs and other things all the time because this is a thing that deeply interests me. So we will see.
STEPH: Yeah, I'm excited to hear how it goes. And I think there's something to be said for the fact that you are also a CTO that's very close to the work that's being done. So being someone that is very involved in the technical decisions and the code that's being written but then also taking on more of the management responsibilities. And that feels more of a shift where you still have a lot of your coding skills.
And you are writing code day-to-day at least based on what you're saying, but then you are also acquiring a lot of these management skills to go along with it. Versus if someone were going into management and maybe they're at a really large company and then they are very far away from the development team. And they're focused on higher-level themes and discussions, at least that's my guess. But I'm very excited to hear more about your updates and how this experiment is going and to find out who is the true Chris?
CHRIS: Who's the true Chris? That feels complicated. I feel like I contain multitudes. But yeah, you know what? I'm excited to find out as well. Let's see what's going on there. But yeah, so that's a grand summary of the things that are going on in my head. And I expect these are topics that will be continuing to evolve for me. So I think we'll probably have more conversations like this in the future but also some tech stuff. Because like I said, I don't know, I can't stop.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: Yeah, that's actually the perfect segue as we were talking earlier about just ways that we're looking to grow as developers. And I saw something that I really enjoyed, and it's published by another thoughtboter. Their name is Matheus Richard. And Matheus runs a Twitter account that's called @RubyCards. And I don't recall the exact cadence, but every so often, Matheus will share a new snippet of either Ruby or Rails code and then will often present the information as a question.
So I'll give you an example, but the highlight is that it teaches you something, either about Ruby or Rails. Maybe you already knew it, maybe you didn't. But it's a really nice exercise to think through okay, I'm reading this code. What do I think it's going to return? And then respond to this poll and then see how other people did as well. Because once the poll closes, then Matheus shares the actual answer for the question.
So one example that I saw recently highlights Ruby's endless method definition, which was introduced in Ruby 3. So that would be something like def, and then let's say the method name is message, and then you have closing but empty parentheses, equals a string of "Hello, World." And so then the question is, if you call that method message, what would that return? And then the poll often has options around whether it would return "Hello, World," or it's going to return a syntax error, or it's going to return nil. And then it highlights, well, because of Ruby's endless method definition, this would return "Hello, World."
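For readers following along, here's a minimal sketch of the snippet Stephanie is describing; the method name and return value are just the ones from her example:

```ruby
# Ruby 3.0+ endless method definition: the body follows an `=`
# instead of being wrapped in a def ... end block.
def message() = "Hello, World"

message # => "Hello, World"
```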
And then I also saw a new method that I hadn't used before that's defined in Ruby's Hash class that's called store. And so you can use it by calling it on a hash. So if you have your hash equals and then curly brackets, let's say foo is equal to an integer of zero, then you can call hash.store and then pass in two arguments. The first argument's going to be the key. The second argument is the value. And then, that would essentially do the same thing as the bracket syntax we use for assigning a value to a hash. But I just hadn't actually seen the method store before.
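And a quick sketch of the Hash#store example as described; the key and value here are just the placeholders from the conversation:

```ruby
hash = { foo: 0 }

# Hash#store behaves like bracket assignment (hash[:bar] = 1):
# the first argument is the key, the second is the value.
hash.store(:bar, 1)

hash # => { foo: 0, bar: 1 }
```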
So there are fun snippets of Ruby or Rails code. A little bit of a brain teaser helps you think through how that code works, what it's going to execute, what it's going to return. And I really enjoy it. I'll be sure to include a link to it in the show notes so other people can check it out.
CHRIS: Oh, that sounds fun. I hadn't seen that, but I will definitely be following. That's the word on Twitter, right? You have subscribing, subscribe and follow, smash that like button, all of the things. I will do all of the things that we do here on the internet. But I do like that model of the question and answer, and it's slightly more engaging than just sharing the information. So yeah, super interested to see that.
STEPH: Yeah, I like the format of here's some code, and then we're going to ask you what does it return? So that way, you get a moment to think it through. Because if I read something and it just shows me the answer, my brain just doesn't absorb it. And I'm like, okay, that makes sense, and my brain quickly moves on. But if I actually have to think about it and then respond with my answer, then it'll likely stick with me a lot longer. At least we'll find out; that’s the dream.
On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
You know what really grinds Chris' gears? (Spoiler Alert: It's Single-Page Applications.)
Steph needs some consulting help. So much to do, so little time.
Transcript:
CHRIS: I have restarted my recording, and we are now recording. And we are off on the golden roads. Isn't software fun?
STEPH: Podcast battle. Here we go!
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, happy Friday. How's your week been?
CHRIS: Happy Friday to you as well. My week's been good with the exception of right before we started this recording, I had one of those experiences. You know the thing where software is bad and software is just terrible overall? I had one of those. And very briefly to describe it, I started recording, but I could hear some feedback in my headphones. So I was like, oh no, is that feedback going to show up on the final recording? Which I really hope it doesn't. Spoiler alert - listener, if I sound off, sorry about that. But so I stopped recording and then I went to go listen to the file, and I have our audio software configured to record directly to the desktop. And it does that normally quite well. But for some reason, the file wasn't there. But I remembered this because I ran into it another time recently.
For some reason, this is Finder failing me. So the thing that shows me the files in a graphical format, at least on my operating system. Although I think it also messes up in the terminal maybe. That feels like it shouldn't be true, but maybe it is. Anyway, I had to run killall Finder from the terminal to aggressively restart that process. And then suddenly, Finder was like, oh yeah, there's totally files there, absolutely. They were there the whole time. Why do you even ask?
And I know that state management is a hard problem, I am aware. I have felt this pain. I have been the person who has introduced some bugs related to it, but that's not where I want to experience it. Finder is one of those applications that I want to just implicitly trust, never question whether or not it's just sneakily telling me that there are files that are not there or vice versa. So anyway, software.
STEPH: I'm worried for your OS. I feel like there's a theme lately [chuckles] in the struggles of your computer.
CHRIS: On a related note, I had to turn off transparency in my terminal because it was making my computer get very hot. [chuckles]
STEPH: Oh no, you're not a hacker any more.
CHRIS: I'm not. [chuckles] I just have a weird screen that's just dark. And jellybeans is my color scheme, so there's that going on. That's in Vim specifically. Pure is my prompt. That's a lovely little prompt. But lots of Day-Glo colors on just a black background, not the cool hacker transparency. I have lost some street cred.
STEPH: What is your prompt? What did you say it is?
CHRIS: Pure.
STEPH: Pure, I don't know that one.
CHRIS: It is by Sindre Sorhus; I think that's his name. That's his Twitter handle, GitHub name. He is a prolific open-source creator in the Node world, particularly. But he created this...I think it's a Bash and a Zsh prompt. It might be for others as well. It's got a bunch of features. It's pretty fast. It's minimal. It got me to stop messing around with my prompt, which was mostly what I was going for. And it has a nice benefit that occasionally now I'll be pairing with someone, and I'll be like, "Your prompt looks like my prompt. Everything is familiar. This is great."
STEPH: Well, if you get back in the waters of messing around with your prompts again, I'm using Starship. And I hadn't heard of Pure before, but I really like Starship. That's been my new favorite.
CHRIS: Wow.
STEPH: Wow.
CHRIS: I mean, on the one hand…
STEPH: You're welcome. [laughs]
CHRIS: On the one hand, thank you. On the other hand, again, let me lead in with the goal was to stop messing around with my prompt. So you're like, oh, cool. Here's another prompt for you, though. [chuckles]
STEPH: [laughs] But my goal is to nerd snipe you into trying more things because it's fun.
CHRIS: I don't know if you know this, but I am impervious to nerd sniping.
STEPH: [laughs]
CHRIS: So try as you might, I shall remain steady in my course of action.
STEPH: Are we playing two truths and a lie? Is that what we're doing today? [laughs]
CHRIS: Nah, just one lie. It's easier. Everybody wins one lie.
STEPH: [laughs]
CHRIS: But anyway, in other news, we're going to do a segment called this really grinds my gears. That's today's segment, which is much like when I do a good idea, terrible idea. But this is one that I'm sure I've talked about before. But there's been some stuff that I saw moving around on the internet as one does, and it got these ideas back into my head. And it's around the phrase single-page application. I am not a fan of that phrase or SPA as the initialism. Thank you, Edward Loveall, for teaching me the difference between an initialism and an acronym. I really hope I'm getting it right, by the way, [laughs] SPA as people call them these days.
I feel weird because of how much I care about this thing, how much I care about this idea, and how much whenever I hear this acronym, I get a little bit unhappy. And so there's a part of it that's I really do think our words shape our thinking. And I think single-page application has some deeply problematic ideas. Most notably, I think one of the most important things about building web applications is the URL. And those are different pages, at least in my head. I don't know of a different way to think about this.
But if you are not emphasizing the URL and the fact that the URL is a way to address different pages or resources within your application, then you are throwing away one of the greatest advancements that humankind has made, in my mind. I care a lot about URLs; it turns out. And it's not inherent to an SPA that you will not be thinking about URLs. But again, in that idea that our words shape our thinking, by calling it an SPA, by leaning into that idea, I think you are starting down a path that leads to bad outcomes. I'm going to pause there because I'm getting kind of ranty. I got more to say on the topic. But what do you think?
STEPH: Yeah, these are hot takes. I'm into it. I'm pretty sure that I know why URLs are so important to you and more of your feelings around why they're important. But would you dive in a bit deeper as to why you really cherish URLs, and why they're so important, and why they're one of the greatest advancements of humanity?
CHRIS: [laughs] It sounds lofty when you say it back to me, but yeah. It's interesting that as you put into a question, it is a little bit hard to name. So there are certain aspects that are somewhat obvious. I love the idea that I can bookmark or share a given resource or representation of a resource very simply. Like the URL, it's this known thing. We can put hyperlinks in a document. It's this shared way to communicate, frankly, very complex things.
And when I think of a URL, it's not just the domain and the path, but it's also any query parameters. So if you imagine faceted search on a website, you can be like, oh, filter down to these and only ones that are more than $10, and only ones that have a future start date and all those kinds of nuance. If you serialize that into the URL as part of the query param, then that even more nuanced view of this resource is shareable is bookmarkable is revisitable.
I end up making Alfred Workflows that take advantage of the fact that, like, oh, I can look at this URL scheme, and I can see where different parts are interpolated. And so I can navigate directly to any given thing so fast. And that's deeply valuable, and it just falls naturally out of the idea that we have URLs. And so to not deeply embrace that, to not really wrap your arms around it and give that idea a big hug feels weird to me.
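As a rough illustration of what Chris means by serializing faceted search state into query params, here is a minimal sketch using the rack gem's utilities; the filter names and domain are made up for the example:

```ruby
require "rack/utils"
require "uri"

# Hypothetical facets for a search page, serialized into the query string.
filters = { "min_price" => "10", "starts_after" => "2021-09-01" }
query   = Rack::Utils.build_nested_query(filters)
url     = "https://example.com/search?#{query}"
# => "https://example.com/search?min_price=10&starts_after=2021-09-01"

# Anyone who follows that URL lands on the same filtered view, because the
# state can be parsed straight back out of the query string.
Rack::Utils.parse_nested_query(URI(url).query)
# => {"min_price"=>"10", "starts_after"=>"2021-09-01"}
```

That round trip is what makes the nuanced view of a resource shareable and bookmarkable.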
STEPH: Yeah, I agree. I remember we've had this conversation in the past, and it really frustrates me when I can't share specific resources with folks because I don't know how to link to it. So then I can send you a link to the application itself to the top URL. But then I have to tell you how to find the information that I thought was really helpful. And that feels like a step backward.
CHRIS: Yeah. That ability to say, "Follow this link, and then it will be obvious," versus "Go to this page, click on this thing, click on the dropdown, click on this other thing." Like, that's just a fundamentally different experience. So one of the things that I saw that got me thinking about this was I saw folks referring to single-page applications but then contrasting them with MPAs, which are multiple-page applications.
STEPH: So the normal application? [laughs]
CHRIS: And I was like, whoa, whoa, everybody. You mean like a website or a web app? As much as I was angry at the first initialism, this second one's really getting me going. But it really does speak to what are we doing? What are we trying to build? And as with anything, you could treat this as a binary as just like there are two options. There are either websites which, yeah, those have got a bunch of URLs, and that's all the stuff. And then there are web apps, and they're different. And it's a bundle of JavaScript that comes down, boots up on the client, and then it's an app thing. And who cares about URLs? I think very few people would actually fall in that camp.
So I don't really believe that there is a dichotomy here. I think, as always, it's a continuum; it’s a spectrum. But leaning into the nomenclature of single-page application, I think pushes you more towards that latter end of the spectrum. I think there are other things that fall out of it. Like, I believe deeply in having the server know more, have more of the logic, own more of the logic, own more authorization and routing, and all of those things because really great stuff falls out of that. And that one has more of a trade-off, I'd say.
But I won't name any names, but there is a multiple billion-dollar company whose website I had to interact with recently. And you land on their page on their marketing site. And then, if you click log in, it navigates you to the application, so a separate domain or a separate subdomain, the application subdomain, and the login page there. And the login page renders, and then I go to fill in my username and password. Like, my mouse makes it all the way to click on the little box or whatever I'm doing if I'm using keyboard things. But I have enough time to actually start to interact with this page.
And then suddenly, it rips away, and it actually just renders the authenticated application because it turns out I was already logged in. But behind the scenes, they're doing some JWT dance around that they're checking; oh no, no, you're already logged in, so never mind. We don't need to show you the login page, but I was already on the login page.
And my feeling is this sort of brittle UI; this sort of inconsistency erodes my trust in that application, particularly when I'm on the login page. That is a page that matters. I don't believe that they're doing anything fundamentally insecure. But I do have the question in my head now. I'm like, wait, what's going on there, everybody? Is it fine? Was that okay? Or if you see something that you shouldn't see and then suddenly it's ripped away from you, if you see half of a layout that's rendered on a page and then suddenly you see, no, no, no, you actually don't have access to that page, that experience erodes my trust.
And so, I would rather wait for the server to fully resolve, determine what's going to happen, and then we get a response that is holistically consistent. You either have access, or you don't, that sort of thing. Give me a loading indicator; give me those sorts of things. I'm fine with that. But don't render half of a layout and then redirect me back away.
STEPH: I feel like that's one of the problems with knowing too much because most people are not going to pick up on a lot of the things that you're noticing and caring deeply about where they would just see like, oh, I was logged in and be like, huh, okay, that was a little weird, but I'm in and just continue on. Versus other folks who work very closely to this who may recognize and say, "That was weird." And the fact that you asked me to log in, but then I was already logged in, did you actually log me in correctly? What's happening? And then it makes you nervous.
CHRIS: Maybe. Probably. But I wonder…the way you just said that sounds like another dichotomy. And I would say it's probably more of a continuum of an average not terribly tech-savvy user would still have a feeling of huh, that was weird. And that's enough. That's a little tickle in the back of your brain. It's like, huh, that was weird.
And if that happens enough times or if you've seen someone who uses an application and uses it consistently, if that application is reasonably fast and somewhat intuitive and consistent, then they can move through it very quickly and very confidently. But if you have an app that half loads and then swaps you to another page and other things like that, it's very hard to move confidently through an application like that. I do think you're right in saying that I am over-indexed on this, and I probably care more than the average person, but I do care a lot.
I do think one of the reasons that I think this happens is mobile applications came along, and they showed us a different experience that can happen and also desktop apps for some amount of time this was true. But I think iOS apps, in particular really great ones, have super high fidelity interactions. And so you're like, you're looking at a list view, and then you click on the cell for that list view. And there's this animated transition where the title floats up to the top and grows just a little bit. And the icon that was in the corner moves up to the corner, and it gets a little bigger. And it's this animated transition to the detailed view for that item. And then if you go back, it sort of deanimates back down.
And that very consistent experience is kind of lovely when you get it right, but it's really, really hard. And people, I think, have tried to bring that to the web, but it's been such a struggle. And it necessitates client-side routing and some other things, or it's probably easiest to do if you have those sorts of technologies at play, but it's been a struggle. I can't think of an application that I think really pulls that off. And I think the trade-offs have been very costly.
On the one positive note, there was a tweet that I saw by Sarah Drasner that was talking about smooth and simple page transitions with the shared element transition API. So this is a new API that I think is hoping to bring some of this functionality to the web platform natively so that web applications can provide that higher fidelity experience. Exactly how it'll work whether or not it requires embracing more of the single-page application, client-side routing, et cetera, I'm not sure on that. But it is a glimmer of hope because I think this is one of the things that drives folks in this direction. And if we have a better answer to it, then maybe we can start to rethink the conversation.
STEPH: So I think you just said shared element transitions. I don't know what that is. Can you talk more about that?
CHRIS: I can try, or I can make a guess. So my understanding is that would be that sort of experience where you have a version of a certain piece of content on the page. And then, as you transition to a new page, that piece of content is still represented on the new page, but perhaps the font size is larger, or it's expanded, or the box around it has grown or something like that. And so on mobile, you'll often see that change animate. On the web, you'll often see the one page is just completely replaced with the other. And so it's a way to have continuity between, say, a list view, and then when you click on an item in it, that item sort of grows to become the new page. And now you're on the detail page from the list page prior.
There's actually a functionality in Svelte natively for this, which is really fancy; it's called crossfade. And so it allows you to say, "This item in the component hierarchy in the first state of the application is the same as this item in the second state of the application." And then, Svelte will take care of transitioning any of the properties that are necessary between those two.
So if you have a small circle that is green, and then in the next state of the application, it's a blue rectangle, it will interpolate between those two colors. It will interpolate the shape and grow and expand it. It will float it to its new location. There is a really great version of it in the Svelte tutorial showing a to-do list. And so it's got a list on the left, which is undone things, and a list on the right that is done things. And when you click on something to complete it, it will animate it, sort of fly across to the other list. And if you click on it to uncomplete, it will animate it and fly back.
And what's great is within Svelte because they have this crossfade as a native idea; all you need to say is like, "It was on this list, now it's on this list." And as long as it's identifiable, Svelte handles that crossfade and all the animations. So it's that kind of high-fidelity experience that I think we want. And that leads us to somewhat more complex applications, and I totally get that. I want those experiences as well. But I want to ask some questions, and I want to do away with the phrasing single-page application entirely. I don't want to say that anymore. I want to say URLs are one honking good idea. Let's have more of those.
And also, just to name it, Inertia is a framework that allows me to build using some of the newer technologies but not have to give up on URLs, give up on server-side logic as the primary thing. So I will continue to shout my deep affection for Inertia in this moment once again.
STEPH: Cool. Thanks. That was really helpful. That does sound really neat. So in the ideal world, we have URLs. We also have high fidelity and cool interactions and transitions on our pages. We don't have to give it a fancy name like single-page application or then multi-page application. I do wonder, with our grumpiness or our complaint about the URLs, is that fair to call it grumpy?
CHRIS: It's fair to call it grumpy, although you don’t need to loop yourself in with me. I'm the grump today.
STEPH: [laughs]
CHRIS: You're welcome to come along for the ride if you'd like. And I'm trying to find a positive way to talk about it. But yeah, it's my grumpytude.
STEPH: Well, I do feel similarly where I really value URLs, and I value the ability to bookmark and share, like you said earlier. And I do wonder if there is a way to still have that even if we don't have the URL. So one of the things that I do is I'll inspect the source code. And if I can find an ID that's for a particular header or section on the page, then I will link someone to a section of that page by then adding the ID into the URL, and that works. It's not always great because then I have to rely on that being there. But it's a fix, it's a workaround.
So I wonder if we could still have something like that: as people are building content that can't be bookmarked, where the URL doesn't change explicitly or reference that content, they could add more thoughtful bookmark links, essentially, or add an ID and then add a user-facing link that says, "Hey, if you want to link someone to this content, here you go." And under the hood, it's just an ID. But most people aren't going to know how to do that, so then you're helping people be able to reference content because we're used to URLs, so just thinking outside the box. I wonder if there are ways that we can still bookmark this content, share it with people. But it's okay if the URL isn't the only way that we can bookmark or reference that content.
CHRIS: It's interesting that you bring that up, so the anchor being the thing after the hash symbol in the URL. I actually use that a ton as well. I think I built a Chrome extension a while back to try what you're saying of I'll inspect the DOM. I did that enough times that I was like, what if the DOM were to just tell me if there were an ID here and I could click on a thing? Some people's blogs...I think the thoughtbot blog has this at this point. All headers are clickable. So they are hyperlinks that append that anchor to the URL.
So I wouldn't want to take that and use that functionality as our way to get back to URLs that are addressing resources because that's a way to then navigate even further, which I absolutely love, to a portion of the page. So thinking of Wikipedia, you're on an article, but it's a nice, long article. So you go down to the section, which is a third of the way down the page. And it's, again, a very big page, so you can link directly to that. And when someone opens that in their browser, the browsers know how to do this because it's part of the web platform, and it's wonderful.
So we've got domains, we've got paths, we've got anchors, we've got query params. I want to use them all. I want to embrace them. I want that to be top of mind. I want to really think about it and care about that as part of the interface to the application, even though most users like you said, are not thinking about the shape of a URL. But that addressability of content is a thing that even if people aren't thinking of it as a primary concern, I think they know it when they...it's one of those like, yeah, no, that app's great because I can bookmark anything, and I can get to anything, and I can share stuff with people.
And I do like the idea of making the ID-driven anchor deep links into a page more accessible to people because you and I would go into the DOM and slice it out. Your average web user may not be doing that, or that's much impossible to do on mobile, so yes, but only more so in my mind. [laughs] I don't want to take anchors and make them the way we do this. I want to just have all the URL stuff, please.
Mid-roll Ad
Now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open-source culture in an accessible and documented API.
With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city. The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end.
If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby.
STEPH: I have a confession from earlier when you were talking about the examples for those transitions. And you were describing where you take an action, and then the page does a certain motion to let you know that new content is coming onto the page and the old content is fading away. And I was like, oh, like a page reload? We're just reimplementing a page reload? [laughs] That's what we have?
CHRIS: A fancy one, though.
STEPH: Fancy, okay. [laughs] But that felt a little sassy. And then you provided the other really great example with the to-do list. So what are some good examples of a SPA? Do you have any in mind? I think there are some use cases where...so Google Maps, that's the one that comes to mind for me where URLs feel less important. Are there other applications that fit that mold in your mind?
CHRIS: Well, so again, it's sort of getting at the nomenclature, and how much does the acronym actually inform what we're thinking about? But taking Google Maps as an example, or Trello is a pretty canonical one in my mind, most people say those are single-page applications. And they are probably in terms of what the tech actually is, but there are other pages in those apps. There's a settings page, and there's a search page, and there's this and that. And there's like the board list in Trello.
And so when we think about Trello, there is the board view where you're seeing the lists, and you can move cards, and you can drag and drop and do all the fancy stuff. That is a very rich client-side application that happens to be one page of the Trello web app and that one being higher fidelity, that one being more stateful. Stateful is probably the thing that I would care about more than anything. And so for that page, I would be fine with the portion of the JavaScript that comes down to the client being a larger payload, being more complex, and probably having some client-side state management for that. But fundamentally, I would not want to implement those as a true client-side application, as a true SPA. And I think client-side routing is really the definition point for me on this.
So with Trello, I would probably build that as an Inertia-type application. But that one page, the board page, I'd be like, yeah, sorry, this is not going to be the normal Inertia thing. I'm going to have to be hitting JSON endpoints that are specifically built for this page. I'm going to have a Redux store that's local. I'm going to lean into all of that complex state management and do that within the client-side app but not use client-side routing for actual page-level transitions, the same being true for Google Maps. The page where you're looking at the map, and you can do all sorts of stuff, that's a big application. But it is one page within the broader website, if you will. And so, I still wouldn't want client-side routing if I can avoid it because I think that is where I run into the most problems.
And that thing I was talking about where I was on the login page for a second, and then I wasn't; I do not like that thing. So if I can avoid that thing, which I have now found a way to avoid it, and I don't feel like I'm trading off on that, I feel like it's just a better experience but still reserving the right to this part of the application is so complex. This is our WYSIWYG drag-and-drop graphical editor thing, cool. That's going to have Redux. That's going to have client-side state management, all that stuff. But at no point does single-page application feel like the right way to describe the thing that we're building because I still want to think about it as holistically part of the full web app. Like the Trello board view is part of the Trello web app. And I want it all to feel the same and move around the same.
STEPH: Yeah, that makes sense. And it's funny, as you were mentioning this, I pulled up Google Maps because I definitely only interact with that heavy JavaScript portion, same for Trello. And I wasn't even thinking about the fact that there are settings. By the way, Google Maps does a lot. I don't use hardly any of this. But you make a great point. There's a lot here that still doesn't need such heavy JavaScript interaction and doesn't really fit that mold of where it needs to be a single-page app or even needs to have that amount of interactivity. And frankly, you may want URLs to be able to go specifically to these pages.
CHRIS: That actually is an interesting, perhaps counterpoint to what I'm saying. So if you do have that complex part of one of your applications and you still want URL addressability, maybe you need client-side routing, and so that becomes a really difficult thing to answer in my mind. And I don't necessarily have a great answer for that. I'm also preemptively preparing myself for anyone on the internet that's listening to this and loves the idea of single-page applications and feels like I'm just building a straw man here, and none of what I'm saying is actually real and whatnot.
And although I try to...I think we generally try and stay in the positive space of like what's good on the internet. This is a rare case where I'm like, these are things that are not great. And so I think in this particular case, leaning into things that I don't like is the way to properly capture this. And giant JavaScript bundles where the entirety of the application logic comes down in a 15-megabyte download, even if you're on 3G on a train; I don't like that.
I don't like it if we have flashes of a layout that then get ripped away because it turns out we actually aren't authorized to view that page, that sort of thing. So there are certain experiences from an end client perspective that I really don't like, and that's mostly what I take issue with. Oh, also, I care deeply about URLs, and if you don't use the URL, then I'm going to be sad. Those are my things.
Hopefully, that list is perhaps a better summary of it than like...I don't want it to seem like I'm just coming after SPA as a phrase or a way of thinking because that's not as real of a conversation. But those particular things that I just highlighted don't feel great. And so I would rather build applications that don't have those going on. And so if there's a way to do that that still fits any other mold or is called whatever, but largely what I see called an SPA often has those sorts of edge cases. And I do not like those edge cases.
STEPH: Yeah, I like how you're breaking it down where it's less of this whole thing like I can't get on board with any of it. You are focusing on the things that you do have concerns with. So there can be just more interesting, productive conversations around those concerns versus someone feels like they have to defend their view of the world.
I have found that I think I'm a bit unique in this area where when people have a really differing opinion than mine, that gets me really excited because then I want to know. Because if I believe very strongly in something and I just think this is the way and then someone very strongly says like, "No, that's not," I'm like, "Oh yeah. Okay, we should talk because I'm interested in why you would have such a different opinion than mine." And so, I typically find those conversations really interesting. As long as everybody's coming forward to be productive and kind, then I really enjoy those conversations.
CHRIS: That is, I think, an interesting frame that you have there. But I think I'm similar, and hopefully, my reframing there puts it in the way that can be a productive conversation starter as opposed to a person griping on a podcast. But with that said, that's probably enough of me griping on a podcast. [chuckles] So what's up in your world, Steph?
STEPH: Oh, there are a couple of things going on. So I am in that pre-vacation chaotic zone where I'm just trying to get everything done. And I heard someone refer to it recently as going into a superman or superwoman mode where you're just trying to do all the things before you go, which is totally unreasonable. So that has been interesting. And the name of the game this week has been delegate, delegate, delegate, and it seems to be going fairly well. [chuckles] So I'm very excited for the downtime that I'm about to have.
And some other news, some personal news, Utah, my dog, turns one. I'm very excited. I'm pretty sure we'll have a dog birthday party and everything. It's going to be a thing. I'll share pictures on Twitter, I promise.
CHRIS: So he's basically out of the puppy phase then.
STEPH: Yeah, the definition for being a puppy seems to be if you're a year or younger, so he will not be an adult. Teenager? I don't know. [laughs]
CHRIS: What about according to your lived experience?
STEPH: He has calmed down a good bit.
CHRIS: Okay, that's good.
STEPH: He has gotten so much better. Back when we first got him, I swear I couldn't get 15 minutes of focus where he just needed all the attention. Or it was either constant playtime, or I had to put him in his kennel since we're using that. That was the only way I was really ever getting maker’s time. And now he will just lounge on the couch for like an hour or two at a time. It's glorious. And so he has definitely calmed down, and he is maturing, becoming such a big boy.
CHRIS: Well, that is wonderful. Astute listeners, if you go back to previous episodes over the past year, you can certainly find little bits of Utah sprinkled throughout, subtle sounds in the background.
STEPH: He is definitely an important part of the show. And in some other news, I have a question for you. I'm in need of some consulting help, and I would love to run something by you and get your thoughts. So specifically, the project that I'm working on, we are always in a state where there's too much to do. And even though we have a fairly large team, I want to say there's probably somewhere between 7 and 10 of us. And so, even though we have a fairly...for thoughtbot, that's a large team to have on one project. So even though there's a fair number of us, there's always too much to do. Everything always feels like it's urgent. I can't remember if I've told you this or not, but in fact, we had so many tickets marked as high priority that we had to introduce another status to then indicate they're really, really high, and that is called Picante. [chuckles]
CHRIS: Well, the first part of that is complicated; the actual word that you chose, though, fantastic.
STEPH: I think that was CTO Joe Ferris. I think he's the one that came up with Picante. So that's a thing that we have, and that really represents like, the app is down. So something major is happening. That's like a PagerDuty alert when we get to that status where people can't access a page or access the application. So there's always a lot to juggle, and it feels a lot like priority whiplash in terms that you are working on something that is important, but then you suddenly get dragged away to something else. And then you have to build context on it and get that done. And then you go back to the thing that you're working on.
And that's a really draining experience to constantly be in that mode where you're having to pivot from one type of work to the other. And so my question to you (And I'll be happy to fill in some details and answer questions.) is how do you calm things down? When you're in that state where everything feels so urgent and busy, and there's too much to do, how do you start to chip away at calming things down where then you feel like you're in a good state of making progress versus you feel like you're just always putting out fires or adding a band-aid to something? Yeah, that's where I’m at. What thoughts might you have, or what questions do you have?
CHRIS: Cool. I'm glad you brought an easy question that I can just very quickly answer, and we'll just run with that. It is frankly...what you're describing is a nuanced outcome of any number of possible inputs. And frankly, some of them may just be like; this is just the nature of the thing. Like, we could talk about adding more people to the project, but the mythical man-month and that idea that you can't just throw additional humans at the work and suddenly have that makes sense because now you have to coordinate between those humans. And there's that wonderful image of two people; there's one line of communication. Three people, suddenly there are a lot more lines of communication. Four people, wow. The exponential increase as you add new people to a network graph, that whole idea.
And so I think one of the first questions I would ask is, and again, this is probably not either/or. But if you would try and categorize it, is it just a question of there's just a ton of work to do and we're just not getting it done as quickly as we would want? Or is it that things are broken, that we're having to fix things, that there are constant tweaks and updates, that the system doesn't support the types of changes that we want, so any little thing that we want to do actually takes longer? Is it the system resisting, or is it just that there's too much to do? If you were to try and put it into one camp or the other.
STEPH: It is both, my friend. It is both of those camps. [chuckles]
CHRIS: Cool. That makes it way easier.
STEPH: Totally. [laughs] To add some more context to that, it is both where the system is resistant to change. So we are trying to make improvements as we go but then also being respectful of the fact if it is something that we need to move quickly on, it doesn't feel great where you never really get to go back and address the system in a way that feels like it's going to help you later. But then, frankly, it's one of those tools that we can use. So if we are in the state where there's too much to do, and the system is resisting us, we can continue to punt on that, and we can address things as we go.
But then, at some point, as we keep having work that has slowed down because we haven't addressed the underlying issues, then we can start to have that conversation around okay; we’ve done this twice now. This is the third time that this is going to take a lot longer than it should because we haven't really fixed this. Now we should talk about slowing things down so we can address this underlying issue first and then, from now on, pay the tax upfront. So from now on, it's going to be easier, but then we pay that tax now. So it is a helpful tool. It's something that we can essentially defer that tax to a later point. But then we just have to have those conversations later on when things are painful. Or it often leads to scope creep is another way that that creeps up.
So we take on a ticket that we think, okay, this is fairly straightforward; I don't think there's too much here. But then we're suddenly getting into the codebase, and we realize, oh, this is a lot more work. And suddenly, a ticket will become an epic, and you really have one ticket that's spiraled or grown into five or six tickets. And then suddenly, you have a person that's really leading like a mini project in terms of the scope of the work that they are doing.
So then that manifests in some interesting ways where then you have the person that feels a bit like a silo because they are the ones that are making all these big changes and working on this mini-project. And then there's the other one where there's a lot to do. There are a lot of customers, and there's a lot of customization for these customers. So then there are folks that are working really hard to keep the customers happy to give them what they need. And that's where we have too much to do. And we're prioritizing aggressively and trying to make sure that we're always working on the top priority. So like you said, it's super easy stuff.
CHRIS: Yeah. To say it sincerely and realistically, you're just playing the game on hard mode right now. I don't think there is any singular or even multiple easy answers to this. I think one question I would have particularly as you started to talk about that, there are multiple customers each with individualized needs, so that's one of many surface areas that I might look at to say, "Can we sort of choke things off there?"
So I've often been in organizations where there is this constant cycle of the sales team is going out. They're demoing against an InVision mock. They're selling things that don't exist. They're making promises that are ungrounded and, frankly, technically infeasible or incredibly complicated, but it's part of the deal. They just sold it, and now we have to implement it as a team. I've been on teams where that was just a continuing theme. And so the engineering team was just like, "We can never catch up because the goalpost just keeps moving."
And so to whatever degree that might be true in this case, if there are ten different customers and each of them right now feels like they have an open line to make feature requests or other things like that, I would try to have the conversation of like, we've got to cut that off right now because we're struggling. We're not making the forward progress that we need to, and so we need to buy ourselves some time. And so that's one area that I would look at.
Another would be scope, anywhere that you can, go into an aggressive scope cutting mode. And so things like, well, we could build our own modal dialogue for this, but we could also use alert just like the JavaScript alert API. And what are all of the versions of that where we can say, "This is not going to be as nice, and as refined, and as fitting with the brand and feel and polish of the website. But ways that we can make an application that will be robust, that will work well on all of the devices that our users might be using but saves us a bunch of development time"? That's definitely something that I would look to.
What you described about refactoring is interesting. So I agree with we're not in a position where we can just gently refactor as we find any little mess. We have to be somewhat ruthless in our prioritization there. But like you said, when you get to that third time that a thing is working way harder, then take the time to do it. But really, like just every facet of the work, you just have to be a little better. If you're an individual developer and you're feeling stuck, raise your hand all the earlier because that being stuck, we don't have spare cycles right now. We need everybody to be working at maximum efficiency. And so if you've hit a wall, then raise your hand and grab somebody else, get a pair, rubber duck, whatever it is that will help you get unstuck. Because we're in a position where we need everybody moving as fast as they can.
But also to say all of those aren't free. Every one of those where you're just like, yeah, do it the best you can. Dial it up to 11 on every front. That's going to drain the team, and so we have to also be mindful of that. This can't be forever. And so maybe it is bringing some new people onto the team or trying to restructure things so that we can have smaller communication channels. So it's only four people working together on this portion of the application, and therefore their communication lines are a bit simpler. That's one way that we can maybe save a little bit. But yeah, none of these are free. And so, we also need to be mindful that we can't just try harder forever. [laughs] That's a way to burn out the team. But what you're describing is like the perfect storm of every facet of this is difficult, and there's no singular answer.
There's the theory of constraints (I think I'm saying that right.) where it's like, what's the part of our process that is introducing the most slowdowns? And so you go, and you tackle that. So if you imagine a website and the app is slow is the report that you're getting, and you're like, okay, what does that mean? And you instrument it, and you log some stuff out. And you're like, all right, turns out we have tons of N+1s. So frankly, everything else doesn't matter. I don't care if we've got a 3 megabyte JavaScript bundle right now; the 45 N+1s on the dashboard that's the thing that we need to tackle. So you start, and you focus on that.
And now you've removed that constraint. And suddenly, the three megabyte JavaScript bundle is the new thing that is the most complicated. So you're like, okay, cool, let's look into tree shaking or whatever it is, but you move from one focus to another. And so that's another thing that could come to play here is like, which part of this is introducing the most pain? Is it feature churn? Is it unrealistic sales expectations? Is it developers getting stuck? And find the first of those and tackle it. But yeah, this is hard.
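To make that N+1 example concrete, here is a minimal sketch of the pattern Chris is describing; the Project and Task models are hypothetical stand-ins, not from any real codebase:

# One query for projects, then one more query per project for its tasks:
projects = Project.all
projects.each do |project|
  puts project.tasks.count
end

# Eager loading collapses that into a fixed number of queries, which is the
# kind of constraint-removing fix the theory of constraints points you at first:
projects = Project.includes(:tasks)
projects.each do |project|
  puts project.tasks.size # .size uses the preloaded records; .count would query again
end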
STEPH: Yeah, it is. That's all really helpful, though. And then, I can share some of the things that we are experimenting with right now and then provide an update on how it's going. And one of the things that we're trying; I think it's similar to the theory of constraints. I'm not familiar with that, but based on the way you described it, I think they're related.
One of the things that we are trying is breaking the group into smaller teams because there are between 7 and 10 of us. And so, trying to jump from one issue to the next you may have to really level up on different portions of the application to be able to make an impact. And there are areas that we really need infrastructure improvements and then essentially paving the way for other people to be able to move more quickly. We do have to prioritize some of that work as well.
So if we break up into smaller teams, it addresses a couple of areas, or at least that's the goal is to address a couple of areas. One is we avoid having silos so that people aren't a bottleneck, or they're the only ones that are really running this mini-project and the only one that has context. Because then when that person realizes the scope has grown, bringing somebody on to help feels painful because then you're in an urgent state, but now you have to spend time leveling someone else up just so that they can help you, and that's tough.
So the goal is that by having smaller teams, we will reduce that from happening because at least everything that feels like a small project...and by feels like a small project, I mean if we have more than one ticket that's associated with the same theme, that's going to start hinting at maybe this is more than just one ticket itself, and it might actually belong to an epic. Or there's a theme here, and maybe we should have two people working on this. And breaking people into groups, then we can focus on some people are focused more on the day-to-day activity. Some people are focused on another important portion of the codebase as we have what may be extracted. I'm going to say this, but we're going to move on, maybe extracted into its own service. [laughs] I know that's a hot one for us, so I'm just going to say it.
CHRIS: I told you I can't be nerd sniped. This is fine. Let's continue on. [laughs]
STEPH: [laughs] And then a small group can also focus on some of those infrastructure improvements that I was alluding to. So smaller teams is something that we are trying.
We are also doing a really great job. I've been really happy and just proud of the team where folks are constantly reaching out to each other to say, "Hey, I'm done with my ticket. Who can I help?" So instead of immediately going to the backlog and grabbing the next thing. Because we recognize that because of this structure where some people are some silos, they have their own little mini backlog, which we are working to remove that to make sure everything is properly prioritized instead of getting assigned to one particular person. But we are reaching out to each other to say, "Hey, what can I do to help? What do you need to get done with your work before I go pick something else up?"
The other two things that come to mind is who's setting the deadlines? I think you touched on this one as well. It's just understanding why is it urgent? Does it need to be urgent? What is the deadline? Is this something that internally we are driving? Is this something that was communicated without talking to the rest of the team? Is this just a really demanding customer? Are they setting unrealistic expectations? But having more communication around what is the sense of urgency? What happens if we miss this deadline? What happens if we don't get to this for a week, a month? What does that look like?
And then also, my favorite are retros because then we can vote on what feels like the highest priority in terms of pain points or run these types of experiments like the smaller teams. So those are the current strategies that we have. And I'm very interested to see how they turn out because it is a tough way. Like you said, it's challenge mode, and it is going to burn people out. And it does make people feel fatigued when they have to jump from one priority to the next. So I'm very interested. It's a very interesting problem to me too. It just feels like something that I imagine a lot of teams may be facing. So I'm really excited if anybody else is facing a similar issue or has gone through a similar challenge mode; I’d love to hear how your team tackled it.
CHRIS: Yeah, I'm super interested to hear the outcome of those experiments. As a slightly pointed question there, is there any semi-formal version of tracking the experiments? And is it just retro to retro that you're using for feedback on that? I've often been on teams where we have retro. We come up with it, and we're like, oh, this is a pain point. All right, let's try this. And then two weeks later, we're like, oh, did anyone actually do that? And then we just forget. And it's one of those things that I've tried to come up with better ways to actually manage, make slightly more explicit the experiments, and then have a timeline, have an almost scientific process of what's the hypothesis? What's the procedure? What are the results? Write up an executive summary. How'd it go?
STEPH: We are currently using retro, but I like that idea of having something that's a bit more concrete. So we have action items. And typically, going through retro, I tend to revisit the action items first as a way to kick off retro. So then that highlights what did we do? What did we not do? What do we not want to do anymore? What needs to roll over to the current iteration? And I think that could be just a way that we chat about this. We try something new, and we see how it's going each week in retro. But I do like the idea of stating upfront this is what we're looking to achieve because I think that's not captured in the retro action item. We have the thing that we're doing, but we haven't captured this is what we hope to achieve by taking this experiment on.
Mid-roll Ad
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added-on bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
STEPH: As for the other thing that you mentioned, I do have an idea for that because a former client that I worked with where we had experiments or things that we wanted to do, we were using Trello. And so we would often take those action items…or it was even more of a theme. It wasn't something that could be one-and-done. It was more of a daily reminder of, hey; we are trying this new thing. And so, we want to remind you each day to embrace this experiment and this practice. And so we would turn it into a Trello ticket, and then we would just leave it at the top of the board. So then, each day, as we were walking the board, it was a nice reminder to be like, hey, this is an ongoing experiment. Don't forget to do this.
CHRIS: I do like the idea of bringing it into a stand-up potentially as like that's just a recurring point that we all have. So we can sort of revisit it, keep it top of mind, and discard it at some point if it's not useful. And if we're saying we're doing a thing, then let's do the thing and see how it goes. So yeah, very interested to hear the outcomes of the experiment and also the meta experiment framework that you're going to build here. Very interested to hear more about that.
And just to say it again, this sounds like your perfect storm is not quite right because it doesn't sound like there's a ton of organizational dysfunction here. It sounds like this is just like, nah, it's hard. The code's not in perfect shape, but no code is. And there's just a lot of work to be done. And there are priorities because frankly, sometimes in the world, there are priorities, and you're sort of at the intersection of that.
And I've been in plenty of teams where it was hard because of humans. In fact, that's often the reason of we're sort of making up problems, or we're poorly communicating or things like that. But it sounds like you're in the like, nope, this is just hard. And so, in a way, it sounds like you're thinking about it like, I don't know, it's kind of the challenge that I signed up for. Like, if we can win this, then there's going to be some good learnings that come out of that, and we're going to be all the better. And so, I wish you all the best of luck on that and would love to hear more about it in the future.
STEPH: Thank you. And yeah, it has been such an interesting project with so many different challenges. And as you've mentioned, that is one area that is going really well where the people are wonderful. Everybody is doing their best and working hard. So that is not one of the competing challenges. And it is one of those; it’s hard. There are a lot of external factors that are influencing the priority of our work. And then also, some external areas that we don't have control over that are forcing some of those deadlines where customers need something and not because they're being fussy, but they are themselves reacting to external deadlines that they don't have control over.
So it is one of those where the people are great, and the challenges are just real, and we're working through them together. But it's also hard. But it's helpful chatting through all the different challenges with you. So I appreciate all of your thoughts on the matter. And I'll report some updates once I have some more information.
On that note, shall we wrap up?
CHRIS: Let's wrap up.
STEPH: The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter. And I'm @christoomey.
STEPH: And I'm @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
On this episode, Chris talks about testing external services and dissects a tweet on refinements for Result. Steph talks about thoughtbot's recent improvement to their feature flag system.
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, what's new with you?
STEPH: Hey, Chris. Well, today is Summit Day at thoughtbot, and it's the day where all the bots gather, and we hang out, and we chat, and we play games. And it's a lot of fun. We're actually taking more of a respite this year just because life has been taxing. And so we decided to give people more of the day off. So we still had some fun events, but most of it is everybody gets a chill day. Do something that brings you joy is the theme of the day.
But we had Lightning Talks, which is my favorite thing that we do on Summit Day because I realize that I just work with the coolest people, and they have such interesting things to talk about. And we had such a variety of topics. So one of them, Alex Chen taught us acronyms in K-pop. And Sam Kapila, our resident foodie, taught us about a variety of spices. And one of my favorite talks was by Akshith Yellapragada, and it's the top 10 best limo entrances by The Bachelor, and it was phenomenal. And I really want to share some stuff that I learned with you.
CHRIS: The Bachelor like the TV show?
STEPH: Yeah, like the TV show. Are you familiar with it? Have you seen it before?
CHRIS: I am familiar with it. I know it exists. I know that there's a spinoff, The Bachelorette. And I believe we have now exhausted my information on the matter.
STEPH: [laughs] That's fair. For anyone that hasn't seen the show, the show revolves around a single person. For the bachelor, it's a single bachelor who dates a number of people over several weeks, and then they narrow down the people. There are elimination rounds, and the whole goal is for them to find their true love. So each week, someone is eliminated, and I think the show ends with a marriage proposal. So it's a wild show. It's something. [chuckles]
And in Akshith's talk, I learned some really fun terminology. The first one is the Crown, and this is actually an important building block because we're going to get to the rest of the terminology that uses this word, so we got to start here. So the first one is the Crown, and this is the person that everyone's competing for. So they're the star of the show. They're the one that everybody is hoping to fall in love with or will fall in love with them so they get a marriage proposal.
So then the other stuff that I've learned is all about the entrance because again, we're talking about the top 10 best entrances. And one of them is the sidecar entrance. So this is where the player, because yes, this is totally a game, has someone assist them in meeting the crown. So it could be like a family member, maybe it's like your grandma.
And then there's TOT, T-O-T, which is short for Trick Or Treat. And this person exits the limo wearing a costume. So it's someone wearing a shark costume. There was someone wearing a sloth costume where they really dedicated to the role, and they climbed a tree and hung from a branch. I don't know for how long but for long enough to really vibe with the role.
And then there's the Kringle, and this person brings a prop or a present to the Crown. And there's the Grandy, and this player arrives in something other than a limo. So the example that Akshith provided is someone arrived in a motorized cupcake.
CHRIS: Was the cupcake edible?
STEPH: I don't think so, fair question. [laughs]
CHRIS: So really just like a go-kart that looked like a cupcake, not really a motorized cupcake, if I'm going to be pedantic about the thing, [chuckles] which I think is my job.
STEPH: Yes, it is a motorized non-edible cupcake, but that seems like something a next player should do. They should really up the game, and they should bring an edible motorized cupcake.
CHRIS: Yeah, because you get the visual novelty, but then you layer on top of it that it's actually something that you can now eat, and it's a double win.
STEPH: Ooh, and then you're a Grandy, and you're a Kringle because you arrived in something other than a limo, and it's a present.
CHRIS: I love how you have so deeply internalized this now that you're like, ooh, okay. I can remix here. I'm going to bring together the pieces. Yeah, all right. Yeah, this all makes sense.
STEPH: Yeah, it was a lot of fun. Those are most of my notes for today. I have some tech stuff too, but this felt like the most important thing to start the show with.
CHRIS: We use the phrase tech talk and nonsense to describe the show often, but I think nonsense and tech talk is the correct orientation.
STEPH: [chuckles]
CHRIS: Correct in terms of importance and chronological order, and whatnot. But yeah.
STEPH: I love that we start with a bit of nonsense. So I do have some tech stuff. But first, before I share any of that, what's going on in your world?
CHRIS: I'm sure there's plenty of nonsense in my world, but at the top of my list is some tech stuff. So someone on Twitter, Adam Lassek, reached out and he suggested related to the conversation and the back and forth that I've been having with myself around some of the data structures within the app that I'm building…So I've talked about the dry-monads result object, and there's this success and failure. And I wanted to introduce this new method called bimap, but I wanted to do it in a reasonable way. So I wrapped, and then I wrapped, and I wrapped things.
As an aside, former colleague and friend of the show, Joel Oliveira, sent a wonderful tweet which was a reference to the SNL video where they make a taco and put it inside of a pizza and put it inside of a bag. And that was his joke about it, which I really liked. That was an excellent reference. But in this case, Adam Lassek reached out and suggested if I'm that squeamish about monkey patching, which I am, have I considered refinements? And so he sent an image of a code sample, which is so kind of him to send that much detail over, but it was interesting because I know of refinements in Ruby. I know of that as an alternative to monkey patching, a more refined way, but a safer way, a more controlled way to alter code, but I've not actually used them.
STEPH: I'm not familiar with refinements. What is that?
CHRIS: Refinements are a way...so similar to monkey patching, where you say like, I'm going to reopen this class or this module and define a new method or redefine a method or do something like that, a refinement is a way to do that in a scoped manner. So I'll be honest, I'm not super familiar with them. I think I came into Ruby at a time where the community was moving away from monkey patching. And the dogmatic swing of the pendulum was like, that's a bad thing to do. And so even the refinements were introduced, as far as I understand it, to be a more controlled way to do it. So it's not just like, hey, cool. This module is redefined now in your app in a magical way that's really hard to figure out and hard for folks to debug refinements. You have to explicitly opt into within a certain lexical scope.
I'll be honest; I know that at the headline level. I don't actually know the ramifications or where and when you can use them and how you can. But I know that that was the idea is refinements are a way to do monkey patching but in a more controlled, more understandable manner, and so the code sample that Adam shared does that. And it's very interesting. As I'm looking at it, I'm like, okay, that's cool because I think it'll be a little bit safer.
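As a rough sketch of the idea (this is illustrative, not the code sample Adam sent or the code from Chris's app), a refinement that adds bimap to dry-monads results might look something like this:

require "dry/monads"
require "dry/monads/result"

# A scoped alternative to monkey patching: bimap maps a Success with the first
# callable and a Failure with the second, and it only exists where the
# refinement is activated with `using`.
module ResultBimap
  refine Dry::Monads::Result::Success do
    def bimap(on_success, _on_failure)
      fmap(&on_success)
    end
  end

  refine Dry::Monads::Result::Failure do
    def bimap(_on_success, on_failure)
      self.class.new(on_failure.call(failure))
    end
  end
end

class PlaceOrder
  include Dry::Monads[:result]
  using ResultBimap # bimap is only visible inside this class body

  # `order.valid?` and `order.receipt` are made-up domain methods for the example.
  def call(order)
    result = order.valid? ? Success(order.receipt) : Failure(:card_declined)

    result.bimap(
      ->(receipt) { receipt },
      ->(error)   { "charge failed: #{error}" }
    )
  end
end

The tradeoff Chris goes on to describe still applies: someone grepping for bimap lands on a module like this rather than in the dry-monads documentation.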
But at the end of the day, my concern wasn't safety in this case because I was introducing a method that would be new, that would be additive to the API of this module that I'm working with, and so that I think of as a relatively safe operation. My hesitation was more around how does someone figure it out if they're working with this? And particularly, the name of the method that I was introducing was bimap so, B-I-M-A-P. And if someone sees that in our codebase and is like, "Bimap, where is this coming from?" Well, this is one of those dry-monad result objects. And they go to the code, and they try and look it up in the docs, and they're just not going to find anything.
And I can imagine losing a lot of time to try and chase that down. There are ways to figure it out. There's the method in Ruby, which is a wonderful trick for chasing things down. Or if you grep the codebase, you'd find it. But I think I'm possibly over-indexed on worrying about that lost time, that moment. But I've lost that time so many times in my life where I'm like, I can't grep for this. I can't Google for this. And so I have so strongly moved in the direction of being like, everything should be grepable, everything should be googleable. Those are the two of the things that I believe about software. I think I believe a bunch of stuff.
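The trick Chris is alluding to is presumably Object#method combined with Method#owner and Method#source_location, which looks roughly like this (the model and method are stand-ins):

user = User.first

user.method(:name).owner            # => the class or module that defines #name
user.method(:name).source_location  # => ["/path/to/the/defining/file.rb", 42], or nil for C-defined methods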
STEPH: I think we have a full episode that talks about what we believe in software.
CHRIS: I believe we do.
STEPH: Cool. Thanks. Yeah, I have not heard of refinements. That sounds really interesting. I really like that bit about everything should be grepable, and everything should be googleable, googling everything. I kind of agree with that one. We live in a world where we're always doing bespoke things so that one feels a little bit harder that we're always going to be able to Google it. But then that encourages people to constantly publish the bespoke work that they're doing so then others can benefit from that work. But the grepable, I absolutely agree with that one. It's so frustrating where I see a method, but I cannot find its definition. And then having the ways to figure out where that method is defined to then find its definition is crucial.
CHRIS: Yeah, it's interesting. I definitely feel that way very strongly. And it's in such stark contrast to Rails. Rails is like, hey, don't worry. There's going to be a lot of methods. You don't need to worry about where they come from, or why they exist, or what they are, or what they do. Well, probably what they do. But all of the magic inflections on database tables, and suddenly you have methods named after every column. That's both very magical and hard to grep for or impossible to grep for, but it also leaks the entire structure of your database into your application in a way that I've always felt a little bit complicated about. And so explicitness, grepability, those are things that I care about.
There's another one, delegates in Rails, that I sometimes pause around using especially when it's like delegates 19 methods to user prefix user. And so you end up with methods that are like username. And that's a delegation to the user object to get the name method off of it, but it creates the method user_name. And you're never going to be able to grep for that. And it saves like a little bit of code, definitely, but it saves this very obvious, very knowable code. So this one I actually shy away from using delegates in most cases, and I'll just write out the methods manually because sometimes I like to hear the clackety of my keyboard. There's a reason I have a clackety keyboard.
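For comparison, here is what that looks like in code; Order and User are stand-in models:

class Order < ApplicationRecord
  belongs_to :user

  # The terse version: generates #user_name and #user_email dynamically,
  # so grepping for "def user_name" turns up nothing.
  # delegate :name, :email, to: :user, prefix: true

  # The explicit version Chris prefers: a bit more typing (and clackety),
  # but the definition is right there to grep for.
  def user_name
    user.name
  end

  def user_email
    user.email
  end
end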
STEPH: You want to get your money's worth. You want to clackety as much as possible. Yeah, I'm also not a fan of delegates. This may be a lie, but I don't know that I've actually ever used it. I've worked with it, but I can't think of a time that I've implemented delegates. Maybe that's a lie, but I'm going to say it anyways because that feels true, at least in the last couple of years.
CHRIS: I feel like that could be true for the last couple of years. I would be surprised if you have never even added to a delegates line. Because that's the thing, you can just keep shoveling stuff into them as well. So I would put money on you having used it at some point and then just forgotten about it. But who knows, maybe not.
STEPH: This is where we play two truths and a lie and that one's my lie. [laughs] Yeah, that's also fair about adding to it because if that's already defined and it's easier to add to it, I don't know. Who knows what past Stephanie has done, probably some wild stuff.
CHRIS: It's unknowable at this point. It's lost to the sands of time. But looping back to the core thing of this refinement and the module, I think I'm leaning in the direction of doing that and unwinding my wrapping and wrapping layer thing. Because obviously, as I talked about...I think it was the previous episode or maybe two episodes ago. There was conceptual complexity to the additional wrapping layer. Even as I was fully in the context of working on that, I was still getting myself confused in either triple wrapping or then unwrapping too much or whatever. And these are the concerns with this type of code. So moving away from that feels better, having just a single layer of context wrapping around a given value.
And then the other thing it's actually just a lot less code, and it's less prone to error, I think. That's my hope. I have to look into exactly how refinements get used, but I noticed in a couple of places that sometimes we were wrapping with this local value object that gave us the bimap method, and sometimes we were forgetting to. And so, I could see that being a very subtle, easy way to introduce failures into the app that would be hard to catch just by looking at it.
So I think having a more global refinement...although I think that's sort of a contradiction, a global refinement because I think refinements are meant to be local. But anyway, I'm going to look into it because it's a much more concise code sample than what I have. Yeah, I'm going to poke at that a little bit. But it was an interesting exploration of some different things. And then it forced me to consider why am I so resistant to monkey patching at this point, especially in this particular case where I think it's okay-ish?
STEPH: That's a good question. Do you have any insights? I am also resistant to monkey patching. I feel that pain and also that timidness of diving into that space. But I'm curious, have you figured out any other reasons that you really prefer to avoid it?
CHRIS: I think this one falls into that sort of...what's the word? Like tribal knowledge of we've been burned by it in the past and therefore we build almost a...religious is too strong of a word but that sort of cultural belief. This is a thing that we do not do because of the bad things that we've experienced in the past. And there are a lot of things that fall into that experiential negative space.
So with monkey patching, things that I know we can run into is if I introduced this bimap method, but I introduce it subtly differently than the library will eventually, then they could eventually introduce it themselves. And suddenly, I have this fork of my code expects it to work this way, but you've now implemented it that way. I no longer can upgrade. This is a critical piece of infrastructure in my app. I've just painted myself into a corner by doing this. Whereas if I do this wrapping layer, that's my code. I own that. It's not going to be a problem in that same way.
There's also the subtlety, the grepability that sort of thing is a concern in my mind. Like, is this our code? Is this their code? Is this an engine? Being able to find code within a codebase, I think, is a critical thing. And so that's a part of the hesitation. I also know longer ago prototypes...I want to say Prototype JS was the name of the project, but it was one that was just like, yeah, JavaScript doesn't have enough stuff in the standard library. So we're just going to override everything and add all of these wonderful methods sort of in the way that Active Support does, which is an interesting comparison.
But the JavaScript community definitely moved away from Prototype. And now JavaScript is a language or the standard runtime that's available in most JavaScript engines. It has a lot of the methods, but there are conflicts, and stuff gets weird, and it's all complicated. But again, as I thought of it, Active Support is a complete contradiction to everything I'm saying. Active Support just adds whatever to anything, 2.days.ago. Why does the number 2 have a days method? Because it's great, that's why. But I'm just a walking contradiction, I guess.
STEPH: Everything you said really resonates with me. And I'm just trying to reason with myself like yes, Active Support uses a lot of this, a lot of metaprogramming, and adds everything it wants to. So why does that feel okay? And I wonder if it comes down to one is more almost like an agreed standard. It's built by a team, and it's maintained by a team, and then it's used by a large number of people, and then you get that feedback. Or maybe it's not even just a team, but it's a larger community versus if it's internal to your software team, maybe that doesn't feel like a big enough group or if it just needs...Rails is also documented. So maybe that's part of it, too, is if you are going to dive into that space, it's easy to discover, and it's well-documented as if you are building an open-source project that other people are going to use. Like, you designed for the intent of people to use this pattern that you've introduced, then perhaps that's when it starts to feel okay.
But the experiences I have had is where people basically will add some dynamic programming or monkey patch an existing feature. And then that's very hard to find and has surprising results, or it gets outdated. So I guess it comes down to who are you designing for? Are you designing for more of an open-source community, or you're at least designing for the people behind you that are going to be using this? Or is this a one-off adventure that you have chosen for yourself and future developers to discover? [chuckles]
CHRIS: Yeah, I think that's a good summary, although I'm open to the fact that I exist in a state of contradiction. I'm also fine with that, to be clear. [chuckles] But I think what you said is true, and I think there is subtlety and nuance and reasons that it's okay in one context and less okay in others. And that idea of just like, I don't know, this is one of those things that I got in my head that I've done the thinking a long time ago to decide this is a thing I don't do.
So now, in order to override that, I would have to do so much thinking. I would have to be like, all right, well, my brain tells me, no, but I'm going to go reread everything about monkey patching right now to convince myself that it's okay or to fully get the context and the subtlety and the nuance. And so sometimes we have to rely on that heuristic knowledge of monkey patching, nope, don't do that. That's not a thing, but other stuff is fine. And well, Active Support is fine because it's Rails. But it is interesting to observe contradictions and be like, huh, look at me go. All right. Well, moving on.
STEPH: It's our lizard brain that's saying, "Hey, there's danger here." [laughs]
CHRIS: Exactly.
STEPH: I rather like living in a world of contradictions, or at least I find it that I'm drawn to them. And maybe that's also one of the things that I really like about consulting is because then I join all these different teams, and I hear all these different opinions. So as I'm forming these opinions around something like tests are great, I really like tests, and then someone's like, "I really hate tests." I'm like, "Cool. Let's talk. I want to understand why you don't like this thing that I think is wonderful because then I'm really interested." So I find that I'm often really drawn to contradictions as I like hearing opinions that are very different than mine and finding out why people have a different opinion than mine.
CHRIS: Yeah, the world is full of contradictions. So it's, I think, at least a useful way to exist in the world, to be open to them and to enjoy exploring them. But yeah, I'll update in future weeks if I do end up going the refinements route. I'll let you know if anything interesting falls out of that.
And now we're going to take a quick break to tell you about today's sponsor, Orbit. Orbit is mission control for community builders. Orbit offers data analytics, reporting, and insights across all the places your community exists in a single location. Orbit's origins are in the open-source and developer relations communities. And that continues today with an active open source culture in an accessible and documented API.
With thousands of communities currently relying on Orbit, they are rapidly growing their engineering team. The company is entirely remote-first with team members around the world. You can work from home, from an Orbit outpost in San Francisco or Paris, or find yourself a coworking spot in your city.
The tech stack of the main orbit app is Ruby on Rails with JavaScript on the front end. If you're looking for your next role with an empathetic product-driven team that prides itself on work-life balance, professional development, and giving back to the larger community, then consider checking out the Orbit careers page for more information. Bonus points if working in a Ruby codebase with a Ruby-oriented team gives you a lot of joy. Find out more at orbit.love/weloveruby.
STEPH: So we made a recent improvement to our feature flag system, which I'm really excited about, that we have found a way to improve that workflow because it felt really great that we're...well, okay, I should say that with a caveat. It felt really great that we're using feature flags to ensure that the main branch is always in a deployable state. But it did not feel great around how tedious it was becoming to add all of the feature flags specifically because each time we're adding a feature flag, we're having to add a migration. So we're having to run a migration, add the feature flag column, and then we can interact with that feature flag. And that part's okay. It was more removing that feature flag once we're done with it, that that part was starting to feel tedious because then that's becoming a two-deploy process.
So one change is to remove the code that's relying on that feature flag. And then the second deploy was to actually drop that column because we wanted it to be safe to make sure that the code wasn't trying to reference a database column that didn't exist anymore, which is what happened at one point at first when we weren't doing the two-deploy process.
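For anyone who hasn't done that dance, the two-deploy removal usually looks something like this sketch; the model and column names are made up:

# Deploy 1: delete the code that checks the flag, and tell Active Record to
# ignore the column so the app stops referencing it before it's dropped.
class Customer < ApplicationRecord
  self.ignored_columns += ["new_dashboard_enabled"]
end

# Deploy 2: actually drop the column once nothing reads it.
class RemoveNewDashboardEnabledFromCustomers < ActiveRecord::Migration[7.0]
  def change
    remove_column :customers, :new_dashboard_enabled, :boolean
  end
end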
So the improvement that Chris White came up with is where we're now using a Postgres JSONB column. And it's here that we actually have a feature flag YAML file. And we can have the name of the feature flag. We have a description of the purpose of the feature flag. And we have an enabled property on there, so then we can turn it on and off. The benefit of this is now we don't have to do that two-deploy process. And we also don't have to run a migration for when we're adding a new feature flag. So we can add it to the feature flag file, we can load it in, and then we can set that property to say, "Yes, this is enabled," or "No, it's not." And that has just simplified our feature flag process.
One tricky bit that I believe the team ran into is around enabling this with Active Admin because Active Admin was just relying on those database columns to then turn something on or off. But then we've added some methods that work well with Active Admin that then say, "Read from here when you're checking to see if something is enabled," or "Look at this list to see which feature flags can be turned on and off." So it's been a really nice improvement, and everybody on the team seems to be in favor of the ways that we've improved this. So it's been really nice. So I wanted to come back and bring an update on how we've simplified our feature flag system.
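The shape Steph is describing might look roughly like the following; the file, model, and method names here are guesses rather than their actual implementation:

# config/feature_flags.yml declares every flag with a description and a default:
#
#   new_dashboard:
#     description: "Gate the rebuilt dashboard while it's in progress"
#     enabled: false

class FeatureFlags
  DEFINITIONS = YAML.load_file(Rails.root.join("config/feature_flags.yml")).freeze

  # `record` is assumed to be something with a `settings` JSONB column,
  # such as an account or customer.
  def initialize(record)
    @record = record
  end

  def enabled?(name)
    flag = name.to_s
    raise ArgumentError, "unknown feature flag: #{flag}" unless DEFINITIONS.key?(flag)

    # Per-record overrides live in the JSONB column; fall back to the
    # default declared in the YAML file.
    @record.settings.fetch(flag, DEFINITIONS.dig(flag, "enabled"))
  end

  def enable!(name)
    @record.update!(settings: @record.settings.merge(name.to_s => true))
  end
end

# FeatureFlags.new(current_account).enabled?(:new_dashboard)

Because the flag definitions live in a file and the per-customer overrides live in one JSONB column, adding or retiring a flag becomes a code change rather than a migration, which is the win Steph describes.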
CHRIS: That definitely sounds like a nice improvement, the ability to just more regularly iterate around that or taking away the pain, any pain associated with using feature flags. Because they are such a nice thing to have, but there's that overhead. Then you start to have that voice in your head that's like, do I really need a feature flag for this? Could I just sneak this one in? And we always regret that.
I had a similar thing this week where I wrote some code. I didn't quite write as many tests as I should have. And it was wildly broken, just like all of the connection points through everything were broken. But then it pushed me in an interesting direction where I was like, well, what I'm going to do is write an integrated test. It was basically an event coming in from a webhook that then enqueued a job, which did a thing, which then spit out an email. But it was broken at like three layers, and I was very embarrassed, if we're being honest. But, I don't know, I was just having a low energy afternoon, and I did not write the test, which I know I'm supposed to do.
So similarly, any pain that we can take out of these things that we're supposed to do, any way that we can pave the happy path, I'm all about those. I'm intrigued because I think we've talked about this before, but it sounds like you guys have a very home-grown feature flag system. Is that true?
STEPH: We do.
CHRIS: Is there something about it that makes it unique to your situation, or was it just like that's what happened? Someone early on was like, "We need feature flags. I can just do the simplest thing that works," and then that's where you're at now or?
STEPH: You're asking a very good question. And I'm trying to recall what led us to the state that we're in because I feel like we had this same discussion several episodes back when we were introducing the home-grown feature flag system. And I was like, there are reasons, but I didn't really dive into those reasons because it felt very custom to the application. But now I've forgotten what those reasons were. So I think you ask a great question where it'd be worth revisiting to confirm that yes, there's a reason for this home-grown version versus using something like Flipper.
CHRIS: I'm glad I'm at least consistent over time in the questions that I ask and the heuristics that I have. This does feel like one of those things. It's not quite like crypto where I'd be like, we can never write our own crypto. But a feature flag system, I would be really intrigued if there are things that they are just workflows or functionality that you really need that are not supported by any of the existing solutions that are out there. I think audit trails is an interesting one. I think Flipper has a hosted product at this point that does that, but the local version wouldn't necessarily. So maybe that's a thing that you want to get. Again, I'd just be really interested. It sounds like the current state of the world that you have is enabled or disabled; just broadly, that's it. Those are the two states for any given flag. Is that true?
STEPH: It is. There's nothing complex with the flags in that nature. And then we use naming to indicate if something is more for beta, so if it's a change that we're making to the codebase, but it's a feature flag that we plan on removing, versus maybe it's a feature flag for enterprise customers.
CHRIS: Oh, interesting. I wouldn't think of using a feature flag in that context where it's going to be a persistent, long-lived thing; that's conditional logic around some state or some property of the viewer. I think of feature flags as a way to gate code conditionally based on a point in time. And the reason I asked about the enabled-disabled, basically the Boolean state for your flags, is when I've worked with feature flags in the past, I've liked having the ability to say, for this user or these users, or this group of users, which we've named...this is our beta list, and it's the ten people that just really love the product and are happy to bump into some rough edges. And so we'll put things on for them first, or even percentages, so roll it out to 10% and then 50% and so on. And I think the larger an application and user base gets, the more that sort of thing starts to feel right.
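(For comparison, an off-the-shelf library like Flipper exposes exactly that actor, group, and percentage gating; roughly, with the flag and group names invented, and current_user standing in for whatever actor object you have:)

    # In an initializer: define a named group once.
    Flipper.register(:beta_users) do |actor|
      actor.respond_to?(:beta?) && actor.beta?
    end

    Flipper.enable(:new_dashboard)                           # on for everyone
    Flipper.enable_actor(:new_dashboard, current_user)       # just this user
    Flipper.enable_group(:new_dashboard, :beta_users)        # the beta list
    Flipper.enable_percentage_of_actors(:new_dashboard, 10)  # 10% rollout

    Flipper.enabled?(:new_dashboard, current_user) # => true or false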
STEPH: Yeah, we certainly have some complexity around where each customer can really specify which features that they want. And then the features also differ a bit for each customer. So we are in a world where we're pretty customized or configurable for different customers. And whether that's something that we could simplify, that would certainly be a good question or something to pursue.
But part of this also feels like our decision may have been based around what the system was already doing, and we're looking for ways to make slow improvements versus trying to redesign the whole thing. Because initially, the way we were customizing all of these different features for customers was in a YAML file. And that part was painful because then, anytime we wanted to make a change, it required a deploy. So the introduction of feature flags is really to get away from having to deploy to then make a small change like that.
But now that we're in the space that we can easily configure that change and do that on the fly and not have to issue a deploy, I think we're now in a good space to reassess. And the team may have some really good answers. Perhaps I'm just not recalling as to why we've chosen the more home-grown feature flags. But yeah, I'll visit that topic and report back. Because I've been coasting along on our new system and enjoying it, but you're asking some really good questions.
CHRIS: I mean, as an aside, if you're coasting along and really enjoying it, then maybe you don't need to ask any questions. It's still interesting. I would be intrigued to know. But if it's not causing you any pain, then you probably shouldn't change it. Because frankly, changing out the feature flag system is going to be non-trivial, I'm pretty sure. You could feature flag the feature flag system, and then you can transition from one to the other. You need a third feature flag system for that. But anyway, I digress. [chuckles]
STEPH: You referenced crypto earlier. So I think I like the feature flag, the feature flag system. We should have some crypto flags in there somewhere. I think that's a thing too. But I think the main goal if I'm looking into changing it would be, circling back to what we were talking about earlier, is discoverability, so having a home-grown feature flag system. How easy is it for…if nobody was around on the team and there was someone new working with it, how easy would it be for them to turn something on or off? And if that's easy, then that's great. Then I think we've got a great home-grown system. If that's challenging, then I definitely think it's worth reassessing.
And now a quick break to hear from today's sponsor, Scout APM.
Scout APM is leading-edge application performance monitoring that's designed to help Rails developers quickly find and fix performance issues without having to deal with the headache or overhead of enterprise platform feature bloat. With a developer-centric UI and tracing logic that ties bottlenecks to source code, you can quickly pinpoint and resolve those performance abnormalities like N+1 queries, slow database queries, memory bloat, and much more.
Scout's real-time alerting and weekly digest emails let you rest easy knowing Scout's on watch and resolving performance issues before your customers ever see them. Scout has also launched its new error monitoring feature add-on for Python applications. Now you can connect your error reporting and application monitoring data on one platform.
See for yourself why developers call Scout their best friend and try our error monitoring and APM free for 14 days; no credit card needed. And as an added bonus for Bike Shed listeners, Scout will donate $5 to the open-source project of your choice when you deploy. Learn more at scoutapm.com/bikeshed. That's scoutapm.com/bikeshed.
CHRIS: One of the things that's been interesting working lately in the app that I'm building is thinking about testing. We have a number of interactions with third-party services. Frankly, a lot of the app is that at this point. We have a handful of different external data providers systems that we're interacting with, webhooks and flows and things like that. And so we had to make that decision that you always have to make in these sorts of situations which is, how are we going to test this?
And there's a wonderful blog post on the thoughtbot blog called Faking External Services in Tests with Adapters. It's by the one and only German Velasco. And it is a beautiful summary of the different approaches that you can take, but it really dials into one, which is the adapter pattern. There's also a weekly iteration episode on Upcase with Joël Quenneville, which is a little bit more of an exploration of the different options. There are sort of a handful of different options that we can consider, whereas the blog post by German talks specifically about the adapters approach.
But to talk about them briefly, there's one where you can go all the way outside your app, spin up a fake service. Typically, we would do this with Capybara Discoball, which is a wonderfully named project. But it allows you to spin up a little Sinatra app type thing such that your web application is still making quote, unquote "real HTTP requests." This external service is going to catch that and respond with whatever canned data or structured responses that you want.
But you still have the ability in that to, say, tell it to create data beforehand or be in a certain state or respond with certain data or have any stateful persistence. So if you create a record in that external system, and then later you query for it, that system can do that. But it has the complexities of now your test suite is running different systems. And do you have thread safety and all that kind of stuff? So that's a particularly complex end of the spectrum. At the lowest end would be stubbing and mocking. You just take whatever external clients you have, and you're mocking the API calls in them. That's the lowest end. And that's the one, especially for feature specs, that I try to avoid. Then there's a middle ground of WebMock or VCR, those sorts of things, where you're saying whenever you see an HTTP request that looks like this, respond in this way. You record the cassettes, all that kind of stuff.
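(For that middle of the spectrum, a WebMock stub looks roughly like this; the URL and payload are made up:)

    require "webmock/rspec"

    stub_request(:get, "https://api.example.com/customers/42")
      .to_return(
        status: 200,
        headers: { "Content-Type" => "application/json" },
        body: { id: 42, name: "Example Customer" }.to_json
      )

    # Any GET to that URL from the code under test now receives the canned
    # response above instead of reaching the real service.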
And then there's the one that we've settled on, which is the adapters. So the client that we've introduced in our local codebase to interact with any of these third-party systems internally has a class attribute, a cattr_accessor in the Rails parlance, I believe. And that allows us to switch out the backend. And so we have a real HTTP backend, and that's the one that actually runs in production and a test in-memory backend. And that in-memory backend can implement whatever logic. We're ending up with one of them almost recreating this external service, sort of re-coding some of their inconsistencies or oddities but also features and whatnot.
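(A sketch of that adapter shape, with invented names: the client delegates to a swappable backend held in a cattr_accessor, the HTTP backend runs in production, and an in-memory fake runs in the test suite.)

    class PaymentsClient
      cattr_accessor :backend # provided by ActiveSupport

      def self.create_customer(attrs)
        backend.create_customer(attrs)
      end

      def self.find_customer(id)
        backend.find_customer(id)
      end

      class HttpBackend
        def create_customer(attrs)
          # real HTTP request to the third-party API goes here
        end

        def find_customer(id)
          # real HTTP request goes here
        end
      end

      class InMemoryBackend
        def initialize
          @customers = {}
        end

        def create_customer(attrs)
          id = @customers.size + 1
          @customers[id] = attrs.merge(id: id)
        end

        def find_customer(id)
          @customers.fetch(id)
        end
      end
    end

    # config/initializers/payments.rb:  PaymentsClient.backend = PaymentsClient::HttpBackend.new
    # spec/rails_helper.rb:             PaymentsClient.backend = PaymentsClient::InMemoryBackend.new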
But it feels like it has struck just the right balance, and it allows our feature specs to be very rich, very real. We start up the world, and we say, "Hey, external service be in this state." And then I'm going to go visit the page. I'm going to see the data. But we are almost making real HTTP requests. It's very close. It's always an interesting choice to make here. I'm very happy with the one that we've made, but it's still not perfect. There are always going to be trade-offs between the different options here. But it's always interesting revisiting this and being like, which one am I going to choose today?
STEPH: I feel like in my natural progression when testing external services, I always start with WebMock, and then I progress to using adapters. And then from there, I go to actually replacing the HTTP service that is receiving and then returning a response, like you mentioned with Capybara Discoball earlier. So I can certainly see what you like about the adapter pattern. You mentioned that you're coding some of the inconsistencies. That feels very real. I'm curious if you have an example of how you've had to manage that recently.
CHRIS: A specific example would be the external API responds with certain error codes or error structures. So it's an error. It has a status that is a number and then a reason, or sometimes, instead of a key that is reason, it's message. So it's like, oh, okay, I see that in this endpoint, you respond with reason, and then in this endpoint you respond with message. So now, do I encode that into my fake? I guess I do. So my adapter now implements things like that. There are cases where it's inconsistent, where I'm like, well, this is the way they behave. So I would like our test suite to exist in the context of that because then our app is getting exercised in a real way.
But in some cases, it's little bits of logic validation that an external system might do, if that's an important part of the flow. The app that we're building has a lot of forms and a lot of data validation and things like that. And so, we want to make sure that we have robust handling around that, robust messaging to the user, so that it's very clear what they need to do and how they need to respond to things. And so putting in little bits of that like, oh, that's how you format a phone number, okay, cool. Our fake will also format phone numbers in that way, things like that.
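(To make those quirks concrete, an in-memory fake might mirror the real service's two error shapes and a bit of its validation; the payloads and validation rule here are purely illustrative:)

    class FakePaymentsBackend
      def initialize
        @customers = {}
      end

      def create_customer(attrs)
        # This endpoint reports errors under a "reason" key...
        unless attrs[:phone].to_s.match?(/\A\+?\d{10,15}\z/)
          return { "error" => { "status" => 422, "reason" => "phone number is invalid" } }
        end

        id = @customers.size + 1
        @customers[id] = attrs.merge(id: id)
      end

      def update_customer(id, attrs)
        # ...while this one uses "message", matching the real API's inconsistency.
        unless @customers.key?(id)
          return { "error" => { "status" => 404, "message" => "customer not found" } }
        end

        @customers[id] = @customers[id].merge(attrs)
      end
    end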
STEPH: Every time the topic of testing external services comes up, I really, really want VCR to be the answer. I really like the idea of being able to validate that...because you'd mentioned that we're programming the expected return from this other service. And it's very easy to get out of sync with those actual responses. And then we don't really have a great way to stay up to date other than we wait for production or staging environment to fail. And then we realize something has changed, and we have to go and update either our mock or our adapter. And maybe that doesn't happen often if you're working with an external service that is very good about broadcasting when they have a breaking change.
But if you're working with a less stable endpoint, then I always want VCR to really work. But it's just one of those areas where I'm like, yes, that's the thing that I want. I want this idea where I can rerun my tests in a way that they actually hit that service and record the response. But then I have felt pain [chuckles] from working with VCR and how it's configured, and how people have used it. It's one of those where I don't blame the library. I like the library. But the way people have implemented it in tests, I have felt a lot of pain from that.
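(For what it's worth, VCR does have knobs aimed at that wish, like re-recording cassettes once they reach a certain age; a rough sketch, with a hypothetical client call:)

    require "vcr"

    VCR.configure do |config|
      config.cassette_library_dir = "spec/cassettes"
      config.hook_into :webmock
    end

    # Re-record this cassette against the real service once it's more than a
    # week old, so drift in the external API shows up in CI rather than in
    # production.
    VCR.use_cassette("external_service/customer", re_record_interval: 7 * 24 * 60 * 60) do
      ExternalServiceClient.fetch_customer(42) # hypothetical client
    end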
CHRIS: Yeah, I definitely agree with that. It feels like it's nice if you can push the mocking all the way out to that layer. Because right now, our codebase has code in it that is subtly changing the behavior for a test, and I don't like that. It's only the swapping out of the adapter, so it's a very minimal thing. And we try and push all logic away from that such that the test adapter is as similar as possible to the real production situation. But it's enough of a difference that I agree; I would like it if VCR would just say, "I catch the HTTP requests, and I respond with the same thing, and sometimes we can pass through."
I do think one of the fundamental limitations, or at least very hard to get right things, would be sequential requests. So I post to this endpoint in the external service, which creates some data. And then later, when I make a GET request to their endpoint, I should get back that data that I just created. That's, I guess, doable because you can have sequential requests, have cassettes that are first this request, then that request, then that request. And it knows that, like scope them to a given spec. But that feels extra difficult. And it does, again to your comment, the maintainers of that project do a wonderful job, but it's a really hard target to hit.
STEPH: Well, and one of the other hard requirements with using a tool like VCR is then that external service really needs that sandbox staging environment that you can use. So that way you can create this data, you can rerun your test. So they're actually going to hit this real environment. They're going to create this data and that not have any harmful effects. And then you can record fetching that data. So it requires a lot of pieces to fall into place for it to work well. But then I was just thinking as you're talking about adapters, I'm like, yeah, I love the adapter pattern. I've really enjoyed that one for testing as well. But then I immediately start to think, oh, well, what happens when it gets out of sync, and how do we know that it got out of sync? And I don't have a great answer to that.
CHRIS: Production blows up, obviously.
STEPH: Production blows up, and then we go update our adapter. That's very calm. [laughs]
CHRIS: It would be great if CI could more proactively catch that or...yeah, I agree. I would love if VCR would work because that facet of it is so attractive. But [chuckles] I've never gotten to walk exclusively the happy path with VCR. So here we are. This is a classic case of here's four options as to how we can think about this hard and important thing that we do in our codebases, and they all have trade-offs much like everything else in software.
STEPH: I'm going to add this to my developer bucket list to live in a world where I can easily validate if an external API has changed or not and then also have tests that know when something has broken before production does.
CHRIS: Ooph, dare to dream. I like it.
STEPH: I'm a dreamer.
CHRIS: I want to live in that world. Well, with that wonderful dream to take us out, should we wrap up?
STEPH: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter. And I'm @christoomey.
STEPH: And I'm @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeee.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
In this episode, Steph and Chris talk about things they've changed their minds about over the course of their careers as software developers. Steph talks about how, as it turns out, chair armrests are good, feature flags and comments are also good, how she's changed her mind about how teams structure the work that each person is doing at once, and how she believes strongly in representation in the field.
Chris is not a fan of upgrading his operating system. When he first started out, he gravitated towards learning dynamic languages, and since then he much prefers functional languages, static typing, or more broadly, static analysis. He also no longer believes in the 10x engineer, and he very much believes that URLs matter on the internet. So basically, don't call them single-page applications; call them client-side applications instead!
Transcript:
CHRIS: I still have dreams that I missed an entire semester of math class, and now it's time for the final. I don't know that I'm ever going to grow out of that.
STEPH: That's wild.
CHRIS: You don't experience that? It's a mixture of I'm in elementary school, but it's a college final. Like, the physical school that I'm in is my elementary school, but it's a calculus college course that I missed. And now it's time for the final, and I won't graduate college as a result. But it's also high school at the same time. Just every part of education sort of melded together into this nightmare scenario. Do you not experience that? I thought this was normal.
STEPH: [chuckles] Not in a very long time, not since I was in college. But I'm imagining this very cute, young Chris showing up with a backpack to the calculus final like, "Oh no." [laughs]
CHRIS: Yeah, pretty much, yeah. I really thought I would grow out of it at some point. But it shows...I think it manifests when I have anxiety about something else in the world, and then I have a math terror dream.
STEPH: That's your stress sign. That's your terror dream.
CHRIS: Apparently.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, how's your week going?
CHRIS: Oh, it's going fine. Yeah, I'll go with fine. I had to upgrade my operating system. Enough things had stopped working or seemed to be pestering me about it regularly, which normally I'm going to ignore that for as long as I can. That's sort of where I'm at in the world these days. Like, I don't want to upgrade because I don't know what's going to break and whatnot, but then things had broken already.
Text messages were no longer showing up on my computer. And it turns out that the primary way that I interact with text messages is by replying to them through my computer. I don't want to type on my phone, that's not a thing. I'm already grumpy enough about text messages, to begin with, that I will regularly respond switching to email, and then I'll go off from there. But yeah, they stopped working, it stopped connecting.
And then I got this really weird message from Apple when I tried to sign in. And I was like, I feel like I should at least try to upgrade to the new operating system, which I think has been out for a long time, and I've just been ignoring it. But then I had the added problem of I didn't have enough space on my computer to install it, which I tried once before.
So I downloaded the installer, but the installer downloader doesn't check whether or not you have enough space to do the install. So it's just like, hey, so you know how you didn't have enough space? Well, we took up the remainder of it, and now you can't do anything about it. And the installer is hidden somewhere in the computer. So at one point, it just went away, and then suddenly had a lot of space on my computer. But finally, I decided to bite the bullet.
I found a bunch of caches on my computer. So there was a cache for my backup utility, which is called Arq, A-R-Q, which was a lot of space. It was like 20 gigs or something like that. So it was like, sorry, you have no more cache. I'm pretty sure my computer's going to light on fire the next time it tries to do a backup because it has no cache to rely on, and it's got to try a lot harder, pull a lot more data down. I don't know what it does, but whatever. It's going to do that.
And then, I found the more general application caches on the computer. Spotify had like six gigs of cache. Well, what are you doing? Aren't you streaming from the internet? Stop it. That's not okay. That is not acceptable. Yarn had three gigs. I was like, what is everybody doing? And I busted all of these. I threw away everything, and my computer seems to be doing fine after the fact. So, were the caches even doing anything? I ask.
Anyway, so I upgraded, and then some stuff didn't work. And so then I had to find the versions to make stuff work. The particular one that stood out was Karabiner-Elements, which I used to make my mechanical keyboard do the right things for the function keys. That stopped working. And I tried to upgrade it to the newer version because I figured okay; they probably hopefully released a new version, but it failed in the upgrade process.
And it turns out the secret was I had to upgrade to an intermediate version. I was on 12.3, and I needed to go to 13.4. But in between, I had to go to 12.10. And if I went to 12.10, then the upgrade to 13...everything about it was everything that I hate about upgrading software. It's like, I just know it's working right now, and I feel like if I even just look at it wrong, this whole tower of software is going to fall over.
The worst thing, the thing that I have not been able to fix, is now I use iTerm as my terminal, my terminal emulator as it were. And I typically run with transparency mode on which some people look at and say, "Wow, that's a choice." And I say, "I kind of like it. I don't know; it makes me feel like a hacker or something." I don't know, whatever. [chuckles] Let me live my life. But for some reason, switching to Big Sur, the version of OS X that I'm on now, iTerm doesn't have transparency anymore. And I just haven't been bothered to fix it yet.
But, man, I got rambly. I clearly have some feelings about upgrading software.
STEPH: You have so many feelings. The fact that you kept going...People can't see me, but I'm just dying because of that whole story. [laughs]
CHRIS: I kind of felt like I had to get through it. I had to exorcise the demons, tell my tale, and then be done with it, which I think I'm at now.
STEPH: When I start laughing that hard, [laughs] I try to hide from the camera view because I want you to keep going for people to listen.
CHRIS: But what's fun is you bob and weave. You'll hide for a minute, and then you'll come back and be like, okay, I'm composed, never mind. And then you'll just fade off to the side again. So yeah, but I powered through. [laughs]
STEPH: Oh, all right, there is so much there. [laughs] Upgrading is the worst. I agree with that. That was actually something I ran into earlier this week. Well, it was a mix of where upgrading presented a problem and then upgrading something else resolved that problem. And so that was an adventure where I shared a tweet. I can link to it in the show notes as well.
But Ruby was just taking up 100%, a full core, just all the time, and I couldn't figure out why. I wasn't doing anything with Ruby. We weren't talking at the moment, but it was just sitting at 100% CPU or higher. And so then I did some searching. And I did find the resolution, which was to upgrade the Listen gem because there was something in the Listen gem that didn't fully support Big Sur. Is that the name of the thing that I am on?
CHRIS: That's the new one, yeah. I know because I've just upgraded to it. I have thoughts on the matter. [chuckles]
STEPH: Cool. [chuckles] Yeah, when I upgraded to Big Sur. But then someone had kindly marched in to fix it, then upgrading resolved that problem. And Ruby is back to a peaceful level as to the amount of process, the amount of CPU that it should be taking up. Transparency mode, I'm thumbs up on it. I like how you called that out, how that's a choice. And I'm with you on that choice, although I didn't realize that's broken. I guess I just hadn't...I guess I don't care deeply enough that I've tried to restore my transparency, but you're telling me to hold on.
CHRIS: We're going to get realer now in this moment. So I have a very old version of iTerm because it has a different way of going fullscreen than the default operating system level fullscreen. I really hate that it animates to fullscreen, and it doesn't quite fill the full screen. Like, it still had a border around it or something. So I have a very old version of iTerm that I've been running with forever, and I refuse to upgrade in any way as a result of I want to cling to this old version of things working. But as a result, I think I finally hit the end of the road on that. This is like years running now too.
I remember I kept it in a Dropbox folder so that each time I upgrade or get a new computer, I'm like, okay, good. I still have my old special version [chuckles] of iTerm. But I think that time is over and I got to find...I feel like there are new terminal emulators out there. It's like Alacritty and other stuff that people talk about. So maybe it's time for me to try and find something new as long as I can get that transparency because I want to feel like an uber-leet hacker.
STEPH: You have such a brand of new-new that I'm now discovering that you are also a software hoarder, so you have both in your personality. [chuckles]
CHRIS: There was a period early on in my software career that was like, oh, I got to find all this stuff. I got to figure things out and configure it. And then I was like, wow, that's taking up a lot of my time, I should stop it. And I think since then, I haven't upgraded anything. If you go look at my dotfiles, I don't know the last time I pushed to them, but it's been a while. I'm still doing things, of course, but not as much. I know the cost of it, and I know the cost of maintenance.
And really, this is an allegory for software overall. This isn't just about our local development environments, but entropy exists in software. Software does not exist at rest, and it will decay over time. And so the idea of we've worked with so many clients where they're like, yeah, we're on Ruby 1.8, and it's Rails 0.9. So okay, all right, well, we're going to have to deal with that, it turns out. We can't just keep ignoring that. So really, it's the same story played out but in my local hoarder cavern.
STEPH: There was a part of the saga, the story that you shared with the installer and that you don't have enough space, and it took up the rest of the space, and you can't do anything. I'm very nervous; what happened to your stuff, your space? How did that resolve? [chuckles]
CHRIS: I finally bit the bullet. And so I have a bunch of...I've tried a bunch of the different pieces of software that will visually analyze your disk space. So they crawl the whole directory starting from the very root of your computer, and it will be like, all right, applications has this much, and the library directory in your home directory has this much. Here are all of the different places that stuff might be hiding on your computer. And then you can visualize and be like, okay, that's where the most of it is.
Node modules, as an aside, we did not choose an efficient way to approach how to put code on my computer because Node modules take up a lot of space on my computer, but they're so spread out. Multiple times I've seen people share a version of rm -rf, and then it's some subshell that does find every Node modules directory underneath a code folder. So you can find every single Node module and just blow them away. That will regain you some space. But that was not the solution this time.
I've tried lots of piecemeal solutions over time. But eventually, the thing that got me there was just busting all of those caches. So I cleared the backup utility, Arq's cache. I cleared a bunch of them, Spotify Yarn, et cetera. And that cleared enough space for the installer to actually run. And then, once that was done, the installer program itself was no longer around, so I reclaimed that space. But it was this weird chicken and egg thing where I had to have enough space to complete the installation such that the installer could go away.
And now...actually, let me see what my hard drive looks like now. So somehow, according to the Macintosh hard drive info, I have 50 gigabytes of available space, which is really frustrating because there were a number of weeks where we went into a Bike Shed recording, and I was like, I have one gigabyte. I'm not safe right now because this audio is going to be more than that. And so I don't know how now I'm sitting at 50. I guess all those caches that I cleared and the installer being gone probably puts me in a good spot.
But anyway, I'm living in an upgraded, wonderful world. As an aside, Big Sur is ridiculously rounded and colorful and almost cartoonish. They're really leaning into the iOS vibes. And I'm not sure it's my personal aesthetic, but that's fine. I spend most of my time in the terminal anyway. But I think that's enough of me ranting about upgrading my operating system, which apparently I had a lot to say about. But what else is up in your world, Steph?
STEPH: I do appreciate the ranting, though. You're not often grumpy, and when you are, it's quite humorous. [laughs] I really enjoy the grumpiness. And it's often a painful process. So I appreciate all of that story.
Something that I really need to share with you and get off my chest is a couple; I don't know, x number of episodes back, you and I were talking about computer chairs. And I bragged about the fact that I have a computer chair that has no armrest, and I love it. I love my chairs like this, and it's wonderful. And I just think it's the best way to live.
And it turns out that that's bad because I happened to go see a massage therapist who's also very well-skilled in physical therapy and other areas. And they were talking to me about my desk setup. And I mentioned the fact that I get these typical headaches, and I have my chair, but there's no armrest. And they're like, "Oh, that would do it." I was like, "Why? I like my setup. What's wrong with it?" And they're like, "Well, if you don't have armrests, then your back is having to compensate and to hold up your arms and your shoulders all day. So while you're typing, you're using more muscles to then hold that. And then they eventually tighten and contract, and then that can cause headaches."
So in case, I have led anyone astray into having no armrest, they are apparently very important to not having headaches or having your back overworked to the point that you have headaches, which I'm a bit sad about. But on that front, I have ordered a new chair, and we'll see how it goes. I will have to assimilate into the world of chairs with armrests.
CHRIS: We welcome you with open armrests. [laughs] Sorry, I saw it, and then I went with it. Anyway, I'm realizing now I actually don't use the armrests on my chair per se. I actually end up putting my arms on the desk, which is probably not ideal either. I have a little wrist pad so that my wrists are brought up and so that I don't have the upward breaking of the wrist thing going on. I think that matters a lot. And then my arms are supported by the desk, but it is just right on the desk, and I wonder if that's worse. But I've never...I don't know, getting the armrests just right and then also having the wrist pad.
But the fact that I can't adjust my desk is probably the main problem. If I could bring my desk down a little bit, and if it were a thinner top, then I'd have more flexibility. The chair that I have is wonderful and has flexibility. The arms can go up and forward and to the side, and lumbar and this and that. And so I'm able to make the chair work with the desk. But I do wish I had more of an adjustable...ideally, like a sit-stand desk. But I haven't made that jump just yet.
STEPH: When you're ready to make that jump, I'm going to share with you where I bought my desk because I'm really happy with it. And it's also not nearly as expensive as most of the other desks that will go up and down.
CHRIS: Presumably, we can include it in the show notes as well so that we share it with everyone.
STEPH: Definitely, yeah.
CHRIS: Otherwise, that's just kind of mean. [laughs] You and I have a weird back channel that we talk about on the show, but they're not actually put in the show notes.
STEPH: We're not mean. We wouldn't do that. I love my desk. And it was from someone else. They're the ones that shared it with me, so I'm happy to pass it along because it has served me well. And yeah, I'm also not sure about how this is going to work with the chair and the armrest because I'm just worried they're going to be too wide, and they're not going to actually offer support. I have doubts. I have lots of doubts, but I'm willing to investigate. And we'll see how this goes because I would like for the headaches to stop.
CHRIS: Good luck on that front. That definitely seems like an indication of worth putting in some effort there.
STEPH: Agreed. I also have some other exciting news. Stephen Hanson at thoughtbot has organized a number of other thoughtboters to get together who are interested in really diving into leveling up, learning React, and specifically focusing on purchasing Kent C. Dodds' Epic React course. And it's for anyone that is comfortable writing code, whether you know React really well or if you're new to it. Everyone's welcome to join.
So we just kicked that off today where we're going to go through the course together and then meet every Friday. I think the cadence is probably three hours, three and a half hours every Friday, that then we're going to commit to working through the course together.
And I have to admit, I always nerd out a bit over how does someone build a course? Like, I'm really excited about the content as well, but I just want to know how did someone go about producing this content and then sharing it with everyone? And then what's their outline? How do they help people that are getting stuck because they can't be there in the same room? How do they record their videos?
So I'm really excited to see all the ways that Kent has crafted this workshop. And so far, there's so much content, but I'll have more to report as we really start to dive in. But I'm excited to revisit React because I haven't been in React land for at least a year and a half; it’s been a while. And so it's one of those areas that I know some bits, but a lot has also changed. And I would like to just revisit that world. So I'm really excited to dive into the course.
And so far, I really like the structure that Kent has taken with the curriculum, where we're focusing first on what exactly is happening and all the effort that goes into it if you wanted to actually write HTML and then layer JavaScript on top of that. But then here's how React makes that easier for you. Here is how JSX makes it even easier on top of the React API. I really liked that. Here's some pain; feel a little bit of pain, let's get a little bit better. And then let's get even better on top of that. And that has been a really nice reminder and progression into the course.
CHRIS: I'm definitely a fan of the way you're describing it: feel some pain, and then let's get better. But then, like, what's the hook? With any educational content, this is the sort of structure where there can be a full education. But this is the thing that I feel very deeply about with conference talks: my goal isn't to teach you everything if I'm giving a conference talk; it is just to get your attention, just to say, "Here's the thing, here's why you might care." And starting from the problem, starting from the pain, is always such a good way to do that. Because you know how this stuff is hard? What if I had an option that was easier? And then building from that totally makes sense.
I want to say that course, Kent's course, was built in conjunction with the egghead team, egghead.io. And it's a distinctly branded course. But it was built on top of the framework and the platform that's there and all of that, and then some of the editing support. I don't know this for certain, but I think there was some teamwork there.
And I love just pushing forward the envelope of how we do educational content in the world of development because it is such an interesting world that has, frankly, such a need for ongoing development. The world is changing out from underneath us every two days. And therefore, having great educational content is so important. So yeah, definitely interested to hear how your experience goes both with the course and then also diving deeper into React.
Well, switching gears just a little bit, I had a topic that I wanted to dig into with you today. And so to give some context, the topic, the thing that we're going to be talking about today is what have we changed our mind about? So you and I have both done a little bit of thinking and tried to come up with some answers to this. The background, this was actually inspired by a tweet that I saw between Shawn Wang, aka "Swyx" on the internet, and Charity Majors, a recent guest here on this podcast.
And Charity is someone who is known for having strong opinions. But Shawn asked the question of what are some opinions that you've changed your mind about? And Charity actually had a wonderful list, which we'll link to her tweet thread where she shared some of her both technical and then also more personal ones, but really talking about the sort of evolution of thinking and the way someone's thoughts can change over time.
And I thought it was just such an interesting thing because, for most points in time, we experience someone's sort of snapshot of where are you at now? What do you believe to be true? But I think there's such an interesting story and sort of the arc there of what did you believe to be true that you don't anymore? What have you softened your beliefs on? What have you strengthened your beliefs on? So yeah, with that as the context, what have you changed your mind about, Steph?
STEPH: Yeah, this one really got me thinking, and I feel a little stumped on it. I have a few that I'm excited to share. But I'm very excited to hear your list to see if that also helps me reflect more on some of the things that I have changed my mind about. And I have found that there's only a couple maybe that I feel like I've really solidly changed my mind about. The others, I've either dialed up the strictness, or I've dialed it down. So the ones where I've really changed my mind about are feature flags and comments. Those are two of them. Well, there's a third one, but I'll get to that in a moment.
So starting with the first one, feature flags I was more in the camp where I very much appreciate feature flags, but I use them sparingly because then there is a tedious nature of introducing them and then having to clean them up, and then having to maintain two states of code. But now I've really seen the value of feature flags and how we can make sure that we have calm releases and ensuring that main is always in a deployable state. So feature flags is one for me. I'm very invested in having more of a robust feature flag system because I see the benefit to that.
The other one was comments. I used to be very rigid about comments are bad. We should never have comments in our code. They are just waiting to go out of date, and they're not going to be helpful. But I have since dialed down that strictness where I have certainly seen moments where comments do feel very helpful, and I can see how people use them. I still want to avoid them for the most part, but I am less strict now in regards to people who really find value in comments. I'm more open to that discussion. I want to understand what it is they find helpful about that comment, and if it is something that we can't capture with code or a test, where does that live?
CHRIS: Those are both interesting. Feature flags, for me, I think I actually was more strongly opposed in the beginning. Earlier on in my career, I saw them as added complexity, as noise. I often would encounter them left behind in a codebase. And so, I had this negative association with them. And I didn't see the value; I hadn't yet felt that pain. And over time, I've definitely shifted to where you're at where I'm like, I love feature flags. This is a critical tool in our toolset of how we actually…like you said, calm deploys, being able to always deploy main, making sure that we don't have long-running feature branches. There are so many benefits that come out of it that I'm now very strongly in favor of them. But it's interesting; I think I would say that I started in a more strongly opposed place. So that wasn't on my list, but it's an interesting one that you've brought up and probably one that I've moved more on.
Code comments, I think, actually started in my career being like, obviously, you comment your code. It's the thing that I read about and stuff. And slowly, over time, I think I've just dialed in on I don't think we should be doing that. There are, of course, going to be exceptions.
And actually, one of the things that I discovered about myself as I was trying to go through this exercise is there are very few things that I believe are black and white. If anything, that maybe is one of the things that I've leaned into over time. It's like, nothing is binary. Nothing is black and white. Everything is on a continuum or shades of gray. There are things that I believe a little more seriously. But there's almost nothing that I can be like, nope, absolutely I will not equivocate on this beyond how we interact with other humans and being reasonable, kind people. And in terms of software practices, not really. Comments, though, are one that I still am pretty strongly not going to lean into. So it's interesting that you're like, eh, I've kind of opened up to that one.
STEPH: There's a particular talk, The Art of Code Comments by Sarah Drasner, and that's the one that really shifted some of my opinions around comments, and then how we talk about them, and what benefits they can play. But I will admit, if I see a PR that has code comments, I still immediately have a negative reaction to that. And I want to have a conversation around why that comment was added and if we can remove it, and how we can remove it. But even with that negative perspective, I still find that I'm more open to that discussion versus before, where I would have been like, no, that's just unequivocally bad.
CHRIS: I do like that you always bring up that talk whenever we talk about comments. This is a great talk. And in the background, I just looked up Sarah's Twitter profile because every time you bring it up, then I mention that she has a still from the movie Labyrinth in her Twitter background, but she actually changed it. And so now that's not true anymore. It's now something from The Force Awakens. Well, it's actually a joke, but I'm still going to suggest that you watch the movie Labyrinth at some point. That's the thing that I feel actually kind of weird about. It's a weird movie.
STEPH: I'm going to take your suggestion, but not watch it. But thank you. [laughs] To share my truth today.
CHRIS: That's fair, that's fair.
STEPH: What are some of the things on your list?
CHRIS: Okay, I have a couple, some more on the technical. Let's lean into one of the technical ones. Early on, I started with dynamic languages. I think I started with Python primarily and a little bit of JavaScript. I eventually found my way to Ruby and felt very at home there. And then, I started to explore functional languages. And I started to lean into them really hard and felt that immutability and functional programming and true pure functional programming was the thing. It was the answer, and I just needed to figure out how to do it. And so I would say that is the belief that I have since changed my mind on and decided, you know what? Actually, it feels like a bit of a force fit. I have tried. And maybe for others, it is actually a really fantastic way to build software. But having worked with a number of other people in more functional contexts, I find that it is a bit of a force fit. It's a bit rough.
And in particular, of late, I've been working with Svelte as opposed to React, and React does sort of lean into the functional paradigm, especially with Hooks and all those sorts of things. And it's a little bit rough because it turns out UIs are these deeply mutable things. We're changing values or typing things in. There are actions that are changing the state over time, and having a system that just more directly models that feels very natural.
I still love functional programming for the core of an application. So again, I reference this talk often, but Gary Bernhardt's Functional Core, Imperative Shell. Gary has really formed some of my thinking on this. And now I've started to find the examples in the work that I'm doing of like, oh, okay, I see that pattern actually applied here. But much as I would love to use them, the functional languages I find just aren't quite landing for me. And additionally, the mutability, particularly in the front end right at the edge of the UI, is not quite as good of a fit.
STEPH: So I think that resonates with me although I do still get very excited about following more patterns that represent more immutable state just because I felt so much pain and found bugs from the fact that we have mutated state in surprising ways. I'm honestly not quite sure how I feel about it. I'm going to have to think on that one. That's a very interesting one that you've changed your mind on.
CHRIS: Yeah, similarly, my feelings are lukewarm, whereas before, they were stronger. I was like, oh, okay, I think I found something here. And then, in attempting to use it across a wide variety of applications, it just didn't quite feel right. I felt like I was swimming upstream sort of thing.
Actually, there is an interesting counterpoint. One thing that I have leaned into and definitely changed my mind on and embraced is static typing or, broadly, static analysis. But I think static typing being the most pointed version of that. Early on, like I said, I got my start in very dynamic languages in Ruby, and Python, and JavaScript. And so that dynamic duck typing runtime can be anything. We just make our systems respond to the messages, and all of that sounded great.
But it turns out I really love having a compiler that can tell me some truths about my program before it ever reaches runtime. And the idea that a typo can make it to production feels absurd at this point. And actually, as I'm working in Ruby, I'm like, man, I really got to go look at that whole Ruby typing thing we got going on. I don't know what the state of it is. I've looked at it in the past, and I need to revisit it soon. But like TypeScript, I've definitely embraced that very strongly. And I would not work without TypeScript in a JavaScript project at this point.
I've loved the work that I've done in Elm, although that also sort of blends into the functional stuff where it's like, it was a little bit noisy, though, I'll say that. But the type system and the fact that the compiler can give you so much rich information about your program, I would not trade that at this point. And I don't see myself going back on that front, which is an interesting place for me to be on of actually, I'm not that into the functional programming as the core way that I build my applications.
But I do like static typing. And I feel like functional programming and static typing actually go together incredibly well. And functional programming and, more imperative, whatever it is that I'm doing with my day-to-day life these days is a more interesting fit. But it is interesting to me to observe that sort of combination of opinions where I really like static typing, and having a compiler, and something that can tell me about my program before I get to runtime. But also saying that I don't quite want the functional programming thing, or at least not as the entire way that I modeled my application because I found it a bit difficult to work with. Because I think static typing or compilers and functional programming go really well together.
But I think generally, what I'm finding is a more middle ground dynamic optimization of a bunch of different things. And the answer is like, well, it depends which I guess if you've listened to the show before, you'll have heard those words said, so I guess it makes sense.
STEPH: Yeah. All of that makes sense to me. And I can see why you might have a favor for types or why that feels more valuable initially because that is giving us so much feedback right off the bat versus following a more functional paradigm is something that could feel like more of a force fit and doesn't provide that same immediate feedback. But it has a longer-term or a longer cycle of that reward system. So I can see why you might favor one over the other or why I myself would favor one over the other.
CHRIS: How do you feel about types?
STEPH: I'm a big fan, although I say that, but I work in Ruby. [laughs] I don't have them. But when I have worked with types, I very much enjoyed it because it makes me think more about the design of my code in a way that I don't as much with Ruby. And working with types has heavy influence than when I am working in Ruby and thinking about the design of my code. So I think working with types is a wonderful thing that, frankly, all of us should do as developers at some point because it is so influential. So I'm for types, but I'm not using types in my day-to-day.
Another thing that I have changed my mind about is how we structure the work that each person is doing. So I used to be more in the camp of everybody can work on their own very complicated piece of codebase, their own complicated feature. We can have a bunch of complicated things in the sprint, and everything will just be great; it’ll be fine. And we'll get a bunch of work done, and we'll ship it. And then we're an even more productive team.
And I very much disagree with that now where I have found where everybody is working in their own silo on a complicated feature has slowed down the progress of then being able to ship that feature. Because we often want to collaborate with someone, we need to collaborate with someone. Then the PR review process is tough if I really have no idea what you're working on, and I don't have a context that then when I look at your code, not only am I evaluating at the code level, but then I'm also trying to understand the feature and gain all of that context. And that's a heavy cost for me to have to pay to then pick all of that up and then for you to have to reintroduce me to what's happening. Or I might make the bigger mistake, and I may look at your code and just evaluate it from the code perspective but not really understand the feature, the value that's being delivered. And that doesn't feel useful.
And I have a recent example where that happened where someone was working on a very complicated feature that I didn't have any insight into. So then, when I was looking at the PR, it was easier for me to just look at the code and get feedback on that. But then it was probably a day or two later. It wasn't until then that I finally started asking, what are we building? Like, what purpose is this serving? And that opened up a much larger discussion where we realized what was being built didn't actually really deliver what we needed to deliver. So I no longer agree with the idea that everybody should be working on their own complicated features independently, and there should be some collaboration. And, you know, it's the buddy system; we all need a buddy.
CHRIS: Well, I like that one. I feel like I've shared similar ideas where it made sense. It was just the efficient thing to do, to split the work up and have everybody very independent. I also feel like earlier on in my career; I was more scared of Git conflicts and things like that or people interacting with the same parts of the code. And so in my mind, it made sense to really strongly separate like, oh, you shouldn't even be touching the controller for this. I'll handle the views, and you handle the controller; it'll be separate. And I care less about that now. And I think what you're saying of like, it's actually better if we have some shared context, and we understand what we're working on, and it's more of a collaborative process. Yeah, I like that one. I think I followed a similar arc, and I'm at a similar place now as well.
Interestingly, to go into another one of mine that I think you'll probably be most surprised by on my list is I think I used to believe in 10x engineers. I used to believe in the idea of that one developer just off in the corner fueled entirely by Mountain Dew that would just produce the perfect code. They would just solve it. Over the weekend, they would write the entire billing system, and it would be great. And I think it was predicated on the idea that the coding is the hard part, which I no longer believe. I think coding at its core is communication. It's taking this thing that we want to be true in the world and then communicating it to a computer but also ideally communicating it to our teammates, and to future versions of ourselves, such that we can revisit that code, we can maintain it over time, other people can add to or augment it.
And so the idea of this loner that can just do incredible volumes of work and have that be a good outcome that just doesn't make sense to me anymore. I've worked with incredibly talented developers, to be clear, folks that I was sort of in awe of. I've worked with people who have, I think, just truly photographic memories. They seem to remember every single bug that they've ever had and exactly where they can look it up. Or from the top of their head, they can just intuitively know, oh, this bug means this. Go look at this line of code. I'm like, how did you do that? How did you do that magic trick? And they're incredibly capable developers. But at the end of the day, the folks that I see being most impactful on a team are the folks that are able to communicate and collaborate most effectively and make the whole team more effective.
STEPH: Maybe it's the Mountain Dew; maybe that's actually the secret sauce here. That's what I'm missing from my life to take me into that status.
CHRIS: I'm now imagining Mountain Dew but in a more viscous form, like a barbecue sauce, but it's Mountain Dew flavored. That's the secret sauce because it's a very…anyway, moving on. [laughs]
STEPH: It's a terrible product. We should make it and sell it.
[laughter]
CHRIS: Career pivot, we now sell Mountain Dew sauce.
STEPH: [laughs]
CHRIS: But yeah, I do not believe in 10x engineers anymore. If anything, I believe that that is a huge warning sign if you have anyone that's behaving in something close to that space.
STEPH: Yeah, I'm super interested in that you've shared because I don't think...We've talked about 10xers, but we haven't talked about the fact that you used to think that they were more of a thing and that they existed. And now it's all I'm sorry, but it's all crap. [chuckles] That's super interesting to me. Do you remember what changed your mind? Do you remember that pivotal moment of where you were like, oh, maybe this is all bullshit?
CHRIS: I think it was just an amalgamation of experience over time. I've encountered people who fit the archetype. But if anything, I would say they're deeply problematic in teams. They're that individual who refuses to collaborate, who just goes off and heads down, writes a bunch of code, but then it doesn't integrate with the other pieces, or no one else knows how to use it, or they won't let anyone contribute to it. And yeah, I've seen that just be very, very problematic.
So the folks that most fit, I think the imagined version of this, actually end up, in my experience, leading things astray. And the folks that are actually most productive and really cause teams to improve in a drastic way behave very differently. They're much more collaborative; they’re much more engaged with the team. It's less about their individual contributions and it's more about building a system together, collaborating, communicating, engaging external stakeholders, et cetera, et cetera. It's all that stuff that matters. And so, it's very much in contrast to what the 10x engineer ethos is about. But there's no one day where suddenly this idea that I had in my head crumbled when I saw that behind the pile of Mountain Dew cans, there was nothing there. [laughs]
STEPH: It's all a mirage. [laughs] I do like what you just said around that there are very impressive people out there. And those impressive people often focus less on their individual contributions and more at a higher level around communication. And then they are the powerhouses that then help facilitate everybody else being their best and having high levels of individual contribution. Those are the ones that...I'm still not going to endorse a 10xer, but they are the ones who, to me, embody the idea of someone that is incredibly efficient and really good at their job.
CHRIS: There's an adage that comes to mind here that "If you want to go fast, go alone. If you want to go far, go together." And that does ring true to me. I think an individual can have their individual productivity be higher if they're working entirely on their own, if they understand every line of code because they wrote every single line of code if they know where every feature of the platform is integrated because they wrote the whole thing. But they're going to be fundamentally limited. And in order to do bigger, more complex things, fundamentally, we have to work as a team. And then the way you have to interact just fundamentally changes.
So I think it started from that, like, one person on their own I think can be individually more effective. But the minute you start to have a team, that one person acting on their own is actually dragging the team down because other people can't then work in that space, and that's a problem.
STEPH: I really like that adage that you just shared where, "If you want to go fast, go alone. If you want to go far, go together." And that touches on something else that I have really changed my mind about, and that's representation. And this is more specific to me. So when I joined engineering and became a web developer, and I joined a team, and I was the only female engineer on that team, my initial feelings were I am the only female engineer, and that is fine. We're all just a group of engineers. We're here to solve problems together. It really doesn't matter if there's anyone here on this team that's like me. It's fine if there's no one that I can see myself in that's in leadership because we're all just people, is what I was coming down to. And I've completely changed my mind and realized that that's not true.
And I've experienced this where I've worked on other engineering teams with female engineers, and it's fucking awesome, and it does make a difference. And then when I can see someone that I can see myself in, in a leadership position, that is also inspiring. So that is something that I went in where I think it was more of I was trying to shield myself from the idea that I am different from everybody else in this room, and that could be a problem. And instead, I just tried to neutralize it by saying it's not.
But I think representation is incredibly important. People are not just people. We all have very important social and racial, and cultural identities. And it's very important that we get to feel that we can express all of those identities and see people that represent those identities in spaces where we would like to go. That's a big one that I've changed my mind on.
CHRIS: Yeah, I certainly agree that representation certainly matters, and being able to bring your full authentic self to work and seeing others around you that reflect that. And frankly, having teams that are made up of people that represent the users of the software that we're building feels so critically important. And it's very interesting to hear about the arc that you've had on that where initially, you tried to downplay it, but then you found a little more truth in it. And so yeah, thank you for sharing.
STEPH: You're welcome. It feels good to say that, too, because that's something that I've admitted and realized on my own, that that is something that has changed and shifted. But it's nice to be able to share that here with you as we're going through the things that we've changed our mind about. What else is on your list?
CHRIS: Well, to round us off with one more very technical version because, of course, that's where I'm going to take us after a much deeper and more nuanced topic that you led us on, single-page applications. Broadly, I'm opposed to the name; that’s a side conversation. But, man, URLs matter on the internet. So don't call them single-page applications, but client-side applications or whatever. Broadly, the idea is a bundle of JavaScript: you send down an empty HTML document, then you reference a bundle of JavaScript, which boots up, makes a bunch of API requests to the backend, and then starts to fill in the page.
I was convinced for a while that this is a reasonable and perhaps even necessary way to build software. We need APIs for our mobile apps anyway. So if we're doing that, then let's have that be the consistent way that we are accessing information. This is going to be fine; it's not a problem. And then eventually, we found some problems. So then we got GraphQL, and we tried to solve it that way. But overall…and I have spent a lot of time trying to make this thing work, trying to find a version of this that I'm happy with that I find the end outcome of the software to be as pleasant to work with from an end-user perspective as a server-driven application, and I can't find it.
And so, to be clear, I'm still doing client-rendered applications these days. But Inertia.js is the framework that I've leaned into that helps me bridge that gap. And the idea that the server owns routing, that the server owns statefulness, things like that, not having to think about client-side routing, not having to think about client-side state management, being able to use traditional auth mechanisms built into cookies, all of these familiar things that we've had. Leveraging the fact that the server is the more privileged in terms of the information it has access to, the more secure, the more powerful environment, all of these things feel right to me. And the nature of the application that I can build just feels more robust, more consistent, easier to evolve.
There were a lot of promises that I heard when we started building applications in these ways. And I just haven't seen an example or have not worked on an example, at least of an application that is built as a client-side bundle that boots up and does some stuff and had a good experience with that. So Inertia, as an aside, is my answer to this. And I continue to be extremely happy with that as a solution, as really a middle-ground solution. Because going all the way back to true HTML server-side rendering is limiting in other ways that I didn't like. But I find that Inertia really strikes an ideal balance in the middle there.
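As a rough sketch of what "the server owns routing and state" can look like in practice, here is a minimal, hypothetical Rails controller using the inertia_rails adapter; the controller name, component path, and props are invented for illustration and are not from any project discussed here:

class DashboardController < ApplicationController
  before_action :authenticate_user! # ordinary cookie-based auth, no token plumbing

  def show
    # The server decides the route, enforces auth, and loads the data;
    # the client-side component only receives props and renders them.
    render inertia: "Dashboard/Show", props: {
      projects: current_user.projects.as_json(only: [:id, :name])
    }
  end
end

There is no client-side router or global store involved here; navigation is ordinary Rails routing, with Inertia swapping in the page component on the client.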
STEPH: I feel like I completely agree with everything you're saying. But I also feel like I have a developer secret to share where I really haven't worked on single-page applications, and I am okay with that. [laughs]
CHRIS: It's fine, skip it. Just go straight to Inertia. It's better.
STEPH: Cool, cool, cool. I am working on leveling up React, and then the plan is to go to Svelte and Inertia. So I'll just completely...I'll skip that. I'll skip that part of my career.
CHRIS: I actually want to back up just a little bit as I'm saying this because I really try to avoid being in a more negative space. And I think this space, this architecture for building applications, is complex, and there are things that will warrant it. So things like Google Maps, it makes sense to have a lot of dynamic JavaScript and to be doing complex things on the client-side. Trello is another example of an application that, as a server-rendered thing, doesn't really make sense. And frankly, using a tool like Inertia wouldn't quite work there. That said, that is, in my mind, truly a single page within the broader application. So the Trello board page is a very, very complex stateful application, and I think modeling it as such makes sense. Google Maps, similar. But there's still the profile page, and the login page, and all of these other things.
I think routing is probably where it breaks down for me. I think client-side routing is the thing that I feel the most pain on. Because at the end of the day, the server still needs to know the answer. And if we do client-side routing, we end up with this duplication of logic across the client and the server-side. We end up with disagreements from time to time. We end up with the weird flashes of half-rendered layout, and then we go to the login page because we get an API response that is different. And so, I think that is probably the kernel of the thing that I struggle with. And, of course, it is possible to build great things using any of these technologies.
But I think my summary is I've really tried on that front, and I've just not been able to make the fidelity of application that I want using this approach. Primarily, I'd say it's client-side routing that I struggle with the most.
STEPH: Yeah, it sounds like you're saying there are very valid use cases for using a single-page app or following that structure. But we haven't really gotten there in terms of our web development expertise, where we've made that easier to maintain and easier to implement. And there's still enough pain points around it that even though it seems like a very valid idea and approach, it still feels painful enough that you actively avoid it until it feels like something that you have to then invest in at that point to then really deliver the user experience that you want to provide.
CHRIS: Yeah, I think that's an accurate summary. And I think adding on to that, I’m noticing it becoming more and more of the standard approach; this is the way we build applications, and I don't agree with that. That is probably the thing that is the kernel of what I don't believe in. I think actually server rendering is a great way to start, and then you can slowly augment or move more things into complex client-side behavior. But starting with this as the mode that we're building our applications just feels like a less stable foundation than I would want. So it's perhaps an architecture that you want to evolve to at some point as the complexity necessitates it, but I definitely wouldn't be starting there. Similar to service-oriented architecture, not going to start there. Client-side routing, I'm not going to start there.
STEPH: Ooph. I feel like I've been holding my breath this episode. I feel like this was a very interesting topic that has been challenging to reflect on what we believe and what we've changed our mind about.
CHRIS: I think it's perhaps more nuanced than a lot of our episodes where often we're saying this is what we did, and this is how we felt in the moment. And that can be very experiential and true. But this, yeah, we had to draw the line in the sand and say what do we believe? I similarly definitely feel more tension in this episode than other ones. But hopefully, it was useful. Hopefully, folks found some value in these things and in hearing our story. And as for the idea that we have singular, fully formed opinions, hopefully, this episode has broken that idea in anyone's head. And we're all on a journey.
STEPH: I really like how this has prompted me to reflect on the things that I used to hold dear and really cherish or follow strictly to then reflect on what are things that I used to believe versus what I believe now? Because that transition often happens so seamlessly for me that I don't really stop to think about it to be like, oh, something just happened that is really changing how I approach things, how I build, how I work with teams. And I really like this reflection point to be like, oh, what did I used to believe, and what's different today? I'd like to keep this practice going and just try to track the things...I'll have to make a list of all the things I believe. That seems like an easy list. [laughs]
CHRIS: Just the easiest list to write.
STEPH: The easiest list to write. And then I'll just check in with it every so often, scratch stuff out, or update it with the things that have changed my mind about. This is the good idea, terrible idea where you go, "Stephanie, that's a terrible idea." [laughs]
CHRIS: I don't know, write it down on a list, and then look at it in six months and see if it sounds like a good idea, and then we'll be able to close the loop on the whole thing. But with that, should we wrap up?
STEPH: Let's wrap up. I've got a list to write.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
This week Chris talks about Bifunctor optics and introduces an app he's been liking recently called CleanShot X, which is a replacement for the built-in screenshot utilities on OSX.
Steph talks about her experience using New Relic Browser Stats to troubleshoot a slow page, and about burnout. Who's feeling it? (Raise your hand.) How do we identify it? What do we do about it?
Transcript:
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. And we're off the rails already, everybody. It's going to be a good one. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, how's your week going?
STEPH: Hey, Chris, it's going really well. We talked recently that I have a new laptop. So I have been migrating the things that I'm accustomed to over to my new laptop, but I also love that clean, fresh start. So as part of that fresh start, I was like, what if I use Safari? What if I just switch? I'm a Chrome user, for the record. I'm pretty sure you know that, but just to share that. I was like, well, what if I just switched and I try out Safari for a while? So that was the thing.
CHRIS: So I heard the words try and the phrase that was the thing, but I'm going to probe a little deeper. How'd that go? Was it good, great, not so great?
STEPH: Honestly, it was fine. I did enjoy being in a new environment to see how Safari handles bookmarks and then also the inspector. So it was novel to be in a different browser where I really don't spend much time in a different browser other than when I need to test this specific UI bug or things like that.
But the reason that I ended up migrating back to Chrome was frankly for Chrome profiles because I really like that I can have this clear separation now between my work life and my personal life, and then it also keeps me signed in. So my personal email versus my thoughtbot one versus before the Chrome profiles, which I'm not sure how recent of an addition that is where Chrome introduced that feature. But before, I just always had to be signed into both, and it was just all together in one spot. But now I really like that I can separate. And it's more intentional where I'm like, oh, I'm going into work mode, so I just want that profile versus I do need to hop over to my personal side for a while. So that was the thing that brought me back.
CHRIS: Interesting. I don't take advantage of that at all. I know of the feature, but it's never really called to me. And if anything, I do the opposite. So specifically, this doesn't work in the browser, but on my phone, I use the iOS Gmail client, and I use the unified inbox. So I just have everything come together. And I subscribe to the idea of, I don't know, it's all work and stuff. And hopefully, people aren't sending me a lot on the weekends, and I will defer and snooze and all of that. But that holistic view pulls me in. And so it's interesting that you're just on the other side of that. It totally makes sense. I actually think I'm wrong here. I think I'm doing the wrong, bad thing. But it's interesting just the way we're on the two sides of that.
STEPH: I can see the merits for your approach where it all goes to one place. So you have one place to go and triage, and I think that makes total sense. I haven't triaged my personal life well enough that I want it to come into my work life. And that's the one that needs the more immediate response typically. So I want to prioritize all of my work emails and focus on that and then have my personal ones more like, okay, I've got some time, and I want to check on this. But I don't want to blend those together. Because frankly, I need to do some more triage on the personal side if I'm just going to bring it all into one space.
CHRIS: Interesting. Yeah, I would almost view it from the other point of view of I want to protect my personal space, and this is obviously not what I do based on what I just said but protect my personal time so that when it's evenings or weekends or whatever, that I'm not seeing work emails in there. I do my best to snooze them and get them out of the way. But if they are coming in, maybe there's something I need to respond to or, I don't know, maybe it's FOMO in a certain way, FOMO but professional FOMO. I don't really know. It's interesting that that's the feature that brought you back. But overall, how was your experience using Safari?
I have heard loosely that now most of the browsers are evergreen; even Edge has really caught up and is now implementing features in a similar way. And so Chrome, Firefox, and Edge are very similar. And my understanding is Safari is the one that actually lags behind or even holds back web standards and implementations and things like that. So, did you find any rough edges of that sort, or was it otherwise just fine, and it was mostly the profile stuff that made you switch?
STEPH: It was honestly just fine. I also may have not used it long enough to run into any of those rough edges. But overall, it was just fine. It worked. I just got to that point where I've run into this situation before where I'm signed into my personal email, and then I'm signed into my work email. And then I'm going to Google Hangouts, and Google Hangouts gets confused, and it's like, which person are you? And then I have that moment of where I have to sign out of one, or sometimes it just gets complicated. And what I found with profiles is that that's just never an issue. I don't have to worry about it anymore.
So yeah, overall, Safari was fine. I wouldn't mind going back to using it. I just really like the profile feature. This was one of those moments where it helped me notice how much I really liked that feature because I had just opted into it a while back. But this was that moment where I was like, oh yeah, I really miss that. So I'm going to go back to it.
CHRIS: The thing you said a minute ago about which person am I? [chuckles] There's a deep philosophical underpinning there, but I've definitely struggled with that and trying to trick the browser into it. So Trello is the one that I'm struggling with that right now. I am one human in the world, I assure you. But GitHub gets this; GitHub plays the game correctly where I'm one human, but I have multiple internet identities. I am the work person. I am the open-source person. And I'm able to route notifications and things to the different inboxes based on the organization that they're part of, et cetera. And I really liked that, that GitHub seems to understand that I am a human that is multifaceted, whereas Trello does not. And so I use Trello in a professional context a lot, but it's my personal...like, I would have to create a distinct account on Trello. And I'm like, Trello, that's not true; I'm still one person. Just understand this Trello organization is a separate facet of my existence. Got on my soapbox on that part.
The other thing I want to say is I do feel bad about the fact that I'm just on Chrome. Because increasingly, Chrome has got so much of the market share, and it's becoming the new deeply dominant thing. And so I want to be the agent of change or like, no, we should use different browsers, and we should support them and make sure that we're testing against them and all of that. And then I'm just on Chrome all the time, and I feel bad about it. But it's one of those like; I have so much muscle memory and built-up knowledge around how to use Chrome. And I've just used it for so long now that the switching cost would be pretty high, I assume. I actually haven't even really tried, but I feel bad about it. I'm now saying two things which are I feel bad about it, and I've never even really tried. So I don't feel great in this moment, but these are my truths.
STEPH: [laughs] Well, I can plus-one your truths. Those resonate with me. Well, there's always stuff that I am trying that's new all the time. So I feel like I need some constant in my life until I'm ready for other things to be the constant in my life. And then I can muck around with which browser I'm using and change other things. And you have to be in that moment. You have to be ready for it.
So going back to a thing that you said a minute ago about separating your work life and your personal life, I very much like that framing, and that's a nice segue, frankly. And it's something that I've been thinking about where you and I often start with technical topics. But I have a very people-centric topic that I'd love to chat with you about today, and it is emphasis on burnout. And who's feeling it? How do we identify it? What do we do about it?
And that's been very much on my mind because I have noticed a lot of people around me, including myself; I just feel like we are more inclined to experience burnout right now or are going through it actively. And it feels even more important to have those conversations with each other and with ourselves to talk about what does it look like when we're burned out? How do we recognize when we're there? Because often, when we are burned out, it's not something that happens gradually, or at least it's not something that we notice happening gradually. It's like, okay, I'm fine. And then suddenly, okay, I'm burned out. And I'm at a place where then I can't really focus. I feel overwhelmed. I'm drained.
For anyone that is less familiar with burnout, one, hooray, and then two, burnout is a state of emotional, physical, and mental exhaustion that can be caused by excessive and prolonged stress. So then that's what leads to us feeling very overwhelmed, emotionally drained, unable to meet constant demands, or those are the things that are causing our burnout.
So I have been doing what I typically do when I'm thinking about a particular topic, and then I'm looking to gather knowledge and information is I try to go pretty wide where I start looking for podcasts, books, just people that are having similar conversations and trying to synthesize a lot of the information that they're sharing. And I have found some really good stuff. The question is now, what does one do once one has that content, and then how do you bring it back and help apply that content to yourself and then people that are around you? So I have some thoughts there, but before I go further, I'm curious, what's your experience with burnout?
CHRIS: I think for me, I've been somewhat lucky in that I don't think I've ever really got into an acute period of burnout, but I've definitely had periods where I just felt weighed down. And there are ebbs and flows of life, and of work, all of those things. There is something that I've been thinking about recently, which is the inherent nature of the work that we do where consistently we're working on something where we don't quite know how to do it, and we're struggling, and we're struggling. And finally, we figure out how to do the thing, how to trick the computer into doing what we want. And then the minute we do that, in trying to encode, ideally, we're automating that away. And we take that solved problem, and we just ship it off. And then, we pick up the next unsolved problem from the list. This isn't entirely true, but it sometimes feels like there's an inherent unspecificity to it: I'm just working on things that are underdefined, underspecified, and I'm trying to solve little puzzles constantly. And, I don't know, I was feeling recently that the nature of the work was burning me out.
And I think earlier on in my career, I definitely experienced this in a certain way. And then, in the middle of my career, there was this perfect inflection point of my skills and the level of tasks that I was going for. But then, the further I got into my career, the more I tend to take the weird underspecified stuff and anything that's relatively clear I'm giving to other folks on the team. I'm like, "Oh, here's this well-defined piece of work here. Can you go implement this admin page?" Whereas I am doing the investigate integrating with third-party platform XYZ that uses a SOAP API that isn't documented. I'm like, okay, cool. Let me roll up my sleeves and figure out what that means. And I noticed in my work that that was starting to weigh on me, and I was able to shake that off and shift around some of the tasks that I was doing.
But that was a particular form where the work itself was weighing on me. And I took a step back, and I was like, why do we do the things that we do as developers? Because there's something just fundamental like, you have to enjoy that nature of challenge and constantly escalating challenge to a certain degree, I think, to really like this work, but it can be a lot sometimes. So I feel like maybe that's a slight digression from the topic, but it was a thing that I was feeling in this space. And that's a little bit of my story.
STEPH: Well, the beautiful thing about that is it highlights everybody experiences burnout differently. So that could really be how someone is experiencing burnout where they're taking on all these very complicated, different tasks, and they're feeling just worn down by that and that they're not able to meet demand. And they get to that point where maybe they lose their interest in tech and coding because they have pressed too hard in one direction. And so then they need to take a step back.
As for me, I've been thinking back over the last couple of months because there was once or twice where you and I had, I think a conversation here on The Bike Shed where I shared that things were okay, but I didn't feel like my normal self. I was losing some of my interest and energy for technology and coding. And I'm very fortunate; I love what I do. So the fact that I wasn't feeling that interest was a really big sign to me that something's different; something feels off. And it does vary depending on the client that I'm working with. And I think feeling that burnout then was a mix of some of those client pressures that I was feeling and that I was working perhaps too many hours as I was very interested in that client's success.
And then the other stuff was more personal because we only have, to borrow from the spoon theory, we only have so many spoons to give. And so if you have a lot going on in your personal life as well, that's going to detract from the energy that you also have to give to work. Are you familiar with the spoon theory?
CHRIS: I am not.
STEPH: I recently heard about it, and I can't stop using it now because I really like it. But it essentially...and there's a really great article that we can link to so others can read about it because I'm not going to remember exactly who came up with this theory. But the idea is that each spoon represents a unit of energy. And let's say if you start each day with only ten units of energy and you use spoons to represent that, as someone needs energy from you, maybe it's work, maybe it's a personal commitment, maybe you're dealing with a chronic illness, then you are giving a spoon away to each of those. So at some point, you're going to run out of spoons. And you want to also be mindful of who you're giving these spoons to because you are giving that energy away.
CHRIS: I definitely liked the idea of we start each day with a certain amount of energy, and different things can pull from that pool and whatnot. I'm intrigued by spoons as the unit. It just feels like a weird...I got this little bag of spoons that I walk around with, [chuckles], and I give them out throughout the day. I guess it could be anything in there, you know, objects. But, I don't know, spoons are interesting to me.
STEPH: I think it's because this person who came up with the idea was literally having a conversation with their friend in a cafe. And so that was just something that was in front of them. And they're like, oh, I can use spoons to represent. Well, we'll have to double-check the article to make sure, but I think that's why spoons became the representation.
So circling back to once you're in burnout, what do you do with it? And that is one of my questions right now. And that's what I'm trying to synthesize a lot of information around. Because once you're in that state, I don't know of a lot of great ways to help other than take time off because, at that point, you're in a crisis state. And you need to step away, and you need to find out how you can recover from having entered this state of crisis. So that feels really important to identify ways that once someone is in that state, that then we can help them. And that feels good. We can advise someone to take PTO. I still don't feel great about it in terms that then, as a manager myself, I don't really know of other helpful ways to then help someone through that period.
So then I really started thinking about the fact that once someone is in that burnout stage, frankly, it's too late. We have let someone get to that point that now they are in that crisis instead of addressing it early on. So that is the other thing that's on my mind is one, how do we help people that are already in that crisis state? But then two, how do we start identifying that someone is starting to go in that direction? And then how do we help them tell us? How do we then triage those situations? How do we prevent them from getting to that burnout state? And that's where I've also found some really good content.
And specifically, there is a podcast that I've started listening to called The Burnout Show. They essentially share their experience with burnout, and what they did about it, how they recovered from it, and then how they continue to fight it because a lot of people then still go back to the workforce. So then, once you do find a way to recover, then how do you go back to work? And there have been some really great episodes. And I'll be sure to include a link for it in the show notes.
There's one particular episode with Grant Gurewitz, who is a guest on the show. And he speaks specifically to the strategy of Three Good Pockets. And this speaks to the idea that there are many things that we can't control in our day. It could be work, family, other commitments, but we can strive for Three Good Pockets of time where we focus on something that's just for us. This is time that's reserved for you and any activity that you find restorative or joyful. And each pocket can vary in size. So perhaps that first pocket is spent just reading a few pages from a book that you're enjoying, and then the next pocket of time is spent outside or calling a friend.
And Grant also has a great suggestion around if you're worried that you'll get sidetracked and not actually step away, which I felt called out for that one because that one's definitely me. I will have good intentions, but then I won't actually take the break that I set for myself. So Grant recommends creating a list of restorative activities so that way when it is time for that break when your calendar is reminding you, then you have a list of these activities to choose from. So it makes it easier to say, okay, then I can do this for a couple of minutes, and I can truly step away from work and step away from my screen.
But especially now, when so many of us when we're sharing our workspace with our restorative space, for everybody who is still working from home or working remotely, then creating those daily breaks are incredibly important to our wellbeing. And so, it has me thinking about what restorative activities can I add to my day? How can I encourage other people to add more restorative activities to their day? So I really appreciated that advice.
And I have noticed that the idea of burnout, but not so much burnout specifically, I've been thinking of it as recovery and balance is a theme for me. And it is something that I am purposely choosing as a theme right now where I want to research and understand more of how we handle these situations and continue to make progress not just for myself but also for my team.
CHRIS: I think finding that right cadence and structure and way to reinvest in yourself and ideally gain more spoons if that is at all possible or at least defend the spoons that you have, those all feel very meaningful.
I do have a question. I'm interested in your thoughts on this. I feel like we hear about burnout a lot in our industry. I get the sense maybe that it is a more common thing. Like, I hear so many developers talking about how their dream is just to give up tech and go get a cabin and just farm in the woods or something like that. And I wonder, is it a more pervasive thing in our industry? So that's one question.
Another is just an observation that we actually do work in a wonderfully...it's an amazing industry where being a developer, there are so many jobs out there. And I don't want to discredit anyone's efforts if they're earlier on and struggling with that. But broadly speaking, it is a developer's market trying to go out there and get jobs and extremely well-compensated, as a general rule. But does that come with this inherent burnout? And if so, which I'm not sure is true, I wonder if maybe we're just more vocal and maybe we actually share more in public. We have more blogs and podcasts and things like that. And that's just a common thing for developers, and so we hear the stories more often, whereas maybe in other industries, it is actually very common, but people are suffering in silence.
But also I do wonder, our industry is still so young. The work that we're doing is changing constantly, and with that churn and that working in the unknown, maybe there is an inherent nature to it. So that's a bunch of pontifications off the top of my head. And I have no idea what the answer to any of them is. But I am intrigued because it does feel like the shape of burnout as a concept in the developer world is perhaps a little overrepresented, or maybe it shows up more than I would expect. And, I don't know, is the work that hard? I don't know. But then I hear these stories constantly, and I definitely have felt it myself, so maybe.
STEPH: Yeah, maybe. Yes, I do think the work is that hard for the record. It's challenging work. I enjoy it, but it is challenging work between figuring out the tech but then also everything else that comes with that.
I don't have anything to back this up, but I suspect that a lot of other industries are also experiencing burnout. And I just happen to be more aware of it right now because I'm hearing it more from my friends and the people that I work with. And I suspect that's more directly related to we all just went through 2020, and probably a number of us were trying to forge ahead and get through that time. And so there may be a lot of us that are just now dealing with those consequences of where we just pushed ourselves through a very hard time. And now a lot of that is manifesting and surfacing around really identifying the damage that we may have done to ourselves by just prioritizing work and trying to put our head down and get the work done even though there was so much happening around us.
And I suspect that may be a contributing factor is that now people are really starting to recognize, like, oh, I feel this way. And maybe there's time for me to address it. Or frankly, it may not even be that there's time, but your body is just like, okay, I'm done. I made it through the past year or however many months, and I'm going to start shutting down on you. I've given you all the warning signs, but now we're here. We're at a breaking point. So I don't know about the other industries, but I do know the reason that it's more on my mind is because I'm just hearing it more from people, and they're just expressing it. And so, it has become more of a focal point for me, and I've experienced it myself more recently.
I'm sure I experienced this back early on in my career, but I took a strategy of well, I'm just a junior, and I just have to get through this. And I have to build experience. For the record, that is not a healthy mentality. I'm just being honest about where I was in my life. And so, I didn't really stop to think about it, but perhaps it is becoming more normalized where people are having more open, honest discussions about where they're at. And if other industries aren't talking about this, I would love for them to.
So to round that out a bit, this is something that is just very interesting to me. It's very top of mind. So I suspect I will be sharing a lot more content in future episodes that are just around this. How do we recover? And then how do we balance? How do we work hard without burning out?
CHRIS: Work hard, play hard; those are the two placards that you have. Well, I look forward to continued conversations on all of those topics because they are sort of that's the story that underpins all of the work that we do. So I'm very interested to chat more about that.
STEPH: Thanks. So what's going on in your world? How's your week been?
CHRIS: Oh, my week has been fine. My topics are going to be way more mundane and tech-focused. But let's see, a couple of things, so one is that Stack Overflow has their I think it's Annual Developer Survey. And this year, the results came out, and there was an interesting standout, which was that Svelte was the most beloved framework, which was very exciting to see. Granted, you always have to take these sorts of stats with a grain of salt.
But Svelte was 71% loved and 23% dreaded, which they give it as a ratio of how many people really love this thing versus how many people really hate this thing. And so Svelte, 23% of people who have used it are like, I hate that, but 71% loved, so that's a 48% net approval rating. Versus React which was 69% loved, 31% hated or dreaded as the word would be, so that's a 38% net approval. And then Vue, interestingly, was 64% loved, 36% dreaded for a 28% net approval rating. So, yeah, Svelte was decidedly winning in that.
But again, the big grain of salt there is looking at the usage stats. React has 40% usage. So of all the respondents, 40% of the people responding to the survey were like, yeah, I've done React professionally, which is a wildly high number for a JavaScript framework. Vue was at 19%, so roughly half of React's usage, which I'm actually impressed that Vue is that high. And Svelte came in at 3%, so it's definitely still in the early adopter strong fan phase. So it makes sense that they would have this outsized high rating. I'm actually surprised that Vue wasn't higher than React, given that. Because I feel like more people are cajoled into React versus Vue can be more of a choice. And I would have expected this to shape out a little bit differently, but yeah, that's the story.
STEPH: That's really cool. I liked how you described that as in the very early adopters’ strong fan base stage.
CHRIS: But nonetheless, the people that are using Svelte do seem to really like it; that’s coming through in these numbers. And that definitely is my experience. I love Svelte and would love to continue using it for as long as possible. But really, I want a lot of other people to start using it. I want to really grow the usage base so that there are more libraries, and frameworks, and blog posts, and just mindshare in that space because I really do believe there are some wonderful ideas in Svelte. And it's just so straightforward to implement things that I just want more people hanging out. So that's one quick thing.
Another quick thing is, I've been using a utility lately or a program called CleanShot X, which is a replacement for the built-in screenshot utilities on OSX, and it is just fantastic. So I can capture a screenshot. I can capture a window. You can capture a GIF or a video. And then you can do little trims and annotations. And then it has this really nice feature where after you take a screenshot, it just hovers in the bottom corner of your screen and is easily accessible. So if you take a video, and then you want to upload it to a Trello card, it's just floating there waiting for you. You can actually dismiss it and push it down, but it's still peeking up from the bottom of your screen, and you can pull it back up, and you can have a couple of them. But it just really makes the whole workflow of grabbing screenshots or videos so easy.
And I cared deeply about that because now that I have this tool, I'm all the more inclined to grab a screenshot or a video with just about every piece of work that I do. So it's going into pull requests; it's going into Trello cards. And it's so nice to have a utility that just really makes that as easy as possible.
STEPH: I really liked how you mentioned that you can annotate because I often...I'm laughing as I'm thinking about this. When I am taking a video of something that I'm going to share with someone, I will use my mouse to indicate, oh, this is important. And so I circle around it and do silly things with my mouse to try to indicate but being able to annotate would be so much nicer.
I know there is another tool that you're really excited about that I can't remember off the top of my head right now. Do you know the name of the tool I'm thinking of?
CHRIS: Was it Loom?
STEPH: Yes, Loom, because I also used that for a little while, and I've really enjoyed it. So I'm curious, how does Loom and CleanShot X stack up? Is one replacing the other, or are they complementary tools?
CHRIS: Mostly complementary. Loom is great because it hosts the videos, and you can also do audio capture, although I wonder if CleanShot has that as well. CleanShot also, I think, has a hosting thing. So I think there's a strong overlap in their functionality, but right now, I'm using both. And definitely for screenshots and things, CleanShot owns that end of it. And I think it's more likely that I could have CleanShot as the entire tool that I'm using. But I'm still using Loom for this is a walkthrough where I'm going to talk to you about a thing. I want to make it available at a URL that everyone can see rather than actually getting a GIF or MOV artifact file on my computer. So ever so slightly different, but I think of them, CleanShot X is probably the ideal one. But yeah, I'm still reaching for both.
So the one other thing I did want to talk about is I have been expanding our use of the dry-monads within the project that I'm working on. And I've done some things. I did some stuff, Steph, and I think it's good.
STEPH: Shtuff with Shteph. [chuckles]
CHRIS: Shtuff with Shteph, yeah. I'm definitely pushing the envelope of how much we're leaning on these concepts within the app, and I continue to question it. I'm really intrigued to see what happens when other folks come into the project, and they're like, "Why can't I just get the value? It should be a string. Why isn't it a string? Why is it a string that I have to do a ceremony and a dance to get at?" And I'm like, "Well, because everything can fail, you know, like life."
But what I have done here so dry-monads is the project that we're using, particularly their result type. So the result represents something that can either succeed or fail. And so we either have a success, which is this wrapper around the value that's successfully executed. So say we make an API call, we get back a response. If we get a 200 or maybe even a 300, then we get the data, and that's a success, or we get a failure and the error message. But fundamentally, we're modeling that in our system in a way that downstream from that, we have to basically determine if it was success or failure. So we're really encoding into the system; listen, pretty much everything can fail, so let's be careful with that. Let's be intentional and purposeful with it.
But there is an interesting thing where these objects have fmap as a method on them. So fmap is a way to transform that wrapped value, but fmap works specifically on the success case. So if you make an API request, you get back the data. Everything's great. You can call fmap, and it will yield into a block that data. You can transform that data in some way, and then it will rewrap it up as a success object. So you can operate on this thing as if it has been successful. But in the case that it's a failure, it will just ignore that transformation because you don't want to transform the failure. It's going to be a totally different shape of data. So you want to separate those. We're getting into functors and monads here. So I'm going to handwave a bunch. But fundamentally, that's the thing that we're going for here.
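To make that concrete, here is a minimal sketch using dry-monads' Result type; the FetchProfile command and the fake response object are invented for illustration:

require "dry/monads"

class FetchProfile
  include Dry::Monads[:result]

  # `response` is anything with #status and #body.
  def call(response)
    if response.status == 200
      Success(response.body)                          # wrap the happy-path data
    else
      Failure("upstream returned #{response.status}") # wrap the error instead of raising
    end
  end
end

FakeResponse = Struct.new(:status, :body)

ok  = FetchProfile.new.call(FakeResponse.new(200, { profile: { name: "Steph" } }))
bad = FetchProfile.new.call(FakeResponse.new(500, nil))

# fmap only transforms the Success branch; a Failure passes through unchanged.
ok.fmap  { |body| body.fetch(:profile) } # => Success({:name=>"Steph"})
bad.fmap { |body| body.fetch(:profile) } # => Failure("upstream returned 500")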
But we found ourselves really wanting to work with both sides. So we make this API request, and in the case that it succeeds, we actually want to transform and slice out a piece of data from the nested object that we get back. So that's one transformation that we want to apply on the success side of the aisle. But then, we also want to transform the failure message. It turns out this backend is giving us very unfriendly error messages. So we want to take those and transform them into friendlier user-facing error messages.
So it turns out we want to map both sides. And so I went to dry-monads, and I was like, what do you got? I want to know about this in the world. And it turns out they did not have anything. So I started looking into it, and it turns out this is a concept in the world of functors, specifically. Or, more specifically, I reached out to a former colleague, Sid Raval, a former thoughtboter as well. And he likes the functional programming stuff, so I knew he was the right person to ask about this. And he pointed me at bifunctors. So I found myself in a new space, category theory, which I never thought I would explore in this way, but here we were.
So a bifunctor basically is exactly what I was talking about where there are these two branches. In our case, it's either success or failure, but it allows you to operate on both sides, both branches. So the method or the function that gets applied there is bimap. So it's fmap, and I don't know why it's f or why that's typically what it's called. Success map would be a really great word in this context in my mind. So success map only deals with the success side, but bimap takes two different transformations, one for the successful outcome and one for the failure outcome. And it allows you to very directly talk about what you want to do with that.
To be clear, dry-monads has a function called Either or Either, depending on how you want to pronounce it. And that takes two Lambda proc-type things because it's Ruby, and functions are kind of weird in Ruby. But it yields you either the successful value or the failure value, but then it doesn't rewrap them. So it's meant to be the terminal. You use that in a controller when you're either redirect or render or whatever it is you want to do. What I wanted was something for mapping, so staying in the success object or the failure object but yeah, bimaps. So I introduced my own extra wrapping layer. This is where things go off the rails, I think. We now have our own internal result objects.
I thought about monkey patching for a while. I convinced myself monkey patching was a bad idea. Now that I've implemented as an extra layer of wrapping and I got the wrapping wrong like four times, or I kept recursively wrapping and re-wrapping, and there's a reason people aren't supposed to write these things themselves. But I think monkey patching may have been a better idea here, or maybe I should have never done any of this. We ended up with a stable working implementation and a nice test suite that covers it. But I introduced bimap and failmap as two different methods on our success object. And I did it by doubly wrapping the result objects. So we have our internal result, which wraps the dry-monad result. And I'm worried about that future situation where a junior developer comes on the team and is like, "I don't know what any of this is."
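For a sense of what that extra wrapping layer might look like, here is a hedged sketch of an internal result object that adds bimap and failmap on top of a dry-monads Result; the AppResult name and the exact shape are guesses for illustration, not the actual code from the project being described:

require "dry/monads"

class AppResult
  include Dry::Monads[:result]

  def initialize(result)
    @result = result # the underlying Dry::Monads::Result
  end

  # Transform only the success value, staying wrapped.
  def fmap(&block)
    self.class.new(@result.fmap(&block))
  end

  # Transform only the failure value, e.g. a raw backend error code
  # into a friendlier user-facing message.
  def failmap
    return self if @result.success?
    self.class.new(Failure(yield(@result.failure)))
  end

  # Transform both branches in one call.
  def bimap(on_success, on_failure)
    if @result.success?
      self.class.new(Success(on_success.call(@result.value!)))
    else
      self.class.new(Failure(on_failure.call(@result.failure)))
    end
  end

  # Only the edges of the app (controllers, mailers) should reach for this.
  def to_result
    @result
  end
end

wrapped = AppResult.new(Dry::Monads::Failure.new("ERR_42_UPSTREAM"))
wrapped.failmap { |code| "Something went wrong (#{code})" }.to_result
# => Failure("Something went wrong (ERR_42_UPSTREAM)")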
STEPH: I love the weekly progression of I've done some things, and let's talk about it. And then seeing this glimpse into your argument with yourself as to yes, but we need it, and we want it, and it's not something that's defined. So let's go ahead and implement it. You ended on a high there where you talked about the fact that it is nice to work with. It does the thing that you'd like it to do, and it's well tested; I love that part.
I'm deciding which thread to go with because there were a lot of interesting bits in everything that you shared. And I'm intrigued about the monkey patching as to why you think that could have been a better approach. Could you talk more about that one?
CHRIS: Sure. So I ended up having to introduce the secondary object that then wraps the result. And so if you poke at that even a tiny bit, you start to see this like Russian doll nesting of it's our result wrapping a result, wrapping a value. And it's a burrito with three different tortillas wrapped around it. And if you want to add guacamole, you got to unwrap all three burritos. I'm sorry, this is a terrible monad joke here. [laughs]
STEPH: For the record, if Taco Bell is not offering that, they should now, now that you've created that. [laughter]
CHRIS: There is one, but there's usually cheese in between the layers. They're not just extra layers of tortillas, so that's what I've got here. It's way too much tortilla, and that's sad. You don't want that. And it's confusing. You're like, wait, does this one have two or three layers of tortilla? So when I was working on it and when I was implementing our additional wrapping layer, I tricked myself multiple times. And my test suite was telling me that things were working, but I was testing incorrectly. And I was like, oh man; this is very subtle. And even though I'm deeply immersed in the context here, I'm still struggling with this.
So the question is, did I successfully encapsulate all of this? And now anyone downstream just gets to use it, and everything will be fine. Am I the heroic programmer that made the perfect abstraction that no one's ever going to struggle against, or was that pain that I was feeling representative of the complexity of what I'm trying to do here, and maybe I got it wrong?
And so then the monkey patching side is we've got this one layer of wrapping. What if we just monkey patch the result such that it's got these new methods? I'm not adding an additional layer. I don't need to deal with double mapping through multiple layers, and I just get to deal with the context that I have. So I did an initial spike of an implementation that way, but I talked myself out of it because #monkeypatchingisbad, but I don't know. I don't know where I'm ending here. I am happy with where we're at, but I am aware that I may be sad down the road.
STEPH: I'm just dying over here [laughs] from everything you're saying. I have this image of you staring out the window thinking, am I the hero, or am I the villain? And figuring out who you are in this scenario with this abstraction that you've created.
CHRIS: To be clear, it's raining, and I have a nice rocks glass of scotch in my hand. And I'm just wistfully looking out the window trying to determine what's true. That's pretty accurate, actually. That's pretty much what's going on here.
STEPH: I think we're just going to need updates as it progresses along.
CHRIS: I will say overall this paradigm...so failures can happen at all levels. These command objects, which are the core of where this is coming in in the application, really represent the workflows of the app in a wonderful, testable, straightforward way. So I love that part. And if I have that, it implies that I have to have this other stuff. I don't think I can get away from that.
The other thing is I've always loved Gary Bernhardt's Functional Core, Imperative Shell, which is a conference talk that he gave a while back. I talked to Gary about it actually when he was on the show, and we can link to both that and the conference talk because they are fantastic. And I just love the way he thinks about software. But that was always a little bit abstract for me. Like, what does that actually look like, though, in say, a Rails architecture? And I didn't have a great answer.
And now this thing that I'm doing is the closest I think to that where the innards of the system are almost functional even though it's Ruby. We're leaning into that. And so we have these command objects that take in some data, and then they operate on it, and they yield out these results. Did it go well, or did it not? We've got the railway-oriented stuff, which again I'll link to. I link to now every third episode, apparently, but here we are. And that just models the reality of these programs in a really great way.
And then we've been trying to introduce some, not rules per se, but guidelines as to how we interact with these things. So inside of the core of the application, we're trying to be as functional as possible. We do transformations on these result objects, but that's it. And then it's only really in the controllers or the mailers that we are doing unwrap or deal with the actual nested value, but everything else is working on conceptual values or whatever it is…these result objects. And that's actually been really nice, and it's allowed us to have really nice error handling within the app. The logging is very straightforward. A lot of apps that I've worked on in the past, I've just silently thrown away many error cases or edge cases. And this is a really great way to sequence the work that needs to be done but never throw away data that you would want. And so far, I'm finding it to be really great. And I'm seeing very obvious ways to hook into it like, oh, this is where we need to find the user-facing error message versus this is where we figure out what we want to log. And so, in some ways, it's great, but I am still open to the idea that this was a terrible idea. I always remain open to that idea to be clear. [chuckles]
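As a rough illustration of that "unwrap only at the edges" guideline, here is a hypothetical controller acting as the imperative shell, branching on the result with dry-monads' either; CreateOrder, the routes, and the params are made up for the example:

class OrdersController < ApplicationController
  def create
    # Everything inside CreateOrder stays functional and returns a Result;
    # the controller is the first place we actually branch on it.
    CreateOrder.new.call(order_params).either(
      ->(order) { redirect_to order_path(order), notice: "Order placed" },
      ->(error) do
        flash.now[:alert] = error
        render :new, status: :unprocessable_entity
      end
    )
  end

  private

  def order_params
    params.require(:order).permit(:sku, :quantity)
  end
end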
STEPH: I love how that's the feedback that you're always open to it. You're always trying something new, and then you're constantly going back to revisit; was this a good idea or a bad idea?
To go back just a little bit, I do absolutely love the priority and focus that you're giving to the failure state because I feel like that is an area that we, as developers we're very skittish of that failure state. And I realize I'm projecting here. So if you're listening, feel free to take that however you like; maybe it doesn't apply to you, maybe it does. But the failure state is like you said, it's life, it's important, and it's something that is going to happen. And it is something that we should make accommodations for.
And I find that we're often very hand-wavy with a failure state. So I love, love how much you always prioritize the failure state and make that something that people can work with and understand. Versus when something goes wrong, that's when we have to start to understand the failure state. I recognize there's a balance there because you're not going to know the failure state until you encounter it, but there are ways that we can still optimize to have observability into that failure state for when we do encounter that failure.
CHRIS: I've definitely seen that as an evolution in my own thinking, how much am I focused on how easily can I do the core thing, the happy path versus how robustly can I do all of the variations of what this app needs to do? What if the network's down? What do we do there? I do occasionally worry that I've overcorrected on that. And it's like, you know what? This thing that I'm worried about that I'm protecting against in the application is a 0.01% edge case. It's going to affect almost no users, and we're both putting time into trying to avoid it. But also, there's code complexity that comes from trying to handle all the different variants. And so there's definitely an optimization, and I feel like, at different points in my life, I've been undercorrected or overcorrected on that.
But I think if I were to describe the arc of my career, it is desperately searching for that optimal path and trying to find exactly the right amount of error handling to apply, and then yeah, then I'll be happy, then it'll be great. But it is like when I look at DHH's classic 15-minute "I'm making a blog, look how much I'm not doing" demo, I'm like, sure, sure, sure. Show me that in three years, though. What does the blog look like? How easily can I add a new feature? What happens when there's a bug in production, and a user reports it? Can I chase it down? Can I figure it out? Can I fix it? These are the questions that I care about now, almost to the exclusion of what's the first run experience like? I almost don't care about that at this point. Because I spend my time from six months on; that's where the hard work is. And so the first couple of months where you're figuring things out, that's not the hard work of this thing. And so, I'm very strongly focused on those later periods of time. But again, I'm open to the idea that maybe I am overcorrected there.
STEPH: I think it does highlight more of a shift in our career. We're still building, but we have experienced the maintenance side as well, and we felt that pain. And so that has led to perhaps overcorrecting, or maybe it's the correct amount of correction.
But I do like how you highlighted that there is always a cost to each side, and those are usually the questions that I'm asking myself when I'm thinking about the failure mode and how much I want to optimize for it: how much does it cost when this fails? Who's it going to impact? And how much does it cost for me to make this more observable or to address this failure state? And then I try to find the balance between those two. Because you're right, it's not free to address that failure state. And so I may not want to fully optimize to handle that if it's going to be a very small percentage of users that are actually impacted by this failure state, or it seems very rare that this is going to happen. But then I'm still finding ways to know that if it does fail, I can say, "Okay, I'll come back, and now it's worth the investment to improve this."
CHRIS: When you said earlier that this is really hard work that we do, I don't know that I believed you. What you just described sounds super easy. You just handle all the stuff, and you dynamically optimize for the needs at the point in time. And that seems easy. [laughs]
STEPH: Super easy, yeah. [laughs] Way to bring it back. Well, speaking about observability and failure states, that does lead nicely into a bug that I was working on this past week where there was a particular page that was loading very slowly. And it was something that we'd heard about from users, who let us know that this page was either taking a very long time to load or, frankly, it was just crashing. And then they were never getting to that page. So I happened to be the one that then picked up that ticket. And I went to reproduce the issue, and sure enough, when I clicked on this particular link and then started counting, it took about 14 seconds for that page to load, which is a very long time. And then also sometimes it was just crashing.
So the first place that I went was to our error tracking. So I went to New Relic to then look to see okay; maybe there's a slow query. There's something here that's creating this performance issue, but I couldn't find anything. And New Relic does a great job of breaking down all the different response times so I can see how long Postgres is taking, Redis, and Ruby. All of those looked very normal. I couldn't find anything that seemed alarming that was indicating that the page was struggling to load even though I could reproduce the problem. Because I was clicking on it several times thinking, okay, well, if I just do this a couple of times, New Relic's going to notice, and then I'll get to see something, a little breadcrumb that's going to lead me in the right direction.
And while I was waiting for New Relic to surface something helpful to me, I mentioned to another developer the issue that I was triaging. And they said, "Yeah, that page has been getting progressively slower, and we don't know why." And I thought, ooh, okay, I'm intrigued even more now as this is something that has been escalating over time, and now we've hit this threshold that we're working on it.
And I discovered that in New Relic, I can look specifically at Postgres, Redis, Ruby, all those different response times. But there's a browser monitoring tool that I had not used before. And it showed a lot of helpful information around First Paint, First Contentful Paint, window load, all of those areas. So I started diving in and found session tracing, and it was there that I saw New Relic was telling me, "Hey, you have a page that's taking about 14 to 15 seconds to load." And I thought, okay, I feel validated now that at least New Relic is recognizing this issue. I have seen this issue, but I still didn't know why it's occurring.
So the next tool that I used, which I don't know if I've used before or it's just been a very long time (it felt fresh in the moment), is the Chrome DevTools performance tool. And so you can open that up in your inspector, and then you can go to the page that you want to track the performance. And then you can essentially say, "Hey, go ahead and start profiling and reload this page." And it has so many stats when it finally does load. It has CPU flame charts, which essentially visualize a collection of all the stack traces. It has a film strip, so you can actually see the rendering progress of your website along different time points. So if you wanted to go back to a specific time, you can see what did the webpage look like at this point? And then if you go a little further, okay, how much was loaded at this point? So there's a lot of interesting and a little overwhelming information that's there.
But the thing that did catch my attention is there's a chart. I don't actually know what this chart is called. It's not a pie chart because there's no center to it. So it looks like a donut chart, and it's broken down. And it shows you the loading times, scripting, rendering, painting, all of those different values. And the rendering time was taking 35 seconds. And I was like, ooh, okay, that is meaningful right there. So then further investigation, now that I knew what I was looking for, I wasn't looking for something more on the back end. I was looking for something more on the front end. And I didn't think it was necessarily JavaScript because we also have JavaScript on this page. So at least this was helping me get a little bit closer before then I went into the codebase to start seeing what's happening.
So once I knew it was a rendering issue, I went to look specifically at that view, and we have a form on that page that was generating an empty HTML select option for every record that's in the system. So let's say that you're ordering from a restaurant. On this page, there was a form where it had a list of all the restaurants, and, in our particular case, we had about 17,000 restaurants. So there were 17,000 empty HTML select options, which can have a significant impact on the DOM and page load time. And that was the piece that was really causing the performance problem: the fact that we were rendering all of those empty select options.
So from there, it was then just triaging: okay, we don't really need to render all of these restaurants. There are ways we can scope this down so that we're only showing a little bit at a time versus creating all of these empty options. I should clarify they're empty because part of this form is that you select from the first dropdown, and then it populates the other one and gives you more information. But the way that this form was implemented, it was actually trying to show all of them at once. It didn't actually have the data yet, but it was doing a Restaurant.all type of count. And so that's how we were getting that many empty options.
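A rough sketch of the shape of that problem and the fix, with hypothetical model, field, and route names standing in for the real client code:

```erb
<%# Hypothetical "before": one <option> per record in the system, even though %>
<%# there is no data to put in them yet %>
<%= form.select :menu_item_id, Array.new(Restaurant.count) { ["", nil] } %>

<%# Hypothetical "after": start with an empty select and populate the options %>
<%# from a scoped request once the first dropdown has a selection %>
<%# (restaurant_menu_items_path is a made-up route helper) %>
<%= form.select :menu_item_id, [], { include_blank: true },
      data: { options_url: restaurant_menu_items_path } %>
```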
So it was a very interesting journey. It was very helpful to learn that New Relic has this browser monitoring tool. And I really appreciated their performance tool. And circling back to Chrome, Safari may have something similar. But I found Chrome's performance tool very helpful because then it helped me realize that it was the rendering. And so then I could really focus on the markup and the view versus knowing it wasn't more in the database layer.
CHRIS: I really love the description, almost like a mystery novel of these bugs when we encounter them. Because if you just get to the end and you're like, oh, I was rendering a select, and it had all of this, that loses so much of this story because again, the coding is not the hard part of the work that we do; it's the figuring out what needs to be done. And in this case, that journey that you went on to find the bug. I really like the point where you said, "And someone mentioned this page has just been getting progressively slower over time." I was like, ooh, that's interesting. Now we got a clue. Now we've got a lead, and we'll chase it down, and then finding the browser tools and all of that.
And also, as an aside, browsers are just such an immensely impressive piece of technology, everything that they do. And then you add the DevTools on top of them and magical stuff going on there. But yeah, also probably don't render 17,000 empty selects. [laughs] That seems like it will get you in trouble pretty quickly. But also very easy to get to and especially if there is this incremental, slow creep over time where it's like, oh, that page seems like it's a little slower. It's a little slower still, and it just keeps creeping up over time. But yes, I appreciate you taking us on that journey with you.
STEPH: Yeah, it was a fun discovery. And it made me realize that while we have alerting set up for some of our other queries, we don't have anything set up for the browser time. So that would be a good optimization on our side is to start alerting us before a page gets to the point that it's taking that long to load to notify us sooner. So we don't have to wait for a user to reach out to us, but we can triage sooner.
CHRIS: I also do love the idea of extending the metrics that we hold ourselves accountable to all the way through to the user, and so the First Contentful Paint and all of that. The one that I really love recently that has captured an idea that I struggled to put words to is the Cumulative Layout Shift. Are you familiar with this piece?
STEPH: Uh-uh.
CHRIS: So there's like, how quickly does the page render? That's the thing that we want to know. But a lot of applications these days, particularly single-page apps, render pretty quickly, but they render what ends up being a skeleton or a shell of the page. And then behind the scenes, there are like ten different AJAX requests happening. And as the data comes in, suddenly, a part of the screen will render. And they'll render a list of items that they just got back from the back end, but they're still waiting for the information to populate the header. And so if you look at that page, it's constantly shifting as it's loading and just feels, I don't know, flimsy in my mind. But I didn't have a good word, or I didn't have a metric or a number to attach to that. And then I learned about Cumulative Layout Shift, and I was like, oh, that's the one. Now you've mapped the thing that I was feeling. And I like when that happens.
STEPH: Is that the difference between First Paint and First Contentful Paint? Is that similar?
CHRIS: I think it's more than that because there are not just two discrete events in this. It can be multiple. And so it's like, how many different times did the thing that's rendered on the screen move around? And so if images are loading, but you didn't have a proper image height and width set, that's another way that this can happen. Initially, the browser is not going to reserve any space for that because it's like, I don't know how big this is. And then, when the image shows up, it now knows the intrinsic height and width of the image. So suddenly, your page is going to jump from that. You can get ahead of that by putting the height and width on your image, and that's great to do. And frameworks like Next.js have done some really amazing work of making that a build time step as opposed to something you have to do manually.
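For the image case specifically, the idea looks roughly like this in a Rails view (the file name and numbers are made up); declaring the intrinsic dimensions lets the browser reserve the space before the image arrives:

```erb
<%# Without dimensions, no space is reserved and the layout shifts on load %>
<%= image_tag "hero.jpg" %>

<%# With intrinsic width and height declared, the space is reserved up front %>
<%= image_tag "hero.jpg", width: 1200, height: 630 %>
```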
But then also, more generally, how do we handle this? React is doing some interesting work with Suspense, where you can aggregate together multiple different loading states into one collective thing. It's almost like Promise.all, but for your page. I haven't followed that too closely, but I know that that's framework-level work that's happening over there. GraphQL does a really great job of allowing you to group queries together. So there's a solution on that side. But broadly, if you just render some HTML on the server and you send it to the front end, then you don't have this problem because you just have one ball of HTML. The browser is pretty good at rendering that in one pass versus if you have single-page applications that are making a handful of AJAX requests that will resolve in their own timelines and eventually paint to the screen. You get this different shape. And then the worst case of it in my mind is you render half the page. And then suddenly, one of the requests realizes the JWT has expired, and suddenly, you get thrashed over to the login page. Please don't give me that experience, developers, please. Please do something else that isn't that. That makes me sad in my heart.
STEPH: Prioritize the failure state. That's what I'm hearing.
CHRIS: Callbacks.
STEPH: Well, on that wonderful circular reference, shall we wrap up?
CHRIS: Wait, I thought circular references were bad...Never mind. Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed, or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
This is the sweeps week episode, the epic crossover episode, the mega episode! We have a very special episode as Chris, and Steph teamed up with the hosts of three other podcasts to bring you one giant, mega Ruby episode!
In this episode, you'll hear from the hosts of Remote Ruby, Rails with Jason, and Brittany Martin, the host of the Ruby on Rails podcast. They cover the origins of their shows, their experiences as hosts, and why podcasting is so important in keeping the Ruby community thriving.
Transcript:
STEPH: Hello and welcome to another episode of the Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. This week we have a very special episode as Chris, and I teamed up with the hosts of three other podcasts to bring you one giant, mega Ruby episode! In this episode, you'll hear from the hosts of Remote Ruby, Rails with Jason, and Brittany Martin, the host of the Ruby on Rails podcast. This episode was so much fun to record, and we have Brittany Martin to thank as she organized and moderated this special event. So without further ado, here is the mega Ruby episode.
BRITTANY: Welcome, everyone. We have a whopping seven podcast hosts recording today. So, listeners, you are in for a treat. This is the sweeps week episode, the epic crossover episode, the mega episode. We're going to need our editor to insert some epic sound effects right here.
Announcer: The mega episode.
BRITTANY: So let's go ahead and introduce the crew today. I am Brittany Martin from the Ruby on Rails Podcast.
CHRIS OLIVER: I'm Chris Oliver from Remote Ruby.
JASON CHARNES: I am Jason Charnes, also from Remote Ruby.
ANDREW: I am Andrew Mason, also from Remote Ruby.
STEPH: And I'm Stephanie Viccari from The Bike Shed.
CHRIS TOOMEY: I'm Chris Toomey from The Bike Shed.
JASON SWETT: And I'm Jason Swett from Rails with Jason.
BRITTANY: Today, we're going to cover the origins of our shows, our experiences as hosts, and why podcasting is so important in keeping the Ruby community thriving. Now I know personally, I really enjoy the origin story behind Remote Ruby. So, Chris Oliver, could you kick us off with that?
CHRIS OLIVER: Yeah, we can go back maybe to the first time that Jason and I met, which was Jason emailed me out of the blue and was like, "Hey, are you going to be at RailsConf?" And I wasn't planning on it, but it was over in Kansas City, like four hours away from me. I was like, "No, I'm not going, but I'll meet you." So we went and drove over there and met and have been friends ever since. And Jason had the idea of doing an online meetup. And I'll let him explain where that started and turned into the Remote Ruby Podcast.
JASON CHARNES: I thought it would be a good idea. There weren't any online meetups. This was pre even the idea of shutting down the world for a pandemic. And maybe I was just too soon because I got Chris to speak at the first one, and we had 40, 50 people. I spoke at the next one, and there were 20. And by the third one, there were five of us. So it wasn't really a super sustainable thing for me to do. So Chris and I got together and said, "What if we tried podcasting?" Chris, you hadn't really done your own podcast at that point, had you?
CHRIS OLIVER: No, I don't think so. And you and I were just having calls every week or whatever just to hang out and chat. And we were like, why don't we just record that and publish that as a podcast? And here we are.
JASON CHARNES: Yeah. So we've been doing that. I think we started in 2018, so yeah, three years in June, and somehow people still keep listening to us talk but probably because we brought along our friend, Andrew.
ANDREW: Wow. Okay. No, that's not true. But yes, I was a guest on Remote Ruby before I joined as a host. And not to get into the details, but I was on another podcast, and something went down, and I no longer was on that podcast anymore. And Chris and Jason were like, "Do you want to come hang out with us?" And I was like, [chuckles] "Absolutely." So I started doing that, and at the same time, I also started The Ruby Blend with Nate Hopkins and Ron Cooke. And so we were doing that for a while until that had to tragically shut down. But I'm still here with Jason and Chris. I guess I should also mention that Jason Swett gave me my start in podcasting a month or two after I started full-time as a Rails developer on a now archived show called The Ruby Testing Podcast.
BRITTANY: Which is the perfect segue because Jason Swett was also my first opportunity to guest on a podcast. So I was already hosting, but I hadn't guested, which is kind of the opposite order. So, Jason, do you want to tell the origin of where Rails with Jason came from?
JASON SWETT: Sure. I'd been involved with podcasting since around 2016. I somehow ended up on the Ruby Rogues Podcast and was on there for maybe a year or so. And then, somehow, I got the idea that I could start my own podcast. And as an experiment, I started a podcast that I called The Ruby Testing Podcast, which I figured was sufficiently narrow that I could get some traction. And to my surprise, guests actually said yes to coming on the show. And also, to my surprise, people actually listened to the podcast. That gave me some confidence. So maybe a year later, I broadened, and I changed from The Ruby Testing Podcast to just Rails with Jason. And I have been doing that for something like two years.
BRITTANY: That's fantastic. I want to move to probably our most experienced podcast veteran, and that would be Chris Toomey. When I was learning how to code, I was listening to Giant Robots and then was excited for the transition that The Bike Shed took. Chris, I would love to hear the story of what it was like taking over a really popular podcast and really maintaining the drive behind it.
CHRIS TOOMEY: So, as you mentioned, I had done a little bit of podcasting. It was about a six-month run where I was a co-host on Giant Robots, which was the original podcast of thoughtbot. And that was more in the business and sort of how do we build a software company? So at that point, I was running Upcase, which was the subscription learning platform that thoughtbot had. So I was talking about the inner details of the business, and the marketing tests, and A/B tests and things like that that I was doing. And every week, I was sharing my MRR rather transparently in that thoughtbot way that we do. I did that for, like I said, about six months and then took a while off.
And in the background, thoughtbot had started up a new podcast called The Bike Shed, and that started October 31st of 2014. So The Bike Shed has been going for a long time now, and that was hosted by Derek Pryor and Sage Griffin. And they ran that for a number of years. I think it was about four years that the two of them worked collectively on that. But at some point, they both moved on from thoughtbot, and there was an opportunity for new hosts to step in. So I took over in August of 2018. So I've been doing this now for about three years.
And so, for that first year, I took the opportunity to do a tour around thoughtbot and talk with many different individuals from the company and a handful of people external to thoughtbot. But I knew that there were so many great voices and ideas and points of view within thoughtbot that I really wanted to spend some time getting to know more of them personally and then sharing that as much as I could with the existing audience that The Bike Shed had. But secretly, all along, I was looking for a person to hang out with all the more so, and Steph was the person that was a perfect choice for that. And so, for the past two years, Steph and I have been chatting. And I will send it over to Steph to share a little bit of her point of view on that transition. But from my point of view, it's been fantastic.
STEPH: I still remember exactly when we had the conversation. You were running The Bike Shed and doing an incredible job of just having weekly guests. And then you'd reached out to me and said, "Hey, would you be interested in doing an episode?" And I thought, "No, absolutely not. I can't podcast. I can't begin to do this." So you continued to convince me. And finally, you said something that resonated where you were like, "Well, we can just show up and record, and we don't have to publish. We can just see how it goes." I was like, that's a perfect safety net. I'm into that. So I showed up, and I think the first episode that you and I recorded ended up being titled What I Believe About Software. And it was a lot of fun. I realized I have a lot of things to say. And after that, I think it was another month or so. You continued interviewing more guests, but then you reached out to me and asked me if I wanted to be a co-host. And at that point, I was super jazzed about it, and it's been wonderful. It's been a roller coaster. I have learned a ton.
BRITTANY: I'm kind of seeing a pattern here where over the last three years, it seems like Remote Ruby came into place, Bike Shed transitioned. That's when I took over as host of the 5by5 Ruby on Rails Podcast. We're going to call it the golden era of the Ruby Podcasts. But for me, I probably have the longest-running podcast. It was started back in 2009 on the 5by5 Network, but it's gone through many different hosts. And so, I took over roughly about three and a half years ago as the main host from Kyle Daigle. And then, just a couple of weeks ago, as I announced on my podcast, we took the podcast independent. We are now just The Ruby on Rails Podcast. And I'm starting to change the model where I'm bringing in more co-hosts. So that way, I can get those regular updates that I really appreciate on all these podcasts we have featured on the show today.
I am curious. I want to talk about how we put together the episodes and plan out how everything's going to go down. I know for me, I'm currently a mix of interviews and co-host episodes. So I'd love to hear from Andrew. How do you plan out what Remote Ruby is going to be week to week?
ANDREW: This is an easy question because we don't at all. We don't plan. We do have some guests that come on, and sometimes, they may get their Zoom link the day of; who’s to say? But we really don't have a plan. We don't talk about what we're going to talk about beforehand. We all just kind of show up, and I think we have that kind of relationship and flow where it always just works.
JASON CHARNES: And I think part of that came from actually how Chris and I started the show because we were trying to make it as low stress as possible because we knew if we put a lot of pressure on it, we would stop doing it. Our first episodes were YouTube live links that we just shared out. And then in our next episodes, we were like, oh, we should start using some software to do this. And then eventually, we got an editor, but that same core of let's just keep it fun for better or for worse, I think, also affects our planning.
BRITTANY: I've been lucky in the sense that I have guests sit on all three of the episodes. And I do want to give a compliment to The Bike Shed because it is very well run and very well planned. So I want to kick it over to Steph as to how putting together a Bike Shed episode looks.
STEPH: Oh, thank you. That's wonderful to hear, by the way. That's wonderful feedback. So we predominantly use Trello to organize our thoughts. So we will have...and as we're capturing community questions that are coming in, so we will capture those on the board. And then, we will have a ticket that represents a particular episode. Usually, on the day of, we'll share some thoughts about, hey, these are the broad topics I'm interested in. And there's usually some hot takes in there, which is fun because the other person doesn't know exactly what's coming, and we can have real honest conversations on the mic. And then, every so often, we'll grab a beer, and we'll go through that list. And we'll chat through what sparks joy. What do we want to talk about? What would we like to respond to? And that's pretty much how we organize everything that we discuss. Chris, is there anything I've left out that you want to add?
CHRIS TOOMEY: I think that mostly covers it. We do occasionally have interviews just as a way to keep some variety and different things going on, but primarily it's the sort of what's new in your world? And I find that those episodes are the ones that I think are the most fun to record for Steph and I when it really feels like a sincere conversation. I've recently taken to a segment I call good idea, terrible idea where I'm like, "I'm actually considering this, Steph. What do you think?" And live on-air, I'm getting Steph's feedback, and generally, we're very aligned. But every once in a while, she's like, "That's a terrible idea. Don't do that." And I love those, and I love being able to share that because I think it's really easy to talk about, you know, here's a list of things that are true about software, but really, everything depends. And it's all the nuance. And so, being able to share some of our more pointed experiences and then share the conversation that we have over those is hopefully very valuable to the audience but definitely the thing that I enjoy the most.
BRITTANY: So kicking it over to Jason Swett, I really enjoy the interviews that you do. I'm curious, how do you select guests?
JASON SWETT: Well, thanks. Selecting guests is tough. I had Peter Cooper on the other day, and I was telling him that I feel like every guest that I get on the show is the last guest I'm ever going to be able to get on the show. But somehow, I keep finding more and more guests. Early on, it was relatively easy because I would just find book authors, or if somebody else does podcasting, then it's fairly obvious okay, you're the kind of person who does podcasts, so I'll invite you.
But it's a little bit tough because I don't want to invite people who aren't into podcasting and would be really thrown, although sometimes that happens. But let's see, sometimes I send an email out to my email list, and I'm like, "Hey, I'm looking for guests for my show." Sometimes I just tweet that I'm looking for guests. And sometimes I get some really interesting guests from surprising places. But at least in the start, it was looking for those authors and podcasters and the people who are known in the Ruby community.
BRITTANY: I know for me, I strive to have at least 50% of my interviews be with people who've never been on a podcast before. And so that usually involves the top of the episode they're dry heaving into a paper bag. And I'm explaining to them, don't worry, about halfway through the episode, you're not going to remember that you're recording anymore. It'll be fine. And you know what? It's always fine. And so, I do love hearing from a wide variety from the Ruby community just because it really proves just how big it is. So I'm curious, could you host the podcast that you are currently hosting now if you weren't actively working in Ruby?
ANDREW: I could because Chris is the one that has all the clout. I could sit back and make dumb jokes and memes during it. And as long as Chris is there, I think we'll be good.
JASON SWETT: Yeah, I think I could because a good majority of what we talk about on Rails with Jason actually has nothing to do with Rails, so that would probably actually work out.
STEPH: I think yes is the answer. While a lot of our conversations do focus around Ruby and Rails, we often use a lot of other languages and tools, and those are a lot of fun to talk about. So I think I would just talk about whatever new tool or language that I'm using. So I think yes, it would just take a slightly different form but would still be at its core the same where we're still talking about our daily experiments and adventures in web development.
BRITTANY: I agree with you, Steph. I will say that it seems like Chris Oliver and Chris Toomey have an endless well of things to talk about just based on what they do day-to-day.
CHRIS TOOMEY: I try and go on adventures and then share as much as I can. But to resonate with what Steph was saying there, we try to make the show more generally about software, and it happens to be that it's grounded in Ruby on Rails because the vast majority of the work that we do is in that. And I just recently started a new project. I was given the choice to pick any technology I wanted, and it remains the technology that makes sense to me to be the foundation of an application that I want to maintain for years and years and years. So, on the one hand, I think I could definitely talk about software more generally. I think I'm doing that most of the time. But at the other end of the spectrum, it's always going to be based on Ruby because I haven't found a thing elsewhere in the world that is better than that.
CHRIS OLIVER: I completely agree with that. I probably have a little bit of a unique thing doing a screencast every week. A lot of those are based on I'm building some project, and I need to build some random feature like Stripe Checkout. And that's a good one to do a screencast on and implement in the project. And then, we can also talk about the decisions along the way on the podcast, which is kind of nice.
BRITTANY: Yeah, it feels like every week, Chris Oliver is like, yeah, I've created a new open-source library, and I'm fabulous. [laughs] Let me listen to this.
CHRIS OLIVER: Too many of them. I'm currently rewriting a lot of the Pay gem. And it's just one of those things where you make a bunch of decisions. And then, if you make an open-source project, people use it in all these different ways that you didn't intend yourself, and so you want to support that. But then you need to rearchitect things in it. It is a lot of learning as you go, which is always a lot of fun. So those I think are really good topics to talk about when you're building something like that.
I'm always amazed by how does the Rails core team make these decisions on what should be in the framework and what shouldn't? And what do they want to maintain, and how do they keep it flexible but yet have some sort of rule with how they allow things to be implemented and whatever? It is a very hard job to have. So I get my little taste of that with some open source but not on their level.
BRITTANY: I always thought that you had a good contrast to Jason Charnes because Jason works at Podia. And while you do get to work on a lot of really cool technologies, I feel like the stakes are much higher. So you can't just rip out StimulusReflex and put in something else just because it sounds cool that week. And I love how you talk through the pluses and minuses to making a big change within the Podia codebase.
JASON CHARNES: Yeah. I haven't really thought about that contrast before, but it's helpful for me even just to talk it out with two other people once a week, and luckily, pretty cool about me just coming on and talking about hey, these are the steps we took to get here. Yeah, it's a cool dynamic.
BRITTANY: Steph, have you ever had a client from thoughtbot say, "Hey, were you talking about me?" whenever you're talking about your current client?
STEPH: That is one of my fears at times that it will happen [chuckles] although we stay very positive on the show. That's something that's very important to us. There's enough negativity in the world. So we really want to focus on our positive experiences through the week. But there have been times where I'm speaking about some of the challenges or things that we are running into that yes, the engineering team is listening to the podcast, and they're like, "Oh, I heard you talk about this feature that we're working on or this particular challenge." And that's really cool because they get that behind-the-scenes peek to see how Chris and I are chatting about that. But yet they know enough, and they know which project that I'm on that they recognize exactly the technology and the feature that I'm trying to describe. So that has certainly happened, and it can be a lot of fun when it does.
BRITTANY: Andrew, how have things changed for you now that you're not working at CodeFund, which was very much like an open-source thing? People could see what you were actively working on. And now you're working for a company where it's closed source. And so, you might not be able to reveal as much as what you're working on at any given point.
ANDREW: It's different, but I don't think it's been an issue per se. I'm not like, oh crap, I let that slip, and I didn't mean to. That's not really an issue. I really cherish the time I had at CodeFund. When I think back on my experiences, that was my favorite time just because I was able to do that thing that a lot of people really want to do. I was working as an open-source developer. We were spiking StimulusReflex; that’s when we were building up StimulusReflex and trying to build up the community. I joined Ruby. We started the Ruby Blend, and things were going good before a dramatic turn. But in terms of the closed and open source, it hasn't been that big of a shift just because instead of talking about what I'm doing at work, like, I still talk about it, but I speak about it in more general terms. But I also then kind of freed up to talk a lot more about the dumb crap I do on the nights and weekends.
BRITTANY: So the majority of our podcasts either have the word Ruby or Rails in it, but I think we've all agreed that a lot of the topics that we're talking about are not specific to that community. But in a lot of ways, I feel that having podcasts in our community is how we're going to keep our community thriving. So I'm curious if anyone has any thoughts around...is there a way to market our podcasts so that other developers will listen to it? I get really excited when I get listener feedback saying, "Hey, I used to do Rails maybe ten years ago, but I've been listening to your podcast, and I really enjoy such and such episode." How can we make our podcasts accessible to the general software community as opposed to just Ruby?
CHRIS TOOMEY: One thing that stands out to me about Ruby and Rails is because it's full-stack, because of its foundations, it tends to be holistically about web development. And so, whereas I look at React projects or other JavaScript or different things that are going on, I see a more narrow focus in those frameworks. And with Ruby and Rails, what I love about it is that it's really about building software. It's about building products that are valuable, that deliver value to end-users. And so that being the core of it, that's the story that constantly brings me back to Ruby and Rails. And it's the story that I want to keep telling as much as possible. And it's the thing that keeps me engaged with this community. And so, I think podcasts are a great way to continue to literally tell those sorts of stories and really celebrate that aspect of Ruby and Rails and why it remains such a productive way to build software.
CHRIS OLIVER: I think related to that, one of the things that we should talk about more is the draw of Rails was look at what you can do with one person or two people. And I feel like we went down the JavaScript route, and now you need two teams of people, and you end up building bigger stuff. And Hotwire has kind of been like, hey, here's a reminder of what you can do with a very small team. And I think that resonates a lot with a lot of people building startups and trying to build side projects and everything. And that's one that is Rails-related. But there's a ton of people building Hotwire stuff in Laravel too. And they're all very similar. So I think at a certain point, yeah, we're talking about maybe Rails specifically, but you can apply all those things to different frameworks and just different tools.
STEPH: I'd like to add on and extend that because I wholeheartedly agree with what both Chris Toomey and Chris Oliver just said. And in addition, a lot of the conversations that we have on The Bike Shed are focused on Ruby and Rails, but then we will extract that particular concept to the point that it really doesn't matter which language that you're using or which framework that you're using. We're talking more about the high level. What's your process? What are you thinking as you're going through and implementing this? And based on more of our recent conversations, you'd think we're more of a Postgres podcast, how much we hype up Postgres, and the things that we can do at the database layer. So I think there are a lot of ways that we can start with a foundation of this is how we're doing it with Ruby and Rails, but then talk about it at a higher level where then it's really applicable for everybody.
JASON CHARNES: If talking about one technology defined your podcast, we might as well be a Laravel podcast because we talk about that framework more than we do Rails sometimes. [chuckles]
BRITTANY: So that begs the question: is there room for more Ruby and Rails podcasts outside of who's currently on this call?
JASON SWETT: I think so. And I mentioned that Peter Cooper was on our podcast a little bit ago. That's something he and I actually talked about in that episode. And I shared the anecdote about how in the new America's founding, Ben Franklin's brother or something like that wanted to start a newspaper. And somebody told him what a dumb idea that was because America already had a newspaper. And people might say, oh, there are already however many Rails podcasts. There are a small handful. But I think there could be ten more Rails podcasts or even more than that potentially because I think people have an appetite for help, and camaraderie, and stuff like that. And I don't think we've nearly bottomed out in terms of satisfying people's appetite for that stuff.
JASON CHARNES: Yeah, I agree with that because a lot of times, when I listen to podcasts, the more you get to know someone, that connection becomes what it's about for me. So, yeah, there's plenty of room. I mean, brand it as Ruby and tell me about your life as a developer I'll listen.
CHRIS TOOMEY: I'll also throw it out there that the way you framed the question is like, is there room for it? But one of the wonderful things about podcasting as a medium is it is distributed. It's not centralized. You can start up a podcast any day. And I will say, as someone who inherited a popular podcast, or a sufficiently popular podcast, and just got to run with that, it has been such a wonderful way to get my voice out there and provide opportunities, and I want that for everyone. I want everyone to have this ability to speak about the way they think about software and then find like-minded people and be able to build even many communities within the larger community of Ruby on Rails. So beyond the question of "Is there room?", which I definitely think there is, I so wholeheartedly support anyone pursuing this for their own reasons.
ANDREW: Yeah, I think to bring it all the way back, one thing that Chris, Jason, and I care a lot about is Ruby as a community. The community aspects of Ruby are very important to us. And we're actively trying to build that up and bring in new people and bring people onto their first podcast. We say it all the time, like, hey, if you want to come on the show, let us know. We've had a few people even get, you know, recognition and jobs from that. So to us, that is the payoff of doing the show. Maybe our show is the first time someone learns about Rails. And that to me is the possibility in the future. It's like, how can we market our shows in a way that markets Ruby as well, so that this meme of Ruby being dead finally goes away? Because it's not. I think it's growing. And I think the more and more we push as people who are public figures in this space that we want to bring more people on, that this is a space for everyone, I think that's just kind of the ethos that all of us have, and I think that's great.
BRITTANY: So I'm curious, on a lighter note, has anyone had the funny experience of realizing that you're not just podcasting into the ether and that what you're saying and what you're doing matters? For me, I have definitely been at conferences where people will run up and hug me just because they heard my voice, and they are like, "I didn't know what you looked like, but I have your voice memorized," and it just blew my mind. And I was like, "Thank you so much for being such a loyal listener." And it just proves that people are out there listening.
ANDREW: I tend to talk very openly about mental health. And I very often fail in public and talk about it. And I've had a lot of people message me and email me over the past three or four years and be like, "Hey, thank you for talking about this thing that's not actually about Ruby. It's not actually about coding, but it's just about being a developer." And those are the emails that make me feel the best. Like, someone who's out there like, "Yeah, I also feel like this. Thank you for speaking about it."
JASON SWETT: I had a surreal experience. I went to India in 2019 through RubyConf India. And this guy wanted to take a selfie with me because apparently, he considered me famous. So that was cool and pretty surprising because I definitely didn't consider myself famous.
STEPH: My favorite has been when we receive listener questions because it lets us know that people are listening and engaged in the conversation, and I essentially feel like they're part of the conversation. They will write in to us and share anecdotes, or they'll share answers to some of the questions that Chris and I will pose on the show. But every now and then, we will also get an email from someone that says, "Hey, just thanks for doing the show. I listen, and it's great," and that's all they share. And that, to me, is just the most wonderful thing that I could receive.
BRITTANY: Some of my favorite episodes from all of your shows are when we get an inside peek into what people are doing, like Andrew moving. Jason Charnes, your putting together a conference made for some of my favorite episodes of yours, which were really early on, which proves that I'm a Remote Ruby OG. But I loved hearing the inside track on what organizing a conference is like because I think we need to get more content out there about how difficult but how rewarding it is.
JASON CHARNES: Yeah, I hadn't really thought about...that was around those times we hadn't done... It feels like it's been ages since we did Southeast Ruby, but Chris and I actually podcasted from the last Southeast Ruby we did. We just met in a room and recorded. But when I started that conference, I didn't have a lot to go on. So I'm more than glad to share because the reason I started is there were no Ruby conferences around me, plus I'm an open book. So for better or for worse, maybe that's good podcast material.
JASON SWETT: Side note, it's one of the most enjoyable conferences I've ever been to.
JASON CHARNES: Thank you.
BRITTANY: I completely agree. I miss the regional conferences.
JASON CHARNES: We lucked out because we were already planning on skipping 2020 because we were tired, and then COVID hit. I just sat on the couch one night and looked at Shannon (she helps me put on the conference), and I was like, "Wow, that would have been terrible. That would have come out of our own bank account, all that loss if we would have already booked somewhere." So phew, when it chills out, we'll try it again.
BRITTANY: So let's talk about legacies. I know that some of us have taken over from popular podcasts. Some of us have grown podcasts from the very beginning. So I'm curious, do you ever put any thought into the legacy of your podcast, whether or not you're going to stay with it to the end? Would you eventually pass it off? Do you think about whether or not it's your responsibility to the community to make sure that it keeps going?
JASON SWETT: I, for one, plan to have my consciousness uploaded to a supercomputer upon my death so that the Rails with Jason Podcast can continue on indefinitely.
JASON CHARNES: Did you recently watch Upload the TV show?
JASON SWETT: No, I've never heard of it.
JASON CHARNES: Oh, man. That's a whole nother conversation.
BRITTANY: Consider that homework, Jason.
JASON CHARNES: It's an interesting question because we started ours out of nothing. I wonder, is one of us going to get tired and just quit? I'd like to think that if one of us did, it would keep going because there are plenty of cool people who could hang out and talk Ruby on it. But it's interesting, something that's casually crossed my mind, but I think we're good. I think we're still doing it unless Chris and Andrew have a surprise for me today.
ANDREW: Surprise! [chuckles] I've thought about it a few times, specifically because I'm the youngest member of Remote Ruby. What if Jason and Chris just left, and they were like, "Oh, it's all yours now." Could I keep running it by myself? I think honestly, the answer is I would probably still do it just to have an excuse to talk to someone. I enjoy it. It's almost like a hobby at this point. I don't feel any obligation to create it. To me, it's really like an excuse to hang out with two friends, and other good stuff comes from that. But at the end of the day, I cherish that time just us hanging out a lot.
CHRIS OLIVER: Yeah. I think that's why we sometimes joke about it being a weekly therapy session where we are just hanging out and chatting about stuff. It's nice to be able to talk about programming things at a high level with people you don't work with that have totally different perspectives and stuff. So yeah, if Jason and Andrew dropped off, I would still try to have conversations with random people I know and keep it going just because it's enjoyable. I would hope that we would be able to keep it going and have other people on there.
BRITTANY: I'd love to hear from someone from The Bike Shed.
STEPH: I have thought about it. I've thought about it partially from the perspective that Chris Toomey brought up earlier in regards to being on a podcast is an incredible platform. You get to share your opinions, and people listen to you. And they know you, and it's really wonderful marketing. So I have thought about it from the perspective of I want other people to have access to this really wonderful podcast that we put on each week. So part of me is very aware of that and thinking about how more people can have similar exposures.
So a sort of a similar event occurred when Chris was moving on from thoughtbot and pursuing other interests. And at that moment, I just thought, oh my goodness, Chris brought me on as co-host, and now I'm here alone, and I don't know what I'm going to do. And I just panicked. I truly don't think I even considered other options. I was like, well, okay, it's over now. This was fun. And then it turned out where Chris was going to stay with the show. So things have just gone on swimmingly, and it's been wonderful.
But similar to what someone was saying earlier around when you start listening to a podcast, and you really develop that relationship and you go back to that podcast because you really enjoy hearing from those people and their adventures, it's very similar for me where The Bike Shed is very much the conversations and chats with Chris. So I think if we were to move on, it would be whenever Chris and I decided to move on and give the reins over to somebody else. I don't know if Chris fully agrees, so this will be interesting to find out. [chuckles]
CHRIS TOOMEY: I agree with that. Honestly, I'm honored to have continued on in the podcast after having moved on from thoughtbot because, in a very real way, the show is thoughtbot's channel to talk about things. I was at thoughtbot for seven years. I think I live and breathe that truth. And to me, that's what maybe has made sense for me to continue on. But I really do feel a responsibility to keep the show in good shape so that someday someone else gets to inherit this thing because I was so happy to get handed it. It was such a wonderful thing. And it has been such a joy to do for these past three years.
But at some point, I do presume that we will move on. And at that point, I do hope that other people pick up the mantle. And thankfully, thoughtbot as an organization, there is a group of individuals that I'm sure there will be someone wonderful that gets to step in, but I'm in no hurry to do that. And, Steph, I hope you're not either. So we'll continue the conversations for now, but I definitely do want to keep this thing alive if for no other reason than I got handed it. I don't feel like I could let it drop on the floor. That doesn't feel right.
BRITTANY: Well, I think on that warm, fuzzy feeling, we should wrap up. So let's go through everybody and just tell the listeners where they can listen to your podcasts and follow you. I am Brittany Martin, @BrittJMartin on Twitter. And you can listen to the Ruby on Rails Podcast at therubyonrailspodcast.com.
JASON CHARNES: So I'm Jason. We are Remote Ruby. I am @jmcharnes on Twitter. And I'll let the others tell you where you can find them.
ANDREW: You can find me everywhere @andrewmcodes. And if you email me, there's a really good chance you're never going to see a response because my email is a disaster. Please don't email me, but you can contact me anywhere else.
CHRIS OLIVER: I'm Chris Oliver, and you can find me on Twitter @excid3 or at GoRails, and of course, gorails.com. And you can find the Remote Ruby podcast at remoteruby.com.
CHRIS TOOMEY: I am @christoomey on Twitter. The Bike Shed is @bikeshed on Twitter. We are at bikeshed.fm for a URL. I'm pretty sure www works, but I'm going to go check that real quick after because I want to make sure that's true. And yeah, that's me. And I'll send it over to Steph for her part.
STEPH: I am on Twitter @SViccari, and I post programming stuff, usually pictures of cute goats, cute dogs, that kind of content if you're into that.
JASON SWETT: For me, if you want to find my podcast, it's Rails with Jason. And if you search for Rails with Jason anywhere, you should be able to find it. And then my website, if you're interested in my blog and all that stuff, is codewithjason.com.
BRITTANY: Fantastic. Thank you, everyone, for being on this mega episode today. It was a lot of fun. We're excited to announce that we are going to be having a podcast panel at RubyConf, and some of us will be present. So stay tuned for details around that. And if you enjoyed this mega episode and want to see more mega episodes, please let us know on Twitter.
All: Bye.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris gives a DB sessions update and talks bifunctors & command objects. Steph shares the coolness of a gem she's been using called after_party, and excitedly gushes about her new laptop. (Chris is hoping to hold off on replacing his until the end of the year and then they can compare!)
The two then answer a listener question on retrospectives and how they've seen productive ones run, while giving some of their own helpful opinions on dos and don'ts. They're talking to you, Grumpy Goose!
Transcript:
STEPH: Cool.
[laughter]
CHRIS: Good. No, I like what you did there.
STEPH: Yeah, I feel like we can get rambling on that one.
CHRIS: It's been great. This is what the Bike Shed is at its best. It's the two of us just rambling and being like, well, what about this? And if it's this, then that, then these, and it depends. And it's complicated and it's nuanced. And what about the humans? That's the story of The Bike Shed right there. [laughs]
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, how's your week?
CHRIS: My week has been good. I have some updates actually on some topics from previous episodes. One of the things that I can update on is the discussion around the cookie versus the database store. So I had posed this as a thing that I was going to be doing in the app for a handful of reasons. Most notably, I wanted the ability to invalidate sessions from the server-side, wanted to have a little more control over that. And that's something the database-backed session store enables. Eventually, I have to make that actually work in the way that I want.
But I was asking the question in that episode, which we can include a link to the specific episode, but I was asking the question of why don't we just do this all the time? The database-backed sessions seem better in all these ways. It's a lower overhead per request because you're just sending the session ID in the cookie instead of the whole payload of the session. You actually can have more data stored in it, a bunch of things that seemed really great. And then right after I introduced it, I figured out the thing. I figured out the secret. It's not a big issue, and we're going to stick with database session stores. But we have to be purposeful because it turns out they are essentially plain text in the database.
And so if there's anything that you are putting into the session, like say a social security number or an authentication token or other things, which naturally I might have done if it was in a cookie that lives on the user's browser and never actually lives on the server, persists on the server, that seems fine to me. But now these things are getting stored in the database, and that really changes the calculus, especially because if I'm not purposeful, they'll just stick around for forever. So a social security number is probably the most pointed example of this. If you happen to have a form in the app that accepts a social security number and you want that to persist through some number of other steps, I'm not actually going to store the social security number in the database because that's a thing that I have actively chosen not to do. I need to send it off to some other system, but I do need to hold onto it for a few minutes. The session is a perfect place to put that unless the session gets stored in my database.
STEPH: That's such a great point. I'm so glad you discovered that. And in our recent conversation, we were trying to think of the reasons why this isn't the default case. You may be headed in this direction, but this may also be timely, the fact that you're discovering this issue but also the fact that Rails 6.0 now has encrypted columns. Is that where you're headed with the fact that you can still keep session data in a database?
CHRIS: That is a great question, and it is an intriguing option. But it's not the one that I'm going with here. I think broadly, my hope is to completely avoid ever persisting this data in the database, this truly sensitive user-specific PII or PCI or social security numbers or any of these other fancy acronyms that get collected together under the umbrella of I probably don't want that on my server. For those, I'm just opting to push them back into a cookie. So I'm using particularly a...In Rails, it's fun because they have a fluent interface where you can just chain together things, so it's cookies.signed.encrypted.whatever, and then you go from there. But I'm using a signed, encrypted cookie, which is essentially what the cookie session store uses itself.
So I'm basically reverting to the old session store behavior for specific values. So anything that is truly sensitive like that, I'm just saying, cool, that's actually just going to live in a cookie, and that will be it, but not leaning on the ability to encrypt the database sessions. There's just enough subtlety around that. There's so much volume of data if I do allow that sensitive data into the system that any failure, any exploit that happens, would be somewhat catastrophic. So, in my mind, this lowers the surface area and says, yeah, this data really never lives on the server. It comes with a request, and then it's gone after the fact. And that's the world I want to live in.
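To make the pattern concrete, here's a minimal sketch of what Chris is describing, assuming a hypothetical multi-step form: everyday state stays in the database-backed session, while the truly sensitive value rides in an encrypted cookie so it never persists on the server. The controller name, the :ssn key, the paths, and SomeExternalService are placeholders, not code from the episode.

```ruby
class EnrollmentsController < ApplicationController
  # Step one: capture the sensitive value, but keep it only in the user's
  # browser as an encrypted, short-lived cookie rather than in the
  # database-backed session.
  def create_step_one
    cookies.encrypted[:ssn] = { value: params[:ssn], expires: 15.minutes.from_now }
    redirect_to confirm_enrollment_path
  end

  # Step two: read it back, hand it to the external system (hypothetical
  # call), and delete it immediately so nothing sensitive lingers anywhere.
  def create_step_two
    ssn = cookies.encrypted[:ssn]
    SomeExternalService.submit(ssn: ssn)
    cookies.delete(:ssn)
    redirect_to enrollment_complete_path
  end
end
```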
STEPH: Yeah, that's super interesting. You're also raising questions for me that I hadn't considered when we originally had this conversation, where there's a need to store form data or sensitive data in that session as well. So that makes a lot of sense to me that for that type of behavior, we're going to separate that from the idea of authentication and the user session and still use encrypted cookies for those details. So it only stays with the user's browser, but then the actual authentication of a user, that part could still live in the database.
CHRIS: It is ending up being a weird Venn diagram of: this is data that I want to stick around but only sometimes, and particular to the machine that the user is interacting with because the session is still associated with a cookie at the end of the day. So a user may have multiple sessions in the database-backed session version. It is somewhat interesting, and I'm going to see how it develops over time. But yeah, at a minimum, I have now found this edge case of like, ooh, okay, sensitive data, that's a thing, which is one of the reasons that I would reach for the session inherently. So it turns out, as always, it depends. Things are complicated.
STEPH: It's a nice update. I like when we have closure to a question like that, especially so quickly.
CHRIS: Love to provide that continuity. It's what I'm all about. But yeah, what's up in your world?
STEPH: What's up in my world? I am excited that I have a new laptop. So I have been using a MacBook Pro for about the last...it's lasted me for a while. I think I've had it for a good four, four, and a half years, but that's on the fritz. The keyboard, in particular, the keys are popping off, and that's something that I could go get fixed. But I'm at the point that I need a new laptop. So I have a brand new shiny laptop. And I had the option to either go with a 16-inch MacBook Pro or to go with one of the new, fancy laptops that has the M1 chip. And I was torn for a little while because having the M1 chip sounds really cool and novel, and there's a lot of speed improvements that come with that.
But I ended up going with just the 16-inch MacBook Pro; specifically, one, it's still very fast. It's very reliable. I can use this as my work machine and just know everything's going to work. That part feels really important to me. And then also the screen size is important. So any Mac laptop that is using the new M1 chip, I think they only go up to a 13-inch, right now, screen size. And I really want the 16-inch in case I am traveling, so I have that larger desktop. But I did do some research into the M1 just because I know about it. I know it's out there. I know it's hot. People are interested in it, but I didn't know a whole lot about it. So for anyone else that's like me and is curious about what the heck this M1 chip is, it's essentially Apple's foray into making their own processors.
So traditionally, their machines have used Intel CPUs and third-party graphic processors, and other parts. And this introduction of the M1 chip really represents Apple's switch to having their own internal architecture rather than relying on those third-party parts. And it also means that all those features that were sourced from other parties like the CPU and its security are now being combined into a single chip, which has also led to some performance improvements. And while I was reading about the M1, there's a lot to go through, but the thing that stood out to me was this idea of Apple's Neural Engine. And I thought, well, that sounds super fancy. What is that? Are you familiar with Apple's Neural Engine? Have you read about that?
CHRIS: I don't think I am. What all is that?
STEPH: Yeah, good question. That was a question I was asking myself just recently. So essentially, their neural engine it's a microprocessor that specializes in the acceleration of machine learning algorithms. So it's really similar to how a GPU will focus on accelerating graphics rendering. And their neural engine or neural engines in general then focus on accelerating neural network operations. And the inclusion of a neural engine isn't something that's new because Apple introduced this into iPhones and iPads back in 2017 to support their features like Face ID and emoji, searching for photos with dog pictures. Siri speech recognition is also one that's using this engine and other machine learning tasks.
But the sparkly stat that Apple is sharing with this new design is it's a 16-core design that can perform 11 trillion operations per second, which sounds very fancy, very fast. But it really got me thinking about how companies are working to improve, not just laptops but also our mobile devices to run machine learning software more efficiently, and then how that's just going to evolve and change all the different features that we use, and then how developers can integrate with this engine. I think currently, Apple hasn't shared much information about how this engine works, but I think they've exposed a few developer tools so people can still build features that will then use the power of this faster, improved neural engine.
CHRIS: Oh, that's super interesting. I have still not really delved into machine learning or artificial intelligence, or any of that stuff in any real way. But it's one of those things like the number of mentions is ticking up. And at some point, I'm like; I probably have to pay attention to this, don't I? I'm still in the not paying attention to it camp. So if I'm understanding, though, you just described this wonderful feature, but you opted for the machine that does not have all the fancy stuff. You did not...
STEPH: Exactly. [laughs]
CHRIS: Okay, yeah.
STEPH: Yes, it would have been nice. That would have been neat, but yeah, I needed a machine with a larger screen, all those good things. And that's still really fast, for the record.
CHRIS: Oh yeah. I'm desperately hoping to make it to the end of this year. This is going to be a bit of a rumor mill here, but my understanding is the expectation is that Apple is going to release a 14-inch MacBook Pro with the M1 and the return of MagSafe, and the removal of the touch bar. And that sounds like my dream machine right there. I want that piece of hardware.
I also seem to care a little bit less about the size of the laptop screen. I'm so often working at my desk with a large-format monitor that I'm connected to. And so, when I'm on the road, I want to optimize for portability when I'm traveling because I do it so rarely, and then I'm hopefully focusing on travel at that point. But we'll see if that remains true as the shape of my work changes and I start to not only work from home. And maybe I'll actually change my tune on that. But for now, that's my hope is to make it to that machine and then get one, and that it exists because right now, those are all rumors.
STEPH: Well, I totally support this goal of yours. So that way, you can have that new-new, and then you can report back on what it's like, and then we can compare. Because I'll have the other version, the Intel CPU, and then you'll have the M1 chip, and we can see how our lives are different.
CHRIS: You'll have the new, and I'll have the new-new, and that's how we'll categorize them.
STEPH: [laughs] But yeah, I'm very much looking forward. Having a new laptop is always just such a fun feeling. It's just a clean space that I get to rebuild. It's like going through and prioritizing; what are the things that still spark joy? And then I get to only port over the stuff that I still really use all the time and want to keep. So I'm looking forward to getting it set up.
CHRIS: I need to do that sometime soon. I'm like five years deep, at least on this machine. So I've been dragging along. Also, the hard drive is just completely full, and I regularly have to go through and delete things before we start recording because it turns out these audio recordings start as very large files. [chuckle] So it's almost a weekly thing where I'm just like, got to throw something out today. I don't know what. It's fine. I'm going to be fine. [laughter] I'm going to make it to the end of the year, and it's going to be great.
STEPH: What else is going on in your world?
CHRIS: Well, I wrote some fancy code, and I use fancy not necessarily as a good word. [chuckle] So I'm intrigued that the code could be described perhaps as clever or other words like that, which I think are very complicated words in the coding space. I tend to try and avoid this type of coding where I'm trying to introduce abstractions and clean things up, and remove duplication because I've been burned by that so many times in the past. But this time, I think maybe this time it'll work.
So, in particular, there are two different areas of the application. There were two sets of refactorings, but they really went together. One is we have the idea of command objects within the application that we're working on. So there are a lot of cases where we need to save something to the database and then communicate something to an external API. And then presuming the results of that is a successful response from them, then unpack some data, make sure it's in the right shape, and then save something else to the database. And ideally, wrap that all in a transaction and keep everything together and then return some data at the end of it.
So that whole sequential operation, I've been using dry-monads to model that. I've talked about this on a few previous episodes. I'm really enjoying it. The more I lean into it, the more I find that it is just a really great way to wrap up that very procedural code. But ideally, do it in almost a functional way so that we've got these sequential operations that feed into each other. There's the railway-oriented programming stuff, which is associated with this idea.
But there is a lot of boilerplate to these objects. So the way we've defined them is they have a class method called run that takes whatever the arguments are, and then it needs to pass those arguments into the initialize and then call run on the instance. So in order to define one of these objects, what we had been doing was def self.run and then all the arguments. And then inside of the body of that, it's new, and then pass forward all the arguments, .run, and then define initialize to capture all of those and set all the instance variables, and then define the run method, which actually does stuff. Also need to define an attr_reader for all of those instance variables, which is a thing that I enjoy doing. So that's the interface I want, or that's the way that I want this class to work. I know other folks in the Ruby world feel differently. But that's the shape of the thing that I want, but that's a lot.
And I would also regularly find myself forgetting to duplicate something from the class method run interface into the initialize method. And it was just like, this is all just wiring up and plumbing. There's also the binding of the dry-monads do notation for the run method as well as the inclusion of the result type within dry-monads. Type is a strong word, but that gives us the success or the failure objects that we can create. So ideally, all of these command objects either return a success object or return a failure object. It's one of the two. And that's one of the things that I really like about them. But yeah, so much plumbing.
So we define a base command, and the base command has the self.run method, the class method, and that method is defined very abstractly. So it's just *args and **kwargs. So we're capturing all of the arguments and then forwarding them on to new. So that way, I don't have to think about that interface. It basically just says, "Give me anything, and I'll forward it on to new." And the new or initialize is in charge of actually defining things. It also includes the result type. It includes the macro annotation for the run method, which is how dry-monads does its magic, that actually I had to include inline within the self.run, just because of the sequence of definition and the metaprogramming that's going on there. As I said, that sentence terrifies me a little bit, but hopefully, no one ever needs to look at this magic base class [chuckles] and figure anything out. So that was one part of it. That cleaned a lot of things up, so that meant I didn't have to write a ton of the wiring up code.
Then there was still the noise of actually defining all of the arguments to these classes. They often take a handful of arguments because their job is to grab a bunch of things and do some work with those things. So for that, I have brought in attr_extras, which is a gem that I've talked about probably in previous episodes, I think so. But this is the first time that I've really leaned into it and used it. And it gives some very high-level, what look like macros but are just class methods. The one that I'm using is pattr_initialize, and you can then pass a variety of values to that. And it will then say, okay, this method accepts a required keyword arg, a defaulted keyword arg, and a positional argument or something to that effect. But it's a very, very concise way to express that and then also get the private attr_readers, which again is the direction that I want to go with all of this.
So that's a bunch of things that I have said. But all total, it cleaned up these command objects very nicely. And now, when you look at one of these command objects, all you see is the run method that does the work. And the plumbing and the wiring up behind the scenes should just happen. I am concerned about the day that someone forgets to inherit from the base command, and then it's like, why does nothing work? I thought command objects just worked in the system. But we're going to deal with that when we get there, which is hopefully a while down the road.
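Since a lot of named pieces are flying around here, a rough sketch may help. This is an illustration of the shape Chris describes, not his actual code: BaseCommand, ChargeCustomer, Charge, and PaymentGateway are hypothetical, and the placement of the dry-monads Do notation is simplified compared to the inline metaprogramming he mentions.

```ruby
require "dry/monads"
require "dry/monads/do"
require "attr_extras"

# The base class supplies the class-level .run entry point and the Result
# mixin, so subclasses only declare their inputs and a #run method.
class BaseCommand
  include Dry::Monads[:result]

  def self.run(*args, **kwargs)
    new(*args, **kwargs).run
  end
end

class ChargeCustomer < BaseCommand
  # Do notation lets `yield` unwrap Success values and short-circuit on Failure.
  include Dry::Monads::Do.for(:run)

  # attr_extras: private initializer plus private readers for required keywords.
  pattr_initialize [:customer!, :amount!]

  def run
    charge  = yield create_charge_record
    payment = yield submit_to_gateway(charge)
    Success(payment)
  end

  private

  def create_charge_record
    charge = Charge.create(customer: customer, amount: amount)
    charge.persisted? ? Success(charge) : Failure(:invalid_charge)
  end

  def submit_to_gateway(charge)
    PaymentGateway.charge(charge) # hypothetical external call returning Success/Failure
  end
end

# Usage: returns either Success(payment) or Failure(reason).
# ChargeCustomer.run(customer: current_user, amount: 42_00)
```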
STEPH: I like how you're pushing at the boundaries of our comfort zone. I say our comfort zone because I imagine we feel similar.
CHRIS: It is. We definitely got a shared comfort zone. [laughter]
STEPH: Yeah, we have a shared comfort zone with inheritance, but you're pushing at that boundary of that comfort with inheritance because it is something that can be so painful. But you've identified an area where inheritance feels useful. And then it also sounds like a very meaningful...you're introducing this pattern and then trying to make it easier for others to follow this pattern. So it's a very intentional design decision of where we want to group these behaviors together and then make it very easy for other developers to then pick up this pattern and run with it, and then also work with these classes. So I am intrigued to hear how it goes and how others feel about the pattern as well.
I also wonder, this is one of those areas where it feels like this very intentional design decision. Is it something that you think in the base class would be worth highlighting? Like, hey, here are the things that we are using in this base class. This is the intention of this base class. I don't know if that's maybe a comment or if that's something that's documented in the README. I know; I see your eyebrows went up when I said comment. But it does feel like one of those areas where it's like, hey, we have introduced this new concept. We want you to follow along. Here are some helpful guidelines.
CHRIS: Those were mostly joking eyebrow raises because I have thought of that. I haven't actually gone to that level. But in the back of my mind, there's this pattern that we have within this application. Ideally, we're going to lean into it more and more so that A, we have a clear way that we do things within the app but also make that as understandable and discoverable as possible. I'm not sure if a comment in the class is the right thing or...so I'm deferring what I want to do on that for now because right now, it's myself and one other developer. We sort of developed this in tandem. So we were working together on it. We would pair in a bunch of the features. And what we have now is the crystallization of what we found useful. And we're both very comfortable with it. So there isn't the need to explain it.
I'm almost thinking about it as just-in-time educational content around this piece of our application. I don't actually trust that I would do a good job describing it in the abstract because I know it. Like, to me, this thing makes sense right now. But I've been on the other side of stuff where someone was like, "Hey, this totally makes sense." And I'm like, "I don't know any of the words you just said," and so I felt that pain being on the other side. You could say I'm just being lazy, but I do think this is a purposeful delaying of that where I want to wait until I actually have someone to teach this to. And at that moment, I want to see what that conversation looks like. And I'll try and explain it to the best of my ability, but I'm sure they're going to ask questions, and I'll be like, "Oh, wow. Yeah. I hadn't even thought of that. But now that you ask the question, totally let me explain this part that I was going to gloss over and forget to mention."
And so, ideally, that is what will happen down the road. And then from that, hopefully, some artifact becomes clear, whether it's a documentation page in the repo or a comment in the class if it's simple enough or maybe even it's a recording of a pairing session. And that's the artifact that we keep around that explains this piece of the application. So I definitely think a version of that makes sense, but I am not doing it yet.
STEPH: It's funny; you’re saying so many good things that I agree with. I love the just-in-time education; that part is fun. And yeah, there's a part of me that definitely still leans into the idea because we've talked about this in the past too, where we write down, in the moment, the things. Having that context when we're implementing it is really important and helpful. So even if it's not this grand explanation…which I really like what you said around having someone to explain it to or have that conversation with so that way you're documenting the useful bits, that part I like very much, but capturing the intent as soon as the change was introduced. So even if it is a very high level like, hey, we are using dry-monads and Adder Extras, even if it's just links to those things, that's something that I think I would still favor just to go ahead and start surfacing this is a pattern. This is a choice. And then, as you continue to work with the pattern, if you change your mind, it's still very easy to scrap that documentation.
So I think if it were me, I would still go ahead and document it. I think it's that piece about discoverability that's calling to me so strongly where that's where I want to then highlight the things that are in use. So even if there's not an explanation, people can find the resources very easily. Because you're right, you did say a lot of interesting bits in describing this pattern. And the fact that we're talking about it now also just deepens my suspicion that it would be nice to comment somewhere, and perhaps a repo is a perfect place for it and just get it out there, and then it can always be revisited later and improved.
CHRIS: Okay. I like that you are keeping me honest on this because I do think there's a certain amount that I'm just being lazy here and not wanting to do that because it is actually really hard to try and document something like this. Like, what are the important pieces versus what are the extraneous details that people don't actually need?
I do wonder, so the pull request that did this refactoring and introduced this base command object that does have the explanation captures the point in time and whatnot. And so I wonder, is there a version of tagging important pull requests that tell the story of the application? A lot of things are just going to be like; this is adding a feature. It's the same as the other 30 pull requests recently that added a new feature. But this one is special from an architecture perspective. And so let's tag this, let's add a label. I don't know what it is but something that allows for discoverability of the story of how this application became what it is today because anything else I worry will go out of date almost instantly. But this pull request is true fundamentally in that same way that we say commit messages should capture as much of that detail.
So I did do that writing for the pull requests/the commit message. And I wonder if maybe that's the best artifact for this moment but then the question of surfacing it and making it discoverable because otherwise, it's just lost in the sea of other pull requests. So I don't know. But I do like the slight push back that you're giving me here of like, yeah, but what if you did something though? And I'm like, yeah, that's fair. I should probably do something.
STEPH: Being able to pin those specific PRs that have significant architecture changes sounds really novel, but I'm going to take this opportunity for me to be lazy. And if I'm joining a project, I don't want to read through what has happened. I just want to know what's true now. And if I go back and look at those PRs, I won't know if all of that is still relevant and how it's changed. So it sounds neat from telling the story of how an application has evolved. I like that sort of developer lens, and what are the things that we have tried and then changed over the years? But from I am onboarding to this application, I just want to know what's true today? What are the things that you want me to follow? What are the patterns that are going to be really helpful for me to look at? And so then, I don't know if I would use it in that context.
And this may be one of those areas where I'm feeling overly skittish in response to the number of things that you said and the use of inheritance. Because I have felt so much pain of where I'm going up the tree to figure out what the heck is happening in the world and then to understand all of those pieces, and then swimming all the way back down to the class that I'm actually working in. So it could just be past experiences that are now influencing how I want to document or work with inheritance. It probably is. [chuckles] That's probably a big factor of it. It doesn't mean I disagree with it because those painful experiences are meaningful. [chuckles]
CHRIS: Yeah. I think the foundational thing...I tried to start this with the context of like; I did a thing. This is another example of good idea, terrible idea; my favorite segment on The Bike Shed. I stand by it. I think it was useful. It does use things that we have traditionally moved away from. I say we because, again, I think we have a shared approach to development at this point. But I think it's worth it. I hear everything that you're saying about the documentation, but I've been burned by that so many times where the documentation is like, here's what's true now. And you're like, no, there isn't even a class called that anymore, and no less does it work that way. And so, my inclination is not to go that way.
The solution that I have in mind is when someone is onboarding into the application, I don't expect there to be documentation and other things that I can hand them so that they can run. I expect to sit down with them and work with them. I'm going to pair with people when they join a team for a long time. There's a period where that's true, I think, and then you get to a certain size of an organization, and you're onboarding enough people regularly enough that that's a thing that you should get better at.
But for I think a surprisingly long time, my answer I'm more than happy for it to be, yeah, we're bringing someone new into the team. Let's sit down with them. Let's spend the time. Let's tell them what's true because I know currently, and I can give them an up-to-date version of that. And ideally, as part of that, then update the static documentation, the repo, the README, the other things based on the conversation that we have and recognize oh, that that link is very out of date. Let me change that one real quick. But I'm not expecting to have comprehensive documentation for that. I'm expecting to use real human interaction to fill that gap.
STEPH: Yeah, I really like that you're also calling out how fallible documentation is and how it has misled us so many times. I also love what you highlighted where when somebody new is joining the team, we are more than often going to sit with them and then explore the app together. And it just made me revisit that phrase that you used earlier about the just-in-time education. Because for this command object, you may join the project and not need to interact with this design pattern for your first couple of weeks, first couple months, who knows?
So then it comes back to the idea of how when someone is in the space of where using a command object feels like the right approach, then how do we introduce them to this pattern and then make sure that they have the tools that they need? And if someone is accessible to then sit down and go with them, that's great. But if someone is not accessible, then I still want them to have at least a few of the resources that they need to dive into some of the more complex things that are being included. So, yeah, it's a tricky one. I like this thought experiment.
CHRIS: But yeah, overall, I'm happy with it for now. I'm hopeful it will work out for us moving forward, and I'm hopeful that it will also be a sufficiently discoverable or teachable thing within the application. But again, I will certainly report back and see how that one plays out for us. But yeah, that's what's up in my world. What else is going on in your world?
STEPH: Something else that's up in my world is I have pulled in a tool that I've used in the past, and I really like it. So I'd really like to talk about it here for a bit because I just find it so useful. And now that I've added it to this new project, it's just really top of mind for me. So I found that when working on a project, there are often times where I want to run something right after a deploy has happened, and I want that to be automated. I can do it manually. I can hop in, but then perhaps if you're deploying across many environments or many systems, you don't want to have to do all that manual work, or you also just want the convenience of you can set it and forget it. And that way, you know something's going to happen. So perhaps it's something where you want to change some data, or if you want to enable a feature flag, then this is really helpful.
So the gem I've been using for this is called after_party, where you can write automated deploy tasks that essentially behave very similar to migrations. So you can write a Rake task. It has a timestamp. You can implement the logic that you want to be run right after your code has deployed, and then after_party itself will check the timestamp. It will see if it has been run. If it's already been run, it won't run it again. Or if you like, you can set it up so that way, you can tell after_party to say, "Hey, after every deploy, I want you to run this particular task," but it's such a nice improvement to the workflow.
And the other thing that I really like about this that I feel is a bit contentious is separating changing data outside of migrations. So I am a big fan of migrations are focused on changing your schema itself. But if there's actual data that you need to change, I really like when that is separated outside of the migration. There are definitely times that I understand it's really nice to just do it all at once, and it's easier. But anytime it starts to get even a little complex, I immediately want to write tests for it. And I can't test my migration. But if I'm changing some meaningful data on production, I want tests to back it up to make sure that I'm scoping correctly, that the outcome is exactly what I expect. It also makes it easier for other people to review. And after_party gives me that functionality so then I can have my migration.
But then I'm like, oh yeah, but I still want to automate changing this data because that's often one of the complaints that I hear from people when I do ask them to separate changing the data out into a Rake task. They're like, "But I don't want to have to then follow up and then run this task later." And I'm like, that's cool. After_party has you, and you can automate it and not worry about it. So after_party has been one of my favorite gems to add to applications.
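For anyone who hasn't seen after_party, here's roughly what one of these tasks looks like, with a made-up backfill as the body; Order, legacy_status, and the task name are hypothetical. A file like this is created with the gem's generator (rails generate after_party:task backfill_order_status) and executed with rake after_party:run after a deploy, and the gem records the timestamp so a task that has already run is skipped.

```ruby
# lib/tasks/deployment/20210601120000_backfill_order_status.rake
namespace :after_party do
  desc 'Deployment task: backfill_order_status'
  task backfill_order_status: :environment do
    puts "Running deploy task 'backfill_order_status'"

    # The data change lives here (or in a small, separately tested object),
    # rather than inside a schema migration.
    Order.where(status: nil).find_each do |order|
      order.update!(status: order.legacy_status.presence || "pending")
    end

    # The generated file also ends with a line that records this task in
    # AfterParty::TaskRecord so it won't be run again on the next deploy.
  end
end
```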
CHRIS: That's interesting. There's a bunch of layers to everything that you just said. I think I've worked with after_party on a project. I think we were working together on that project, if I'm remembering correctly. I have no bad memories of it, which given the nature of the tool, makes me think it did its job very well because its whole point is just like, oh cool, now you can just do this thing, and you don't even really have to think about it. Because there are plenty of other times where I've had to orchestrate or do a deploy. And then I SSH tunnel into production, which is a bad idea, and then I'm running Rake tasks manually. And so, I think the fact that I don't have any pointed memories of this is a really good sign for a tool like this. So that's a weird vote in its corner for me.
You did say something that was interesting that I want to poke at a tiny bit which was you can't test migrations, and I think that's true. Like, I don't know of any way. And it feels like a thing that is sort of fundamentally deeply true. But I do wonder, is there any gem out there? Has anyone done a weird science experiment to figure out like, I would actually really like to be able to test my migrations? So I think the idea of having to pull data change out of migrations for the reasons that you said totally makes sense.
But there are often times where I want to convert a column from nullable to non-nullable. And in the process, I want to backfill with a given value or something to that effect. And I like to encapsulate that altogether such that if it fails or succeeds, it's transactionally consistent. And I do wonder, could I wrap a test around that? I don't know of a way, and I think it may actually be that the Rails testing infrastructure is just like, no, we prepare your schema for you in the background, so it's just up to date. And therefore, you don't even have a way to be in a state where the migration hasn't run. But it's an intriguing one.
STEPH: Yeah, that's probably a hard absolute that I said where you can't test it, and I'm sure there is a way to test it. How friendly or how easy that is to do, I'm really not sure of. It also feels like one of those areas where it feels like I'm testing this other service that I should trust fully, so then I'm not necessarily testing the migration itself. I'm testing some logic that I've added inside of the migration where I'm changing some data. And the example that you provided is perfect because that's one of those that I'm inclined to include in a migration. It's more like where we want specific users who have this value or are in this category. And then, we want to migrate them from this data to the other data. And when we start getting complicated in our migrations, that's when I'm like, this is a bit much, and I'd really like a test that documents that we're doing this correctly. That's where I get squeamish about having data changes in migrations. But you do raise a good point. I don't know; I’ve never tried to test one. I've just always reached for this other approach, but that is more the pain point of if I could test this data change inside of a migration, then that would work for me. That would solve my problem.
CHRIS: I wonder if an alternative approach would be to just introduce an object or a class that does this work. So like a command object as it were, to do a call back to earlier in the episode, that does that data transformation because it’s exactly what you're describing, for this subset of users do this. But if they're in this state, then do these things and create two new records for any user like this. That sort of stuff is really complicated. Definitely want to have some tests around it. But you're talking about a gem that allows you to extract it into a Rake task-like situation. But I wonder, could we just have a class for that?
And I used to be a big believer in your migration should live forever, and they should always be runnable from the beginning of time. I've given up on that belief. That's one of the things that I've been like; I don't know. It turns out I've never done that. It's not an important thing. Just DB schema load is going to be fine most of the time. It's great for the past ten migrations to be around just to tell a little bit of a story. But I'm not tied to migrations being runnable forever. So the idea of you introduce this class, it encapsulates that data transformation. You can test it because it's its own thing. It will still be run within the context of the transaction of the migration. And then you throw it away down the road along with the migration, and you do that migration roll-up thing. It's just a different thought there, although I do like the...well, I guess that would also run automatically, but that runs as part of the deploy as opposed to after the deploy, which is meaningfully different than what after_party does because there might be one of these migrations that takes a long, long time to run because you've got a ton of data. And you want to decouple it from the true deploy release sequence that happens and the time limits that are there. So I think I've now talked myself in three circles, and I'm going to stop.
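Here's a sketch of the alternative Chris is floating, under hypothetical names (BackfillAccountTier and an accounts.tier column): the data transformation lives in its own small class so it can be unit tested, and the migration simply calls it, so the backfill and the constraint change still run in the migration's transaction. Both files can be thrown away later when old migrations are rolled up.

```ruby
# app/lib/backfill_account_tier.rb (deleted later along with the migration)
class BackfillAccountTier
  def self.call
    # Easy to unit test on its own: assert the scoping and the resulting values.
    Account.where(tier: nil).in_batches do |batch|
      batch.update_all(tier: "free")
    end
  end
end

# db/migrate/20210601120001_add_not_null_to_account_tier.rb
class AddNotNullToAccountTier < ActiveRecord::Migration[6.1]
  def up
    BackfillAccountTier.call
    change_column_null :accounts, :tier, false
  end

  def down
    change_column_null :accounts, :tier, true
  end
end
```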
STEPH: I like how you highlighted that part where it does decouple you from the deploy process where it's still automated; it runs afterwards. But say if it's something that doesn't need to hold up the deploy, you don't need to wait for this data to be migrated before the deploy can go out. Then that's a nice separation because then it can happen afterwards. Or if you do need it to happen part of the deploy, yeah, there's lots of interesting bits there. I feel like you and I could talk about it for a while.
But we have a listener question that I'm really excited for us to talk about. So I'm going to hard pivot over to our listener question. This question comes from Jonathan. And Jonathan wrote in, "Hey, gang, longtime listener, first-time emailer. I've heard you reference retrospectives a few times as part of your normal development practice. In my limited experience with them, I often find retrospectives don't feel productive because team members are reluctant to raise issues without seeming critical or blaming another team member. I would love to hear you describe how you typically run retrospectives to foster open discussion and make it a productive use of time. Bonus points," oh, I love bonus points "if either of you have experienced rescuing an existing team that was not having productive retrospectives. P.S. Thank you for ongoing participation in the Ruby and Rails communities. I look forward to seeing a new episode pop into my podcatcher each week."
All right, retrospectives. I love this question because I've definitely been part of teams that are really struggling to have a productive retro. So I think it would be helpful, as Jonathan highlighted, to go ahead and share how thoughtbot runs a retro. And then I'd also love to touch on some of the areas where I have seen teams really struggle to have a productive retro. So with the thoughtbot format, there are really two questions that we focus on. The first question is, what went well? And this starts the meeting on a positive note, which can help people get engaged before then we move on to heavier topics like concerns and issues.
When we run a retro, we ask each person these two questions. So that first question, we go around the room, and we say, "Hey, what went well for your week or for your last two weeks?" And then we document all of those positive things that people say. The next question is, "What concerns do you have, or what are you worried about?" And the goal here is to highlight issues early, which then gives us the chance to address them as they come up rather than waiting till an issue has grown out of control. And it's usually during the concerns portion that I often see retrospectives fall apart. The reason for that is hearing someone describe a concern is often something that can stir up a lot of emotions. And I know for me, it certainly triggers my instinct where I really want to dive into that issue, and then I want to solve it.
But by reacting to a specific issue and then trying to solve that issue, I'm interrupting that retrospective flow to then focus on that issue. And we may not get to a bunch of other important issues that people had. So that's often where I see retrospectives fall apart. And the way to fix that is to then have the team consensus that hey, this is a space where everybody gets to air concerns. We're going to go around the room, so everybody has a chance to speak. We're going to document it, but then we're going to move on and then come back to this later.
So when do we talk about concerns? Once everybody's had a chance to share their concern and that's been documented, during that process, you're often upvoting other concerns. So someone may bring up a concern that I also have as well. So when it's my turn to speak, I'll say, "I'd like to plus-one that particular concern," and then maybe add my own or just plus-one some of the others. So then, by the time that everybody's had a chance to speak, whoever's taking notes, ideally in a document shared so the whole team can see, already has a view of the concerns that most of the team is identifying with or that are the more popular concerns. So then, as a team, you can say, "Hey, we're going to focus on the top two concerns because that's really the amount of time that we have," and that way, we're focusing on concerns that impact the majority of the team. So at that point, then we can start talking about those specific issues and how we'd like to address them.
And then out of that conversation is then the next part of the retro format, our action items. And then action items are where we can capture the things that we would like to try during our next iteration of work until our next retro. This is our experiment area. So then we can say, "Yes, we'd like to try something different, or we'd really like to monitor how this goes."
And then one other fun thing that I typically include in retros is housekeeping. So then we can talk about time off, team celebratory events, anything like that that's helpful to highlight to the team.
That's a quick overview of how typically I myself run a retro. Chris, do you have anything you'd like to add or anything that I've missed?
CHRIS: No. I think that that mirrors pretty well the best retros that I've been a part of. There are a couple of things that I think I would add or emphasize in that. So one is foundationally, with a retro, what are we doing? What's the goal? And the goal with a retro is to identify and evolve our process. So identify where there are any bottlenecks or things that aren't working, and then ideally change things over time. I've been on many teams where just the same issues get brought up over and over in retro, and nothing changes. And that will just completely deflate the team. And so, if that is happening, that's a fundamental thing that we need to fix.
And I can totally understand folks being like, "Retro is awful. We just sit down and say the same things, and then nothing ever changes." If that's happening, we have to fix that at a more fundamental level. That is going to be more than a retro’s worth of effort. But ideally, retro is now this structured space each week, each iteration, whatever it is where we are discussing what's going on and ideally, slowly, incrementally making the process slightly better. In my experience, it's something that I really love because I come to associate it with stuff is going to get better now. That's what retro means. If that's not the feeling you have, then I totally get why you wouldn't want retro. But I promise that that can be a reality.
And then to touch on some of the particular procedural points, everything you said definitely maps. And I've found that structure works really well, but there's a lot of subtle things in that structure that I think are important to highlight. So one, going around the room and actually asking everyone individually for their thoughts, I find to be so useful because it's very easy for one or two more vocal individuals to just dominate the conversation. So particularly by starting with what went well and then also by actually going around the room and requesting "Everyone reply to this question please," even if it's just like, "Yeah, you know what? It just felt like a good week." That's an answer we'll accept but ideally, a little more structure or a little more meat to it. But I find that to be really important.
Likewise, I have found that having a facilitator, so someone who is guiding the retro but not actually a part of it, works really well. They're not going to be saying what went well or what didn't go well. They are just directing the conversation, and that's somewhat critical as you're going around and asking for concerns. They are the person whose job it is to prevent the team from starting to try and address the concern when it's first voiced. So ideally, we're just collecting the concerns. We're collecting the plus-ones so that we know which are the more prominent ones, and then we can focus on those.
And I think that idea of the plus-oneing of concerns and then really focusing on the ones that have more folks that are concerned about it feels really critical in my mind. So ideally, we are a team. We're working as a team, and if one person has this gripe that they really feel deeply, but nobody else really cares about it; ideally, we find a way to help that person not feel that way. But that's not necessarily where the team collectively should put all of their energy. So yeah, that's a bunch of little pieces.
Also, just as a note, we'll include these in the show notes, but there are a couple of previous episodes, so Episode 132: “What Went Well?” is a discussion between Derek and Sage, previous hosts of the show, talking about retros. Episode 172: “What I Believe About Software” was the first guest visit by a certain Steph Viccari. And so that is a wonderful episode in which we dug into retro because it's one of our favorite topics. Also, Episode 299: “Is Agile Over?” We definitely touched on...that was a pretty recent one, but we touched on retro. Then there's also a video on Upcase called “Running a Retrospective” that basically describes exactly this process and shows actually an example retro and running through it. So there are lots of other things that we can point out here. But again, I think fundamentally, what are we doing, and how are we doing it? If we can answer those questions well, retro is going to be great. If not, it's probably not going to be that great.
STEPH: I appreciate you calling out all of those important nuances because those nuances are what lead to then a retro feeling more productive. And to address Jonathan's other question around if people are feeling timid to bring up an issue because they don't want to blame anyone, then I think to address that one; specifically, you have to come to retro with a WE mindset. And I think HBO accidentally sending a test email is a really good example of that. Because in the Twitter thread, a bunch of other I presume developers were commenting and responding in support of the person that sent that out to say, "Hey, you discovered a missing safety net in the system," or the fact that it was fairly easy to make this mistake and send it out. So if you come to retro with this mindset of if a mistake was made, how can we as a team improve this so then it's less easy to make that mistake? Then you won't have the sense of we're blaming this on one person, but instead, we as a team are responsible for helping each other out.
CHRIS: It's interesting to have that conversation in the context of retro because I don't necessarily think of retro in exactly this way. But there is the idea of blameless postmortems, which come out of the Google Site Reliability Engineering; I think it's a book, maybe it's a website. We can include a link regardless. But that idea of blameless postmortems of collectively as a team, this thing made it out into the world, this bug, this problem. So we need to own that as a team, and we need to have a blameless conversation around that, just talking about what happened. And there are subtleties there. And that's a nuanced idea that needs to be evolved, but that is at least some writing that exists in the world that talks specifically to that part of it. That said, I wonder if a true postmortem, so a distinct meeting just dedicated to those more pointed issues, might be more relevant, and then retro is more of a shared overall conversation. But if there are smaller versions of that, then I think using that framing could be really helpful in retro.
STEPH: Yeah, I think you said that perfectly where there needs to be team ownership over all of the issues that are being discussed. And I think there is one other very tricky area to navigate with having a productive retro. And I don't know of a better way to say this. But you have a grumpy goose on your team. You have someone who doesn't like retros, and they're going to be negative, and they're going to be vocal. And that is a hard one. I have been there before. And I often approach that situation by speaking with them specifically around what are your concerns with retro? Are you willing to at least buy-in and give this new format a chance? But you essentially need them to buy in or have leadership buy-in so then they know to follow suit as well that this is a team process that we're going to improve and work on together. And if you don't like it, then that's what retro is for. So how we can make this a better, more productive meeting? But just showing up and being grumpy isn't helpful. And then helping people who have been burned by retros overcome that negative reaction to retros is something that takes time.
CHRIS: Oh yeah. The grumpy goose just affects everything on the team. But definitely retro is one where I've seen that particularly pointed. I think in those cases, the best luck I've ever had is to, like you said, have a separate conversation but have the conversation at a higher level. So the question isn't about do we have retro or do we have it in this shape? The question is, do we think we are operating at our best? Do we think everything is going perfectly? And almost never will the answer be "Yeah, this is great. We have no bugs. We're moving as fast as we possibly can. Everyone is happy. No one is burnt out." And so if we get to an agreement that is like, well, yeah, sure, there are things that we could improve, then I think that's a toehold that we can then build on and say, "Okay, so how do you want to go about that?
I am fine to explore a different approach than retro as the meeting for continually improving and evolving our process. I'd love to know what thoughts you have, Mr. Goose." But if they don't have an alternative, retro is the most effective structure that I've found for this continuous feedback loop around the process. I'm very happy to find an alternative, but I critically think we need something like that. And so if they're going to be pushing back on retro specifically, then I'll bump up to the higher level and say, "Okay, how do you want to be improving our process? Let's try something else, but let's make sure we are asking the question of how do we improve our process and is that succeeding?" And also, stop being so grumpy. Come on, what are you doing?
STEPH: [chuckles] I recognize that approach so much because then it really gets to the heart of the purpose of retro whether it's actually called retro or how we handle it is not significant, but the fact that we together as a team can get together and discuss how to improve. That's really the important thing that we're after. And retro just happens to be the format that I use and really enjoy. But like you said, it's always open to each team's interpretation.
On that note, Jonathan, I hope this quick overview of the thoughtbot retro has been helpful. And we will also include some other links that also highlight how thoughtbot runs retros and some other discussions that we've had about retrospectives. But on that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Tune in as Co-founder and CTO of Honeycomb, an observability platform, Charity Majors joins Chris to drop some knowledge bombs and bunches more, since y'all know you hear her name come up at least once during every other episode!
Transcript:
CHRIS: Hello, and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. And this week Steph is taking a quick break, but while she's away, I was joined by a special guest, Charity Majors. Now, folks who've been listening to the show lately will know I've been mentioning one idea or another from Charity almost every episode these days. Charity's work spans from the deeply technical through to the deeply human. And across all of it, she brings such a wealth of experience in pragmatism while consistently providing grounded, actionable advice about how we can improve all aspects of our work.
And to give a bit more context for those who aren't as familiar with Charity's work, she is the co-founder and CTO of Honeycomb, which is an observability platform that we talk about more in the episode. Charity is also a prolific blogger, tweeter and speaker, and general leaver of digital breadcrumbs for the rest of us to hopefully follow. And Charity is also one of the hosts of the o11ycast podcast. That's observability, o11y podcast. And in fact, in the intro to the first o11ycast episode, Charity provides a beautiful summary of her approach to the varied work that we do. Quote, "I'm someone who's always been drawn to where the beautiful theory of computing meets the awkward, messy reality of actually trying to do things." And that quote rang so deeply true to me when I heard it and really encompassed what I see across the variety of work that Charity has shared with us. And frankly, I've been so impressed with the quality and quantity of wonderful content that Charity has shared over the years. I was really just thrilled to get the chance to sit down and talk with her directly. So without further ado, here's our conversation with Charity Majors. Thanks so much for joining us today, Charity.
CHARITY: Thanks for having me. It's great to be here.
CHRIS: As I've mentioned on many an episode, I've been following your work for a while now. And at this point, I would say that just about every Bike Shed episode has a reference to you and some piece of work that you have put out into the world, whether it be a tweet or a blog post, or a conference talk or something. So I'm so grateful for all the work that you put out into the world and for taking the time to chat with us today.
CHARITY: That's so exciting. Yay. I feel right at home here then. [chuckles]
CHRIS: Fantastic. Well, I want to dive in. I think it's sort of the core of some of the conversation that we'll be having, which is around instrumentation and observability, and observability as a newer, more novel form of how we think about this space. But to give a bit of context, I was hoping you might be able to give just the quick summary for anyone who might not be as familiar with observability as a concept and what that means now, and Honeycomb as a product and how it offers affordances around observability and pushes that envelope forward.
CHARITY: Yeah, I think of observability as being about the unknown unknowns. For a long time, all of the complexity was really bound up in the app. You had the load balancer, you had the app, and the database. And for all the complexity, you could just attach a debugger and step through it if you had to. But then we kind of blew up the app, the monolith, and now it's in services scattered to the winds, and you can't just trace it. And so observability is a way of passing that context along hop by hop so that you can actually slice and dice in real-time. And the hardest problem is not usually debugging the code. It's finding out where in the system the code that you need to debug actually is.
And observability, if you accept my definition, which is it's about unknown unknowns, that you should be able to ask any question of your systems, understand any internal state just by observing it from the outside, well, then a lot of things proceed from that, in my opinion. Like, you need to be able to handle high cardinality, high dimensionality. You need to be able to string together a lot of these high cardinality dimensions. You need to... any kind of schema or indexing scheme in advance is verboten because you don't know what questions you're going to need to ask. And so there's a lot that flows from that definition; arbitrarily-wide structured data blobs are the source of truth, et cetera. But at its heart, it's just about the concept that our problems are getting harder and harder. We don't get paged to go, "Oh, that again? Oh, that again?"
CHRIS: [chuckles]
CHARITY: Ideally, we fix those things. But we still get paged. What the hell is this? It's about allowing engineers, empowering them in a reasonable amount of time to be in constant conversation with that code that's out there in the world because most problems honestly we never get paged about. They're too subtle until they snowball, and they pick up other problems. It's like a hairball under your couch until it gets so big and so impacting that it actually does alert someone. And then you just start picking up the rock and be like, oh God, what's that? Well, we've never understood this. And that's why ops has such a reputation for masochism. [chuckles]
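To make the "arbitrarily-wide structured data blob" idea concrete, here is a minimal hand-rolled sketch in Ruby of emitting one wide event per unit of work. This is illustrative only; it is not Honeycomb's API, and the WideEvent class and field names are invented for the example.

```ruby
require "json"
require "securerandom"
require "time"

class WideEvent
  def initialize(service:)
    @fields = {
      "service.name" => service,
      "trace.id" => SecureRandom.hex(8),
      "timestamp" => Time.now.utc.iso8601,
    }
  end

  # Any field you might want to slice and dice on later is fair game,
  # including high-cardinality ones like user IDs and build SHAs.
  def add_field(key, value)
    @fields[key] = value
  end

  # Emit one wide, structured blob per unit of work (request, job, span).
  def emit(io: $stdout)
    io.puts(JSON.generate(@fields))
  end
end

event = WideEvent.new(service: "checkout")
event.add_field("user.id", 42_117)
event.add_field("app.build_sha", "ab12cd3")
event.add_field("duration_ms", 187.4)
event.emit
```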
CHRIS: Absolutely. There are so many little pieces in what you just said that really deeply resonate with me, although there is one facet of some of the way that you talk about observability that I find interesting. I'm someone who likes to cling to the perhaps unrealistic these days ideal of a monolith of what if we were to just keep everything in the same place and all the data lived together in one database, and I could have foreign keys, and consistency, and ACID compliance?
CHARITY: Which you should do for as long as you possibly can. You should never impose more complexity on yourself than you absolutely need to. And I would say that it's never not better to have observability than the older paradigms of monitoring and so forth. Some of Honeycomb's biggest and best customers still use monoliths. But they still find it really valuable to be able to apply the principles. I think that it's the microservices revolution, if you will, that forced this set of changes. It was inevitable. The steps that I started talking about, like, somebody would have taken them, because the older way just became untenable when you started adopting containerization and a lot of these things that made everything suddenly high cardinality, including the number of applications you have. But it's never not better to have high cardinality tools and to be able to instrument your code for spans and tracing. Tracing is still valuable even in the monolith.
CHRIS: Yeah. As I've observed and started to play around with Honeycomb, that's definitely what I've seen is I'm almost exclusively working in the context of monoliths and, like I said, clinging to them for as long as I possibly can, which isn't going to be forever.
CHARITY: It's true. [chuckles]
CHRIS: I recognize that truth, but already I see the value. And so Honeycomb is a platform that you've built that allows for this high cardinality, high dimensionality ad hoc queries at any point in time. And so the idea that I can come into the tool and say, "Huh, I've got a new novel problem today." I don't need to re-instrument my code. I can just ask a new question, and the system will responsively be able to answer that question, ideally. And that feels like it holds true in a monolith all the more so, like you said, in an SOA architecture. But even in my safe little playground of everything is in the same space, I still don't know how everything's working all the time if we're being honest. So being able to answer those questions feels meaningful.
CHARITY: Totally. I think that one way of thinking about the SOA or microservices is that it pushes a lot of what was in the operations realm into a realm of development, and suddenly you're responsible for a lot more of the operating of your services, things like retries and backoffs, and load distribution, and thundering herds, and all these things that ops traditionally took care of. Well, now you have to think about them. So you need some ops tools, too. What I like about...of course I like everything about Honeycomb because we designed it for this problem. But it speaks in the language of variables, and endpoints, and functions, and not in the low-level language of proc IPv6 timeouts and stuff where I feel like ops has also traditionally been the translation layer between software engineers and their actual code in production. And it's time to start giving software engineers those tools in their own language.
CHRIS: Yeah. I love that. And I'm very happy to have Honeycomb as part of an instrumentation stack, which actually shifts me to the next question, which as I look at Honeycomb, very quickly the first time I saw it, I was like, oh okay, this makes sense. I want this in the world.
CHARITY: Oh, I like you. [laughs] Not all people are like you.
CHRIS: It might have been my second or third look, but it was definitely...once I got it, I was like, oh yes, I absolutely want that. But now, the question that I have is I typically will have a collection of tools that exist in this space. And there's a weird Venn diagram overlap of well, there's logging, and there's error tracking, and there are APM performance tools, and there's metrics, dashboards. And my sense is that Honeycomb perhaps can or an observability tool more generally can subsume a bunch of those, but it's not clear to me exactly. I think I probably still want logging. I think I still want error tracking as a discreet service tool that I'm using but maybe not APM and maybe not metrics as a distinct thing. Maybe I can infer those from a tool like Honeycomb. But I'm wondering what's the current thought on that?
CHARITY: Well, part of what you're seeing is just observability tooling is very new, and we haven't had time to grow up. And here I'm like, officially, we play very nicely with all other vendors, and none of us would ever try to compete or take away from each other's faces. But I do think that ultimately, for logging, pretty much the only real use case is security stuff, the security archiving, just keep every log line. It's gotten cheap enough, but it's not actually useful for debugging or understanding your system, not really. It's useful for compliance. It's useful for proving that you did something in the past. Most logs are just a pile of trash, but they can be useful trash. And I understand people's emotional want to hold onto them for a while, and there's nothing wrong with that. There's nothing wrong with keeping some trash around for a while, while you make it...[laughs] Sorry, not to totally slam on logs, but they are trash.
CHRIS: I love the analogies that we're going for. [laughs]
CHARITY: But the thing about observability is I do think the kind of center of the world is these arbitrarily-wide structured data blobs, from which you can infer logs, from which you can infer metrics, from which you can roll up. So I do think that, well, metrics are the right first tool for understanding infrastructure. Like if you're Amazon and you're responsible for all this hardware and stuff, you should be asking yourself, is my service healthy? But if you're someone who's writing and shipping code on top of that service, you care about, can my request complete? What is my user's experience? And that's observability's territory. So I think that ultimately, I do think metrics, logs, and traces all get subsumed under the observability umbrella and performance management, too, if the tools get built correctly. There will still be use cases. They will just get smaller, for logs, for standalone metrics tools.
Honeycomb just launched our metrics product. Metrics is like a 30-year-old piece of technology. Prometheus and Datadog are going to be the last best metrics tools ever built. We have wrung the water out of this laundry. [chuckles] But we aren't trying to compete with that. What we are trying to do is give people an on-ramp into Honeycomb. They've got decades’ worth of stuff. They've been corralling metrics, structuring them. You rely on them. You don't want to give them up. So yeah, let's feed them in. Let's give them an overlay. And number two, the more interesting use case for me is when you're a software engineer who's writing and shipping code, you do care about did the memory usage just triple, or is the CPU completely buzzing after I shipped my last change? But there's really only like three or four of those metrics that you really care about as system metrics. The rest are mostly legacy.
CHRIS: I like the idea that aspirationally, Honeycomb is moving towards a place where given sufficient input data, given this arbitrarily-wide data blob with high cardinality, et cetera, that we can infer basically all of those others from it. But also speaking to also observability is somewhat new, and so we got to build a lot of product to get there and that idea that there is perhaps a space right now where you might be bringing together a few of these tools. But if there is a future world in which I can have one of these tools that just handles everything and tells me about my code and directs me to the line of code that I incorrectly instrumented, that would be wonderful. Happy to do the work in the interim to cobble it together from the pieces.
CHARITY: The place in the meantime that we're at where all of these big vendors are acquiring other vendors and trying to put together...they're like, we have three pillars. Coincidentally, we have three products to sell you. It's like, it's not good for the users because when you're...like, you're sitting in the middle here. You've got your metrics dashboard. It's telling you that there's a problem. Okay, if you can't slice and dice and figure out what it is, you have to jump over into logs and visually correlate based on the times and hope no timestamps are wrong and try and find the thing. And then, oh, okay, so you want to trace it. So you've got to copy over and try and find that in your tracing product and hope that that would get sampled in. It's not good. You can't follow the question from the beginning. I have a problem to the end. I have a solution and back. And it's not linear. You're going to be following a trail; then you're going to need to back up, then you're going to find another trail. And then you're going to want us to zoom out and see who else is impacted. And you really can't back your way into that with different products. You have to start with the arbitrarily-wide structured data blob.
What does confuse me is I know that New Relic is built on this. New Relic has these. And we almost didn't start Honeycomb because we were just like, edit data, and New Relic is going to figure it out. Here we are like six years later, and they still haven't fcking figured it out. [laughs] But like Datadog, they aren't based on that arbitrarily-wide structure, so they are really...and I know that they're trying to get...all of these big vendors are trying to get to where Honeycomb sits technically faster than we can grow up and become a business.
CHRIS: The race is on.
CHARITY: Yeah. It's fun.
CHRIS: One of the related things that I've seen you talk about a few times is the idea that instrumentation is a muscle. It's a habit that needs to be developed and fostered, and that rings very true to me. At the same time, a lot of my instrumentation work has been more in a reactive space. If we're being completely honest, something went wrong; we can't figure it out from the information that we have available, so then we go in, and we add a new logging line. We wrap the code in some way. And so I'm wondering if you can talk a little bit more about that. What does that look like in practice or perhaps some examples or something? But how can we tease that apart and understand that a little bit better? Because it sounds wonderful to me.
CHARITY: I think of instrumenting a lot like commenting your code. It's a way of thinking into the future and reverse engineering: what am I going to care about? What is someone else going to care about? And I really do think of commenting as just a less true version of instrumentation, honestly. It's you talking about what you think the code should be doing, but you've left production out of the loop. You don't know what the code is doing. [chuckles] But ideally, they're kind of the same muscle. As you're writing your code, you've just developed a monitoring thread almost in your brain. It's like, yeah, this is going to be valuable. Oh, this is going to be valuable. And so I do think that it's on vendors to make sure that we do as much for you as possible. And this, honestly, is the long winding journey to Honeycomb finding product-market fit, which took almost three and a half, four years.
And for a long time, I was like, it’s not magic. You have to understand your code. You have to blah, blah, blah, which is true. But also, we need to walk closer to the user. We need to make it easier. We need to do the beeline, which will initialize the event, pre-populate it with a bunch of stuff, create the framework so that all you have to do as a user is just printf now and then just stuff this in the blob, vendors making it as easy as possible, as automated as possible. We have more to do. We really should be pre-populating it with all of the language internals and all of the stuff about the environment. We'll just be glad to tap that well. But there's something that we can't do for you, which is understand what you're trying to do and what is important.
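As a rough illustration of that instrument-as-you-go muscle, here is what sprinkling fields onto a per-request event might look like in a Rails controller. Current.event, add_field, and the model and field names are assumptions for this sketch, standing in for whatever your instrumentation library provides; a Beeline-style library would pre-populate the event with request and framework fields for you.

```ruby
class OrdersController < ApplicationController
  def create
    order = Order.new(order_params)

    # The instrumentation version of a code comment: record what future-you
    # will want to query on when this misbehaves in production.
    Current.event.add_field("order.line_item_count", order.line_items.size)
    Current.event.add_field("order.total_cents", order.total_cents)
    Current.event.add_field("user.plan", current_user.plan)

    if order.save
      redirect_to order
    else
      Current.event.add_field("order.errors", order.errors.full_messages)
      render :new, status: :unprocessable_entity
    end
  end
end
```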
Honestly, here's a story from the past. The reason that New Relic was so big, they hit the ground, and they super hockey-sticked everything was because they dovetailed with the rise of Ruby and Rails because Ruby allows for so much fcking monkey patching. Every web app looks the same. You can just be like, we assume all this crap, and so we could make it just like magic for you. You just install this library. Boom, you're off to the races. Well, try as you might, a typed language like Go, you can't do that stuff with. You can't make it as magical. You have to think a lot more about how you're structuring things for better or for worse, which is why their growth slowed because those languages just aren't so popular anymore.
So it's trade-offs all the way down. Yes, everybody should be an expert in forecasting the future and understanding all the subtle things that you don't know you're going to know, but you're super going to want to know. But as you've discovered, most of your learning comes from being in the trenches, which is why it's so good for devs to be on call and be close to their code and be in this constant conversation with it because you develop a sixth sense. I can't tell you exactly why I know it's going to be a problem, but I'm just going to wrap it because I'm pretty sure it is.
CHRIS: There was a tiny bit that I was hoping that you would have some very specific like, oh, you just do X, Y, and Z. I kind of knew that wasn't going to be the answer, but it also represents something that I so appreciate about your thinking and the work that you put out into the world, which is it's realistic. Sometimes you're like, you know what? There's going to be some tacit knowledge involved here. You got to put in the work. You got to learn the thing, and that's just true sometimes. And so I appreciate your willingness to be like yeah, you know what you got to do? You got to do the work. And then after that, you'll know...and so there's sort of a virtuous cycle that can happen here. There is a feature, as far as I understand it, of Honeycomb, too, if I can briefly hype up your product slightly: the idea that you can observe the series of questions that another developer asks. So if they were in a debugging session, you can see like, oh, they asked this, and then they asked this, and then they filtered on that.
CHARITY: It's like your Bash history but for debugging. [chuckles].
CHRIS: I want this for everything.
CHARITY: Right?
CHRIS: Let's have a shared hive mind of the developers on a team, both in terms of our observability tool but also just kind of everything.
CHARITY: What did you do?
CHRIS: Yeah, what did you do, and why? What were you thinking? I saw you went down a road there, but then you stopped and backed up, and you went a different way. That's interesting to me.
CHARITY: This is why we keep trying to build things into the product that will incentivize people to write texts about what they're doing, whether it's retroactively applying tags or writing a breadcrumb to yourself. Why was this meaningful? As you're putting it in your bookmarks, why are you putting it in your bookmarks? Collaboration is just as much about collaborating with your past self and your future self as it is with the rest of your team. I don't remember why the fck I did that two years ago. I don't know. I don't know why I did that two months ago. But the more you can leave breadcrumbs for yourself and then surface that to the team, you're right; it’s transformational.
I wanted this so selfishly because I have never been that person on the team who loves graphs. I hate graphs. I don't think visually very well at all. I've been working with my friend, Ben Harts, off and on for like 10, 12 years now. He's always the person I've hired repeatedly. He's always the person who comes in and makes the graphs. And then I look over his shoulder, and I bookmark them. He can be up all night making the perfect dashboard. And then I'm like, great, mine. [chuckles] So there's room in the world for both of us. But the point is that not all of us should have to go through that effort. [chuckles] We should be able to learn from each other. Only one person should ever have to craft the perfect query, and then the rest of the team should be able to effortlessly piggyback on it.
CHRIS: Yeah, absolutely. And again, I want that but for everything. I dream of a future in which that's true.
CHARITY: And so much of debugging is this wandering path where you go down the wrong place, and you need to be able to zoom back to all right; where did I first know that I had a beat on it?
CHRIS: There's a corollary that I see to pair programming where one of the things that I find so valuable is, what Google query do you type in when you hit that wall? When you're like, oh, this isn't working as I'm thinking, and then you type something and I'm like, whoa, wait, I wouldn't have even thought to ask that question of the internet.
CHARITY: Oh, I love that. That's fantastic.
CHRIS: But now you've productized that, and I love that. So thank you for building that thing in the world.
CHARITY: Excellent.
CHRIS: Shifting gears slightly, one of the other themes that you really pushed for in the world is the idea of continuous deployment and not like yeah, you should ship your code pretty quickly after you merge it, but true, sincere continuous deployment.
CHARITY: 15 minutes or bust.
CHRIS: 15 minutes or bust, test in production. There are some really wonderful and, if we're being honest, scary themes that you talk about. I love the ideas that you're putting out there, but they're probably the things that I look at, and I'm like, ooh, that seems like a whole thing right there.
CHARITY: It assumes a lot. Let's put it that way. It assumes a lot.
CHRIS: It definitely does that. I desperately want to get to that world. I want to get to the place where there's that confidence. And similarly, there's a theme that you've talked about around Friday deploy freezes and why that's not a good thing. And the empathy for humans, that part's good, but maybe we're applying it in the wrong way if we say we're not allowed to deploy code on Friday. Because it's like yeah, deploying code is terrifying and scary. No, let's solve that problem. But I wonder if you can talk a little bit about that. How do you get there? How do you get to the place where continuous deployment is a realistic outcome for you?
CHARITY: Yeah, that's a very good question. There are no easy answers, unfortunately. And the answer is always going to depend on where you are starting from. Are you starting from a clean slate? Are you starting...a lot of the advice that I give sounds like Looney Tunes to someone who's coming from enterprise because they're just like, "You don't understand the constraints that I am operating under." And I'm like, "Yeah, you're right. I'm not of your world. That probably shows." [chuckles] So I think the easiest way, though, is always, when you're starting a new project, that what you do on day one is set up your CI/CD and deploy it to prod before you've even started building. My favorite analogy to that is like...you know the myth about Alexander the Great and his horse, how when he was a little boy he would pick it up every day before he had breakfast? And so, by the time he was an adult, he could pick up his horse because he picked it up every day, and it was never hard.
When you start deploying that way, it's never hard. When you're just like, okay, anytime this gets above 10 minutes, we're going to put in a couple of hours of work, and it's never hard. It's just the easiest thing in the world. And everything's easier because you get to watch what you're doing and in real-time, and you develop that muscle of I’m merging it to main. I'm going to go look at it in a couple of minutes. And you don't feel done in your gut until you've looked at it. And that's doing it on easy mode. And you can do this in a hybrid way. Even if you have like, well, I'm paying for a deploy. Nobody is saying you have to sign up for a long, painful deploy process when you got to spin up a new project. And I've seen it gain momentum. If you start something that's clearly the new way, everybody sees how fast this team is executing. Everybody wants a piece of it. And so you start learning from the way that you are able to do it in your unique environment. You're the best evangelist to the rest of your team members because you know the subtleties. You know the problems. So that's the easy answer is start fresh. [laughs]
CHRIS: [laughs] That makes sense. I do, again, I appreciate the pragmatism or the realism of the way that you approach a lot of the topics.
CHARITY: Another answer, though, is just that the engineering work involved in taking a deploy pipeline down from hours or days to 15 minutes is just engineering work. It is just labor. It can be done. The political problems are the hard ones. I mean, in the past, sometimes our deploy probably would get up to two or three hours, and we were just like, oh God, this is not…put in the work. You just start instrumenting your pipeline, and you start looking at where the tests are taking time. And it will pay dividends for every bit of time that you pay down, which is why, when I see these long, hours-long pipelines, it's a vacuum of engineering leadership that they've allowed it to happen because there's nothing fancy about it. You just put in some work.
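A tiny sketch of what "instrument your pipeline" can mean in practice: time each step of a build or deploy script so the slow parts become visible. The step names and commands here are invented for illustration, not taken from any particular pipeline.

```ruby
require "benchmark"

# Hypothetical pipeline stages; swap in your real build steps.
STEPS = {
  "bundle install" => "bundle install --quiet",
  "unit tests"     => "bin/rspec --tag ~slow",
  "system tests"   => "bin/rspec --tag slow",
  "asset build"    => "bin/rails assets:precompile",
}.freeze

timings = STEPS.map do |name, command|
  seconds = Benchmark.realtime { system(command) || abort("#{name} failed") }
  [name, seconds]
end

# Print the slowest steps first so it's obvious where to put in the work.
timings.sort_by { |_, seconds| -seconds }.each do |name, seconds|
  puts format("%-15s %7.1fs", name, seconds)
end
```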
CHRIS: Yeah, the solvability of the technical challenge feels very true, but what you're saying of it's people problems which again, that's always true of the tech stuff.
CHARITY: It is people problems, but I also hate it when people are just like, oh, it's people problems. That means mysterious and unsolvable. Now, most of the time, when you see this, it's a lack of collective confidence in themselves. They see this as being as just for the elite engineers, or only ex-Googlers are allowed to do this or something. Or they go to conferences, and they hear about it, and they're just like, God, I wish I was allowed to do that, or I wish we could do this.
But the thing is that engineers have more power than they realize. We build these companies. They wouldn't exist if it's not for us. We have all the power if we just choose to use it. I know that a lot of these people who I've talked to that were just like, "Oh, I wish we…" I'm like, "Have you ever lobbied for it?" And they're like, "No, I just know we could, or that's someone else's decision." I'm not going to promise you that you can get whatever you want. But I promise you that if you start speaking up if you start talking to your colleagues and being like, "Wouldn't it be nice?" And they start speaking up...if a quarter of the engineers want something in the company, it gets done. [chuckles]
CHRIS: That definitely feels true. And to the topic of actually lobbying for this and having the hard conversations internally and working on the people problems, you have done, I think, a really fantastic job of providing actual benchmarks in terms of timing, what this looks like as a practice, and what the costs of all that multitasking are.
CHARITY: It's so expensive. It's so costly to organizations. And it's the easy answer for any engineering leader to be like, "Well, we need to hire." That is the laziest answer in the world. You probably don't. You probably just need to fix your CI/CD system and then bask in the resources that you suddenly freed up. [chuckles]
CHRIS: You have a wonderful blog post that really I think does such a good job of highlighting the cost that you're talking about there, the human costs for every slowdown in your deploy process, it has this downstream ramification. And having that as sort of a piece, a bargaining chip in the conversation of here's a voice that is saying a very clear thing about this cost of not doing this work, which granted, it's always trade-offs. Everything is an optimization. But here is a way to actually measure the cost of not going with this approach. And again, I appreciate you're putting that out there in the world so that the rest of us can be like, "Look, on the internet, it says so."
CHARITY: [chuckles] Exactly. I'm happy to be the internet for you. But it's so true because other people in your business don't want you to suffer too, either. They don't want everything to get slow. They just aren't equipped to understand the cost of this slowness the way that engineers are. And I feel like sometimes this is...it's like we're always lamenting like, why does product get to own all the engineering cycles? Why aren't we allowed to do all this other stuff? I promise you're allowed to. You just have to make the case because the case is righteous and justified. But you have to explain to them the cost that it's incurring on your organization in terms of your ability to execute and in terms of your ability to hire and retain people. You just have to explain those costs. And engineers are just like, "Well, we only say it once, right?" Well, that's not how you win arguments. You have to say it. You'll probably lose. And you say it again, and you'll probably lose. You say it a third time. And you will win eventually because you control all of the creative labor of the technical organization. So just make the fcking case. [chuckles] I don't know. I make it sound simple; it’s not.
CHRIS: I love the sound bite of the cause is righteous, and that is the kernel of the thing here, which is like, just to be clear, this is a virtuous path that you were going down, battle for it, work towards it, absolutely. So I think a related topic here, so continuous deployment is one of those things that you want to get to and a practice that you want to evolve to. But in exploring some of your other work, one of the things that I was exposed to is the DORA metrics, which is something that I hadn't seen before. But for anyone who's unfamiliar, the DORA metrics are a set of four key metrics to track developer and team productivity: deployment frequency, lead time for changes, change failure rate, and time to restore service. And they are deeply interesting because frankly, I have for a long time felt like developer productivity was not really a quantifiable thing.
CHARITY: It's not, yeah.
CHRIS: Individual developer productivity I still feel like this is a bad thing. Don't do that. But team productivity these metrics actually are like oh, actually, as I look at those, those seem like the good ones. We should do that. I'm wondering, what does that look like in practice when you see that actually employed within an organization? What are the feedback loops, and how does this appear in the world?
CHARITY: Yeah. We all owe a huge debt of gratitude to Jez Humble, Gene Kim, and Nicole, who worked on this for years and got this out into the world, just putting some actual research behind the stories that we were telling ourselves about productivity. And people who haven't read Accelerate...a lot of people are always asking me, do we have any stories? Do you have any research? Do you have any proof or something? I just always point to the book Accelerate. That's where it all comes from. Yeah, it's true because it's such a noisy world. When you're an engineering org, and there's just so much going on, and there's so much stuff that bugs you personally, and some of the stuff that you have true beliefs about. And it's hard to just cut through the noise.
And I feel like that's the great gift of the DORA metrics. If you start focusing on one of them, you will lift your org out of poverty, and the others will get better too. And it provides just this wonderful focus point for teams that aren't sure where they stand or aren't sure how to get better because it can be so mystifying. When you're in the trenches, and you're just like, why does everything feel so hard? Why is it that we thought this would take two days, and here it is two months later, and we can't ship anything? And it feels like the more we ship, the farther behind we get. These are the beacon of hope. It's like, you pay attention to these, your lives will get better. You can dig yourself out of this ditch.
That's certainly been true for the teams that I've been on. And high-performing teams, I think we all have this idea in our heads that high-performing teams are ones where the great engineers join when in fact, those great engineers could join your team, and they wouldn't get any more done than you are. Because most of our productivity is defined not by the data structures and algorithms that you know but by these social-technical systems that we swim in every day, it’s the water around us. It's the friction involved in getting that code to production. If it takes the magical engineer from Google 24 hours to get their code changed out, well, they're not a member of a high-performing team either.
You mentioned earlier all these people are out there who haven't experienced a world like this don't live in a world like this. And in my experience, they often lack a lot of confidence because they don't think they're that good, or they don't think that they can have nice things. And the DORA metrics that's your ticket to a better life. It's like go to college and graduate because it kicks off these virtuous feedback loops, these cascading cycles of things getting better for everyone and people getting more excited and energized. And they just don't get burned out by shipping too much code. They get burned out by not being able to ship code.
And if you're a leader in any type of organization, and I don't just mean manager, I mean any type of senior engineer or manager or whatever, then it's part of your job to pay attention to these metrics, lobby for them, track them, track them on your own if you must, and try to make them better because every engineering team has two customers or two...whatever. I'm blanking on the word. But it's your customers and your engineering team. You're responsible to both of them. And I've never seen one of those sets deliriously happy and the other set miserable. They tend to rise and fall in tandem.
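For reference, the four DORA metrics Chris lists above lend themselves to a fairly direct calculation. Here is a rough sketch; the Deploy and Incident structs are hypothetical stand-ins for however a team records deploys and incidents, not any particular tool's schema, and the sketch assumes both collections are non-empty.

```ruby
# Hypothetical records: times are Time objects, durations come out in hours.
Deploy   = Struct.new(:merged_at, :deployed_at, :caused_failure, keyword_init: true)
Incident = Struct.new(:started_at, :resolved_at, keyword_init: true)

def dora_metrics(deploys, incidents, window_days:)
  {
    # How often code reaches production.
    deployment_frequency_per_day: deploys.size.to_f / window_days,
    # How long a merged change waits before it is live.
    lead_time_hours: deploys.sum { |d| (d.deployed_at - d.merged_at) / 3600.0 } / deploys.size,
    # What fraction of deploys cause a failure in production.
    change_failure_rate: deploys.count(&:caused_failure).to_f / deploys.size,
    # How long it takes to recover when something does break.
    time_to_restore_hours: incidents.sum { |i| (i.resolved_at - i.started_at) / 3600.0 } / incidents.size,
  }
end
```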
CHRIS: I'm just nodding along for anyone in the audience who can't see what my head's doing. But I love so much all of the things that you're saying and, again, the passion and conviction that you bring to this conversation because these are amorphous, hard to pin down ideas. But I appreciate the North Star that you're setting across all of these different things that as I'm reading, I'm like yeah, that sounds true. I want that. Those things are the things that I want. But interestingly, one of the other threads that I see weaving through a lot of your work is obviously we've talked a bunch about just deeply technical topics thus far, but also a lot of your work spans across to the interpersonal. And frankly, even dividing it in that way is not representative of the world because it's a Venn diagram mishmash of some days it's technical, some days it's personal, some days it's both. But one of the things that you've talked about is the engineer/manager pendulum, which I find super interesting. I think every engineer at some point has that question, that internal oh, do I want to go engineer track or manager track? And there's this distinct idea that management is a promotion and any other movement would be a demotion, and you have wonderful things to say about that.
The other thing that you've pointed out is that former managers can often make great engineers after the fact because of the earned empathy that they have now from looking at things from a slightly different angle.
CHARITY: Amazing engineers.
CHRIS: But I'd love to hear a little bit more of your thoughts on that because I think it's such an important space, and I've definitely previously operated under I'm an engineer, and then I guess I got to be a manager, and then I guess I don't know where I go from there, but it's this very linear path. And you shook that worldview of mine, and again, I appreciate that shaking. But yeah, I'd love to hear a little bit more about that.
CHARITY: The best people that I've ever worked with have been engineers who had been managers for a while and then went back to engineering, and it's not just empathy, although there's a lot of that too. It's also a deeper understanding of the business and the reason that we do things. So much of being a powerful engineer is choosing the right work to work on so that you get a lot done very efficiently and quickly, and you don't spend a lot of time just floundering. Once you've mastered that and you know the basic technical principles, how do you get better? A lot of it is just getting better at identifying what to do and what not to do because we have to not do so much more than we can ever do in order to move forward.
I wrote a blog post as a present for a friend of mine who was a director of engineering at the time, and he was suffering. He was just miserable, and he kept thinking about going back to engineering, just kind of dragging his...because he wasn't in an org where that was really celebrated or anything. When you've been there from the beginning, you built the organization; you’re like a senior director and everything. It feels like a long way to fall. And I wrote that post for him. And he did end up going on to be an engineer after that. And he was so much happier. But I think he was surprised at how he didn't fall at all. He actually probably had...I think the engineers had a higher opinion of him afterwards when he was one of them again. And he still had this vaunted voice because he could speak to how the system had been there since the beginning. And he basically got to look around and look out farther than the engineers who were heads down every day and go, "This is going to bite us. I'm going to take a small team. We're going to do this forward-looking security product."
I don't want to identify details, but that for me really just kind of cinched...It was like the more we can strip hierarchy out of these discussions; the healthier everyone's going to be because we're just monkey brains. And the monkey brain in our skull hates losing hierarchy, hates losing power or stance or anything. And I think that the thing that you learn after you've been a manager is a lot of it is just the wizard behind the curtain, the idea that you have more power as a manager. You have more of some types of power, and you have a lot less of other types. And you're just as constrained as the engineers but in other ways. And the path moving forward is not to dominate people or be above them but to combine your powers for good and self-sort to find a place that actually gives you the most joy.
CHRIS: It's a wonderful philosophy. And actually, a thing that you said in there really stuck out to me, which was you wrote that blog post as a gift to someone, and that is such a kind thing to do. And it also, again, reflects what I see in your work overall. You're really clearly leaving a trail of breadcrumbs behind you to help other folks that are traversing a similar path by questioning aspects of it. Or how do we do this well? Why is everyone sad, and why is it bad? And so again, I so appreciate all of that work that you've done.
CHARITY: I think that that comes from my lifetime in the trenches of operations. [chuckles] Ops is notorious for the pain that we bring upon ourselves and try to solve. But I would just like to add a pitch out there for other ops engineers of the world and our colleagues. I was fortunate enough to rise up through the ranks in organizations that really respected operations. We always felt we ruled the roost. We felt like we were way above all the other developers. We got to say what went into production and what didn't. And I feel like ultimately...if you have to have an imbalance of power, I think that's slightly healthier than the developers ruling the roost. Ultimately, there shouldn't necessarily be any imbalance of power. But I just want to pitch it; this whole no-ops thing really got my goat for a while there because operations is just the engineering work around delivering value to users. I think the second wave of DevOps is now about okay, software engineers; it’s your turn. It's time to learn to write operable software. And so I just wanted to throw my hat in the ring for all the ops people out there. You're just as good. You're just as good as anyone else. [chuckles]
CHRIS: I mean, it's sort of a theme that I've seen in your writing of everybody's doing good, important work and breaking down hierarchy and just collaboratively moving in the same directions and trying to choose the right North Stars to aim towards. And yeah, it's all fantastic. And so with that, I think we probably reached a perfect spot to wrap up. But Charity, if folks want to keep up with more of your work online, where are the best places to find you?
CHARITY: My blog is at charity.wtf, and I'm @mipsytipsy on Twitter, and of course Honeycomb.io and our blog.
CHRIS: We will include links to all of that and many of the blog posts, and other podcasts interviews that you've been on, and a bunch of just various things that I collected as I was preparing for this episode because, again, you've produced such a wealth of information on the internet that I want to point as many folks as possible towards those things. But yeah, thank you so much for taking the time.
CHARITY: My pleasure.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
What do you get when you mix a worm and a hammerhead shark? Also ants. Steph made some cool new discoveries in bug-land. She also talks about deploys versus releases and how she and her team have changed their deploy structure. Two words: feature flags.
Chris talks about cookies: cookie sessions, cookie payloads, cookie footprints, cookie storing. Mmm cookies! The convo wraps up with lamenting over truthiness in code. Truthy or falsy? What's your call?
Transcript:
STEPH: At the top of my notes for today, I have marauder ants and hammerhead worms. [laughs]
CHRIS: I'm sorry, what? I lost you there for...not lost you, but I stopped following. I...what? Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, how's your week going?
STEPH: Hey, Chris, it's been a good week. It's been busy, lots has been happening. I learned about a new creature that's in our backyard. They're called hammerhead worms. Have you ever heard of those?
CHRIS: I've heard of hammerhead and worms, but not together. The combination is new and novel for me.
STEPH: Cool. Cool. So take a hammerhead shark and a worm and combine the two and then you have a hammerhead worm. And it rained really heavily here recently because there's a tropical storm that's making its way up the East Coast. And when I was outside on the porch, I noticed that there were these new worms or worms that I'd never seen before on the back porch. And so I had to Google them to understand because they had the interesting hammer-shaped head. And I found out that they're called hammerhead worms. They're toxic worms that prey on earthworms. And they're basically immortal because if you cut them into multiple pieces, each section can regenerate into a fully developed organism within a few weeks, which is bananas. And a lot of people online highly recommend that you should kill them because they are a toxic predator and they prey on earthworms, which you want in your garden and in your yard. But I didn't, but I learned about them.
CHRIS: Wow. That's got some layers there, toxic, intense worms that you can cut in half. And so does their central nervous system just spread throughout their whole body? Where's their brain? How does it...I don't have any real thoughts here. That's just a bunch of stuff, and it's awesome. Thank you for sharing.
STEPH: I will warn you. I wouldn't read about hammerhead worms right before bed. Otherwise, you might have some nightmares because the way that they do prey and consume earthworms or other creatures that they prey on is the stuff of horror movies, which I find happens so much in nature, but them especially they fall into that category. So just be aware if you're reading about hammerhead worms and how they consume their food. Now I feel like everybody's going to go read. But as long as you have that warning, I feel safe sending you in that direction.
CHRIS: Yeah, first thing in the morning on a very sunny morning, that is the time to do this research.
STEPH: Exactly. He got it. I also learned about marauder ants because apparently, this is the day that I'm having. I'm learning about all these creatures. But I won't go into that one, but they're really interesting. And this one's thanks to someone on Twitter who shared, specifically @Rainmaker1973 is their Twitter handle if you want to go see what they shared about marauder ants. So I'll just leave that one for those that are curious. I won't dive into that one because I don't want to take us in the direction of that we're all about worms and ants now.
CHRIS: Not all about worms and ants but definitely some.
STEPH: But in technical news, I've got some stuff to share, but I was so excited about worms and ants that now I have to figure out which is the thing that I want to share from the week. So there's a couple of interesting things that I'd love to chat about with you, one of them, in particular, is there's been some interesting conversations going on with my client team around deploys versus releases and how we have changed our deploy structure, and then how that has impacted the rest of the team as they are communicating to customers as to what features are available. And there have been some interesting conversations around how to migrate this process forward.
So to provide a bit of context, we were previously having very strict, rigid deploys. So we would plan our deploys typically every Tuesday. It was usually once a week. And then we would make sure that everything had been through QA, things had been reviewed and tested. And then we would have one of those more like grand deploys, things are going out. And then hey, if you need to get something into the deploy, let us know; we need to talk about it. So there was just more process and structure to that. And so deploy really mapped to the idea that if we are doing a deploy, then that means all these feature bug fixes are going out, and this is now the time that we can tell customers, "Hey, this new feature is available or this bug that you reported to us has now been fixed." We have since been moving towards a more continuous deployment structure where we're not quite there where we're doing continuous deploy, but we are deploying at least once a day, so it's a lot more frequent.
And so this has changed the way that we really map the idea of the work that's being done versus the work that's actually available to customers. Because as we are merging work into the main branch, and then let's say if I'm working on a feature and then I merge that into the main branch and then push it up to staging, we have an overnight QA process. So then overnight QA, if they say, "Hey, there's something that's wrong with this feature. It didn't quite meet the required specs," then they can kick that ticket back to me, but that doesn't remove my code. We could do a revert and take my code out at that point. But at this point, it's in main, and main may have been deployed at that point. So there have been some interesting strategies around how can we safely continue to deploy while we know we often have a 24-hour wait period for QA and to get sign-off on this work? But we want to keep moving forward and then also communicate that just because the code has been deployed doesn't necessarily mean that it's available to customers. There's a lot there. So I'm going to pause and see if you have questions.
CHRIS: Well, first, I'm just super excited to talk about this. This is something that's been very much top of mind for me, and it's a direction that I want to be going more and more, so yeah, excited that you're pushing the boundaries on this. I am intrigued. I'm guessing feature flags is the answer about how you're decoupling that and how you're making it so that you've got that separation of deployment and actual availability of the feature. So, yeah, can you talk more about that?
STEPH: Definitely. And yes, you're right. We're using feature flags, so we'll use the same scenario. I'm working on a feature, and I want to be able to release it safely, so I'm going to wrap it in a feature flag. And I'll probably wrap it, and maybe it's like a beta feature flag, something to indicate that this is a feature that's going to be available to all, but we don't actually want to turn it on until we know that it's truly ready to be turned on. So then that way, it's hidden, but then we can still merge it into the main branch. We can still have a deploy even if my code hasn't gone through QA at that point, but we know it's still safe to deploy. And then, QA can go to a staging environment; they can test it. And if they say, "No," it's fine because nothing was churned in production. But then, if it gets approved, then we can turn it on, and then we'll have a follow-up to then remove that feature flag.
CHRIS: So some follow-on questions. I'm wondering about the architecture of the application. Is this like traditional Rails app rendering HTML on the server, or do you have any more advanced client-side stuff? And then I'm also wondering what you're using for the actual feature flagging, and those will probably inform each other. But what's the story on both of those fronts?
STEPH: It's a traditional Rails application. So we're not using any other client-side application. It is Rails and rendering HTML. As for feature flags, so we're not using something traditional. And by traditional, I mean I typically have reached for Flipper in the past for managing feature flags. We're using more of a hand-rolled approach because there's a lot of context there that I don't know is necessarily helpful. But to answer your question, we essentially do have feature flags as columns in the database, and we can just check if they are enabled or disabled. And then that also allows us to easily turn it on, turn it off as well since it's just a database update.
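For readers following along, here is one way a hand-rolled, database-backed flag like the one Steph describes could be put together in Rails. The migration, model, and flag names are assumptions for illustration, not the client's actual implementation.

```ruby
class CreateFeatureFlags < ActiveRecord::Migration[7.0]
  def change
    create_table :feature_flags do |t|
      t.string  :name, null: false, index: { unique: true }
      t.boolean :enabled, null: false, default: false
      t.timestamps
    end
  end
end

class FeatureFlag < ApplicationRecord
  def self.enabled?(name)
    where(name: name, enabled: true).exists?
  end
end

# In a view or controller, the new work stays dark until QA signs off:
#   <% if FeatureFlag.enabled?("beta_bulk_export") %>
#     <%= render "orders/bulk_export" %>
#   <% end %>
#
# Turning it on (or off) is just a database update:
#   FeatureFlag.find_by!(name: "beta_bulk_export").update!(enabled: true)
```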
CHRIS: Okay, that makes sense. I think the nature of being a Rails application rendering HTML on the server like what you're doing totally makes sense in that context. I think it becomes a lot harder the more complex the architecture of your application is. So if you've got microservices, then suddenly you've probably got to synchronize across some of them, and that sounds like a whole thing. Or even if you have a client-side application, then suddenly you've got to serialize the feature flag stuff across the boundary or somehow expose that, which really does push the issue of we could just render stuff on the server and send it to the client and let that be good enough, then man, is stuff simpler. But unfortunately, that's not the case in a lot of situations.
I'm expecting to be introducing feature flags on the app that I'm working on pretty soon. And again, we've got...so it's a Rails server-side thing. So there's going to be plenty of feature flag logic on that side. And then I'll need to do something to serialize it across the boundary and get it onto the client-side without ballooning every payload and adding complexity, and lookups, and whatnot. I think it's doable. Inertia, again, being the core architecture of the application, I think will make this a little bit easier, but I am interested to see what I'm able to pull off and how happy I am with where I get to.
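As a sketch of the serialization Chris is describing, one option with Inertia is to share a small allowlist of flags as props on every response rather than serializing the whole flags table. This assumes the inertia_rails gem's inertia_share helper and reuses the hypothetical FeatureFlag model from the sketch above; the flag names are invented.

```ruby
class ApplicationController < ActionController::Base
  # Evaluated per request, so toggling a flag shows up on the next response.
  inertia_share feature_flags: -> {
    %w[beta_bulk_export beta_new_nav].index_with { |name| FeatureFlag.enabled?(name) }
  }
end
```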
Another question that I have for you then are you testing the various flows? So given a Boolean feature flag, you now have two different possible paths for your code to go through. And then there may be even more than Boolean, or you may have feature flags that sort of interact with each other. And how much complexity are you trying to manage and represent in the test suite?
STEPH: Yeah, good question, and we are. So we're testing both flows, especially if it's a new feature, then we are testing when the flag is enabled or disabled. One that's been tricky for me is what about a bug fix? Is that something that should be feature flagged? And I think at the surface level, if you're presuming that it needs to go through QA before this is live on production, then the answer is yes, that then you have to feature flag a bug fix, which feels weird. But then the other consideration would be, well, it is a bug fix. And could we find another way to QA this faster or some other approach so that way we don't have to wrap it in a feature flag? And I don't have a great answer for that one because I can see arguments in favor of either approach. Although wrapping everything in a feature flag does feel tedious, it's something that I'm not accustomed to doing. And it's something that then becomes a process for the team to remind each other that, hey, is this wrapped in a feature flag? Or just being mindful of that as part of our process. And it prompted me to think back on the other projects that I've worked on and how did we manage that flow? How did we go from development to staging to QA and then out to production?
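And a minimal sketch of what exercising both sides of a Boolean flag might look like in RSpec, again using the hypothetical FeatureFlag model; the page content and path are made up for the example.

```ruby
RSpec.describe "Bulk export", type: :system do
  context "when the beta_bulk_export flag is enabled" do
    before { FeatureFlag.create!(name: "beta_bulk_export", enabled: true) }

    it "shows the export button" do
      visit orders_path
      expect(page).to have_button("Bulk export")
    end
  end

  context "when the beta_bulk_export flag is disabled" do
    it "hides the export button" do
      visit orders_path
      expect(page).not_to have_button("Bulk export")
    end
  end
end
```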
And one additional consideration with this flow is that we do have an overnight QA team. So in the past, when I've worked with teams, often product managers or even other developers, we would QA each other's work. So then it was a pretty fast turnaround that then you could get something up on staging. Someone could check it out and say, "Yes" or "No." But then I'm also pretty confident most of the teams that I've worked with we have had a distinct staging branch. So we would often merge work into a staging branch, and then deploy that work, and then get it tested. And then, if it passed everything, then we would essentially cherry-pick that work and move it over into production.
And I can see there's a lot of arguments against that, but then I have also experienced that and had a really positive experience where we could test everything and not have to worry about going out to production. We didn't have to wrap everything in feature flags, and it just felt really nice to know that everything in the main or production branch, whatever you call your production branch, that everything in there was deployable versus having to go the feature flag route, or the hey, did this go through QA? I don't know. Let me check. Can I include this? Should I cherry-pick some commits into our actual deployment to avoid stuff that hasn't gone through QA? I've been through that dance before too, and that one's not great.
CHRIS: I like the way you're framing the different sort of trade-offs that we have there in velocity or deployment speed and ease of iteration versus confidence as things are going out. I have worked with a staging branch before, and I personally did not find it to be valuable. It ended up adding this indirection. Folks had to know how to use Git in a pretty deep way to be comfortable with that just as a starting point. So it already introduced this hurdle of knowledge, and then beyond that, that idea that you have commits going in in a certain order on the staging branch. But then say we verify the functionality of the third commit in that list, and we want to cherry-pick it across to the main branch. Commits don't actually...you can't just take the thing that you had there. That commit existed in the context of all the others. There are subtleties of how history exists in Git. And I would worry about those edge cases where you're taking a piece of work out of the context of the rest of the commits that were around it or, more importantly, that preceded it in the history on the staging branch, and you're now bringing it across to the main branch. Have you now lost something that was meaningful?
Ideally, you would get a conflict if it was really bad, but that's more of like a syntactic diff level thing. It's not a functionality-level thing. So personally, I may be overly cautious around this, but I really like as much as possible to have the very boring linear history in Git and do everything I can such that work happens on feature branches and then gets merged in as a fast forward into the main branch or rather the main branch is fast-forward merged into my feature branch such that I'm never working with code that I haven't fully worked with in an integrated way before. But again, even that, as I'm saying that, I have this topological map of Git in my head as I'm saying all of that, and it's complicated. And having any of that complexity leak out into the way we talk about the work is something that I worry about, but maybe I'm worried about a bunch of things that don't matter. Maybe a staging branch is actually fantastic.
STEPH: I think you make a lot of good points. Those are a lot of good concerns that come up with...it comes back to the idea that we want to mimic production as much as possible, and we don't want to lose that parity. So then, by having a staging branch, then it feels that we've lost that parity. There could be stuff that's in staging that's not in production. And so staging could be a little bit of this Wild West area, and then that doesn't fully represent then what's going to production. So I certainly understand and agree with those points that you're making. And to speak specifically to the Git challenges, I agree. It does require some more Git knowledge to be able to make that work. Specifically, I think how we handled it on a previous project is where we'd actually cherry-pick our commits into staging and then deploy that. But we always had the PR issued against main. So then merging into main was often a bit easier.
But then you're right; things could get out of sync. And the PR is issued against main, so then you still could run into those oddities where then if you are cherry-picking commits in the staging, but then you have your final draft that's going into main. And then what are the differences between those, and what did you lose along the way? And as I say all of that out loud, I definitely understand the Git concerns. And I don't know; I just feel like there's not a great answer then here, which is shocking to me. I've been doing this for a while, and yet here I am feeling like there's not a great answer to this very vital part of our workflow. And I'm surprised even though that we do have a delayed QA process that this still feels like a painful thing to figure out how do we have a continuous deployment workflow even though we do have that delayed QA process?
CHRIS: I think, somewhat fundamentally, to your comment there of "I'm surprised that we don't have a good answer to this," I'm not surprised, I guess, is my reaction. I don't want to go to the software is bad and broken, and we don't know anything end of the spectrum. But I don't feel like we have great answers to a lot of the things about development. I feel like software is more broken than it should be. It costs more to develop. It is difficult. It's hard to create, and maintain, and build over time. And that's just, to get lofty about it, that's what the entire focus of my career is, is trying to solve that problem. But it's a big, hard problem that I do not think is solved, on just about any of the fronts. I know how to put stuff in a database and take it back out. And even that, I'm like, oh yeah, but what if the database gets really big? Or what if the database...everything has complexities and edge cases.
STEPH: [laughs]
CHRIS: And we've joked a handful of times about the catchphrase of The Bike Shed being it depends, and that really feels true, though. I don't know that that's unique to this industry either. I feel like everything in the world is just more complicated the more you look at it, and there aren't clear, good, obvious answers to just about anything in the world, but that's the human condition. I got weirdly philosophical on this, so we should probably round this out. [laughs]
STEPH: Well, I can circle us back because I was providing context, and I went a bit into the deep end providing all of that context. So if I circle back to what I wanted to share with you around deploys and releases, now that we have the context, there has been that interesting conversation around how, originally, we had these very structured deploys; a deploy mapped to the fact that features were going out to the world. And now we have this concept that a deploy doesn't necessarily mean that's available to customers. It doesn't mean that the code is running. It's more that a deploy represents that we have placed a commit, that we have placed code on the server. But that doesn't mean that it is accessible to anyone because it's probably hidden behind a feature flag.
But from the perspective of the rest of the team that then is communicating these changes out to customers, they still really need to know, okay, when is something actually available to customers? And we kept using this terminology around deploy. And so Joël Quenneville, another thoughtboter who's on this project with me, has done a lot of great, thoughtful work around how can we help them know when something is truly available versus when something is deployed? Because right now, we're using Jira for our ticket issue tracking. And there's a particular screen in Jira that was showing what's being deployed. And from that screen, you can see the status of the ticket, and you would see stuff like in code review, in QA. So, of course, those looking at the tickets are like, hold up, you're deploying something that's in QA? That sounds really dangerous and risky. Why are you doing that? And then we'd have to explain, well, we're deploying it, but it's not actually live or accessible to anybody, but we want to get close to that continuous deploy cycle.
So we have shifted to using the terminology of a release. So a deploy is more for the we're-putting-the-code-on-the-server step, and then a release really represents okay, we have now released these features and these bug fixes, and they're now available, all with the goal just to make sure that our teams are working well together. But it's been such an interesting conversation around how tickets move, the fact that they can progress linearly and then also get moved backwards. But in continuous deployment, things don't go backwards, and then making those things align. Typically, things don't go backwards. Technically, yes, they can.
CHRIS: History is a directed acyclic graph that only points forward. The arrow of time is very clear on this matter. Yeah, that really does add one more layer of like; what does it mean to actually be out there in the world? I do wonder if giving view-only visibility to the feature flag dashboard and only when it's fully green does someone think that that's deployed? But if you're putting feature flags around everything, there's complexity. And yeah, it's just one more layer to having to manage all of this. And it sounds like you've gotten to a good place, or at least you're evolving in a way that's enjoyable. But yeah, it's complicated.
STEPH: Yeah, it definitely feels like we're moving in the right direction and that this will be a better...I want to say workflow, but it really focuses more around vocabulary and some of the changes to our processes and how we surface tickets in Jira. But it's more focused on how we talk about the changes that are getting shipped and when they're available. So, yeah, that's my story. What's new in your world?
CHRIS: Well, I very much appreciate your story. In my world, I am in the thick of the MVP initial drive to get something into production, which is one of my favorite times, especially if everyone's in agreement about what exactly do we mean by MVP? Who are the users going to be? What's it going to look like? What's the bar that we're going to maintain? What features can we drop? What can't we drop? When there's a good collaborative sort of everyone rowing in the same direction set of conversations around that, I just love the energy of that time. So I'm happily in that space hacking away on features building as much as I can as quickly as I can. But as part of that, there are a lot of just initial decisions and things that I have to wire up and stuff that I have to change or configure. Thankfully, Rails makes a lot of that not the case. I can just go with what's there and be happy about that.
But there is one thing that I did decide to change just today. But it's interesting; I don't think I've actually ever made this change before. I'm sure I've worked on an app that had this configuration, but typically, a Rails app will store the session in a cookie. So it is signed, HTTP-only, encrypted; I think those are all the things, but it uses a cookie to store that. And the actual data of the session lives in the payload of that cookie. And so, each time there's a request-response lifecycle, the full payload of that cookie is going up and down from the server to the client and then back and forth with all of the requests. And there's a limit; I think it's 4K on the cookie session.
But there are some limitations to cookie sessions as far as I'm coming to understand them; one is the ability to do replay attacks. So if someone gets a hold of that cookie, then unless you rotate the secret key base, which will have some pretty wide-ranging effects on your application, that cookie can be reused in the future because it basically just has like, this is the user's ID. There you go. And there's no way to revoke that other than rotating the secret key base. Additionally, there are just costs of that payload of data, especially if you're putting a non-trivial amount of stuff. Like, if you're getting close to that 4K limit, then you have 4K of overhead, both on the request and the response of your HTTP requests. So especially in apps that are somewhat chatty and making a bunch of Ajax requests or doing different things, that's some weight that you should consider.
So all of those mixed together, more so on the security side, I decided to look into it. And I have now switched from a cookie store, and I went all the way to the ActiveRecord database store. So I skipped over...there's a middle option that you can do with Memcached or Redis. We do have Redis in this particular application. We don't have Memcached yet; we probably will at some point. But you can do a memory store, so do Redis and store the session there, but I opted to go all the way to the database. And my understanding of the benefits here are we have a smaller cookie footprint, so smaller overhead on all the requests because now we're only sending the session ID. And then that references the actual payload of data that's stored in the database. We do have the ability now to invalidate sessions, so we can just truncate that table if we just want to sign all the users out and reset the world, which can be useful at times.
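For anyone following along at home, the switch Chris describes is roughly this; a minimal sketch assuming the activerecord-session_store gem, with a made-up cookie name:

    # Gemfile
    gem "activerecord-session_store"

    # Then generate and run the sessions table migration:
    #   bin/rails generate active_record:session_migration
    #   bin/rails db:migrate

    # config/initializers/session_store.rb
    Rails.application.config.session_store :active_record_store,
      key: "_example_app_session" # hypothetical cookie name; the cookie now carries only the session ID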
We also have the ability...if there's any particular user that's like, "I left myself logged in somewhere," we can…well, I actually don't know how to do this now that I say that. I don't know how to log out a specific user because the sessions don't inherently have the user associated with them. You can have an unauthenticated session, which then transitions to be authenticated when someone signs in, and then the user ID gets installed in there. I would love to have these indexed to users such that I could invalidate and have a button on the admin dashboard that says, "Sign out all instances," and that will revoke all of the sessions or actually delete them from the database table now. I think I would have to add some extra instrumentation to do that. So anytime a user signs in via device, we annotate the session records so that it's got a user ID column and then index on that so that we can look them up efficiently. I think that's how that would work, but that's one of those things that I'm like; I think I should think very hard about this before I do it. It has security implications. It's not part of the default package. There's probably a reason for that. I'm going to do that another day.
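The per-user sign-out idea Chris is talking himself out of might look something like the sketch below. To be clear, this is hypothetical and not something the gem provides out of the box, which is exactly why he wants to think hard before building it:

    # Hypothetical: give session rows a user_id so they can be revoked per user.
    class AddUserIdToSessions < ActiveRecord::Migration[6.1]
      def change
        add_column :sessions, :user_id, :bigint
        add_index :sessions, :user_id
      end
    end

    # After sign-in, the app would stamp the current session row with the user's ID
    # (e.g., from a Devise/Warden hook), and an admin "sign out everywhere" action could then do:
    ActiveRecord::SessionStore::Session.where(user_id: user.id).delete_all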
But yeah, overall, it was a pretty easy upgrade. I think I'm happy with it. It feels like one of those things where it's not clear to me why this isn't the default, sort of like how SQLite is often the database that you use just because it's slightly easier to get up and running. But for any application that we're working on, we're like, no, no, no, we're going to go to Postgres for local development and for everything because obviously, that's what we want to do. And I'm wondering if this should be in that space, like yeah, of course, the session should go in the database. There are so many reasons that it's better that way. I'm wondering if there are some edge cases that I'm not thinking about, but overall it seems cool. Have you ever worked with an alternative to the cookie store?
STEPH: I'm thinking back to the recent projects that I've worked on. And it's been a while since I've mucked around with session work specifically. And the more recent projects that I've been on, we've used JWTs, or they're pronounced jots, I found out, which is really surprising. I don't know why, but that's a thing.
CHRIS: What?
STEPH: [laughs]
CHRIS: This doesn't feel true.
STEPH: It's JWT, but it's pronounced jot, J-O-T.
CHRIS: I think I'm just going to not do that. This is a trend I'm not going to get on board with. [chuckles]
STEPH: I don't even know if it's a trend. I'm not sure who decreed this into the world.
CHRIS: You're familiar with the great internet war around GIF versus JIF, right? I think there's room for different opinions.
STEPH: I mean, it's really not a war. There's a correct side.
CHRIS: We're on the same side, right?
STEPH: [laughs] And this is how The Bike Shed ended. No, this is perfect for The Bike Shed. What am I talking about?
CHRIS: This is perfect for The Bike Shed. I'm just going to need to hear you say the word real quick. [chuckles]
STEPH: Oh, it's GIF, absolutely.
CHRIS: Okay. All right, phew. Steph, I was worried, I was worried. Also, anyone out there that says JIF, it's fine. These things don't really matter. Although I am surprised when you have an acronym that gets turned into...I think it's an initialism, like jot versus JWT. I forget which is which. I think JWT would be the acronym. But jot, that's not even...I'm going to move on and say...[laughs] And so I think that JWTs, which is what I'm going to call them in this context, are, as far as I understand it, an orthogonal, different sort of thing. Like, you can put a JWT in the session, and the session can be stored in a cookie or in the database or wherever. You can also put JWTs...often, they are in local storage, which my understanding is that's a bad idea. That is a security vulnerability waiting to happen from cross-site scripting, I think, is the one that is coming to mind. But I think that's an independent thing where JWT is this signed assertion that you are someone. But it's coming often from an external system versus I'm using devise in this case on a Rails app and so devise is using the warden session, which is signing and encrypting and a bunch of stuff that I'm not thinking about. But it's not using JWTs at the end of the day. Jot, really, huh?
STEPH: [laughs] I like how that's the thing that stuck out to you.
CHRIS: Of course it is.
STEPH: But it's fair because it did the same to me too, so I had to share it. [laughs]
CHRIS: This is The Bike Shed, after all. [laughs]
STEPH: So, going back to your question, what you've done sounds very reasonable to me, especially because you wanted to address that possibility of a replay attack. So I like the idea. I'm also intrigued by why it's not the default. What's the reasoning there? And I'm trying to think of a reason that it wouldn't be the default. And I don't have a great answer off the top of my head. Granted, it's also been a while since I've been in this space. But yeah, everything that you've done sounds really reasonable. I like it. I also see how being able to sign out a specific user would be really neat. That seems like a really nice feature. I don't know how often that would get used, but that seems like a really nice thing to be able to do to identify a particular user if they submitted and, I don't know, if some scenario came up and someone was like, "Help, please sign me out," then to have that ability. So I'll be intrigued to hear how this advances if you still really like this approach or if you find that you need to change back to using Memcached or the cookie store.
CHRIS: Yeah, I'm in that space where as I'm looking at it, I'm like, I only see upside here. I guess there's a tiny bit of extra complexity. You have to watch that database table and set up a regular recurring job to sort of sweep old sessions that haven't been touched in a while because this is sort of like an append-only store. Every time someone signs in anew, they're getting a new session. So over time, this database table is just going to grow and grow and grow. But it's very easy to stay on top of that if you just set up a recurring job that's cleaning them. It's part of the gem itself; activerecord-session_store is the name of the gem. It's under the Rails namespace or the Rails GitHub organization. So that seems manageable. Maybe that's the one complexity is it has this sort of runaway trait to it that you have to stay on top of, whereas the cookie-based sessions don't. But yeah, I'm seeing a lot of upside for us, so I'm going to try it. I think it's going to be good.
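The sweeping Chris mentions can be as small as a recurring job along these lines; the job name, scheduler, and 30-day retention window are all assumptions:

    # Run on a schedule (cron, sidekiq-scheduler, etc.); 30 days is an arbitrary retention window.
    class SweepStaleSessionsJob < ApplicationJob
      def perform
        ActiveRecord::SessionStore::Session
          .where("updated_at < ?", 30.days.ago)
          .delete_all
      end
    end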
I'm also unfortunately in that space where I think I see all the moving parts as to how I could implement the sign out a user in all of their sessions. But I'm worried that I'm tricking myself there. It's one of those things it's like this feels like it would be built in if it was that straightforward, or it could easily have subtle...it's like, don't invent your own crypto. Like, I think I know how crypto algorithms work. I can just write one real quick. No, don't do that, definitely don't do that. And this one, it seems clear enough, but it's still in the space of crypto security, et cetera, that I just don't want to mess with without really thoroughly convincing myself that I know what I'm talking about. So maybe six months from now, I will have talked myself into it. Or if anyone out there is listening and knows of a good founded, well-thought-out version of yeah, this is totally a thing that we do; here’s what it looks like; I would love to hear that. But otherwise, I'll probably just be happy with the ability to wipe everyone's session as necessary. If any one user leaves themselves logged in at a library and needs me to log them out, I'll just log out every user. That's fine. That's a good enough solution.
STEPH: Yeah. All of that makes sense. And also, the part that you highlighted around that there is that additional work of where then you have to make sure that you have a rake task that's running to then sign people out since there's that additional lift that you mentioned. But I'm excited to hear what folks have to say if they're using this approach and what they think about it. It is super interesting.
CHRIS: Well, yeah, I am very excited about this new development and the management of sessions. And I will let you know if I make any headway on the signing out a user sort of thing. But I think that covers that topic. As an aside, I just wanted to take a quick moment to ask folks out there; we are getting to the bottom of our listener question queue, and we absolutely love getting listener questions. They really help us find novel things to talk about that whenever we start talking about them, it turns out that we have a lot to say. So please do send in any questions that you have. You can send them to [email protected]. That's an email option. You can tweet at us; we're @bikeshed, or either of us individually. I'm @christoomey.
STEPH: And I'm @SViccari.
CHRIS: And we also have a Google Form, which we will link in the show notes of this episode. So any of those versions send us questions. It can be about more tech stuff, more process stuff, more team-building, really anything across the spectrum. But we really do love getting the questions in, and definitely helps provide a little bit more structure to the show. So, with that aside, Steph, what else is going on in your world?
STEPH: Yeah, I love when we call from our listener questions, for the reason that you highlighted because it often exposes me to different ways of thinking in topics that I hadn't considered before. And you're right; we’re often very opinionated souls. [laughs] And along that note, so I have a question for you. The context is another developer, and I ran into a bug. And when we initially looked at the bug, it was one of those there's no way. There's no way the code is in this state. That does not make sense. And then, of course, it's one of those well, the computer says otherwise, so clearly we're wrong. We just can't see how the code is getting to this place. And what was happening is we were setting a value. We were parsing some JSON. We're looking for a value in that JSON, and we're using dig specifically in Ruby. So if it's the JSON or if it's a hash, and then we're doing dig, and then we're going two layers deep. So let's say we're going foo and then bar, and then dig; if it doesn't find those values, instead of erroring, it's just going to return nil. And then we have an or, and then we have a hard-coded string.
So it's like, hey, we want to set this attribute to this value. If it's the hash, then give us back that value; if not, it's going to be nil, and then give us this hard-coded string. What we were seeing in the actual data is that we were getting an empty string. And initially, it was one of those; how are we possibly getting an empty string when we gave you a hard-coded string to give us instead? And it's because empty strings are truthy. When we were performing the dig, it was finding both of those values, but that value was set to an empty string. And because that evaluates to truthy, we weren't getting the hard-coded string, and then we were setting it to an empty string, and then that caused some problems. So then my question to you is should we have truthiness in our code?
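To make the bug Steph describes concrete, a minimal reproduction might look like this; the keys and the fallback string are made up:

    require "json"

    payload = JSON.parse('{"foo": {"bar": ""}}')

    payload.dig("foo", "bar") || "fallback"
    # => "" -- the empty string is truthy in Ruby, so the fallback never kicks in

    # With ActiveSupport loaded, presence converts blank values to nil first:
    payload.dig("foo", "bar").presence || "fallback"
    # => "fallback"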
CHRIS: Oh wow. That's a big question. It's also each language I might have a slightly different version of my answer. Yeah, I'm going to have to go sort of across languages to answer. I think in Ruby, I have generally been happy with Ruby's somewhat conservative implementation of truthiness. Yeah, anything that isn't nil false...is that it? Are those the only falsy values? There's maybe one more, but zero is not a falsy value. Empty string is not a falsy value. They're truthy, to name it in the affirmative. And I like that Ruby has a more conservative view of what things are. And so it can have this other surprising edge. I will say that I do reach for present? in Rails, so present? Present with a question mark at the end, that method in Rails, which I pronounce as present, huh?
STEPH: Which is delightful, by the way.
CHRIS: Well, thank you. That method I reach for often, or presence would be the variant in this case, where you can do presence || and then chain on the thing that you want, and that gets the value. It will basically do the thing that you want here. And so, I do find myself reaching for that, which does imply that maybe Ruby's default truthiness is not quite what I want. And I want a little more permissive truthiness, a little more like, no, empty strings are not truthy. Empty string is an empty value, so it is empty. But yeah, I think I can always convince myself of the other argument when I'm angrily fighting against a bug that I ran into, and I'm surprised by. Like, I've experienced this from both sides many times in my life. I will say in JavaScript, I am constantly surprised by the very, very permissive type coercion that happens where you compare a string and a number, and suddenly they're both strings, and they get smashed together. It's like, wait, how is that ever the thing that I would want? And so JavaScript's version feels like it is definitively foundationally wrong.
Ruby's feels like it's maybe a tiny bit conservative, but I like that as a default and then Rails building on top of that. I think I lean towards that most of the time. I will say at the other end of the spectrum, I've worked with Haskell, and Haskell has I want to say it's like a list of chr, like C-H-R list of characters as the canonical way to do strings. I may be mixing this up. It may be actually the string type, but then there's also a text type, and they're slightly different. Maybe it's UTF. I forget what the distinction was, but they both exist, and they are both often found in libraries and in code. And you end up having to constantly convert back and forth. And there are no subtle equivalents between them or any type coercion between them because it's Haskell, and there isn't really any of that. And this was early on.
I never got particularly far in Haskell, but I found that so painful and frustrating. It was just like, come on; they’re like strings. Please just do the thing. You know what I mean. And Haskell was like, "I do not. And I require you to be ridiculously specific about it." So that was sort of the high end for me of like nope, definitely not that JavaScript of like anything's anything and it's fine. That feels bad. So somewhere in the middle, Ruby feels like it's a happy in the middle. Maybe Rails is actually where I want to land, but I don't know that there is a good answer to this. I don't know that there's a language that's like, we got it. It's this very specific set of things. It's truthy, and these are falsy, and it's perfect every time. Like, I don't think that can happen.
STEPH: As an aside, I like how your Haskell voice had the slight air of pretension that really resonated with me. [laughs]
CHRIS: I don't know what you're talking about. That doesn't sound familiar to me at all. [laughs]
STEPH: I agree. I don't know that anyone has gotten this perfect. But then again, I also haven't tried all the languages that are out there, so I don't feel like that's really a fair statement for me to make either. Specific to the Ruby world, I do think Boolean coercions are a bit nice because then they do make certain checks easier. So if you are working with an if statement, you can say, "If this, and then do that, else, do this." And that feels like a pretty nice common idiomatic flow that we use in Ruby but then still feels like one of those areas that can really bite you.
So while having this conversation with some other thoughtboters, Mike Burns provided a succinct approach to this that I think I really like, where he said that he likes the use of truthy and falsy for if statements, Booleans for the && statement, and only true or false for Booleans, so no nulls. So Boolean should not have three states is what that last part is highlighting. It should be just true or false. And then if we're working with the double ampersand (&&) in Ruby, if you have that type of conditional that you are conveying, then use a strict Boolean: be more strict and use the methods that you were referring to earlier, like empty?, and explicitly check is this an actual...like turn it into a Boolean instead of relying on that truthy falsy of is it present? Is it an empty string? Does that count? But then, for the if statements, those can be a little more loose.
And actually, now that I'm saying it, that first part, I get it. It's convenient, but I still feel like bugs lie down that path. And so, I think I'm still in favor of being more explicit. If I really care if something is true or false, I want to call out explicitly. I expect this to be true or false versus relying on the fact that I know it will evaluate, although I'm sure I do it all the time, just because that's how you often write idiomatic Ruby. So I'm interested in watching my own behavior now to see how often I'm relying on that truthy, falsy behavior, and then see the areas that I can mitigate that just because yeah, that bug is fresh in my mind, and I'd like to prevent those bugs going forward.
CHRIS: I really liked that phrase of that bug is fresh. So that bug is going to own a little bit more mindshare than that old bug that's a bit stale in the back of my brain. I will say as you were talking about idiomatic Ruby, I think you're right that the sort of core or idiomatic way to do it would be if the user or whatever to see is the user here, or are they nil? Did we find one, or did we not? That sort of thing is commonly the way it would be done. I almost always write those as if users are not present? I will convert it into that because A, I'm writing Ruby, and I write Ruby because I want it to sound like the human words that I would say. And so I wouldn't say like, "If user," I would say, "If the user is present, then do the thing." And so I write the code to do that, but I also get the different semantics that present? Brings or blank? Is the counterpart, the other side of it. That seems to be the way that I write my code. That's idiomatic me, Ruby, and I don't know how strongly I hold that belief. But that is definitely how I write those, which I find interesting in contrast to what you were saying.
The other thing that came to mind as you were saying this is that particular one of an empty string. I kind of want to force empty strings to not be okay, particularly at the database level. So I'll often have null false on a string column, but then I'll find empty strings in there. And I'm like, well, that's not what I meant. I wanted stuff in there. Database, I want you to stop it if I was just putting in an empty string because you're supposed to be the gatekeeper that keeps me honest. And so I do wonder if there is a Postgres extension that we could have similar to the citexts, citext, which is case-insensitive text. So you can say, "Yeah, store this as it is, but whenever you compare it, compare case-insensitively," because an email is an email. Even if I capitalize the third letter, it doesn't make it a different email. I want a non-empty text as a column type that is both null false but also has a check constraint for an empty string and prevents that.
And then similarly, the three-state Boolean thing that you're talking about, I will always do null false on a Boolean column because it's a lie if I ever tell myself. I'm like, yeah, but this Boolean could be null, then you've got something else. Then you've got an ADT, which I also can't represent in my database, and that makes me sad. I guess I can enum those, but it's not quite the same because I can't have additional data attached. That's a separate feeling that I have about databases. I'm going down a rabbit hole here. I wish the database would prevent me from putting in empty strings into null, false string columns. I understand that I'm going to have to do some work on my side to make that happen, but that's the world I want to live in.
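A sketch of the database-level guardrails Chris is wishing for might look like this in a Rails migration; the table and column names are hypothetical, and add_check_constraint assumes Rails 6.1 or newer (older apps would drop down to raw SQL):

    class TightenUsersColumns < ActiveRecord::Migration[6.1]
      def change
        # No three-state Booleans: the column must always hold a real true or false.
        change_column_null :users, :admin, false

        # No empty strings sneaking into a null: false string column.
        add_check_constraint :users, "username <> ''", name: "users_username_not_empty"
      end
    end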
STEPH: I'm trying to think of a name for when you have a Boolean that's also a potential null value. What do you have? You have nullean at that point?
CHRIS: Quantum Boolean.
STEPH: Quantum Boolean. [laughs]
CHRIS: Spooky Boolean.
STEPH: The maybe Boolean?
CHRIS: Yeah.
STEPH: No, that's worse. [laughs] Yeah, I'm with you. And I like the idiomatic Ruby. I think that is something that I would like to do more of where I'm explicitly checking if user instead of just checking for that presence and allowing that to flow through doing the present check and verifying that yes, we do have a user versus allowing that nil to then evaluate to falsy. That's the type of code that I think I'd like to be more strict about writing. But then it's also interesting as I'm formulating these ideas. Is it one of those if I'm reviewing a PR and I see that someone else didn't do it, am I going to advise like, hey, let's actually check or turn this into a true Boolean versus just relying on the truthy and falsy behavior? And probably not. I don't think I'm there yet. And I think this is more in the space that I'm interested in pursuing and seeing how it benefits the code that I'm writing. But I don't think I'm at the state where then I would advocate, at least not loudly, on other PRs that we do it. If it is, it'd be like a small suggestion, but it wouldn't be something that I would necessarily expect someone else to do.
CHRIS: Yeah, definitely the same for me on that, although it's a multi-step plan here, a multi-year plan. First, we say it on a podcast, then we say it again on a podcast, then we change all the hearts and minds, then everyone writes in the style, then we're all in agreement that this is the thing that we should do. And then it's reasonable to bring up in a pull request, or even then, I still wouldn't want it. Then it's like standardrb's or somebody else's job. That's the level of pull request comment that I'm like, really? Come on. Come on.
STEPH: This is a grassroots movement for eradicating truthiness and falsyness. I think we're going to need a lot of help to get this going. [laughs]
CHRIS: Thankfully, there are the millions of listeners to this show that will carry this torch forward, I assume.
STEPH: Millions. Absolutely.
CHRIS: I'm rounding roughly a little.
STEPH: There are a couple, yeah. [laughs] I'd be far more nervous if I knew we had millions of people listening.
CHRIS: I kind of know that people listen. But at the same time, most of the time, I just entirely forget about that, and I feel like we're just having a conversation, which I think is good. But yeah, the idea that actual humans will listen to this in the future is a weird one that just doesn't do good things in my head. So I just let that go. And you and I are just having a chat, and it's great.
STEPH: Yeah. I'm with you. And just to reiterate what you were saying earlier, we love getting listener questions. So if there's anything that you'd like to send our way and have us to chat about or something you'd like to share with us, then please do so. On that note, shall we wrap up?
CHRIS: Let's wrap up. The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us at @bikeshed or reach me on Twitter @SViccari.
CHRIS: And I'm @christoomey.
STEPH: Or you can reach us at [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeeee.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
The big "Three Oh Oh!" What a milestone for this podcast! Aside from celebrating that the show has made it this far, Chris gives some followup on some Inertia.js issues he had been having, and talks about open source licenses and legality and testing against external APIs. Steph has thoughts on mozzarella sticks and what makes good ones; particularly the cheese to bread ratio...
They then, together, answer a listener question re: knowledge silos:
Jan asked, "Our team (3 pairs) is currently working on two different projects due to that fact we are creating information silos. Now we are looking into ways how we can minimize those information silos. Do you have any ideas how we could achieve this?" With switching pairs they are unsure about it as it can be difficult for new pairs to get up to speed.
Transcript:
STEPH: I have no shame.
CHRIS: That's important in this industry.
STEPH: [laughs] Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we learned along the way. Hey, Chris. So today's an exciting day. It's a rather momentous day, at least in my world, because today is our 300th episode.
CHRIS: 300? That is incredible.
STEPH: That's an incredible amount of episodes. And it made me pause and reflect on how many episodes I have been a part of. And I've realized it's over 100. I think it's around 104 or something like that, and I can't believe it. Time flies when you're behind the mic.
CHRIS: Time does fly, yeah. So yeah, fully a third of these you've been involved in. I don't know what the number is. And I'm just so grateful to Derek Prior and Sage Griffin, who started this whole process. And then to Thom Obarski, who was the producer for so long, and Mandy Moore, who recently joined us and has been doing a wonderful job of carrying that forward and to you, Steph, because this has just been such a joy to work on. Yeah, it's just a joy to be on the show and to get to chat with you each week and share some things. And frankly, learn from folks writing in questions and sharing pointers with us, and it really is such a delight. And yeah, 300 is pretty momentous.
STEPH: The listener questions and feedback have undoubtedly been a highlight for me. That is one of the areas that I love the most. I love the questions. I also love when people provide helpful answers to us, and then they help us out in return and also, all the incredible guests that we've had on the show. It has been phenomenal. I'm also very thankful to have been part of this journey and appreciate everyone that has got us here today. I wonder what the fourth iteration of The Bike Shed looks like. I consider this the third iteration because the first iteration was Sage Griffin and Derek Prior. The second iteration was where you took over The Bike Shed, and then you were hosting a number of incredible guests on the show. And then the third iteration is the iteration that we're living, so I wonder what the fourth will look like.
CHRIS: Oh, that is an interesting question. Hopefully, you and I get to hang out for a good bit longer. But at some point, much like the Green Lantern, this will get passed on, and someone else will take up the mantle and tell some stories. But, yeah, hopefully, that's not too soon because I certainly enjoy hanging out with you.
STEPH: Oh, I agree. I certainly enjoy this, and I'm in no rush to leave The Bike Shed. But I think it's just fun thinking about the next people that will carry this journey forward.
CHRIS: And determine the color of The Shed.
STEPH: And determine. I mean, that is their right. As host and co-host, they get to determine the color of The Shed.
CHRIS: 300 episodes in, and we still haven't figured it out. So I guess we got to keep trying.
STEPH: Oh, I have. I already know what color it is.
CHRIS: Is it yellow?
STEPH: It's yellow.
CHRIS: Yeah. Okay. [laughs]
STEPH: I like how we said yellow at the same time, you know. [laughs]
CHRIS: I do, although I feel like it's wrong to have a color in mind, or at least I want to dig in and talk about it for a while just to be in keeping with the show, but...
STEPH: One must first argue before deciding and then argue again. But to not continue bikeshedding on The Bike Shed, what's new in your world?
CHRIS: My week has been good. Actually, I have two quick updates on various Inertia things that I've shared in previous weeks. So we can include a show notes link for the two different episodes where I talked about these respective things. But there was one weird issue that I ran into with Inertia where clicking a button that would delete something was behaving weirdly; occasionally, intermittently, some of the responses would end up as a full HTML page response as opposed to the expected Inertia response. And there's a bunch of subtlety around this. I actually reported it as an issue to the Inertia team. And they very kindly pointed me to the HTTP semantics at play. So it's the difference between a 302 redirect and a 303 redirect. And so, in their code, they were correctly doing a 303. They were standards-compliant; everything was great. But for some reason, it was still misbehaving sort of randomly, and I could never pin it down. I ended up working around it and opting out of Inertia behavior for those endpoints. But my assumption was that something in my Rails Middleware Stack was behaving weirdly and occasionally overriding Inertia Rails' setting of the status. So Inertia Rails was saying, "303," which is a special version of redirect, and something else in the Rails Middleware Stack was saying, "302, it will be fine."
Turns out, in retrospect, the Inertia Rails team has discovered that this was, in fact, a threading bug on their side. So it's not Inertia's fault. Inertia as a core concept and as a protocol was definitely doing the right thing. And the Inertia-Rails Middleware was attempting to do the right thing. But threads and concurrency got in the way, which I'll be honest, I don't deeply understand those concepts. So I was just like, oh okay, that sounds like a thing that could go wrong occasionally, which is exactly how I experienced it. But now they've made an update to the project, so that should be resolved in a deep way. But goes to show you threading and concurrency are really tricky to chase down.
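For anyone who hasn't bumped into the 302 versus 303 distinction, in Rails it comes down to the status option on redirect_to; the path here is made up:

    # Default redirect: 302 Found.
    redirect_to orders_path

    # 303 See Other: the browser must follow up with a GET, which is what
    # Inertia expects after PUT, PATCH, and DELETE requests.
    redirect_to orders_path, status: :see_other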
STEPH: I appreciate that you're coming back to give us the conclusion to that issue because I remember talking about it, and you were still going off on a journey and finding out what's wrong, so that's super interesting. And yeah, threads and concurrency those are super easy, like cache invalidation and naming, that's right up there.
CHRIS: It's actually kind of funny. One of the issue threads where I wrote about it, someone followed up and asked if I'd come to any solution. And I said, "Oh, I've gone kind of this weird way, and I'm doing these things." But I shared a code sample, and I said, "Just to be clear, this is 100% about something Rails is doing and not Inertia, which remains a stellar project." And then, very shortly after that, someone from the Inertia-Rails team was like, "Ah, actually, I think it was us. Sorry about that, but we fixed it now." And I was like, "I still love you guys. This is great. You're doing a great job. [chuckles] You continue to push the envelope in a wonderful way." But it was a funny interaction where I was like, never shall I let the name be dragged through the mud. Whoops. Okay. Never mind.
STEPH: You're an excellent hype man for Inertia.
CHRIS: I try, I really try. I believe in it to my core. And actually, there's another one that this one's not really related to Inertia at all, although I've seen it discussed within the context of Inertia. And again, I think the Inertia team has done a really great job of responding and pointing to here are the HTTP semantics, and adhering to the standards, and the way that things should work. But this one has to do with the back button. When you're doing sequential forms or really any sort of form type thing, the browser will just pull from its back/forward cache, which is a local cache of the HTML of the page as it just had it. And I had come to the understanding that this was not something that I could workaround. This was not something that I could control. I had tried every combination of headers, at least I thought I had, in Rails to try and control this from the server-side because ideally, the server is the one who knows about when data is changing and things of that nature. The server should be able to inform the browser, "Hey, don't cache or store this page in any way, always revalidate it."
It turns out there was a bug in Rails that was improperly normalizing the Cache-Control header and always removing the no-store Cache-Control value. So there are like five different or a handful of possible values that can be set for that header, for the Cache-Control header. And Rails has a bunch of internal logic that says, "Okay, if you've set this, then I'll put these two, but not that one." And they're just trying to manage it and do nice things on our behalf. But unfortunately, they were being a little overzealous in that normalization effort. And so they were dropping an important value, which is no-store. So now there's a PR opened in Rails, or I think it's actually been merged in at this point, that will fix that and allow you to set that particular header value, which then should get the behavior of "Hey, browser, if I hit the back button, please go ask the server. Don't trust your local cache," which is exactly what I want.
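The header Chris wants to send looks roughly like this; a sketch with a hypothetical controller, and, as he notes, Rails versions around that time normalized away the no-store value, which is the bug that got fixed:

    class CheckoutController < ApplicationController # hypothetical controller
      before_action :forbid_browser_caching

      private

      def forbid_browser_caching
        # Ask the browser not to reuse this page from its back/forward cache.
        response.headers["Cache-Control"] = "no-cache, no-store"
      end
    end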
STEPH: Interesting. Wow. So that's two very helpful resolutions to some of those strange issues you were running into before.
CHRIS: Yeah, definitely. And actually, for that issue, in particular, it was a very kind Bike Shed listener; Alexei Vasiliev wrote in and shared some initial thoughts, pointed me in the direction of some things. In that case, I actually was like, "I don't think that's the case. I tried it." And he was like, "No, no, no, pretty sure." And he was definitely correct in this case and was very kind and gave me an example of code reproduction and all of those nice things. So I was able to chase this down and then eventually find the issue in Rails, which had been opened like eight days before. So I think for me, I just happened to run into a weird period of time where Rails was subtly broken around this behavior. And therefore, I determined that the world was broken when, in fact, it was just a tiny slice of Rails' history. But yes, thank you so much, Alexei, for writing in and pointing me in the right direction on that.
STEPH: The dream came true. We talk about some of our troubles and our strifes, and people respond and help us out.
CHRIS: That is the dream. But yeah, so those are some quick updates, not really about me, although tangentially, I got to go along for these rides, and it was fun. But what else is up in your world?
STEPH: Let's see. Well, I also have a small update that I can share. It's circling back to the conversation that we had talking about extracting an untrustworthy service to a new location. And at that time, I don't remember exactly the process I laid out. But at that time, the idea was that it is a bit untrustworthy, but we have some security in how this process works, and it is ideal that we move it to this other location. So let's just go ahead and move it wholesale, bugs and all, to the new location. And then there, we will start to refine, and we'll start to improve the service. Well, the update is that we have realized that the untrustworthy service is untrustworthy enough that I'm actually working on improving it in its current place, just to a certain extent that then it feels like we can move it to another location. There have been enough issues with it that it has taken my focus to continue patching those bugs and making sure everything is working appropriately. But now I'm in the space of where I'm like, goodness, I thought I knew this thing and now I'm realizing I don't. And so, I'm looking for ways to inform myself and the team when something isn't working when we think it is.
So to provide a bit of context, this service is sending a bunch of messages to other systems, and most of the time, that is working, but there are times that it's not. And when it's not working, it's silent about the fact that those messages aren't being sent, and it's very important that we send those messages. So what's been on my mind is looking for a way to then elevate myself and the team to say, "Hey, these are the number of messages that are being sent on average." And then suddenly, let's say it dropped by 50%, or maybe we typically send 98% successful messages, and we have a 2% failure rate, but suddenly we have a 50% failure rate, but looking for those metrics that I can capture and then alert the team if something is going wrong.
And one of the suggestions that bubbled up came from Chad Pytel, who's a developer and also founder and COO of thoughtbot, and we're working on the same project together. He had highlighted that a previous project he worked on used AWS specifically to leverage the idea of tracking how many successful messages are being sent, or perhaps in their particular project, it was focused on how many orders were being processed. That was important to know. And in our case, we could do a similar metric where we look to see: are we still sending messages? Has the number dropped significantly lately? So then we can be notified, and then we can escalate that to PagerDuty so we notify the team that something's going on.
I don't know the specific mechanics of how I'm going to implement that yet. So I will report back, but I'm excited to have something that's going to alert me for when things aren't working the way I expect versus waiting for then someone that's a customer to notice it and then get back to us. It's very in line with a number of the topics that you've brought to the show recently, talking about how we can measure more of the user's experience and be notified sooner versus waiting for a user to bump into an error and then they reach out and notify us.
CHRIS: I'm super interested to hear where you get with that because that's definitely an area that I've poked at but not dug into particularly deeply. I know there are a number of projects like StatsD is one of them. I think there are others in that space, but that's where you're sending metrics just out to some service, and then you can aggregate and graph. I've also done similar things with Papertrail; I want to say, where you can do a very specific search in the logs, and then within that, you can aggregate and graph and show things over time. So you can do a very simplified version of what you're describing to sort of visualize a rate of something over time. And then I think they might have some thresholding alerts.
But also, that's one of those super hard things to do because it turns out like Monday morning, a lot of emails get sent and then Friday afternoon, fewer, and then on the weekend, none. And so, there's going to be an inherent sort of fluctuation to the data. And so then what is normal? What does the baseline look like? And then how do you do anomalies around that? Because inherently, there's going to be noise in the data. And so is it a 10% band around the normal? And I'm just saying a lot of words now that I barely know the meaning of. But it's one of those things where it's like, oh yeah, just let me know if it's behaving abnormally. There's so much in that one little sentence. And it's one of the like; I love the fractal complexity of this space where every part of that sentence that I just said is like, oh, that's way more complicated than it sounds when you just say that word. So very interested to hear where you get with this. And this is also something that I'll probably be pushing on in my work in the near term. So maybe we can even compare notes, but as of now, I just have, I think, buzzword-level knowledge of it.
STEPH: Well, I love that phrasing fractal complexity because yes, that was also where my brain got hung up in starting to think about this process and like, well, what's normal? I don't actually know what normal looks like because I haven't been tracking this until now. So do I go back a week and say, "Okay, let's compare our average sent rate to in the past week and try to define normal in that timeframe?" And I think the answer, for now, is to do the smallest thing but also has the biggest impact, and that's to notify the team if messages just stop. That feels like the first, small step to take, and then we can fine-tune. Do we want to know if suddenly successful messages are being marked as a failure? We have an increase in failed messages versus successful messages. But I think the first iteration is just to know or to confirm that we are sending messages and send us an alert if suddenly we're not sending messages for...ooh, I just realized there's a complexity in that statement too. It's like, how long are we not sending messages for? Is it for an hour? Is it for a day?
CHRIS: I was going to ask.
[laughter]
STEPH: I just caught myself there. Yeah. I don't have an answer to that right now. I have to think about it, but there's an answer there. I will have to choose an answer.
CHRIS: You sure will. And then you'll probably have to tweak it over time. It's also one of those topics where false negatives and false positives are really easy to fall into where the system's alerting too often. And so people then start to ignore the alerts versus it's too cautious before it will send out an alert and, therefore, you're missing things and so finding that optimum level. It also might be different days of the week. Aah. [chuckles]
STEPH: Yeah, I think that's very true. It will be different for different days of the week. So I have a lot more to think about in regards to how we're going to report on this. But that still feels very much like something I want in the world because right now, it's a lot of spelunking and production consoles to find out what's going on with the data and making sure that it's going through. And that feels like the least favorable option as to the world that I want to live in.
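As a rough sketch of the kind of instrumentation Steph and Chris are describing, using a StatsD-style client (the statsd-ruby gem here; the class, host, and metric names are all assumptions), the app just emits counters and the alerting threshold lives elsewhere:

    # Using the statsd-ruby gem; host, port, class names, and metric names are all assumptions.
    require "statsd"

    class MessageDelivery
      STATSD = Statsd.new("localhost", 8125)

      def self.deliver(message)
        UpstreamClient.send_message(message) # hypothetical: whatever actually sends the message
        STATSD.increment("messages.sent.success")
      rescue StandardError
        STATSD.increment("messages.sent.failure")
        raise
      end
    end

From there, whatever dashboard or alerting tool is on hand (CloudWatch, Datadog, Papertrail, and so on) can watch the failure-to-success ratio, or simply alert when the success counter goes quiet for too long.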
Oh, on a completely unrelated topic, I saw an article that I'm very excited to read. And it's not related to technology at all, but it looks like a very delightful article that someone wrote and titled My 14-Hour Search for the End of TGI Friday's Endless Appetizers. And I haven't read it in-depth yet, but I just read the first bit, and it seems like it's going to be delightful. But I thought of you because we've had previous outtakes around mozzarella sticks. And you were very excited when you thought thoughtbot had mozzarella sticks, the actual fried kind versus just the healthier cheese stick kind. So this seems like a thing that you'd enjoy.
CHRIS: I feel like it may have even ended up in an episode, and we talked about mozzarella wedges and the ratio of surface area to volume.
STEPH: Yes.
CHRIS: I don't know if that made it into an episode or not, but we have definitely you and I discussed mozzarella sticks before. And I'm definitely intrigued by this article. I will add it to Instapaper immediately and then probably never read it again because Instapaper is where I put things to forget them. But maybe someday I'll sit down with a coffee and read things.
STEPH: I've heard you mention Instapaper before, and I've looked into it. And I don't know why, but it just hasn't stuck for me. So I always throw anything that I want to explore or something that is also critical for me to do. I use Todoist. I don't know if you're familiar with that app, but that's my go-to.
CHRIS: Well, I'm familiar with Todoist. I take a slight line between my to-do list, which I want to be as, I don't know, clean and tidy and only the things that I have to do versus for me, Instapaper is a list of when I get around to it when I've got those ten free minutes, which apparently don't exist in the world. But when I have them, this is the list of things that I can read. But I think I've heard this from a number of people of having a more integrated system that all the stuff's in the same place. I keep my to-dos in Trello, also as an aside, and I'm not super happy with that. How do you like Todoist? Is it bringing you joy?
STEPH: I really like Todoist. I find it is simple enough an interface that I'm not spending a lot of time customizing it or messing around with it. I can just go there and log the things that I want. I can create individual projects and spaces as well. So if I want to separate my personal to-do list from my work to-do list or if I have a project, that's a really nice feature as well. I think my only small complaint is if I'm writing a date or if I'm writing tomorrow, Todoist will try to do the smart thing and say, "Oh, I'm going to add a due date for you since you mentioned a date." And I'm like, no, no, no, I don't want a due date. I just want to mention the specific date because somehow it's relevant. And undoing that is sometimes a little tricky. But otherwise, I have found Todoist very helpful. I'm a big fan. Also, you and I are slightly different creatures in terms of how neat and tidy we keep our spaces. I think how we both manage our email inbox is a really good indicator of this where you are more organized than I am when it comes to emails. And so, our to-do list might be similar. I'd be interested to see if Todoist fits your needs or if it doesn't offer enough structure.
CHRIS: I almost certainly could make it work. And it's one of those things where I've actually settled on Trello, which is a very loose tool. And so I've been able to shape it sort of to what I want, but it doesn't really have that many true productivity-type features. It's just a loose board where I can drag around cards and move them through. And that's worked fine, or I've been able to talk myself into not trying to be as neat and tidy and intentional with my to-do list, which I think has been good overall. I've looked at Todoist in the past. And the thing that gives me pause sort of related to what you were talking about with the date things, but I get the idea, or I get the sense that Todoist really, from a fundamental philosophical approach, really wants things to have dates and to have priorities, and my thinking is not quite that. Like, there is a priority, but it's relative. So it's the order of things in a list, but it's not this is a one, and this is a two, and that's another two. I find that logic of like there are different tiers of importance doesn't really map to my world, nor do dates. Almost everything I do has no date, has no context. It's just like when I'm at the computer because that's the only place I ever am. So it's when I'm at the computer, it's all kind of important-ish. Nothing really has a date, but it should probably be done pretty soon. That sort of stuff doesn't quite map to what I see in Todoist. So I've always found a little bit of a mismatch between what I think I want and what Todoist, as far as I understand, provides. I know they added Kanban-type boards recently. So I think that might help with just visualizing workflow and being a little closer to Trello, which I'm familiar with. But I'm sort of on the search right now for another to-do list.
I like what you said about being able to separate the work and personal because that's definitely a thing that I would want, although there's always the added complexity of whatever tracking tool that we're using as a team at work and which things go into my list versus that list. And do I try and synchronize them in any way? And then I do what I do, which is I start to imagine this ridiculously complex, fully integrated, bi-directional syncing nonsense system where like, never mind. Stop it. Pen and paper, Trello. I don't know; you’ve lost your privileges, though. This is me talking to myself. I lose my privileges much like I'm not allowed to ever try Emacs. I have had a multi-year moratorium on exploring new productivity tools, but I think maybe, just maybe, now is the time to revisit that.
STEPH: If you ever disappear for a week or two, I'll know that you tried Emacs or something like that happened.
CHRIS: [chuckles] My beard is three times longer when I come back, and I'm like, "All right. I figured some stuff out, though."
STEPH: I'm with you in regards to trying to bucket all of your to-do items as if it's a priority one, two, three. I am not good at that, and I'm always wrong. So I've also given up on that system. I would describe myself as a minimalist user. I'm using all the basic functionality. I'm not leveraging a lot of the stuff that Todoist probably can do for me. And so I have a very just flat list of things that I'd like to do. I do have a couple of projects because I do try to have that personal versus work, and maybe I have some other project that's on there as well. And then, in my mind, I try to avoid due dates unless it's really important. Although I say that if it's really important, it's going on my calendar too because I'm going to budget time for it or make sure that I don't forget it. But then each day I go through that full list, and then I pick the things that need to be done that day or that it's reasonable to get done that day, and then I kick everything else to the next day. So that way, I'm always reevaluating a fresh list of what do I need to tackle? What's reasonable for today, and what can I punt on? And Josh Clayton said this to me before, and I really liked it in terms of punting on work because typically, when you're really busy, something's always going to drop. You're always going to push something to the next day. So then it's just figuring out what's going to bounce and what's going to break. So I'm always looking for what's going to break, and let's prioritize that for today to make sure it gets done. If it will bounce, then I'm going to kick it to the next day, and I can't see it until I'm going back through that full list again.
CHRIS: I really like that framing around you're going to have to drop things. That's just the nature of life. There's always more to do than there is time. So will it bounce, or will it break? And that framing around how to decide which things get moved out. Interestingly, I just looked it up because I wanted to know, does Todoist support snoozing things? Which is something that I use constantly in Trello and Gmail and basically everywhere else. I'm just like, nope, future me problem, future me problem, and I just keep pushing things into the future. But critically, I want them to be hidden until that time. And it sounds like in Todoist, you can set a future due date, and then it'll show up in today. But again, that's sort of conflating how I think about productivity and whatnot.
Also, I found…this is a Reddit post that I'm looking at where I'm determining this. And there is the question, and then there's someone who answered, but the answer is deleted. And then there's someone replying to that saying, "Wow, what a thoughtful response. Have you written this up anywhere else, like a blog post? You sound like an absolute pro." But the parent comment, which apparently was beautiful, and articulate, and well-written, has been deleted. And this is the sadness of the internet. There's a really beautiful xkcd about how the saddest thing you can see is when you search for a question, and you find a Stack Overflow post from 10 years ago with one person asking the question and no answers. And you've got one other person out there in the world who cares the same way you do, but you have no answers, and it's sad. But I'm just sad about the loss of information.
STEPH: That's so tragic, or that's a really pro troll move. And you leave a comment, and then below, you're like, “Wow, that was amazing. That was beautiful.” And then you delete your own previous comment. So then you're just tricking people into thinking there was an answer.
CHRIS: It does sound almost performative, especially the last line, "You sound like an absolute pro." So I could see that being the case. And you know what? I'm going to choose to believe that that's what it is because then I can sleep better at night. So thank you, Steph.
STEPH: Happy to help.
CHRIS: But I think we should probably move on to perhaps a listener question or something. But before we do that, I do want to ask if anyone out there has a to-do list that they're using and they love; I would love to hear about it. I think I'm familiar with most of them, but votes of confidence from the listeners of this show will certainly go a long way with me. Because I think you folks are all very smart people. I mean, you're listening here, so, obviously.
STEPH: Yes, obviously. This very deeply intellectual show about mozzarella sticks and the ratio of cheese to fried and what's the best.
CHRIS: It's an important question.
STEPH: It is an important question. I have strong feelings about it. That's why we've talked about it. [chuckles]
CHRIS: On this very serious show that we host.
STEPH: [chuckles] Yes, we have an awesome listener question that I'm really excited to dive into. But before we do, I have a quick git thing that I'd love to share. It's a tip that Dimitry, another thoughtboter, shared with me today that I think is just really nice and something that I have not used before. And it's specific to a workflow where you need to grab a file from another branch or from another commit and then bring it into your current branch. And there are a couple of ways to go about it. One of them is you can do git checkout main and then pass the file, presuming the file that you want is in main and you want to bring it to your current branch. And that will copy over the file to that exact location.
But if you wanted to grab a file that's on the main branch but then you want to port that file to a new location, then you can use git show and do git show branch. So let's say you're bringing a file from main over to your current branch, so it would be git show main: and then the path to the file that you wish to copy, and then the greater than sign and the path to where you want that file to live. So you can grab that file and then stash it in a new location, and you can also do it for commits too. So if someone has pushed up a commit and you want to copy a particular file, say if you need to bring in some of their work into your branch, then you could also do git show commit, and then that colon, and then the path to the file. And then, if you wanted to move it to a new location, you can use that greater than sign and then the path to where you would like that file to live. So it's a nice combination of the git command of git show and then also shell redirection. So then, you can pipe that content from the file that you wish to copy over to the new location that you would like. And it's not something that I've reached for very often, but I find lately I've been in a mode where I'm trying harder and harder to stay within my terminal and not have to jump over to GitHub or to external UIs if I can. And so this just feels like a nice additional tool where then I can use this one more thing where I don't have to either...I guess it's small. I could check out main locally. But even with this way, I don't have to switch branches, grab something, and bring it over, or I don't have to go to GitHub and then look for something. It feels like a nice way that then I could grab that file locally and bring it over to my branch.
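For reference, here's a rough sketch of the commands being described, with placeholder paths standing in for real files:

    # Copy a file from main into your current branch at the same path:
    git checkout main -- path/to/file

    # Copy a file from main into a different location on your current branch:
    git show main:path/to/file > some/other/path

    # The same idea, but grabbing the file as it exists in a specific commit:
    git show <commit-sha>:path/to/file > some/other/path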
CHRIS: That's a nice combination of tips there. Like you said, a bunch of different pieces at play, but that is definitely a super useful thing. It's one of those that I've not gotten that into muscle memory yet or even close to muscle memory. Git is complicated in terms of the interface that it provides, at least at the command line. I've been trying to make sense of it all and then trying to find what are the useful workflows that I want to build? Because you can do anything, and you can do most things in five different ways. And so finding that set that you do want to know deeply but then also getting that committed into your hands, not even into your head, is the thing that I strive for. But that particular one is one that I struggle with every single time. So especially, I think you broke that down really nicely, so it makes sense.
There's a corollary in Fugitive for any Vim users out there. There's a Gread command, so it's capital G-re-a-d. And then after that, it takes some identifier, and I've never gotten the identifier right. But as you just described it, it's the same as the git show sequence. So it's a commit or a branch name, colon, and then the file path that you want. And then, in Vim, you can use % to reference the current file. So I've tried really hard to teach my brain Gread main :%, and somehow, my brain doesn't want to remember that ridiculous sequence of characters. So, only in this moment am I like, oh, it all kind of fits together.
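For Vim users following along, the Fugitive incantation Chris is describing looks roughly like this; it's the same rev:path form as git show, with % standing in for the current file's path (double-check Fugitive's help for the exact syntax if it doesn't take):

    :Gread main:%    " replace the current buffer's contents with the version of this file on main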
STEPH: Oh, that's nice. I am a Vim Fugitive user, but I didn't know that one. And I'm with you; I rarely remember all these off the top of my head unless I've done them like a hundred times, and it finally starts to sink in. So I always have a cheat sheet, or since we were talking about tooling earlier, I use Notion to capture tidbits for myself. So this is a place where I would probably stash in a web development folder that I have. And it's just a tip to my future self as to like, hey, remember when you were trying to do that thing, and then you had to look it up and figure it out? Well, here's how you did it, so then I can revisit it in the future.
CHRIS: I thought a number of times about introducing a flashcard system to revisit these sorts of things. Gary Bernhardt, who I had on a while back now, is building a platform that does this essentially for TypeScript and regular expressions in JavaScript arrays and a bunch of different topics. But it's got built into it the idea of spaced repetition, so you review a thing and then three days later, you review it again and then seven days later, and then ten. And there's a particular sequence to it, but it helps you to really internalize that knowledge. I've never gotten to the level of going to that, but I like that idea of being purposeful and trying to commit some things to memory because having them at your hands and being able to stay, like you said, in the terminal and closer to the work and not having to break out of the context, I do find a lot of value in that. But it does take some effort to build that up. So I've never quite gotten to that flashcard system myself.
STEPH: Yeah, that's interesting. I think I have mixed feelings about it because, on one hand, it is nice to commit some things to memory. And on the other hand, I'm totally cool with having a way to organize stuff so I can easily search it and find it later and not use up memory space for something that I don't use often enough to really commit it. So I could definitely see it being useful. But I'm also okay with just having a nice way to search for it.
But pivoting a bit and circling back to the listener question that you alluded to earlier, we have a listener question from Jen, and Jen wrote in about knowledge silos across different projects. Specifically, Jen wrote in: "Hello, Steph and Chris, first of all, I want to say that I have loved listening to your podcast for multiple years now." That's awesome. Thank you, Jen. "I like how you both share things along your week and fill the discussion with so many useful things and findings. Our team, which consists of three pairs, is currently working on two different projects. And due to that fact, we are creating information silos. Now we are looking into ways to minimize those information silos. And do you have any ideas for how we can achieve this? Some additional context: we're unsure about switching pairs, as this will make it difficult for the new person to get up to speed. And currently, we are thinking about having a mob review session. But of course, with those, you only get a limited overview." All right. Well, thank you, Jen, for the question. I'm excited for knowledge silos because, I'll be honest, I am guilty of this one right now. I am a bit of a knowledge silo on my current project if we're telling our truths here on the show today.
CHRIS: Steph, I thought I knew you.
STEPH: You know, I'm full of surprises.
CHRIS: Aren't we all at various times? This really does feel like one of those core things that I associate with you, though. So it is interesting. But it's so easy to fall into this space. I think without purposeful, intentional effort, this is the natural way things will trend. It's so much easier for the person who understands a portion of a system or an entire system to take on the next piece of work for that system. And I think we can probably offer some specific advice. But to talk about it more generally, Jen, I think you've found yourself in the pretty common position of there isn't a great answer here. There's going to have to be an investment of some amount of effort; some potentially decreased productivity for a period of time in order to get out of the situation that you're in. But that's just the name of the game. So if we name it as that, and we say that, then the question becomes how much effort do we need to put towards that, and what are the different ways that we can do it?
So to go through the two that you listed, mob review sessions, I think can be a great way to give an introduction to a project, but I think they'll very quickly taper off in my experience. So I think it's a great way, especially if you're going to do any more formal things after that; a mob review or even a mob overview of the system is a great way to introduce new folks into it. But then from there, I personally would think that if you are feeling pain around the knowledge silos or even if you're not, because frankly, knowledge silos can very quickly become a major problem, say if someone needs to...if someone happens to leave the company or if someone needs to take some time off, anything of that nature, this is one of those things that can be fine until it's not, and then it's not in a very serious way, and that's the wrong time to try and resolve it. So I would very much be in favor of more purposeful things.
As you described, switching pairs is an interesting one. I think that's a cost you're probably going to have to pay. I am interested; the way you're talking about it, it sounds like your teams are paired up consistently, so you're working exclusively in those pairs, which frankly is a really interesting thing. I think it was the previous episode where Steph and I talked about agile and particularly 100% pairing, and that's a pretty intense idea. It also does potentially lean towards this. Now, each of those groups of people, each of those pairs is collectively aware of the same subset of the application. But now, if you were to split that up and you have six individuals that pair in varying sets across the different projects, you have this sort of Venn diagram tapestry of knowledge of the different systems and the subsets and the features. And for that reason, I actually would probably question, at least if I'm correctly interpreting it, that you have three consistent pairs; maybe you shuffle that up. Maybe that's a practice that should be unwound. And now the pair should rotate on a daily basis or something to that effect. But overall, I think this is a cost you're going to have to pay but will pay off longer term. And it's definitely worth doing in my mind. But yeah, that's some high-level thoughts. What do you think, Steph?
STEPH: I agree with all of those sentiments very much. And as you're talking about the cost and investing in the team, I think that's very true and something that needs to be done. The fact that they're working in pairs is already reducing knowledge silos because you at least have another person. Because I have been part of teams where there's one person that is that knowledge silo. So at least here, we already have two people that are aware of how code works and then why code was implemented in a certain way. So then, to categorize how painful that knowledge silo is or how risky that knowledge silo is, I think there are really two ends of the spectrum. And on one side, there's that example that you alluded to a little bit ago about isolating one developer on a project for six months, and they have minimal code reviews. And then suddenly that person leaves, and that's the hardest silo to then rectify. And it will probably be a lesson that stings enough that hopefully it won't be repeated where someone gets that isolated and then others have to figure out what was going on while that person was working on something independently.
And then on the other side of that spectrum is you need to take some time to explore and understand a portion of the application that you haven't worked on before, or perhaps it's you need to understand how to work with an internal API. And stuff on that side of the spectrum feels more addressable with documentation and also mob reviews. And maybe there are also demos as well because a lot of the knowledge that goes into building a product may not be specific to the code, but it's more why was this done, and why was it built, and why did we go this way? And that feels more addressable with documentation, with commit messages, with those mob review sessions, and also with demos where then you can show the high-level functionality of a feature that's being implemented. So then, even if everyone else on the team doesn't have the technical knowledge as to how it was built, they'll have more of the user context, and the product context as this is a feature that we built, and this is why it's useful to the world. I find a lot of that knowledge is what's harder to capture because then you'll find a feature and wonder who uses this and how is it in use? And that stuff is harder to backtrack.
Circling back to something that Jen called out in their question, highlighting that it takes time for someone to get up to speed. That's a really interesting one for me because it goes back to the idea of wanting to know, well, what's difficult? Not specifically what is difficult, but let's define difficult and what's a reasonable level of difficulty because onboarding to any application or onto a new section of code is always going to take some time to process and understand. But what's an acceptable timeline in which someone can onboard and be productive? There's something that I've heard from someone at thoughtbot. I don't have the exact context to quote them directly. If I find it, then I'll be sure to add it to the show notes. And they shared that another company is measuring this difficulty of onboarding by taking the person's start date and then tracking to see when that person has merged in 10 PRs because they are looking to see how long it took for that person to get up and running to then feel comfortable, to then make some contributions.
Often, your first couple of PRs might be something that's less challenging. It might be something that's updating the README because you are going through that onboarding process. And that's a great time to then reevaluate how clear the instructions are. But by the time you get to the 10th PR, you've probably addressed something that's a bit meatier and impactful to the product. And then they use that as a metric to then calculate okay; how well are we doing? Is it a month? Is it six months until someone gets there? How complicated is the application is another way that you could look at that metric to say, "Well, if it takes people a very long time to get there, maybe it's because of the codebase versus processes." And I really like that thinking of we have knowledge silos; let’s think about where it's actually hurting us. And then, if we think it's specific to the onboarding process where that part is painful, then let's break down how we can measure how difficult it is, and then look for ways to improve it but then also track that improvement.
CHRIS: Well, I like that idea of trying to quantify and measure onboarding. I've heard a lot of organizations having like, "We want you to ship a PR on your first day," that's a meaningful thing. But obviously, that first one will probably be pretty small, and it's sort of getting that first one out of the way, if anything. But it's not truly representative of someone being able to comfortably work within the repo, but ten, that starts to feel like a real number. And I do like quantifying it. More generally, I'm intrigued. Metrics around developer productivity is such a difficult thing to pin down. And it can, I think, become really complicated, especially if you're looking at individuals and trying to say, "Well, you had four PRs, but you had two PRs," and comparing individuals. But I do really like the idea of more aggregate stats of on average; right now last month, we were doing 1.2 PRs per week per developer, and now we're down 2.7 PRs per week per developer, something like that, and seeing that looks like something that we might want to address. Are there fundamental things that are happening that are causing development to slow down? Are we doing bigger PRs, et cetera? And starting to look at that, but try and have a metric to keep an eye on that. So I'm super intrigued by that and then again, more specifically to the onboarding one that you were talking about there.
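As a concrete illustration of that time-to-tenth-PR idea, here's a small hypothetical Ruby sketch; the data shape and numbers are made up, since neither Steph nor Chris names a specific tool or implementation:

    require "date"

    # Given a developer's start date and the merge dates of their PRs,
    # how many days did it take them to merge their 10th PR?
    def days_to_tenth_pr(start_date, merged_dates)
      tenth = merged_dates.sort[9] # nil if they haven't merged 10 PRs yet
      return nil unless tenth
      (tenth - start_date).to_i
    end

    start_date = Date.new(2021, 6, 1)
    merge_dates = (1..12).map { |n| start_date + n * 3 } # pretend merge dates
    days_to_tenth_pr(start_date, merge_dates) # => 30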
Actually, popping up a slightly higher level, though, I think both you and I sort of jumped into this conversation as, like, yes, knowledge silos got to fix those, that's a problem. And I do feel that way. This is a topic that I feel pretty strongly about and pretty clearly about that knowledge silos are the natural state that things fall to, and it's not a good thing, and we want to avoid it. But it is important to ask the question of who is deeming this to be a problem and for what reasons? And we had a good conversation two episodes back in response to a different listener question about consulting versus building product. And I feel like, with this, we can almost go up to the consulting level of this can be a problem, but it also maybe isn't. Or, who believes it's a problem? Is it management thinking, "Oh no, when that person went on vacation, suddenly everything ground to a halt? This is a problem, and we need to resolve that." Or is it the development team themselves saying, "Hey, we feel like we're a bit siloed here, and that's a problem we're recognizing," but they don't have buy-in from management. Or worst case management saying, "This is a problem, but you get no time to resolve it." As long as everyone's in agreement of the potential benefits and aligned to this is a thing that we would want to improve, and then also aligned to there will be a cost to resolving it, that it's not free to try and unwind this siloing of knowledge, then I think everything can be great. But any mismatch at sort of any level of that either on the cost or the benefit side can be problematic. And so getting to the point where you've had a clear conversation that defines this and gets everyone to come to an idea of yes, we think it's a problem, and yes, we want to put in the effort to resolve it, then I think you can move forward and tackle any number of different approaches. But I think you have to start from that conversation.
STEPH: I love asking that question of how has this manifested into a problem or a concern? Because you just highlighted a really great example where if it's only a concern because someone was on vacation and the team couldn't respond to a customer request or couldn't respond to an outage, then there are different ways to address that. So documentation may not be the best way to help out with that. That's probably a pairing session. So then someone can respond quickly to an outage versus you don't want to say, "Okay, here's a couple of pages of documentation," and then have that developer go on vacation again, and then there's an outage, and you're trying to read through those pages to figure out what's wrong. So figuring out the right approach based on the pain that's being felt feels like a really great way to go about this. Because frankly, breaking down a knowledge silo is always going to have a cost. So you want to make sure that you're being as cost-efficient as possible with your approach and then addressing the root concerns and making everybody's lives better.
Because I do think there's some knowledge silo that's appropriate. And I think silo may be the wrong word, but someone who is more skilled or an expert in the area or has more context for a particular area of the application. Because applications can get so large that not everyone's going to know everything and context switching between all of those can be really challenging. So I think it's very natural that you're going to have different people that you go to around a certain feature. If there is some lofty feature around search and you know a particular person that has worked on it for a while, then you go to them, and that feels like an appropriate level of knowledge that someone has obtained. And I wouldn't classify that as a silo at that point. But then if you do get to the point where that person went on vacation and then search broke, then you can start to revisit okay, maybe this person does have too much context, and then we can offload some of that context to someone else.
CHRIS: There was a phrase I used earlier of like a patchwork quilt, but I think that's not quite the right image. There's an image in my mind of little islands of color that are fully separated; that’s bad. And then there's a version of more like a Venn diagram overlap where each of the colors sort of bleeds into the other ones, and I think that's good. But then the perfect overlap where it's just one big blob of brown because all the colors are the same, that's bad. And I think that's what you're highlighting is like, you don't want to go to that. You don't need the perfect overlap of everyone having a complete shared knowledge set. I'm trying to make word pictures over internet radio. So it's probably not going great, but it's something to that. Like, there is an optimization here, and I think the way to find that is by starting from what are the pain points? What are we feeling that is less than optimal? And then coming up with solutions that directly address those pain points, not generically try and target like knowledge silos bad. And retros are a perfect way to do that. So if you listen to our previous episode where we talk about the virtues of retros and other agile philosophies...This is great. I feel really good about being able to reference previous episodes. I think we've talked about good stuff in the previous episodes.
STEPH: You've been on fire with this episode. I think you've referenced at least two, three episodes at this point. [chuckles]
CHRIS: Yeah. Wow. Well, I mean, we're at 300 now, so we've got plenty to go back to.
[laughter]
STEPH: We've got plenty of content to reference. I think you and I do have an advantage here based on our experience where we have had to join a number of projects. And then we know our time with that project is predetermined, and we want to make sure that we don't take any knowledge with us. So something that you and I have acquired as a skill is seeking knowledge when we first join a project and asking a lot of questions around how the application works and then understanding more about the intent of different features, and then knowing where to dive into a codebase to then make fruitful contributions. And I think there's a similar approach that can be taken when trying to break down a knowledge silo: the person who is that silo may be in a spot where they're having trouble communicating all that information and then dispersing it to others. So then us, as their teammates, can go to them and try to ask those types of questions to then help ourselves level up and then recognize areas that don't feel documented. And maybe it's adding documentation, maybe it's adding tests, or maybe it's doing a demo, maybe it's recording something about the feature and then sharing that with the team. But then you can be an advocate for that person who is in a silo position to then help them share that knowledge because they may be too far down that path where they don't recognize what they know, and other people don't. I don't know if that's directly related to being a knowledge silo, but it's just an additional way to approach helping break down a silo when you recognize that one does exist and to look for ways to then help that person communicate and distribute their knowledge.
CHRIS: Yeah, I think you're describing a distinction between a push versus a pull. It could be incumbent upon the person who has the knowledge to try and push it out to the team. But often, they're going to be perhaps a more senior person. They've got code review to do. They've got other meetings, and planning, and things, and they just may not have the time. But is there a way that other team members can proactively pull that information from them and help them find the moments that will clarify that? So, yeah, broadly, as a team, let's rally around the desilofication of the whole adventure.
STEPH: That's exactly what I was going for is that push versus pull mentality and how we can break down the silo from both sides. So thank you, Jen, for that wonderful question. I hope we gave you some helpful ideas and suggestions around addressing a silo and then also identifying the pains that you're feeling so that way you can find the most cost-effective approach. But on that note, shall we wrap up?
CHRIS: Let's wrap up.
STEPH: The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed on Twitter. And I'm @christoomey.
STEPH: And I’m @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeeee!
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Let's talk about Agile! What is it, what do we like, what do we not like?
In this episode, Steph and Chris discuss what Agile is, what they like and don't like about it, and also hit specific topics and practices like Scrum, Kanban, and Extreme Programming.
Transcript:
CHRIS: I feel like we should try a couple of different byes just so we have sort of a smorgasbord of options, and then we can pick the best one.
STEPH: With countdowns, [laughter] because I do so well with countdowns.
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, I thought we would try maybe something a little bit different this week, a little bit more of a structured topic. In particular, I've been gathering little tidbits of information. I've been seeing conversations happen all around the topic of Agile, things that people like about Agile, things that people hate, mostly it's things that people hate about Agile. Lots of ire on the internet about Agile, but I think also some disagreement about what it actually means. And I think; generally, you and I are probably fans, so I want to talk about that. What parts do we like? What parts do we not like? What do we think Agile actually means or, at its best, maybe what it means? But yeah, let's start at the very top stuff. Steph, what do you think about Agile?
STEPH: I am generally a fan. I'm with you. And yeah, the internet being full of more negative remarks and ire, that sounds very true. But generally, I am very much a fan of Agile, and the very broad scope of this is how we work, and this is how we plan our work, and this is how we collaborate as a team, and then how we reflect on the work that we have completed. I can also pick apart some of the things I don't like about Agile, but in the broad umbrella definition, I'm a big fan. I've enjoyed that approach. Granted, I've also only ever used Agile. I haven't written software using a Waterfall style, at least not purposefully. And then if I have encountered a team that was using more of a Waterfall style, then we changed it quickly. I really only have known the more Agile approach to writing software.
CHRIS: I think that's largely true of me as well, where most of my work would fit somewhere under the umbrella of lowercase "a" agile, although I've tried variants of Scrum and Kanban and a bunch of other things that we'll probably chat about today. But I think in general, I find that things are most effective, things seem to move the most smoothly, and the software that we come out with is the best when it's closest to those very simple ideals of Agile. And every layer of process that gets added on seems to take away from that a bit, even though, like you, I've not done true Waterfall where it's like six months of requirements gathering and then it gets handed off, and no one talks for a while. I've never done that.
STEPH: I have to interject because I actually think you have in a previous life when you were an engineer. You have done the more Waterfall. Like, you have to plan very far in advance.
CHRIS: I think this is one of those cases where people think "engineering" quote, unquote like mechanical engineering is one thing and it's actually...there is a little more structure, and there's a little more necessity of sequencing where you've got to figure out what you need to buy first because sometimes it takes a while to find the particular piece of metal that you need in the world. But it also has a lot of figuring out as you go and being like, well, we've got a bunch of stuff, and we're just going to figure it out. And also, this is something that as I was studying software while working as a mechanical engineer, I started to hear about this whole Agile thing, and I was like, huh, I wonder how I can bring more of that? Because I definitely saw cases where a more Waterfall-centric approach to engineering projects was leading to bad outcomes. It's like we decided upfront what we're going to do, and then we went away for six months, and we did it. And then we came back, and it turned out it was wrong. So that was solvable along the way. There were ways to build prototypes and things like that. So that is definitely a part of the mechanical engineering world.
Although I think there are some true constraints, but I think there are also some occasionally self-imposed constraints, but again, I see sort of the same thing in software. Anytime that we can shorten feedback loops, that's what I like. And I think that for me, that's the core of Agile. Specifically, to come to the Agile Manifesto, to start at the very top, the thing that kicked it all off is a very simple document that the first line of it is "We prioritize individuals and interactions over processes and tools." It's like, yeah, that seems like a great thing, having more regular conversations about the things that we're building rather than having those initial conversations. And everybody goes away for a while and tries to build that thing, and then they come back, and hopefully, the thing that they've produced actually solves the problem. But I think almost always there are some deviations like, oh, actually, it would have been better if it was like this or now that we're actually trying it in the field, it's fundamentally different. So in that way, I think there's actually a lot of commonality between mechanical engineering and software development.
STEPH: Okay, that makes sense. Yeah, I was thinking around the process of where you'd have to order stuff in advance versus for us; we can describe everything that we need as we need it unless we're having to procure some specific software or licensing. But otherwise, we don't have to wait on that shipment flow to then have our goods. And then, if we also mess something up, then we don't have to reorder more pieces. But I like how you started talking about that agile with lowercase "a" and then talking about the manifesto because I suspect most people are familiar with Agile, but it wouldn't hurt just to read off some of those top things about what Agile is so that way we're all on the same page together for this conversation.
So you already covered the first one that talks about individuals and interactions over processes and tools, and then the others are working software over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. And that's it; those are the aspects of Agile. And so then, circling back to what you were saying earlier where people are having more criticisms around Agile, it sounds that it's less about Agile, and it's often more about the implementation of these ideas and then how you're approaching them. Because, boy, do we have several ways to implement Agile. We have Scrum; we have Kanban; there’s Extreme Programming. Does that fall within the Agile umbrella? I think it does.
CHRIS: I believe so. And I think a lot of the things that people take issue with particularly come from Scrum and Extreme Programming when they're taken to their extremes. Yeah, it's right there in the name, so you should probably know that it's going to be a little out there. But taken to the extreme and especially where it becomes rigid and dogmatic, then it becomes a problem. But again, so we listed out now the four items that are the core of the manifesto. There is a separate part of the manifesto, which is the principles, which digs in a little bit deeper, but it's still very much in that same ethos. But I do want to highlight because there's a subtext to the Agile Manifesto that I really love, which is given there are things on the left and then things on the right when the Agile Manifesto was presented. And so it's like we like individuals and interactions, that's the thing on the left, over processes and tools. And so the subtext below it is: that is, while there is value in the items on the right, we value the items on the left more. And that's one of the things that I love about the Agile Manifesto is it's not this very rigid thing that says, "This is good, and that is bad," it is a statement of a preference of well, yeah, it's definitely good to have comprehensive documentation. That's a really nice thing to have, but it's incredibly difficult. And if we have to choose, we're going to choose working software. We're going to prioritize that well before we have comprehensive documentation. So I really love the juxtaposition, and the emphasis on it's not that one is good and one is bad of these two things that we're comparing but that we have a preference, and that we want to orient our work around the items on the left rather than the items on the right, which I think the items on the right are more traditional or were more traditional to the Waterfall approach.
STEPH: I like how you highlighted how those statements are presented to the reader. So then that way, as you mentioned, we still value what's on the right, but we favor more what's on the left. So one of the things that I saw recently was something that you shared with me in regards to where you're bringing up the idea of, like, hey, let's talk about Agile. And you shared with me a clip or a specific tweet that linked to a clip from the Go Time Podcast, which is a podcast that I hadn't listened to before. But I listened to that episode or at least part of it, and it's really delightful. I enjoyed listening to them very much. And they had Kris Brandow on the episode. And at the end of the episode...and they do something really fun where they ask the guests, "Do you have an unpopular opinion that you'd like to share?" And one thing that I like about their unpopular opinions is they often have polls afterwards, and they want to see was this truly an unpopular opinion? And if most people agree with you, then they actually consider it nope, he didn't win. I don't know if they use the word win if you didn't achieve the unpopular opinion. And in this instance, Kris shared the opinion that Agile is done and over with and that we should move on, which is a big thing to say. Everyone on the podcast reacted in a similar way that I would where it's like, well, how do we track things? And there are still things that we need to care about. But then also, there's a part of me that's just like, yes. I am not sure where Kris has heard it just yet when I heard that, but I'm already tuned in and very interested.
And one of the things that Kris said that also really resonated with me is he mentioned that "I've never worked on a team where Scrum specifically like sprint and story points functions well," and I absolutely agree. There are parts of Scrum, we can get to the specifics that I think are fine that I've certainly used in the past and that have worked. Story points resonate deeply. I very much agree that story points are something that I do not enjoy using, and I do not find that they really lead to building software. There's even a blog post that I published along with Matt Sumner, a former thoughtboter and guest on this show, where we talk specifically about story points and some of the concerns and issues that we have with using story points.
CHRIS: It was actually also the first episode where you came on as a guest on The Bike Shed; that was the topic that we dove into because it was so near and dear to our respective hearts.
STEPH: That's right. I forgot about that. So yeah, story points are certainly up there on my don't list. I feel like we're doing a fashion do and don't, but we're doing the Agile do and don't list. [laughs]
CHRIS: I kind of like that. Yeah, we should lean into that vibe. But yeah, continuing on with the poll there, it was interesting to see also, like you said, they tweeted out, and then there's the poll that comes after. And it was 64-ish percent of folks agreed that Agile's time is over and done with, and we need to move on. Granted, it wasn't a huge sample size. It was like 85 people that took the poll but still, seeing both the statement and then also the general support from folks on the Twitter, it was interesting to see. So I do have the question of like, well, okay, if not, what else? And I share your sentiment of we should be able to ask questions and iterate. And nothing is so precious that it can't be replaced by something else that's better. So we always need to be trying to find the best ways to work. But again, I think there are still kernels of good stuff in the Agile. So I found this and I was like, oh, this is interesting. What's going on here?
STEPH: So I'd love to dive into some of the specifics around Agile to understand what are the bits and pieces that work for you and the bits and pieces that don't. So if we are taking our Agile approach and reviewing the things that do and don't work and changing that process, what are the things that you would keep, and what are the things that you would throw out?
CHRIS: Yeah, well, we can dig in, and we can bounce back and forth, I think, on this. But again, there are sort of a few different camps. So I collected together some of the lists of practices associated with some of the different approaches to Agile. So starting with Scrum, which I think perhaps is one of the most rigid, most structured, and perhaps most ire-deserving of the approaches to Agile, one of the first things is sprints or iterations, so the idea of starting...before you begin the work, you sit down, you define how much work you think you're going to take on. There's often an estimation process. Actually, we'll say that because that's maybe a separate idea, but even just broadly the idea of sprints and iterations, which often involve the idea of committing to a certain body of work. And that commitment is always handwavy and loose. No, no, no, we won't hold you to it, but then it's a constraint that's placed on the team. It's an expectation that's set, but it's wildly difficult to estimate software, as we all know. So sprints and iterations, personally, I am not a fan of. I really like a more continuous flow where we're constantly reprioritizing the work to be done. We're constantly measuring against what we built, what we think we need to get out there. How can we get something out in front of users as quickly as possible? But I've not found a ton of utility in the sprint or iteration workflow. But what do you think of that one?
STEPH: Yeah, I'm generally not a fan of sprints, and it has taken me a while to get there. And I feel like I can admit that openly because it is something that I feel like when I first started doing software development, sprints were life. It was how you planned everything. It was how you committed to work. It's how you measured your work. It's how you then looked back to see what you could and couldn't accomplish in two weeks’ time or maybe a week's time, depending on how long your sprint is. But over time, I have realized that I don't like the mentality of sprinting, and that may just be a nitpick on my part, but that is something that I don't enjoy because we write better software when we have breaks. And with the sprint methodology, there's really never that break unless you're going to plan that into your sprint.
And then there's the idea of the upfront commitment, as you'd mentioned, it's one of those, don't worry, we're not going to hold you to this, but can we all commit to this work? And it's one of those you just feel compelled to say, "Yes," to the person who's asking because then you feel like a jerk if you push back and you say, "Well, actually, I don't know if I can, so I'm going to commit to way less." And then that's the approach that I started taking of, well, I don't know. So I'm going to always commit to a little bit because I'd rather overachieve and then deliver more than come in under because I could work really hard, but I've over-committed and then still feel like I didn't reach my goals, and that's a rough feeling. So I found that I was already lowering my commitment there. So then, it felt more appropriate to be in line with that sort of continuous workflow instead of trying to commit to all these features or all these tickets that needed to get done. I think those are the two areas for sprint where it doesn't align with me and where it can work for teams. But I feel like there's always that underlining unhappiness that a lot of us just don't want to talk about because we don't know what else to do other than to keep sprinting.
CHRIS: Yeah, I think you said something about the specific, like nitpicking the word sprint, but I do think that's actually meaningful. It's The Bike Shed, after all; if we're not going to Bike Shed about some words, what are we doing here? But I do think that we're using that word...it's obviously the wrong word; this thing's a marathon. You can't have 26 2-week sprints back to back throughout a year. That's not going to work. That's not how humans work. But any amount that we let that thinking into our head, I think, is problematic. If I'm understanding correctly, it sounds like you've come to a place of comfort around committing to a smaller body of work and then ideally overdelivering. But in my experience, many developers, perhaps even most developers, don't feel comfortable. It's so difficult to say, "Yeah, I know that the login form should take a day. That's what I feel in my heart. But let's be honest, every other time we've done a form, it's taken a week. So I'm going to say a week." It's so hard to do that. And so I think continuously, we end up in a mode where we are failing to meet the collective commitment that we made, and that's demoralizing. That's going to constantly just be a drag on the team, even if they're fake, made-up deadlines that we're constantly setting, that we're constantly not hitting. Just doing that over and over, I think, is really detrimental to the morale of the team, to the cohesions, and the feelings of are we actually doing this work? So perhaps pedantic, but I definitely share all of that.
STEPH: I do want to highlight, as I mentioned earlier, I'm feeling more comfortable that I can under commit and then I can overdeliver, and that is hard. That is something that still in the moment, even today, is very hard for me to do. And it's like how you said, in my heart, I feel like this should take a day, and the heart lies. But on top of that, it's often it's also my ego that's driving me all the time. And with that, it feels like a competitive environment to me where someone's saying, "Hey, can you get this done?" And in the moment, that brings out my more competitive side where I want to say, "Yes, I can get all this done, and I can deliver all the things." When, in truth, that's often not how it's going to work out.
There is one thing I do like about sprints that I want to reflect on, or perhaps it's actually two. And one of them is that we are getting together every so often, and we're agreeing on the important work to be done. And I really like that planning process that is typically coupled with a sprint. So you get together, you review the work, you address any concerns or raise any concerns. And then you could say, "Yes, we all agree this feels like important work." And essentially, we're buying into the work that's getting done, and I really like that process. And then, as an extension of that, I really like how we often then pick themes. So as we are agreeing to the work, we're often grouping together work that makes sense where it's either the most cross-functional or collaborative. We're already going to be in that space together. We're aware of what everybody is working on. And those are the aspects that I really do like about sprint and some of the other styles, that more continuous workflow of where we're always pulling from a backlog. It feels more of a grab bag in terms of I don't really know what I'm going to get next. I don't know how this work has been reviewed or vetted. I haven't really gotten to talk to anybody, perhaps. I'm making some broad statements here. But I haven't really gotten to talk to anybody from the product side to understand this change. And I also don't really know what the rest of the team is working on, so I feel more disconnected from them.
CHRIS: Yeah, I definitely share that, the planning or the meeting where we discuss the work that's coming up and shape it a little bit; I love that. Although it's interesting within the context of Scrum, I think like truly to the letter Scrum; my understanding is there are very discrete meetings, and they each have a distinct purpose. And so there's the sprint planning meeting, there's a backlog grooming, there's a sprint review and the sprint retrospective. And each of those are these four distinct meetings that are happening once every two weeks or so or whatever your sprint cadence happens to be. And the splitting of those becomes interesting. And some of the practices in there, I think, are...I think you and I share not being interested in doing them or not finding them to be super valuable. But I think broadly having some version of hey, let's sit down and talk about the work before we have to do the work, definitely a fan of that. For me, it often can be let's collapse four of those meetings into one sort of thing and maybe have it more regularly or something to that effect. But actually, we'll touch on the rest of those. But if you're good with bouncing from sprint/iteration, I think we've covered that topic well. Let's move on to one that I think we can do pretty quickly because I'm pretty sure I know how we feel, but sprint planning/planning poker/estimation. How do you feel about this one, Steph?
STEPH: We grouped a couple of things in there. There's sprint planning, and then there's sprint poker, and those are different to me.
CHRIS: Yeah. So let's go specific to the planning poker as the most pointed version of it but also generally estimation and sizing of stories.
STEPH: Nope. Throw it out. I don't know how to play poker. Let's just get rid of it. [laughs] I was never a good poker player.
CHRIS: Playing poker can be fun, but planning poker...Well, so actually, to ask a slightly different question, I think in the past we've talked about keeping aspects of it, definitely not keeping the let's figure it out, let's hash it out. Let's get down to an exact point value, and then we know we can have 34 story points a week, and that's what we're going to do. But the version of using planning poker, using this numerical communication tool to see if we're aligned, that one I think we've talked about liking that. I have enjoyed that, but under the strict guidelines that we throw the numbers out. The numbers are only a communication tool. They get thrown out after the fact. We do not commit to a set amount of work or anything like that. We just use it to say, "I think it's an eight. I think it's a one. Oh, we should talk," just for that. That's when it's useful.
STEPH: I agree. Yeah, in my previous answer I was being flippant about it, but I do agree very much where I don't like the specificity of where you're trying to plan exactly what numbers are these. But I do find it very helpful for the reasons that you just said where the team agrees with the estimation around how long they expect something to take. Because then that is really great where you have someone who's never touched the codebase, and they're like, "I think it's a five or whatever system we're using here." It's an elephant...whatever scale you're using. And then someone else is like, "Well, I think it's a doughnut size." I'm making up silly stuff because it's more fun for me. And then those two people can talk and reconcile. So I do like discussing the estimation of work for that purpose but then not actually writing it down or maybe going with t-shirt sizes, something that's more simple, and then doesn't have anything with points, really. Anything with points can then be gamified and also brings out people's more competitive side. So, if you can make it something that's more fun, maybe around t-shirt sizes or a bunch of cute animals, various sizes, whatever works for your team. I'm trying to think of other fun measurements now [laughs] that we could use instead of t-shirt sizes.
CHRIS: There are the sizes of bottles of wine as you go past. So there's a regular bottle of wine, and then there's a magnum. And then it gets to weird names like a Nebuchadnezzar and other things. These are big performative champagne bottles. So I think we should use that kind of sizing because I think they also have a geometric progression type thing, not quite Fibonacci but something like that. So I'm going to make that push for Nebuchadnezzar as being my go-to [chuckles] sizing in story points.
STEPH: I have never heard of that, and I love it. That's great.
CHRIS: Okay. We'll find a relevant link to the wine bottle sizing, and we'll put that into the show notes. We will also, of course, include a link to your wonderful blog post. What's the story with story points that you wrote with Matt Sumner? Because I think that really does dial into this topic really well. And again, coming back to that core idea around Agile, while we see value in the item on the...which side is it? While we see potential value in story points, I have worked with countless teams who desperately wanted to make this thing work. So it would be great if we could quantify the work and then numerically understand the work that we had ahead of us and sequence things and talk about deadlines and whatnot. Man, that would be amazing. I would really love to do that. So with every other developer and every manager of a team of developers in the world, I have not seen it done. I am still looking for that day. When that day shows up, then I think this will be a wonderful practice. But unfortunately, my experience has been that this doesn't work, and trying to do it causes more harm than good.
STEPH: I agree that I certainly understand the reason that people want story points to work because it's very nice to then say, "We can calculate, and we can measure, and then we can have delivery dates." And that's really nice from a management perspective. But that does blend in nicely to the next topic, which I think fits nicely underneath the Agile umbrella, our daily syncs. Because that does bring us closer to that goal of where we can give real, valid updates on how something is going and provide a more real estimate as to when we think something is going to get delivered. That doesn't have the same effect of where we think we're able to plan and then promise delivery dates a week in advance, because we're getting those updates in real-time, but they're going to be more reliable. And that is so much better than when we try to overcommit to work or try to say upfront how much time something is going to take. It is so much more valuable to have that reliable update and estimate versus trying to trick ourselves into thinking that we know when something is going to get delivered.
CHRIS: Yeah, I think the daily sync or sometimes called the daily Scrum, or standup, or otherwise morning meeting often in the morning, this is one that I see lots of folks really hate, and I'm personally a big fan of. This is one that I would definitely hold onto. But I think you have to be very, very purposeful with how you structure it. It really should be as short as possible. And there's one particular thing that I see very regularly in teams, which is almost a performative version of what I did yesterday. It's trying to demonstrate to the team that yes, I, in fact, did work yesterday. I was a valuable team member. Please don't let me go from the team. And I think that's the sort of thing that we should try and just get rid of. There are definitely times where what you did yesterday is relevant to the team, or you worked on something, and now you have a bunch of questions, and bringing that to the team is useful. But that version of everyone needs to prove that they did work yesterday or...it's the sort of thing like if anyone says that sort of thing, then everyone else is like if you don't say what you did yesterday, then it sounds like you did nothing because everyone else is saying what they did.
So you have to, I think, get a team buy-in to do this, say, "We're not going to talk about sort of bullet-list what we did yesterday. That's not going to get us anywhere as a team." But what's useful are those little magical moments of connection where I say, "Yep, I'm working on this. I'm going to implement it in this way." And someone's like, "Wait, wait, that way? Oh, we shouldn't implement it that way." And then ideally, what happens there is okay; let’s connect after this meeting. You've now made this connection, but you don't need to hold up the rest of the meeting for that. You can just say, "Cool, this connection has been made. That's an incredibly valuable little point in time, but now let's continue on with the flow of the meeting," so that it keeps that rapid pace. And so times where you're blocked, times where you have questions, times where you're just describing what you think you're going to be working on. So if anyone's like, "Oh wait, no, we needed to stop that work because we actually made a decision yesterday that impacts whether or not we actually wanted to build that feature at all." If you can head off incorrect work at the pass, there's so much potential value in that meeting that it is interruptive. And it does take up some time, but I find that it is so, so worth it if you're able to really keep it focused, keep it concise, and keep that end goal of those little connections. When those happen, they're so valuable. So I think it's really worth the input.
STEPH: I'm still smiling from where you said performative of what I did yesterday because that is something that took me a while to understand, one of the things that I did not like about the daily sync or daily meeting whenever your team gets together to talk about the work that's being done. And it was finally when I realized we're just going through a list of who has the longest list of the things that they accomplished yesterday. And again, it felt like it was bringing out more of that competitive mode in folks to talk about what they did, and it didn't feel very useful. Every now and then, maybe there was one thing that was interesting that someone did. But most of the time, it was always more helpful to hear what the person was working on that day for all the reasons that you just highlighted.
There is one practical concern that I have with these types of meetings or with these types of events. And it's where you'd mentioned keeping it concise…someone brings something up, and it starts to devolve into a conversation right there. So then whoever was up next is now waiting while that conversation is happening. And that part gets awkward because then there's usually only one person who is willing, or no one, frankly, is willing, to say, "Hey, so sorry to interrupt, but let's actually table this discussion and let everybody else go, and then we'll come back to this." And if you have people on the team that have been there for a long time with that culture, then that will just work because everyone will keep each other in check. But otherwise, if you're starting that new process, or if you notice there's always that one person doing the awkward thing of trying to set that culture of this is how we do our daily chat, and these are the things that we wait on for later, it's really hard. And I say that because I have often been that person, the one encouraging people to table a conversation. And it always just feels awkward to interrupt someone and ask them to please wait until everybody else has gone.
CHRIS: I share your hesitations around that, but it is very important. And it's that sort of ideally someone in a more senior position will model that behavior and model it in a positive, friendly way. Where I have done that often it's in the form of a question, so it's, "Actually, do you think maybe we could take this offline?" or something like that. Not a command, not taking over or shutting people down because it is somewhat interjective, and you're sort of correcting course. And so, being as friendly and empathetic in that moment as possible, but that's a hard note to strike. And again, if it's something that only one person is like the taskmaster, the Hermione Granger of the team who's trying to keep everyone focused and doing their homework sort of thing, nobody wants to be that. Well, Hermione did, but otherwise, nobody wants to be.
STEPH: I love all the Harry Potter-themed references that have been coming through in the last couple of episodes. And I agree it is something that's hard to help teams course-correct, but it's important, and it's very much something worth doing. I just recognized that I think that's why these roles get implemented, why there's this concept of a Scrum Master, and then why we designate these tasks to specific people because then you have someone who can do it. And then when they do interject, it feels more appropriate because that is their role, and that's one of the things that they're supposed to do versus putting it more on the social pressure of whoever is comfortable speaking up to then course-correct. So I do understand where that implementation of Agile has then tried to create those roles, which I've been on teams that have a Scrum Master. And my experience is it's often been a very positive experience because the person that is in that role is often very kind and caring about that team. And so they are a wonderful person to work with, but it's also one of those...I've also been on teams without them, and things have been fine. So I have mixed feelings about that one. It's one of those; it feels like an extra heavy process, but I've also been on teams, and it worked.
CHRIS: It's interesting the way you frame it, of the utility of that role. Like, having a role where we've now all bought into the idea that this person may take these actions say, "Hey, can we take that conversation offline?" and rather than one individual choosing to do that. I like that framing. I share what you're saying about the rest of the baggage that comes along with having this formal position, and often, that person is otherwise removed from the work. That can often be an aspect of Scrum. I think that gets complicated. But now I'm wondering can we make a software solution to do this? Because, of course, that's where my head goes. Can we have a standup bot that is listening and is like, "Hmm, it seems like you two people have been talking for the past two minutes. I'm just going to interject like my little bot self that I am and ask maybe take this conversation offline," in the way that we've sort of automated a lot of code formatting things, and that's been really wonderful, so that's not a part of PR review. Can we do the same for standup? I don't know.
STEPH: I think all the award ceremonies have these where they start to play the music, and that's your cue to move off stage.
CHRIS: Oh, I like it.
STEPH: I think that's it. [laughs] So you cue the music whenever someone has been going for quite some time. On a slightly separate note but still related to this, some conversations that have been bubbling up around me have been related specifically to this idea around stepping in to say, "Hey, I'll take on that thing that you need a volunteer for," or "Hey, I will help the team stay on track," will often fall on people with a specific personality and then they will often be the one that continues to do that. And so they will end up taking on additional work or taking on additional roles just because they may be in a more empathetic spot where they feel that's the kind, helpful thing to do. And so, we've been looking for more ways to make sure that those tasks are being distributed evenly across the team. So we're not just waiting on someone to say, "Who would volunteer for this?" And then typically being the same handful of people that are always speaking up and then volunteering for it.
And then trying to shift to more of a purposeful approach of having a queue of people and then cycling through that queue, and then if someone can't do it at a certain time, then we move on and then we just put them back in the queue. But this way, we don't have people that are typically just always taking on these responsibilities. And that's something that is a new consideration for me but one that I have found really helpful to be aware of and notice on your team who's the one that's always volunteering for these roles and checking in with them to see if they're comfortable with this, or if they're feeling compelled to volunteer for stuff because they may feel more inclined to speak up versus others are okay with staying quiet. But circling back to some of the Agile discussions earlier, you'd mentioned a handful of meetings and that you have some feelings about those meetings. What are those meetings that you have feelings about?
CHRIS: Yes, the meetings. So again, this is somewhat contextual to Scrum, but the structure of Scrum has a handful of meetings that sort of define the sprint. So you have some at the beginning, the middle, and the end. So there's sprint planning, there's backlog grooming, there's sprint review, which typically includes a demo for stakeholders, and then there's sprint retrospective. And these, as far as I understand it, are four distinct meetings and are intended to be kept distinct so that their purpose stays purified in each of those meetings. And I think my feelings would be that again; I don't really find a ton of value in the sprint structure or in the two-week cadence or things like that. And so I think it can make sense in those contexts to be like, we need to make sure we have space for these things. But in a more continuous context, I think the backlog grooming or, more generally, let's talk about the work that's coming up. Let's make sure that we're all unified in how we're thinking about that work, what we think matters, what's prioritized. I think that is an incredibly valuable meeting. I think sprint review and specifically demo for stakeholders I'm really intrigued by that one. I don't know that I feel like that needs to be a distinct meeting. And in fact, more and more these days, almost every feature I deliver has either screenshots or a screen recording of what that workflow looks like. So we're continuously demonstrating to the stakeholders what does this look like now that it's a real thing? What does an end-user see? What's that experience like? And in retrospect, I think we'll probably spend a minute on that one. I like retros; some people hate retros. Yeah, let's loop back to that. But of those, what are your thoughts? What do you like? What do you not like about those meetings?
STEPH: I think grooming is a very helpful meeting that can help a product manager and a technical team have discussions about the upcoming work. I don't necessarily think it needs to be the whole team. I think it can be a couple of engineers from the team; maybe those people rotate, maybe it's the team lead. And they get together with the product manager, and they essentially answer any technical questions about upcoming work. So then it can be refined. So then, as we get closer to that planning session, whatever we want to call it, then it feels more in a ready state for folks to react to and then have opinions on. So I do like grooming, but I wouldn't necessarily advocate that the whole team needs to be present for those.
For a demo, I'm with you; it really depends. I've worked on projects where the stakeholders are less close to GitHub and Slack and areas that we could demo some of the work that's being done, and maybe they weren't poking around on staging as much. So it was really helpful to then have a more formal demo to then show them the work that's being done. And then I've also worked on plenty of teams where a demo was something that we used as a fun internal event where we have all these different teams, and we get together. And then we get to show off all the great work that we have done across all the different products. So then us, as fellow teammates, can then celebrate what the other teams are working on.
Retros, you know I love retros. I think retros are a microcosm of your team's culture and process. And if your team is struggling to have a productive retro, your team is struggling. Because I think that is representative of your team's ability to get together, and reflect, share concerns, celebrate wins, agree on what's important, and run measured experiments. And if you're not having a retro, then I think you're not going to know how your team's doing until it's too late, and it's going to be harder to course-correct.
CHRIS: #HottakeswithSteph. I like it. I like the intensity that you came in with there, but I know retro is near and dear to your heart. So I'm unsurprised that that is the line that you've drawn. I definitely share all of those feelings, particularly around retro, because I think much like the daily sync, I've seen many people who are just like, "This is a bad meeting. It's useless. Nothing ever happens. I don't like it." And I'm often surprised by that because I've found so much value in it. Retro similarly is this magical meeting that can just regularly change the course of how we're working as a team. But I also have come into plenty of teams where it definitely did not have that shape, where it was basically a place that everyone sits down, and somewhat downtrodden restates their list of grievances, their airing of grievances, and then nothing changes. And much like the sprint iteration thing where you're constantly missing the commitments, and that's just going to wear a team down. I think if you constantly have retro and nothing changes and it's that same list of concerns, then that is going to be bad, but that, like you said, is not the reason not to do it. [chuckles] Oh, we just keep saying the same things in retro, so I don't think it's even that valuable. I would say that maybe we should change the things. But I've definitely been on plenty of teams where retro was just so valuable. And it's definitely one where I feel like having a facilitator, having someone who is in that particular seat trying to guide the conversation without necessarily being in the conversation, can be incredibly valuable.
There are also structures that I've seen work particularly well. We have a video on Upcase that we can link to. That's a format that I've found; it’s a very lightweight format, but it basically involves getting everyone's input on a positive note, on a more critical note, and then revisiting and sort of sorting and waiting, and then digging into topics that need a little bit more focus. But I think a lot of different formats can work as long as retro is a way for people to sincerely meet up, safely talk about the things that they are feeling about the work, and then ideally, some change comes about as a result of that. You mentioned having measured experiments, and I love that as a framing or like something that retro can do for us.
STEPH: I really do think that retros are so important because they're the health check of the team. As you'd mentioned, if people are having a very negative retro experience, which I understand, I've had very negative retro experiences as well, and I've walked away feeling like that was not a productive use of my time. But then that is our warning. That is our signal that's saying, "Something is not right, and something's not great, and we're not working together as we really want to be working together." And this retro is just that reminder that is right in our face, that is making this so uncomfortable and feel like a waste of our time because it is informing us that something needs to be improved upon. And we can feel like retros are not productive when we feel powerless to make that change. And that again is then another discussion to have with the team, to have with management, leadership, to talk about how do we get the power to then make the changes that we need to then have productive, happy retros? Because that's going to be a reflection that you have a happy, productive team.
CHRIS: Love it, love the framing, love the symmetry there between team happiness and retro happiness. So to summarize, I think we've gone through most of Scrum now. So just to...correct me if I'm wrong on any of these, but I believe sprints and iterations, nah, we'll leave it. Planning poker, definitely not. That doesn't seem good, although maybe just to bring up conversations, but not as an artifact that we save in any way. And then otherwise, daily sync, we're fans. Retro, definitely fans. Sprint review, backlog grooming, some version of those, a lightweight version of a bunch of the meetings seems may be good, but a couple of things definitely are going to leave on the cutting-room floor. Does that sound about right to you for Scrum specifically? We've got other topics to cover.
STEPH: Yep. All of that list sounds really good.
CHRIS: All right. So we've now found our refined version of Scrum, re-Scrum as we'll call it. But now there's a couple of other pieces...So Scrum is very focused on the ceremonies and the team activities, but there's another facet of the Agile umbrella, which is Extreme Programming, which that's a book. I believe Extreme Programming Explained is the name of the book. And there are various different links that we'll include to point at those. But there are two particular practices that stand out that I have heard some people love, some people do not. So we'll go into both of them. The first is pair programming. What do you think, Steph? Do you like pair programming?
STEPH: I do. I'm a huge fan. [laughs] Yes, I very much like pair programming, although it still has its limitations. I definitely want time on my own, and I can get exhausted from pair programming. It is a very vulnerable experience, too, where you have to share with someone: this is what I know, this is how I work, this is how I think. And I think that is incredibly challenging. I find that I am typically more productive when I'm pairing with someone or when I have the opportunity to pair with someone at least every couple of days.
CHRIS: Yep. I'm definitely a huge fan of pairing. Although I think specifically to Extreme Programming, I think the idea is 100% pairing. I think you already spoke to this, but pairing is exhausting. And the idea of 100% pairing is I can't really even imagine that; even 50% pairing feels like an incredibly high bar to hold for any extended period of time. There's a recent article that was going around the mortifying ordeal of pairing all day, which spoke of one person's experiences getting deeply burnt out just going through that process. And so, as valuable as pairing is, it's definitely a tool to be used not all the time. That feels like a lot.
STEPH: That's a lot of Stephanie singing because I tend to sing a lot whenever I'm stuck or thinking through things. So that's a lot of singing that I don't know if the world wants.
CHRIS: I mean, based on all of the various Bike Shed intros that involve you singing, I think the world wants it. That's maybe one person's take. But definitely, something that you said of there's a vulnerability to it. And so many pairing sessions I've either been the one saying this or someone else that I was pairing with has said this to me, but they're like, "I swear I know how to type, just now that someone's looking, my hands don't work." It's like you're in a dream, and your legs don't work. You're like, I know how to run, I swear. But for some reason, my legs are made of jelly right now. Or you can't remember a particular method, or there's just something that happens, and so getting over that hump, getting comfortable with it, I think it is a skill and something to become accustomed to. And so, again, being conscious of that when you start doing it is super important.
STEPH: I don't know if this is true because I only have access to people’s thoughts when I'm pairing with them, and then they're sharing their thoughts with me. But I do feel like people tend to beat themselves up more when they have someone watching because then you feel the need to say, "Oh, I normally can type, but because someone's watching..." which is so true; that definitely happens. But those moments are some of those really great moments to then reflect on the fact that just because someone's watching us doesn't mean that then we suddenly need to beat ourselves up. And I don't know how philosophical that I want to get with this, but I feel like there are so many opportunities while pair programming to then encourage other people around us to be kind to themselves. That is one of the things that I have really benefited from pair programming is learning to be more kind to myself. And even if I don't know exactly what's happening or what I'm doing and I may not be as confident with someone else, I can still be positive and kind. Just because you're in a vulnerable space doesn't mean that you then need to be unkind to yourself.
CHRIS: Yeah. I definitely agree with the idea of being kind to yourself also, where you can, be kind to someone else who you're pairing with, especially if they're finding that they're like, "Ah, suddenly my hands don't quite work." But I have pretty uniformly seen that a pairing session may start out that way. And then as everybody kind of just relaxes into it, suddenly you'll see someone just kind of flying around their editor. And you're like, wait, what just happened there? That was so fast. I don't even know. And so there's just this comfort level that sometimes it takes a little bit of time to ease into. But yeah, so pair programming, broadly yes. 100%, oh, that's going to be a no, no, thank you, not that. All right, so one other practice that comes from Extreme Programming, which is Test-Driven Development AKA TDD. What do you think about that one, Steph?
STEPH: I feel like you're giving me lay-up questions here. For anyone that's familiar with us, [laughs] I feel like this is an easy one. Test-Driven Development is a thing. It's a thing that I enjoy. I don't always write tests first, though, so I don't always follow TDD, but I am definitely a fan of tests. So, I guess in that light, it’s not so much that I adhere always to TDD. I don't feel the need that I have to write tests first, but I have found that with practice, that often helps me write code where I have tests then help me write out the logic for my code. So generally, yes, thumbs up on TDD, but I'm also not terribly strict about it where if you want to write some code first, write some code first.
CHRIS: Yeah, I think I'm definitely in the mode where I like testing. I like Test-Driven Development. I can't always pull it off, frankly. It's hard. It is hard to know how to write a test in advance of the implementation that you're going to write such that the test will correctly constrain the system that you're about to write. That takes a couple of levels of knowledge. If I'm writing a Rails controller-action-form sequence, I can probably TDD that because I've done it so many times. But if I'm doing something that's a little bit more new, novel, less familiar to me, then likely I won't be able to pull it off. TDD is like a fancy move that I don't always have available to me. But whenever I'm in that mode, I don't think of it as, oh, it's fine to just write the thing before the test. Like, I want to be able to do TDD 100% of the time. I'm just not a good enough developer, frankly. And I don't know that I ever will be because I always want to be working a little bit past the edge of my comfort. So it's a delicate line of when I will not use TDD, but wherever I can, wherever I do have that level of knowledge of the system and the frameworks and whatnot built up, I find it is a vastly more effective way to work. It's not that I feel cool when I do it. It's that I feel much more effective. It helps me stay focused and on task and get the thing done. So it's very utilitarian in that way but also not something I can always pull off.
STEPH: So, circling back to when we first started chatting, you were asking about Agile and then my thoughts about it. And having this conversation with you, I'm realizing, or I think I was already aware, but it's helping me re-solidify I'm very much a fan of Agile. There are specific implementations of Agile that I don't find enjoyable, and I don't find helpful to writing software, and I don't find helpful from the project management side either. But broadly speaking, I'm still very much a fan of the approach that we use generally for Agile, where we want to work in small deliverable increments, and then we also want to have the ability to change any moment what is the most important thing to work on? To me, that is the heart of following the Agile process. And I don't think that's going anywhere. Like, I don't think Agile's going to disappear. But I wouldn't be surprised if we see another implementation of an Agile variety of the things that you and I just shared and the things that we like. And so, I feel like most teams that I work with follow Agile within their own unique bespoke version. And we don't have to give it names because everybody's going to have their own custom version where they decide which process works for them and which one doesn't work for them. And that's what retros are for so then you can figure out which process works for you.
CHRIS: Once more, Steph on the record about her love of retro. I think the core of Agile, the Manifesto, those core ideas about small iterations, delivering value, staying close to stakeholders, all of that feels deeply true to me. And I would be really surprised if a year from now or two years from now I was doing something that was wildly different from that. But then each of the layers of practices on top of that to varying degrees I like or don't like. And I wouldn't be surprised if aspects of that were swapped out down the road. But that core, that idea of this is how we think about building software. I like that thing; that seems like a good thing. So I'm going to hold on to Agile for a little bit longer personally.
STEPH: Same. I still see Agile in my future. On that note, shall we wrap up?
CHRIS: Let's wrap up.
STEPH: Show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed on Twitter. And I'm @christoomey.
STEPH: And I’m @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeeeeee.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris gives some small updates on working with Svelte. He really likes Svelte so far. Svelte's great. Modals are complicated. He also talks about using a little JavaScript library, called Quicklink. Steph talks about sending data to a third-party system and using feature flags to help deprecate some code.
Finally, they both riff on a listener question on consulting. Said listener asked, "Do you think about your work as 'consulting first' or as 'building great software first and then good experiences for your clients will follow naturally?'" Find out their take and give us your own, here on this episode of 'The Bike Shed!'
Transcript:
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So hey, Chris, happy Friday. How has your week been?
CHRIS: Happy Friday. My week's been great, yeah. I've been writing a lot of code, moving things around, planning some features, and all that fun stuff that goes into building an app, so I'm enjoying that process. I'm also halfway through listening to your recent episode with Nate Berkopec, which was absolutely delightful, well, at least the first half that I've listened to so far. I assume the rest will continue to be absolutely delightful, but it does remain to be seen. So I'll report back next week when I've listened to the whole thing. But yeah, that's great. And I'm glad that Nate got to come on, and we got to share a little bit of his story as well.
STEPH: I like how clear you are in terms of like, "The part that I've listened to so far is great, but I reserve judgment until I've heard the rest of it." [chuckles] But that's awesome.
CHRIS: The thing about being a developer is it has broken my brain such that I am overly specific all the time because I just argue with a computer all day. It's what I do. So then I start talking to humans, and I'm like, wait, I should probably behave differently now. And I got to unwind some of those computer fights. But anyway, and let's see, small updates working with Svelte, really like Svelte. I'm leaning into it more and more and embracing...I think I'm starting to understand the aspects of it that I really like. And one of the things that I really like about it is that it is somewhat underpowered. And what I mean by that is working on React applications, I find that I can do some fancy stuff, and I can express it really well in TypeScript. And I can really go for it and create some components that are wildly variable and configurable and can take in any combination of props and do all sorts of things. And I can slice out tiny, little components and do all of this. When I'm doing that, I enjoy it.
But in Svelte, I have a little bit less power in my control. Svelte is closer to HTML, CSS, and JavaScript fundamentally. So you can make components, and I really like that. You can bundle up the pieces of functionality and display and formatting, and all of that, but it's not quite as powerful. It's not quite as expressive. And I've actually found that to be a useful limitation, which is an interesting frame. It's not something that I thought I would say, but I'm finding that the code that I'm authoring in my editor is so much closer to the code that's actually going to be presented to the end-user. That is really useful in my mind. I find that to be really valuable. There are small things like in Svelte; you can actually say class equals when you're trying to define a class on an HTML element. It turns out I really like that one instead of having to say class name or similarly HTML for. There is a handful of them in React that you have to change the name of. So if you copy a snippet of HTML from the web, and then you dump it into your editor, if you're working in React, you have to change a bunch of stuff. It doesn't work right away. And it's a small thing, but I found that I really seem to care about it. But there's the “it's nice that it just works” version. But I feel like there's also an actually practical, meaningful edge of it is so much closer to the thing that's actually going to be in the browser, and I like that.
STEPH: I liked the phrasing that you used just a moment ago where you said, "Useful limitation." Since I haven't used Svelte myself, one of my understandings is that you like the fact that it is fairly low JS, in the sense that we are introducing this framework, but it's not as heavy-handed as React or another framework that you could reach for. But then you also said you're running up against areas where you feel like you're missing some stuff from React, is what I'm hearing. Is there a particular feature, or do you have a concrete example to help me understand some of the stuff that you are really missing?
CHRIS: It's not so much that I feel like there are specific features missing, but as a pointed example, I am not able to pass in the DOM element that I would like the component to render as. That's a weird thing, but often, component libraries will do this. So you have a button component, but the button can render either as a literal HTML button element or an anchor element. And you can pass in as equals and then button as the string there. And in React, you can do that, and then you can actually do some type inference across it and say, "Okay, now the rest of the props that you can pass in are button props.” And if you pass in as equals a, so implying that you want it to be an anchor or a link, then it will constrain you to the link properties and say, "Oh, you must have a HREF now." That's really cool that you can do that. It's also super complicated, and the TypeScript representation of it, while it works, is very, very complicated and the types of errors that you get. The complexity of what you can build with React is really interesting. But I worry now that I've spent a good bit of time in Svelte, I worry if it's overpowered. I've worked on plenty of applications where the system as designed in React, all the set of different components is very, very complicated. And you sort of have to learn that system in order to be able to work in it, whereas in Svelte, you just start, and you're writing in HTML and CSS. And then, as you need more fancy stuff, you can slowly layer it in. And to be clear, Svelte definitely has plenty of power.
This past week, actually, we were working on a modal component, but we were really focused on accessibility, which is probably a good thing that you should do, but it turns out modals are very hard to get right. The dialog element that should exist in HTML is not complete, and it's not a thing that we can rely on. So we have to do certain things ourselves. So the idea of focus trapping: when the modal pops up, we need to say, oh, okay, the focus should be trapped inside of here, so you can tab forward and back, but it's going to stay within that modal component. There's actually a way that you're supposed to portal it. So you move it outside of the document so that you can make the rest of the document...I want to say aria-hidden is the property, but you're basically saying the entire rest of the document that's behind this modal component should be inert to a screen reader, essentially, or invisible to a screen reader while the modal is up. And doing all of those sorts of things is super complicated. After you close the modal, you're supposed to refocus the button that opened it, the triggering element, and that's a tricky one where you have to pass down a reference to something. And that was all very expressive, actually, very straightforward in Svelte in a way that I was really impressed by. So it definitely has all the power that you need but not any more than what you need. Or there is a fine line of it's just right.
STEPH: So we should just scrap modals. That's one of the things that I'm hearing from you. So I just want to clarify because I do feel a little confused because in the beginning, it sounded like you were saying that Svelte is wonderful, but you do feel like you're missing a little bit of functionality there that you do receive with other frameworks like React. But then that last thing you said where “it's just right” sounds like it's the Goldilocks. So I'm a little confused as to exactly how you're feeling about Svelte in the moment.
CHRIS: Yeah. I'm probably not being as clear as I should. I am a big fan of Svelte, so as the first answer, a big fan of Svelte. I'm recognizing that, strictly speaking, it is somewhat less powerful than React. But I'm also trying to say, perhaps failing at saying, but trying to say that I like that, that I'm finding its constraints are useful. React can do a ton of stuff. You can represent a real impressive array of component functionality and have components that take 17 different props that covary in different ways, and it's very complicated. And I've worked on plenty of React applications where I just have to stare very hard at the component library for a while. And I'm like, ugh, I still don't know how this works. And it's this custom bespoke language where Svelte feels like it is much closer to the thing that we're actually doing, which is rendering HTML and CSS and JavaScript and whatnot, and I like that. I'm finding that very useful. I'm finding that lack of power not to be a hindrance but, in fact, to be useful.
STEPH: Hmm. Okay. I like that last part. Yeah, there are often times where I feel like the less powerful something is, even if it means a little extra work on my end but it's clear as to the work that's being done...I'm going to take it back a couple of years to when I was first learning Elixir because that's how I felt jumping from Ruby to Elixir and from Rails to Phoenix, where suddenly I felt like I had more clarity. There were some things that I had to do more on my own, but I felt more clarity as to what exactly was being done versus Ruby and Rails doing a lot on my behalf. So I can certainly relate to that.
CHRIS: Yeah, I think that captures it well, that the expressive power of React can perhaps lead to somewhat more confusing code, and the small handful of cases where I need to be slightly more verbose in Svelte I actually find really useful. Like, Svelte is making sure that I'm writing components that are clear and easy to work with, but it still has all of the power that I need, and I can do everything I want in it. And yeah, overall, just yeah, Svelte's great. Modals are complicated. And that's my story. But yeah, that's a little bit of what's up with me. What's going on in your world?
STEPH: Before we switch gears, I want to add on a little bit more to what you just said because something that I have noticed with me is that the longer that I've been a developer, the more I want that lower-level control and understanding as to what is happening. And it sounds like that is very much what you're saying you're enjoying with Svelte: even if it does require a little extra effort, at least you have that ability to exactly control what's happening, versus if you're using higher-level abstractions, you're stuck with the API that's been designed for you. And if that API works 98% of the time, that's wonderful, but then that 2% of the time you're in trouble. So I've definitely noticed that trend, that over time, I want that lower-level control over everything that I'm working with and building, although not all the way to C, let's not go that far.
CHRIS: I mean, there's Assembly underneath C. We can keep going, and we can just manually manipulate transistors as well if we really want to get after it. [laughs]
STEPH: Next week on The Bike Shed. [laughs]
CHRIS: Much, much higher level of abstractions are interesting to me, but yeah, there is a sweet spot. Svelte seems like it's the one for me.
STEPH: Nice. So then switching back to what's new in my week, it's been a little bit of a weird week in terms of there's been a lot of focusing on sending data to a third-party system. So we had a lot of data that they needed in their system. So I have been focused on running a number of processes that are then sending that data over and then essentially babysitting processes, making sure everything is going smoothly. Also communicating with their team to understand okay, what's being received? Do we have any errors? Is there any sort of miscommunication between our systems, and that's why we're needing to resend this data to you? So it's been very different in terms that it wasn't a typical feature development week. It was more, hey, I sent you some data. What did you receive? And then let's fine-tune both of our systems on each end, which that part I always enjoy. As soon as I can get to that level of collaboration with someone, I very much enjoy that part because initially, it felt like a stressful task of like, hey, we've got this giant CSV. We need to process and send data. But then as soon as I have someone else to work with, then I'm like, yeah, okay, this is great. They can update their system. We can fine-tune ours as well in case there's something that's not communicating properly, and that part I really enjoy. I really enjoy collaborating with someone else so then we can both improve our systems together, so that part was a little different.
But the actual weird thing that I did this week is we have feature flags, and we are using those feature flags to help us sunset and deprecate some code. So we have a controller path that is pretty gnarly. It is one of the more dense, difficult areas of our codebase to understand. And so we are refactoring it and creating a new green space for it so we can start to pull in some of that behavior and then also refactor as we go. So we essentially have class version one, and we now have class version two, which is always something. And we want to be able to feature flag this because, with our deployment workflow, we need the ability one; we want to be able to switch back quickly. So that way, if something goes awry, we can switch back to the original code if we've made some misassumption in our V2 version. And then we want to leave that on for a while to make sure things are running smoothly, and then we can go back and actually remove that class.
But then the question came up: well, if we have these two files, how do we tell the team not to touch this particular file but only contribute or make a change to this other file? Because we have a sizeable team, and we work in different time zones. And there is a very reasonable answer, which is that we communicate with the team, and other folks are aware because they've seen the PR. There's a whole self-discipline of we review PRs and make sure stuff wasn't changed. All of that stuff is fine. It's reasonable. But I wanted to do something a little less reasonable [chuckles] that would still fail loudly in case someone changed a file. So the question was presented: is there a way that we could fail loudly if someone changed this file? And there's a fun thing that happens at our daily syncs where someone will often ask a question, and someone will provide an idea. And then someone else will say, "That's a good idea, but just to throw it out there, I have a bad idea. So let's just explore all of the ideas." And one of them was like, "Could we write a test around this? So if the file hash or something about that changed, then could we alert the team so then we know that this file changed and you're not supposed to change this file?" And essentially, having that discussion of like, well, then we're reimplementing Git because we're trying to track file changes. That seems like a bad idea but still a novel one to talk about for a few minutes.
The implementation that I landed on and then shared with a person that's working on this is you do have the ability with Ruby, the File class itself; you can open a particular file. And for this one, that's class version one, and then you can use the method mtime, which returns the modification time for a file. So you can check the last time that a file was changed. So I wrote a test that says that "This file was last altered at…" and I grabbed that file's last altered at time with mtime. And then, I compared that to a particular DateTime. And then that DateTime could be any DateTime in the future once we deploy this class version two, so we don't expect that file to be altered. So this test will always pass until someone changes that file. And then Ruby is going to say, "Oh, your time is now greater than that other time you said." And so it's going to fail, which actually works pretty well. It's not as ugly as I thought it was going to be. [chuckles] As to whether it's a good thing to add to the codebase, I don't know, but it was a fun thing to write.
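A minimal sketch of the kind of guard test Steph describes, assuming RSpec in a Rails app; the spec name, file paths, and cutoff date here are made-up placeholders rather than anything from the actual codebase:

# spec/deprecations/legacy_controller_freeze_spec.rb -- hypothetical spec name
require "rails_helper"

RSpec.describe "legacy controller freeze" do
  # The V1 file nobody should touch while the V2 rewrite rolls out behind a flag.
  let(:legacy_file) { Rails.root.join("app", "controllers", "orders_controller_v1.rb") }

  # Any timestamp after V2 shipped; V1 should not change after this point.
  let(:frozen_at) { Time.utc(2021, 9, 1) }

  it "has not been modified since the V2 class took over" do
    expect(File.mtime(legacy_file)).to be <= frozen_at,
      "#{legacy_file} is frozen during the V1 -> V2 migration; " \
      "please make your change in the V2 class instead, or update this spec if V1 is being deleted."
  end
end

The friendly failure message is optional; the core of the idea is just File.mtime compared against a fixed point in time.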
CHRIS: I like it. I've definitely written things like that in the past, and I guess; therefore, I'm biased. [chuckles] I'm a fan of this sort of thing. But when you can take that group knowledge that is just shared in communication or via code review and you can capture it in the code, especially if you can do it in a stable, robust way…In particular, the first thing that comes to mind with that is like, well, are there going to be different representations of the timestamp on your system versus CI? Will that ever change over time? Like, Linux versus OS X or things like that. I actually have reached for Git in situations like this in the past. So, in particular, the one that I found myself doing a few times is trying to instrument code generation. So say we're working with Apollo, and we are generating the TypeScript types associated with a GraphQL request. I wanted to put something into CI to say, "If we haven't committed those changes," because we're supposed to be committing those files alongside, "then warn." And so the idea was take a snapshot of what things look like right now, run the command that does the code generation, and then check after that.
I've done different versions where it's like, hey, Git, is the working directory dirty at this point? That's a version. I've also done one recently where I got the checksum of the file but again, asking Git. Because you're totally right that a lot of this...this is what Git does, and we don't want to rewrite Git. But I did feel okay reaching out and being like, "Hey, Git, can you help me understand the word?" But I like these sorts of things, particularly if you can do it in a way that won't ever require someone turning it off. I don't know if you've worked on projects where ESLint is enabled, but every third line has an eslint-disable-next-line. And it's just like, well, we have a bunch of rules, but we ignore them in a lot of cases. And those sort of...the like trust scenario with an automated tool I think is so important. If it's ever giving you false positives, false negatives, whichever it is, then it immediately, I think, loses so much of its utility. But if you can do it in a way that is stable and robust, then I am a huge fan.
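A rough sketch of that kind of CI check, leaning on Git for the file tracking instead of reimplementing it; the codegen command and the generated-files directory below are invented placeholders, not from any real project mentioned here:

#!/usr/bin/env ruby
# ci/check_generated_code.rb -- hypothetical script run as a CI step.
# Re-run code generation, then ask Git whether anything changed. A dirty
# working tree afterwards means someone forgot to commit the generated files.

system("yarn graphql:codegen") || abort("code generation failed")

dirty = `git status --porcelain -- app/javascript/generated`.strip

unless dirty.empty?
  warn "Generated files are out of date. Re-run the codegen and commit the result:"
  warn dirty
  exit 1
end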
STEPH: Well, we'll see if the person decides to include it in their PR or not. But I do like that idea of where we can take away the idea that we're going to catch it if it changes in a PR because then we're just going to end up in a bad place that if we fix a bug in the class V1 but don't apply that to class V2, we're just going to be in a bad spot. And it's likely we'll forget about it when we go back to then delete class version one. There is something that you said that has reminded me of a very small change that I made to my process, but I feel like it had a big impact. And it's specific to working with feature flags, how often you'll have your tests where it's like if feature flag is on, this behavior should happen, if it's off, this behavior. And I often would wrap my test in the default path where the feature flag is off, and then I'd have my other if the feature flag is on; this is the behavior. But as we are migrating with the intent that this feature flag at some point in the near term future is going to always be on, so we know we're going to come back and remove all of the other code. I switched those two paths and treat the default happy path as the new if the feature flag is on; this is the new world. So then when folks are going back to say, "Okay, I just need to delete everything that represents when the feature flag is off," suddenly, it's just very easy to find that context to say, "Hey, feature flag is off and then boom, delete all of those tests." And that's been really nice.
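A sketch of the spec layout Steph describes, assuming Flipper as the feature-flag library; the flag name and controller are invented for illustration:

RSpec.describe OrdersController do
  # Flag on is the default, happy path, because that's the world we intend to keep.
  before { Flipper.enable(:orders_v2) }

  describe "GET #show" do
    it "renders through the V2 code path" do
      # ...assertions for the new behavior
    end

    # Everything inside this context gets deleted along with the flag and the V1 class.
    context "when the orders_v2 flag is off (legacy path)" do
      before { Flipper.disable(:orders_v2) }

      it "renders through the V1 code path" do
        # ...assertions for the old behavior
      end
    end
  end
end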
CHRIS: I really like that lens of designing or coding for deleteability. How easy is it to just rip this thing out? It's one of the things that I love about Tailwind, or one of the purported facets of Tailwind that makes it really nice is when you're looking at a given template, you can just rip it out. You don't have to worry about it because there's no associated CSS that you need to think about because the CSS is sort of generated available, whatever you want to call it with Tailwind. But I really like that idea of making it easy to delete stuff. Because it's so easy to just have your codebase slowly grow over time and look at files and be like, "I don't know if we're using that, but better to be safe." Cool. I'm excited to hear if that does land in the codebase and how folks respond to it. What did you phrase the message as? So if there's a test failure, did you give a particular like a special RSpec formatted message to be like, "Hey, friend, you're not supposed to touch this file. I know you're well-intentioned, but…" or is it just like, "Failure, bad. Mtime is different." Which end of the spectrum are we on there?
STEPH: I love that you asked that question because I almost went down that path, and I was like, well, this should really have its own custom failure message because it's odd enough that I want to tell someone a little bit of a story when it fails. But I didn't because this was something that one; I just want to see if I could do. So I initially started looking at standard rb in RuboCop because at first, I was wondering if this was something I could solve via linting if it was something that RuboCop…if I could say, "Hey, RuboCop, if you notice that this file changed…" I didn't know if they had a hook into Git as they're looking for files to analyze. So I first leaned on RuboCop standard rb, which essentially then uses RuboCop under the hood, and I didn't find anything there. So then that's when I was like, okay, maybe Ruby has something, and that's when I found the file mtime. So at that point, once I'd gotten the test to pass, I'm like, you know, this is good. There's a very nice, friendly test description that goes along with if this fails; this is the reason why. But I do think that would be like cherry on the top addition to the test to have a very nice error message that goes along with this. So if I were the one that was adding this to the codebase, I would take a few more minutes to do that myself. It definitely felt like one of those moments where I had gone far enough into an experimental mode, and I felt like I had just reached that point where this is useful, and I want to share it with the person who's actually working on this. But then I pulled back going further because I'm like, I don't actually know if they want to use this and if they're going to implement it. So it felt like that right friendly balance of like, here's something that works. Feel free to use as is, make it better, don't use it, totally up to you.
CHRIS: Yeah, I think given that context, that's definitely I feel like a good line to draw, not like, “Here's fully completed code that you can now just drop in. I did all the work, but here it is.” Versus like, “Oh, here's a kernel of an idea if you want, run with it, but if not...”But yeah, [chuckles] if you went to the length of writing a nice paragraph summary message to the end-user, that feels like you're really taking over the show. So cool. Well, yeah, interested again to hear how that goes and hear if it does, in fact, stop. That's the other thing. It's like, if it never actually fails, then everybody was just fine with the human process. But I'm intrigued to see how many times it actually does stop unwanted modifications of the file. So that's an interesting measure to track.
STEPH: Yeah, that would be an interesting thing to track because if we do have it, then we may have less visibility into knowing if it failed because then someone will see it fail locally, but then we will have prevented it from getting to that PR state. It is one of those “did someone not change it because we added the test, or could we have skipped that process?” It feels like one of those nice safety measures, but that would be a fun thing to measure, I agree.
CHRIS: Yeah, especially if it's a small change; in this case, I think it's totally worth it. But now, as I said it, I didn't mean it to be more of a thing. But now that I think about the question, I wonder if all tests should fail at some point. Like, all tests have a cost, both in terms of development and then thinking about them in runtime and all of that. And a good test is one that eventually fails because you change the system in a way that broke some constraint. And so, therefore, I'm now asking the question, like, should every test fail at some point? Are tests that only ever pass actually not that useful? I don't think so. Now there's a story running in the back of my head that's like, I kind of want to look at the CI stats. And feature specs will occasionally fail for unrelated reasons. But unit-level tests that never break, that never fail and catch something that was broken…I don't know that I actually believe this, but I'm just intrigued. As I asked the question, I was like, huh, should all tests fail? Sort of like one hand clapping kind of thing, anyway.
STEPH: I like the question, or it's making me stop and think because my initial answer is yes, as long as it's failing for a meaningful reason, as long as it's not a flaky test or something along those lines. But otherwise, as you're working on the system and you're making changes, then I'm inclined to say that yes, every test should fail at some point. But I agree, if we're getting into existential test area, then I don't have concrete feelings about this yet.
CHRIS: Yeah, and I feel like it's one of those sorts of questions. So pivoting off of that ever so slightly to bring us to something much more practical, I have a tiny utility that I want to chat about. And then I think we have a listener question that we want to discuss. But the utility, I think I brought this up on a previous Bike Shed episode, but the tool it's a little JavaScript library, but it's called Quicklink. And so the heading is instant next-page navigations. And so the way it works is it's just a little snippet of JavaScript that you'll include from a CDN, or you can NPM install it or any number of ways. But it's a tiny, little one kilobyte JavaScript thing that basically what it does is it attaches to every link on the page whenever you use that link. So you click on it or if you're on mobile if you tap, or however you're interacting with it, if it's an internal link, so not external to your site and not going to a different domain, but if it's internal to your domain, what it's going to do is it's actually going to prefetch in the background as you hover on that link. So it's going to say, "Hover is a good indication of intent to follow this link. So we're going to prefetch it in the background." And then when the user actually subsequently clicks it, which is often a couple of 100 milliseconds later, that's often enough time actually for the page to load in the background. And then, when they click the link, it almost feels like instant navigation. There's a similar thing that happens based on when you tap and when the actual firing of the link happens on mobile. So there's another delay that they can take advantage of there that's not quite the same as hover. But overall, it just takes basically any webpage, any website, and makes it feel very much faster. And it's cheap, easy, just kind of works. I really like it. It's a very interesting little project.
STEPH: I'm fascinated by how that would feel as a user because if I'm hovering over a link, I'm thinking through my specific navigation habits. So if I'm going to a link, like, I don't hover very long. I don't think of myself as a hovering internet user. [laughs] I'm probably going to click on it right away. So I wonder if I would still feel that same speediness versus...yeah, I am interested in the metrics if they have something around like...I don't know why they would know this or have this, but like, most people hover for this long. And so then it speeds up their feeling of the page load. I'd be interested in that.
CHRIS: I like the idea that you're bracketing yourself into the quickest click of a link in the west. I'm looking around on their website, seeing they have a quote from NewEgg at the top, which is, "We implemented Quicklink and saw a 50% increase in conversions and 4x faster page transitions." So it sounds like I'm reading an ad for this now, which I'm not because it's a free project. So you can use it or not and pay the $0. They have a demo, and then they have a measure page. So I think you can actually get to...I think they're just talking about how to measure it. But I've definitely seen another page where you can click on a link, and it will tell you what was the difference between hover and active when you actually interacted with it. And it turns out the bounding box for a link is bigger than what you see. And you're often moving your mouse not entirely to the center, but you're not just getting to the edge of it and clicking. And so that period of time where you're moving your mouse onto the link, there's actually often a couple of hundred milliseconds, which is enough to really make a difference if you've got a speedy site. You can take what feels like a couple of hundred milliseconds and turn it into nothing.
STEPH: All I can think of right now is the image of a little mouse that's moving closer to a link with the Jaws' theme song playing. So it's ta-dum ta-dum. [vocalization] And this whole time, Quicklink is getting ready to then load as soon as the mouse reaches that perfect zone to then start loading. That's what I'm getting is Jaws and Quicklink. [laughs]
CHRIS: I like the...it's not personification, but it's jawsification that you're doing of this JavaScript library where it's like, I just imagine them hovering on the side really watching intently. But on the sites that I've used it, it does make a noticeable difference. I feel the difference even with very active clicking.
STEPH: That sounds really neat. I'll have to look into it. Maybe I think I'm the quickest click in the west. That's very hard to say. And it turns out that I'm actually quite slow, who knows?
CHRIS: You might just be average; that’s fine.
STEPH: No way.
CHRIS: Most people are, mathematically speaking at least. [laughter]
STEPH: Not possible. I'm certain that I'm special. I hope listeners get a kick out of my oddities, [laughs] my very honest self that's coming through on the mic today.
CHRIS: We're all a little special. But pivoting one more time…
STEPH: That means no one's special. [laughs]
CHRIS: Are you just doing the quote from Incredibles, or are you actually trying to say that? [laughs]
STEPH: I wasn't intentionally quoting The Incredibles, but I did just watch that movie recently, and you're totally right. I am quoting The Incredibles.
CHRIS: This is our second episode in a row then with a Pixar theme, which is always fun. But pivoting ever so slightly, I think our final pivot for the episode, we have a listener question today. So this question comes in from Matt Swanson, and he is asking about consulting first versus software first. So his question is, "One of the biggest turning points in my career was realizing that software consulting is, well, consulting. Do you think about your work as 'consulting first' or as building great software first and good experience for your clients will follow naturally?" So, Steph, what do you think?
STEPH: I liked this question because it really made me stop and think about the differences in how I approach my client work. So I will say that I do think it varies slightly for each client, but most of the time, I do think of my work as first building great software. And then, once I've had time to understand how the team works and then identify opportunities for improvement, then I'll put on my consultant blazer and start scheduling meetings. I'm just kidding. I don't like meetings, so I don't do that part. But I do find that most of my engagements are looking for initially a strong developer to help contribute to the team and mentor. And then, I find that a lot of my consulting skills can then start to shine once I have that opportunity to build trust and then share outsider views with the team and then coach them in other directions. So I do take the approach of building great software first. Although this question really made me pause and think about it because I do think of the consulting and building software as so tightly coupled. It's a little hard for me to define when am I switching from my developer hat over to more of my consulting hat.
CHRIS: Yeah, I think my initial reaction to the question was similar where I don't view these as two different modes that I'm fundamentally operating in. It's a continuum, or it's like a two by two grid thing, and I'm sort of moving around between the different spaces, but there's always a little bit of both. And I think if I were to answer the question directly, I would lean towards building great software. That's always the thing that I'm trying to do but often that requires some other more human-centric interactions. So having a difficult discussion around a feature and why we may not reach a deadline that we're going for or talking about ways in which the workflow is not necessarily going as well as it could, and we're ending up losing information along the way or different process things, all of that is a little bit removed from building great software. But at the same time, it's...actually, this is true of me now. I'm not technically a consultant anymore. I've stopped doing that, and I'm now full-time at an organization. And I'm not imagining my role changing fundamentally. I was consulting with them. I've now come on as a full-time employee, and I'm still viewing my work as very much the same thing. Maybe that's because I spent so long consulting that that's sort of the mode that I think of as how I work. But I think yeah, it's not necessarily two different modes. It's definitely a continuum that I'm operating across.
STEPH: Yeah, I think that's why for me, it often varies. I like that word that you're using around how it's a continuum and that you're constantly sliding back and forth between one mode and the other. And if I think back to earlier days when I was working specifically with product teams before then, I joined thoughtbot and trying to think, well, what are some of the differences? How would I define what is more of my consulting mode versus then the building great software mode? Although I think the latter does encompass the consulting skills. But thinking back to when I was working on a product team, I found...and this may also just be because I was new in my career. But I found that I often referred to whoever was more senior on the team to handle a lot of those more human-centric topics, as you phrased it earlier. So if there was some communication that we needed to share in regards to why we were delayed on implementing a feature, if we needed to run a retro, if there were some meetings that needed to be scheduled, it always felt something like, oh, this leader of the team is going to take care of that. I am more in the development role, so I will do my job but then defer a lot of that to them.
And then since joining thoughtbot with the way that we operate, I feel like I have more ownership in the process, and I want more ownership in the process. I want to be someone that is very aware of what are the specific goals that we're looking to reach? What are the deadlines? What's behind those deadlines that's encouraging us to push hard? And then also understanding how is the team functioning? What's something that we could do to improve the team's efficacy? Is the team happy? Are there areas there that we could improve? So I think for me, that is one of the core parts where I feel like I transitioned from being more focused on development to being more...you know, I don't have a great word for it. I often referred to it as being more of like a product owner. And since then, I feel like I have more ownership around the code that I'm working with and the team, and then the processes and the decisions for the product. But I actually don't have a great word that encompasses that sense of I want to be part of this and help make decisions and look out for everyone else that's around me. Does that resonate with you? Do you have any particular way that you would describe that or a word for it?
CHRIS: I don't have a specific word for it. In my mind, this is just how we build software. But I think that that speaks to the culture that we grew up in as software developers. It's so strongly in our minds to think this way. A thing that we've talked about in the past is encouraging software developers to observe the sales demo, to see what it looks like when we're talking to end-users, or, similarly, to sit on customer support calls or listen to user interviews or things like that. And the reason for that is we want...I believe strongly that developers will do better work if they understand the context of the end-user of the application. But I think fundamentally, that sort of loads things up in someone's mind that might encourage them to push back or to suggest a different way of working down the road, and I think that's a good thing. I think every software developer should have some amount of that going on. And so that idea that consulting is this other thing that you sometimes do I feel like that stuff fits under the umbrella of consulting and, therefore, I think it's just part of how we build good software, but maybe it's a nomenclature thing, and I'm just thinking about it wrong.
STEPH: Well, I want to pull at that thread a little bit because I was having that internal discussion with myself when I was thinking about this question. It's in regards to being more aware of how the other teams are working to then help inform our decisions around the software that we're helping build and implement. Advocating for a new process or advocating for how to build great software, is that consulting? I think you and I fall more into the camp of that's just how you build great software; you have to be part of those decisions to be able to have more insights into the work that's being done. So I don't know if I could even really classify that as a consulting skill.
CHRIS: Yeah, that matches my thinking. There is a distinction between consultant and contractor that I'll sometimes push on a little bit where I see consultants as being perhaps a bit more strategic and not necessarily being handed the work to do. I see that perhaps more on the contractor end. It's like, "We need a website built. Here are the specs. Here's the design mock-up. Please build it," and that's that. Versus a consultant being like, "We need a website, but we're not even sure exactly what that means. Can you help us think about the features and prioritize? Do we need a mobile app or not?" And a consultant potentially working more in that space of helping to determine what is the work that we're even going to do. But again, that's a question of like, how do we build good software? We have to answer those questions, and maybe not everyone on the team is always answering those questions. But the more people feel empowered to and feel like they've got the context to be able to make those sorts of at least suggestions around those sort of decisions, I think the better.
STEPH: Yeah. I agree with the distinction in regards to being a consultant or a developer versus being a contractor because one definitely feels more removed from that decision or with that team collaboration process where you are more handed work, and then you implement that work, but you don't necessarily ask questions and be like, "Well, what are the benefits of adding this particular feature? Are we tracking to know that we've added the right thing?" those types of things that I would naturally include as part of my work. Versus if you're doing more of the contract work, then maybe you just implement and then don't ask those questions. Thinking back to then, what's different about being a consultant versus then doing development work…and I'm totally sidestepping all the financial stuff here. Like, if you're a consultant, then your world may be very different in terms of how you are acquiring jobs and then your marketing. So I am sidestepping that big conversation there but then focusing more on your day-to-day, how it may be different.
And the times that I do feel that I'm wearing more of my lower-casey consulting hat is where I am joining teams that have a very specific goal that they have brought thoughtbot on to help with. So maybe there is a particular certification that they want their software to achieve, or maybe they're looking to level up their team and a particular tech stack, maybe it's Rails, maybe it's testing. And that one feels more focused on I am here to help provide an outsider opinion, to help evaluate your team, to help you provide advice, to communicate more with leadership that's on the team so then they know how things are going. That feels more like a consulting engagement that is less focused on building great software. But I feel like that often still starts with we want that stuff, but we also still want great software. So I always feel like I'm in both, and I really can't be as effective at the consulting part without actually working with the team and understanding the struggles that they're going through. So I still feel like they fit very hand in hand, but I do find that there are certain engagements that do require more external communication versus the others are often more internal with the team that I'm helping build software with.
CHRIS: Well, I like that as a framing, the internal versus external communication and sort of the ratio of those. That's an interesting one.
STEPH: To me, that's really what then sort of differentiates the consulting versus the just focused on building great software is if I'm doing more external communication, I'm focused less on the building part of the software but more on the guidance part.
CHRIS: Yeah, I think that's a really good encapsulation or perhaps a way to differentiate the two ends of this. But I think both you and I probably feel that this just varies project to project. In some cases, we need more of what would fall into the consulting bucket, and other days, it's just nope, we got to go in. We got to implement. We got to build a bunch of features. We've got to get to the MVP launch and whatnot. And that often requires a little bit less on the consulting or the external communication side. But I think it's a case-by-case thing. And it's not that I think of myself as one or the other; it's I'll scale up or down as necessary based on the context of the situation. So I am both, I think.
STEPH: Two for one, consulting and building great software. [laughs]
CHRIS: One-stop shopping, everything you need.
STEPH: So, I do have a couple of examples I can share that may provide some insight as to how we view consulting a little differently than necessarily focusing on implementation. I feel that I'm still reaching for that separation between consulting and developing. So I'm going to focus on the external communication and the implementation. I feel like those are the two areas that are trying to be divided in this particular question. But I do have some examples from thoughtbot discussions around consulting. So every so often, we get together at thoughtbot, and we have these internal discussions where we talk about the different consulting challenges that we have faced. And it's a really nice time where we get together, and we may discuss ongoing active consulting challenges and questions that we have, or it may be scenarios that have happened in the past. And so then we present that scenario to groups, and then we break off into smaller groups, and then everybody has an opportunity to talk through how they would react, what advice they would give, how they would approach it. And I have found those sessions to be incredibly helpful, but I think it could be fun to share some of those examples. Folks can think about how they would react to them. But I think this helps highlight why those consulting skills and then also building great software are so tightly coupled together.
So this first example focuses on building MVPs. So let's say that you're working with a client, and you've been focused on building an MVP, and the engagement is coming to a close in a few weeks. But the client is disappointed that there is a particular feature that they're really excited about that's not being included in the MVP, and they'd really like to know why that particular feature was cut. And they are worried that that will actually cause the business to fail if they don't have that feature in the MVP. So that's something that often comes up when we are focused on scoping MVPs to make sure that we are aligned with the client team to understand what is very important for the MVP and what can be a fast follow. And that can be a thorny one, especially if someone feels emotionally attached to a feature that is something that can be tricky to navigate. And how do you help the team reach a consensus that this feature really does need to be in the MVP, or it's okay that it doesn't need to go out now, and it can be in a future iteration?
And for another example, this one is more focused on communicating the progress of particular work and how it's going. So you can imagine this scenario coming from the client saying that they have been working with you for a few weeks and you've made good progress, but it feels like the last week things have stalled. And they don't understand why a particular feature is taking longer than expected to ship. And they haven't had any communication from the team regarding what's taking that feature a longer time to get out. So, again, these are just some scenarios that you can think through and imagine how then you would respond or handle each of these situations. But I think both of those are really great examples that focus on the more consulting aspect of our work and then when we need to have more external communication with teams, so then they feel confident that we are developing great software.
CHRIS: I think this is the first time that I've observed us giving homework to the listeners. But I think one thing that I'll highlight is we are talking about this in the context of consulting or being a consultant. But I think both of those examples that you gave, and more generally, most of these sort of conversations, actually apply pretty equally to working within an organization as an employee. You're still working on projects. You still have deadlines. You still need to ship things. You maybe aren't shipping as quickly as you need to; that maybe needs to get communicated to both internally within your team and externally within your larger organization. So yeah, I think these are broadly applicable, and I think, yeah, rolling them around in your head, let us know if you come up with any great solutions.
STEPH: And if folks are interested in these types of scenarios, then I'm happy to share some more of them. I could share them on Twitter or anywhere else that folks find helpful. But I really like that nuance there; I feel like there is a nuanced discussion between building great software and those consulting skills. So thanks, Matt, for submitting such a great question.
CHRIS: And as an aside, just to give a little more context on Matt, he runs a blog called Boring Rails, which, if you are not following it, is a wonderful, straightforward summary of small, useful tidbits of information in the Rails world that are boring, but that's part of what we love about Rails. So I highly recommend that as well, and we'll include a link in the show notes. But yeah, thank you so much, Matt. And on that note, shall we wrap up?
STEPH: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed on Twitter. And I'm @christoomey.
STEPH: And I’m @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Chris gives the deets on that new new – (he joined a startup!) and laments about the back button being so complicated. Steph talks about extracting an untrustworthy service and likens the scenario to making a Pixar movie. You don't wanna miss this hero's journey!
Transcript:
STEPH: Yes, I was getting text messages from you where you were like, “Go on without me.”
CHRIS: [laughs] Leave me behind!
STEPH: [laughs] No developer left behind!!
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I’m Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So, Steph, how's your week going?
STEPH: Hey, Chris. It's been a very busy week. There's been a lot going on. But the most delightful part of my week has been that Eric Bailey, another thoughtboter and also a former guest on this show, has a tiny, little baby bunny living in his backyard, and so he has been sharing updates about this little baby bunny. In fact, he's been sharing some pictures on Twitter as well. So I'll include a link in the show notes so other people can experience the joy. Also, the name of the bunny gets me every time. But they have named the bunny “Corndog.”
CHRIS: Checks out. It seems like a very obvious name for a small bunny.
STEPH: It gets me because that's such a big name. I don't know why it's a big name, but it feels like a big name for a little bunny.
CHRIS: I can say yeah, it's a cornball. Yeah, that's a large name. And so a tiny bunny is a...it's like Little John from Robin Hood. It's perfect.
STEPH: [chuckles] I kept referring to him as Corn Nugget, I guess, because of size. But yes, it’s not corn nugget; it’s Corndog. [chuckles] So watching Eric's little bunny has been delightful and a wonderful addition to the week. How about you? How has your week been?
CHRIS: My week has been great. I was off on vacation last week (so you had a guest on), which was fun to just take a week off and reset the system. But actually, this week has been interesting. It was my first full-time week with a new startup that I have joined. I think, yeah, that seems to be the truth in the world. So a bit of a shift from what I've been doing for the last year and a half, almost something like that. The reason there's hesitance in my voice is because I've actually been working with this organization for six months-ish, depending on how you count it. I've been having conversations, and then it’s slowly grown over time where it was just conversations, and then it was an afternoon a week, and then one day a week, and then two, and three. And finally, we decided we think we've got an idea. We've got a thing that we want to build. And so I am the developer on this team, but we are an early-stage startup trying to build something. I'm now full-time on the project. I rotated down the other projects that I was working on from a freelance consulting perspective, and now I'm trying something new. So it's a very different vibe. Even though I'd been working with the organization for a long time, this week just felt so much more real. And there was so much more space, so much more room for activities, having a full week to actually work on things. So yeah, it's very exciting, it’s very new, it's very early stage, so all of those things are true. But there are a lot of great aspects to that, and I'm super excited about it.
STEPH: That is some big news. That's a big change too. Well, I guess with consulting, there are the stresses that go into consulting and then changing projects and managing the projects that you're taking on. But then to joining a team and such an early startup team too...anytime, someone says startup life, I'm always like, well, tell me more. How calm is the startup life, or how uncalm is this startup life?
CHRIS: It's somewhere between calm and uncalm, I would say, but in a, I would say purposeful and intentional way. I was looking for...this has largely been true over the entire time that I was freelancing, but freelancing was a way for me to keep the lights on, and stay engaged with tech, and continue working, and frankly, have more conversations and meet more organizations. But I was looking for something that I could engage with a bit more. I was looking for, largely, something like this. So it definitely is occupying a different space in my head than, say, any individual consulting client where with consulting, I was pretty rigid, you know, these are the hours that I'm working. When I'm off the clock, I'm really not thinking about it too much. I'm responsive if I see an incident or something like that or if the database falls over; I’m going to look at that on the weekend but otherwise, largely not doing anything. Whereas with this project, I'm somewhat purposefully allowing it to have a little bit more space in my head off-hours, that sort of thing. And I'm more invested in the work. It's not just a thing that I'm doing, but it's a project that I believe in. It's something that I want to exist in the world. And so, I'm engaged with it in a different way in that manner. I'm also engineer number one, so I'm choosing all of the technologies and setting the standards. Thankfully, there's a lot of good thoughtbot material out there that I can link to, which is great. But yeah, so it's mostly within the context of what I think startups can be. The expectations and the way that the team is working is very reasonable. And I think it's more for my own self. I'm allowing it to occupy a little bit more of my space, but in a fun way so far.
STEPH: Well, along that line, in terms of choosing the tech stack and starting greenfield, I am curious to hear more about the type of project that you're going to be working on. But I'm also recognizing y'all may be in stealth mode. Is that where you're at, or can you talk a bit more about the type of work you'll be doing?
CHRIS: We're stealth-ish right now, I would say, partly because we're likely in the process of rebranding, and renaming, and things like that. So partly it's just like, oh, I probably shouldn't say that. But at some point, this will become public, and so at that point, I can probably be a little bit more open about it. But at the end of the day, we're building a financial product, FinTech sort of thing. And the tech stack is relatively straightforward. I'm actually using my preferred tech stack is...I got to choose, so it's Rails, Inertia, and Svelte with some TypeScript because why not? And I love it, and it's fantastic. I continue to believe deeply in that tech stack. So, yeah, that's most of what I think is good to say now. But I think over the coming weeks, I'll be able to say more and share more. And I certainly will be able to talk about the details of building and growing a team and things like that.
STEPH: Awesome. Yeah, you answered my other question too. I was going to ask what tech stack you chose.
CHRIS: I chose the tech stack, the one with the acronym, which I don't even know...the STIR stack I think we went with or something.
STEPH: I was about to say I don't remember the acronym. [laughs]
CHRIS: I think I never committed to an acronym previously, and then that was the one that got thrown around on the internet. I think I just was like, in the next episode of The Bike Shed, I'll choose an acronym so STIR, why not?
STEPH: I like it, causing a stir.
CHRIS: But yeah, so it's a pretty sizable shift in my life. But frankly, I don't even know exactly the shape that the coming weeks will have. So it will be interesting to report back as things evolve and as new concerns and considerations come up. But, yeah, we'll save that for future weeks. For now, what else is up in your world?
STEPH: Yeah, it's been an interesting week. There have been really two things on my mind, so one of them has been focused on writing a task that's going to process a sizable CSV. And then it's going to essentially enqueue a bunch of jobs and send off a bunch of data to other third-party systems. So that's been a big focus of the week. The other topic is what I'm going to call extracting an untrustworthy service into its own service. And I know that’s a bit vague, but I've got both of those topics. So which one would you want to hear about first?
CHRIS: I definitely want to hear about both. But because you veiled it in mystery and said, “An untrustworthy,” that one's just going to call to me a little bit more. So yeah, what about this extracting and untrustworthy service? What more can you say there?
STEPH: Good question. I'm glad that you picked the mysterious one and started there. That feels right. So this is a part of our codebase, and it's very related to also the task that I'm writing. So to provide a bit of context, this particular portion of the codebase manages a big part of where we are sending data from our application over to third-party systems. And it's a very important feature of our application. And it's also probably one of the gnarliest sections of our application in terms of there are tons of conditionals based on which type of service we're sending to or the discreet customer that we're sending it to, and any particular preferences that they need and how we're sending that data. And then there's also just a lot of room for ambiguity and errors. And when we are sending that data, was it actually successful? And what if it was successful, but we still got back error messages? What does that mean? Is that successful with warning? And so there are just a lot of unknowns.
It's also one of the less tested areas of the codebase. So even though it's important, we really don't feel confident making changes at this point until we've added some more test coverage. And testing it can be a beast because right now, we really just want to add some security around that section of the codebase. So we're often going for high-level tests, which are then our slower tests, but then also means it's hard to test the more granular aspects of that code. This is that untrustworthy section of the code in terms that we're a bit skittish to make changes, but yet it's a very active part of the codebase, so not the best place to be. But we also recognize that this part of the codebase would be really well-fitted to live outside of the application. It really doesn't need to live with the rest of the application. And there are other services that need to be able to talk to the service as well. So instead of having it grouped together, which -- It's funny. I see your eyebrows go up when I talk about -- For people who can't see, Chris raised his eyebrows when I talked about extracting this to another service. [chuckles]
CHRIS: That doesn't sound like me at all. I don't ever…
STEPH: [chuckles] And since we do have other services that need to be able to pull data or to talk to this particular portion of the codebase, we are looking to then move it out into its own application, so that way, it can stand alone. It can focus on this one task, and then other services can benefit from it as well. And there's been an interesting discussion around, well, we need to make changes to this codebase. And we also have some recognition that we need to make improvements. Do we go ahead and go heads down for a bit and improve the section of the codebase, add more test coverage, get to understand more of what this code does, where the risks are? Or do we go ahead and extract it in its current form to the new greenfield space and just essentially port it, and then we work on it from that space? And so, there's been a conversation around which one do we do first? And I'll tell you my thoughts, and then I'd love to hear yours.
As one of the primary individuals that's been working in this codebase, my stance has been let's leave it in place for now because I want to build some confidence around what this does. So I really want to have some confident understanding about the requirements, about when we extract this, what is that going to look like? But also, I feel like I'm in a place where I'm starting to understand the beast enough that I want to continue that progress and add some testing around it before then we just move it to this new location. And I can't decide if that's one of those decisions where like, I just feel too close to it, and extracting it feels risky to me. So I feel like we're adding on this extra level of complexity. Like, this is already code that's hard to understand. And then we're going to add this network connection on top of that where then we have to talk to it in a different way. And in my mind, that's adding another level of risk and another level of having to debug this service. So my current approach is let's leave it in place. Let's try to identify some low-hanging fruit. Let's go ahead and add some more tests. And I feel pretty good about that decision. I'm curious, what are your thoughts?
CHRIS: I have a bunch of them. The first is that the story that you're telling here feels like the hero's journey of software development. Like, all right, we got this gnarly bit of the code. It's super important. It's super complicated. It doesn't really have any test coverage for historical reasons that are complicated, but here we are. What do we do? That story feels so true. It feels like there are nine Pixar movies about it if Pixar made movies about writing code, and they would be great movies.
STEPH: That's amazing. [laughs] I would watch those movies.
CHRIS: I think of it like Katrina Owen’s therapeutic refactoring, which I feel like is probably my most referenced...It's one of my two most referenced talks that I bring up on the show all the time, but it is almost exactly about that sort of thing. We've got this gnarly piece of code. It's super important, but nobody really knows how it works. But we know it does work, which is an interesting bit. And so to the question of would you extract as is or would you try and shore it up before you extract it? I am 100% on the side that you are on, which is let's shore this thing up before we move it over. Because moving it over, like you said, that's going to add the additional complexity and failure modes of network latency, network timeouts, async disconnects, whatever, any of those complexities. That's another set of failure modes that you'll be introducing or just complexity and things that you have to think about. So that feels complicated. Also, there's probably a poor analogy that I have in my head. But imagine that you're moving, and your bedroom is just a complete mess, and you're like, oh, there are some old to-go food containers over there. And I haven't done my laundry in a couple of weeks. I'm just going to throw it all on a blanket and take it to the new house, and I'll figure out what I want to keep on the other side. It's like, that doesn't feel like the right move. I would definitely throw some things out before I move to a new house. So I definitely lean in to let's clean this up and understand it so that when it's in the new place, we have a slightly more contained, understood, manageable version of the software to try and extract to a service.
STEPH: I feel very judged for my moving style.
CHRIS: [laughs] I mean, obviously, with software, you're doing the one thing. But did I just describe exactly how you move house?
STEPH: [laughs]
CHRIS: To each their own now, you know, whatever works for you.
STEPH: No, I'm with you. I'm definitely the person that's going to clean up first before I put stuff in boxes. I'm going to try to give away as much stuff as possible.
CHRIS: It's a great time to just figure out what's true in your life or what's true in your software. I am intrigued. So yes, I did raise my eyebrows when you mentioned extracting a service and other services talking to each other. In particular, the way you described this piece of the system, I would be surprised if there weren't data requirements and/or transactional consistency things that you wanted to uphold. And that's one of the main things that causes me concern when we're extracting services is if this thing still needs to know about a bunch of different pieces of data and if it's going to make multiple updates to different records where if one succeeds and the other doesn't, we should roll back the whole thing. You lose all of that by moving to a service. And so that's where my broad…like, I'm always going to question if we're going to surface this. So I'm intrigued. Is this thing a very functional piece of your system where some data comes in, some stuff happens, and you get data out at the end of the day? Or is it more operating on related data within your system and potentially updating records after the fact?
STEPH: Yeah, that's a great point. For this area of the codebase, it does feel more functional in terms that we have data, and we essentially want to notify other people that we have this data, and then we want to share it with them. So there is still that coupling of where we need access to those values. So if we're sending it over to the new system, either that new system needs to be able to read from the same database, or we have to send all of those details over to the new system. So then it can build up the message and then send it over to the other third-party systems. So it feels more functional, but there are still some of those requirements that we need to think about.
CHRIS: Okay. That definitely clarifies things. And I wouldn't say that I have a unified theory of services. But what you're describing feels like the type of service that I'm more open to. It sounds almost like a SendGrid where I want to deal with all of my application data. And then I send a bunch of structured data over to SendGrid, and their job is to send an email and retry as necessary or send a text message or even do a voice call if it's Twilio or something like that. And so they're really good at those weird things and the failure modes that exist in those communication channels. But that's not logic that I need to live in my app. And so what you're describing there definitely makes sense as something that could comfortably be extracted to a service and not have more complexity be introduced by that. You did mention something about services talking to services and other things. So is the idea that this would be extracted, then other parts of the system would also use it to communicate out messages or something like that.
STEPH: Yeah, one of the motivations for extracting this is because we have another application that also wants to perform similar behavior. So now we have two applications that need to do similar work, and they feel more in that line of functional work where it would be great if we could share this. But it doesn't quite fit the kind of extraction where we'd pull it out into a gem and make it shareable. It feels more appropriate for it to be its own service and also to capture feedback. Because the other nice thing that we want to include, which we're doing now as well, is capturing feedback whenever we are sending that data over to other systems. We want to know, hey, how did it go? Did you receive that successfully, but maybe with some warnings or some errors? Maybe you accepted the data, but then you also gave us a response about something else.
I think one really important question to consider is when is it trustworthy enough to extract? Because we know we're headed down this path. So at what point are we ready to then go ahead and extract this over to its own service? And that was the more interesting conversation because I think those who were in favor of extracting it now had the concern that we can't add test coverage in its current form. So my first response was if I need to make changes and I can't add test coverage, I will sound the alarm, and we will reconsider. But my goal right now is to turn this untrustworthy service into a little more trust. Just dial up the trust a little bit further, and then we can port this over. So then, as we do add some network complexities on top of this, we will at least have more faith and understanding the underlying behavior of the system. But then we still want to understand that it's not going to be perfect. And we're not going to wait until it's perfect before we do extract it. But that's the tale or the mysterious extracting an untrustworthy service. So I think it will be an interesting journey. And it was a very interesting conversation that I was excited to have your thoughts because I know you and I often lean so far away from extracting stuff to a service that it was an interesting conversation to have around; well, this code is a bit of a mess. When do we start to tackle that mess?
CHRIS: I like that you didn't even frame it necessarily in terms of that, but I still definitely got there. I was like, wait, wait, wait, but let's actually talk about whether or not. But this is definitely the sort of thing that I think makes sense to consider as a service extraction. I think the question that you're asking around when do we feel good enough in its current state to do the extraction? That's right on the line of art in the software world as opposed to the science of this is how we connect HTTP. So I'm very interested to see where you get to both with that question and how you actually make that decision and then how the extraction goes. And I imagine this will be the sort of thing that goes on for a bit of time. So it feels like we could make a mini-story arc that'll span a couple of episodes, and you can follow the characters on their journey. This is the Pixar movie. We're making a Pixar movie.
STEPH: We're making a Pixar movie. They're missing an entire genre for their Pixar movies. If they just appeal to developers, that'd be wonderful. I’m so in for that. We should write Pixar.
CHRIS: There are more developers every day, so think Hackers meets Up. That's what we're going for. We're just going to fuse those two together. It's going to pull at your heartstrings, but it's also going to talk about hacking the Gibson. It's going to be great.
STEPH: Oh man, you reached for the most heartfelt one going for Up. That one has the toughest beginning. [laughs]
CHRIS: That's what I'm going for here.
STEPH: For anyone that hasn't seen Up, you can go watch the beginning of it. Just be prepared.
CHRIS: And if anyone hasn't seen Hackers, also be prepared. [laughs]
STEPH: Which is me. I haven't seen Hackers.
CHRIS: All right. You still haven’t. All right, that's a thing we need to work on.
STEPH: [chuckles]
CHRIS: But cool. Okay. So we're going to work on the Pixar movie. You're going to update us because we need to actually gather the information. But yeah, we'll come back to that in future episodes. But shifting gears just a little bit, actually, I have a couple of things, two small things, and then one more sizable thing that is more just like, I'm confused. So yeah, we're going to go in that order. Thing number one is, we are, again, it's a very early-stage startup that I'm working with. And part of what we're doing that I really like is that we are talking to potential customers, potential end-users of the application doing lots of user interviews, which is a thing that I have more from a distance seen often. But now, because we're actually working as a distributed team, we're remote because that's the nature of the world right now. We'll probably meet each other in person at some point, but that's down the road. But all of these conversations are happening over Zoom calls, all of these user interviews. And so I made the suggestion that we use a tool to actually manage those. And so we're using a tool called EnjoyHQ, I think is the name of it. There's another similar tool called Aurelius. We can put the links in the show notes for both of those. But what it does is it basically makes the video available after the fact. I think it automatically transcribes it, and then it allows you to annotate and add notes and things like that, which is great for aggregating this body of information that we're collecting over time as we do all of these user interviews and start to tag common themes that we're seeing. And bringing them together will also allow us to revisit them. But for me as the developer, I've been to a few of them, but not as many as the rest of the team. And what's great is I've now taken to...as I'm doing more mundane…cleaning up email or whatever sort of tasks, I will just put on one of these videos in the background at 2X. And what's great is I can now just hear literally the voice of the users of the application. What are the words that they're choosing? How are they talking about it? What matters to them? What doesn't matter to them? What do they get really passionate about? And it's been just such a wonderful thing to have available. It's almost like a podcast of our app that we're building, and it's like, that's awesome.
STEPH: I love that. Yeah, I would love to be able to hear from people that are using the application. And like you mentioned, just turn it on in the background so that way I can process what they're saying. But then, I don't know, depending on what they're saying, maybe it needs full attention or otherwise, maybe you're able to just absorb little bits and pieces while you're hacking away on something else. And now I've got the word hacking stuck in my mind. [laughs]
CHRIS: It's the best word to describe what we do. Yeah, there's definitely a version of someone should be reviewing...someone's actually doing the interview, so they're going to be very close to it. And then there maybe is a secondary someone's watching it closely and trying to glean, and categorize, and all of that. And I could potentially be any one of those, but I really like this version of this is just a background soundtrack that I'm exposing myself to so that I'm all the more immersed in the problem space that we are working on. And it's one of the things that I fundamentally believe about software development is developers shouldn't be hidden in the corner just writing code. We should always care about what the end-user wants, and what better way to get there than to actually hear their voice and hear the words that they're using. So this is a magical little trick that I have now found that I'm like, oh my God, this is amazing.
STEPH: Funny enough, I had a similar experience this past week where I realized I was feeling very disconnected from the people that are using the application and also the people that are setting priorities for the work that our team is doing. And that is something that I'm very accustomed to with thoughtbot that we always want to be part of the team. We're not necessarily just we can churn through a backlog. But we also really want to be in touch with product decisions, and share opinions, and then also be in touch with users too. So I had some similar revelations this week where I realized I was feeling very disconnected where I was picking up tickets, and I was like, I don't really understand why this is great or how this is helpful. And so, I shared that with the team, and someone encouraged me to attend a specific meeting. And that was wonderful because then I got to hear from the people who were creating those tickets and then giving them a high priority because something was urgent and why it was urgent. And having that insight was huge to me. And I realized that it was incredibly motivational as well. Because then I'm like, yes, okay, I understand how this is going to impact someone. And I'm now very encouraged to get this done.
CHRIS: I think that idea, that ethos of wanting to get into the user persona and understand that better is a very strong thoughtbot ideal. So it's unsurprising both of us share that. But yeah, that was a really great thing and particularly a tool that facilitated that in a really straightforward way, which I appreciated. Another thing that I used this week, which I've talked about at length in a previous episode, so we can link to that episode, is a project called dry-monad. So dry-rb is, I think, a collection of gems, and dry-monad is one that allows for defining sequential tasks, so tasks where you have to do a bunch of steps in order and the outcome of a previous step will be the input of the next part of the process. And it can fail in a bunch of ways like, okay, fetch this thing from the network and then look up a user based on that. And then get the user's profile, which may or may not exist, and then assuming that all of that's gone well, actually persist this new record to the database. And that sort of sequential processing is really finicky to write. And so I actually had written that thing manually. And part of it was I'd wrapped the whole thing in a database transaction, but I was trying to make it so that if something went wrong, I would manually roll back the transaction. And then I wanted to return an object to the caller that indicated that things had failed and an error message or something like that. And that was actually really hard to do because of the way transactions work.
The mechanism that I was using was apparently deprecated in Rails. And so the whole thing was just kind of confusing, and it was a bit messy in the code. And I knew in the back of my mind that dry-monad exists. I've used it before. I've really enjoyed it. But I was trying to minimize the amount of new technologies that I'm bringing in this early on in the project. It's like, yeah, I'll bring that in when I need it. But finally, I was like, you know what? I think I've reached that point. I grabbed it, brought it in, and I haven't worked with it in a while, but I was very quickly able to refactor my class to use dry-monad. It cleaned it up immensely. The tests remained identical, which was really interesting. I didn't have to change anything on the test side. And one of my tests was failing before and then passed because of the introduction of dry-monad. And yeah, it was just like a win-win-win, and also the fact that I was able to revisit dry-monad as a library and just get running with it again was really interesting to me because it is a bit complicated and interesting in how it works. But again, I was able to just sort of pick it up and run with it. So that was wonderful. And I will now all the more staunchly suggest that folks reach for that when they have more complex, procedural type code that they need to write.
STEPH: I remember you highlighting dry-monad before in previous episodes and talking about the pain of writing that sort of procedural code, but then we also want to return something helpful. And I looked at it briefly, but I haven't used it. But now that you are reminding me of it, I'm very interested in it because I agree that process is difficult to write, at least in Ruby it's difficult to write. I understand the hesitancy that you have around bringing something in that's new. But then if you recognize that it's going to be a theme in your application around this is something that we're going to do a fair amount, and we want to do it in a clean, efficient way, then it starts to feel more reasonable to say, “Okay, I'm bringing in something new, but it is representative of how we want to handle this step or this type of process in our application.” So it's not just bringing in a gem to handle one small area of the code, something that we could have written, but it is elevating our process and our system.
CHRIS: Yup. Indeed. In this case, these are command objects within the system. That's actually the name that I got from the creator of the project. That was his suggestion on Twitter as to what to call these objects. And it's a pattern that I do want to encode and has become the standard within the application for any of these more complex processing tasks. So, again, we'll link to the previous episode. I talked about it in more depth and the ideas behind it. Railway Oriented Programming is a phrase that's used, which talks about how to sequence failures or successes and whatnot together. And there's some good material behind it, more general, but yeah, wonderful, little library.
STEPH: What is Railway Oriented Programming? I'm not familiar with that term.
CHRIS: That refers to the sequential processing that I was describing. So imagine that you have a bunch of different steps where first you fetch from the network to get this record, then using what you got back, you look up a user in your database, then you fetch that user's profile. Then you do something else. Each of those steps along the way could fail. And so the railway metaphor is that the track is going forward, but if at any point you branch off the track because of a failure, then you're in the failure track, and that's a different thing. And so with dry-monad, or other similar Railway Oriented Programming libraries...monad, I think, is the actual word in there. And I wish it weren't in there because it's such a complicated word. But that idea is the fundamental, underlying thing that's going on there. And it is conceptually somewhat complicated, but if you don't try and think about the category theory behind it, and you're just like, well, I want to do a bunch of stuff, and it may fail at any point, and I want to return either a success message with everything having gone well or an error message at the point that it failed and stopped processing, then that's what this thing does, and it's fantastic at it.
STEPH: Okay, cool. Thank you.
CHRIS: You are welcome. And I think there's a bit more in the previous episode as well. So if that sounded interesting to anyone, I think I rambled more in a previous episode about it and probably better because I feel like I was more prepared that time than this time.
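As a rough sketch of the kind of command object Chris is describing: the Do notation below (Success, Failure, and yield) is dry-monads' documented interface, but the class name, the step names, and the ExternalApi client are hypothetical stand-ins rather than anything from his actual codebase.

    require "dry/monads"

    class SyncUserProfile
      include Dry::Monads[:result, :do]

      # Each step returns Success(value) or Failure(error); `yield` unwraps a
      # Success or short-circuits the whole call with the first Failure.
      def call(remote_id)
        payload = yield fetch_from_network(remote_id)
        user    = yield find_user(payload)
        profile = yield fetch_profile(user)
        Success(profile)
      end

      private

      def fetch_from_network(remote_id)
        response = ExternalApi.fetch(remote_id) # hypothetical API client
        response ? Success(response) : Failure(:network_error)
      end

      def find_user(payload)
        user = User.find_by(external_id: payload[:user_id])
        user ? Success(user) : Failure(:user_not_found)
      end

      def fetch_profile(user)
        user.profile ? Success(user.profile) : Failure(:profile_missing)
      end
    end

The caller gets back either a Success wrapping the final value or the Failure from whichever step went wrong, which is the railway behavior described above.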
STEPH: Well, along those lines of running a process and then being able to fail at any moment, I'm going to circle back to that other topic that I highlighted where most of my week has been focused on writing a task that is processing a CSV, something probably a number of us have done at some point in our career but processing a number of rows, and then sending and queuing jobs to then send data to a third party system. And it was really interesting less so because of the processing of the CSV and then enqueuing jobs. But it was more of the reporting that went behind it and the process that went into writing this task. So Joël and I were pairing on this task. Joël being another thoughtboter and also a former guest on this show. And we had an interesting process of where we started with one, let's do the simplest thing. Let's get it done. Let's also check through the CSV because you're often going to find stuff that doesn't align with what you expect it to when it's a CSV that's provided from an external source. One of the risks that we highlighted right away was how are we going to get the CSV on the server? Because we just have this one CSV that we need to run. We don't want to add it to the repo, and we can't generate it ourselves. So how are we actually going to get the CSV in a place that we can run this in a production mode? I learned that I could pass a CSV as standard input into the Rake task. So then I could actually run it locally because we're using AWS. So I could inform AWS to run this task, but then I could actually stream the CSV into the task that way. And that was really nice because then we no longer had the question of how are we going to get this file on the server?
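A minimal sketch of the stdin approach Steph describes might look like the following; the namespace, task, and job names are made up for illustration, not the actual client code.

    # lib/tasks/partner_backfill.rake
    require "csv"

    namespace :backfill do
      desc "Enqueue a job for each row of a CSV piped in on standard input"
      task notify_partners: :environment do
        rows = CSV.parse($stdin.read, headers: true)

        rows.each do |row|
          # Enqueue rather than doing the work inline so each row's failure is isolated
          PartnerNotificationJob.perform_later(row.to_h)
        end

        puts "Enqueued #{rows.count} rows"
      end
    end

You could then run it with something like: cat export.csv | bin/rails backfill:notify_partners, so the file never has to be committed to the repo or copied onto the server first.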
CHRIS: That's interesting. I didn't know...Yeah, the streaming of it from local to remote is an interesting one. On Heroku, I will typically open up a bash prompt, so heroku run bash. And then, I will curl the file down onto the server and then run it there. But that’s an ephemeral dyno. It may die at any point. There are various things that could go wrong there. So that's always interesting. I imagine a similar thing could be done, but I don't know, actually, if you can directly stream into a Heroku dyno like that, which is an even more straightforward one because I end up having to bounce a file off of a random place. Like, I'll often put it in a Gist or a Pastebin or something like that. And then I'll curl it down to the server, and yeah, this is interesting.
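[For reference, the standard-input approach Steph describes can look roughly like this. The task and job names are hypothetical, and the exact way you invoke it on AWS will vary:

# lib/tasks/import.rake
require "csv"

namespace :import do
  desc "Process a CSV streamed in on standard input"
  task contacts: :environment do
    # The CSV is piped into the process, e.g.:
    #   cat contacts.csv | bin/rails import:contacts
    CSV.new($stdin, headers: true).each do |row|
      SyncContactJob.perform_later(row.to_h) # hypothetical job
    end
  end
end]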
STEPH: Yeah, I'm also not sure of the specifics of how it would work with Heroku. But it was a really nice process for us to be able to use versus having to then read the file from, like you mentioned, curl it from somewhere else and then be able to parse it that way. Two other things that were top of mind for working on this task are one, idempotency. You're going to rerun it, friends. At some point, your task is either going to bomb, and it's going to err. And then you're going to have to triage and run it again. Or whoever requested that you run this task and they said, “Oh, it's just temporary. We're just going to run it once,” that's not true. You're going to run it again. So keep in mind how to make that safe, that you can rerun it. And then that won't be its own scenario that then you have to triage and figure out.
CHRIS: Idempotency is one of those critical ideas, and I just wish the word were different. I feel ridiculous every time I say it. And I feel like I have to push my glasses up on my nose, and I'm like, well, have we considered idempotency for this? But it's such a good idea. And it's the sort of thing that...you're totally right. Every time you're doing this sort of thing, it is something that you should consider. And we use GET requests, and they have rules about it. And it's such a good idea and such an important idea. And I just wish the word were different so that I felt more comfortable using it in polite conversation.
STEPH: [laughs] I don't know why… and this may be sharing too much of myself. But the song Under Pressure by Queen and David Bowie has been in my head. But I've been replacing the under pressure part with idempotent. So it's [singing] under pressure, and I've been [singing] idempotent. [laughter] And that has just been my song for the week.
CHRIS: You've normalized it enough for me now that I'll just hear you singing it every time, and I'll be like, this is a nonsense word. We're fine. We can just go – [laughs]
STEPH: That's what I'm here for, to turn technical terms into nonsense. [laughs]
CHRIS: It's really what this show is about at the end of the day. So you are our hero.
STEPH: I just have to work on more lyrics for the song. I really just have that one line, that one hook. [laughs]
CHRIS: Now I just want to scrap the rest of the episode and just come up with lyrics to idempotent. [chuckles] But maybe we don't do that.
STEPH: [laughs]
CHRIS: Maybe that'll be after the credits B-roll, something like that.
STEPH: The other way I phrase that question is I'm like, what happens when it fails? And that always feels like a safe way. Because if I ask someone like, “Hey, is this idempotent?” It feels more natural for people to be like, “Oh, it's fine. It doesn't need to be.” But if you say, “What happens when it fails?” It's harder for someone to say, “Oh, it's never going to fail. [laughs] There is nothing that could go wrong.” So it feels like a more intentional question in regards to how are we going to handle this when we need to rerun it? The other part that really came in handy was the fact that we spent more time on the reporting as well. So we really wanted to know what happened when we were processing all of these rows. So were there any invalid rows? And if we do encounter an invalid row, do we want to just stop processing and stop right there, or do we want to keep moving? Do we have any rows that didn't match a row in our database, and how do we capture those? And because it's idempotent, maybe we want to capture skipped rows so that way, when we rerun it, we can see okay, well, we skipped, you know, a thousand rows because we'd already run them before. And all of that reporting has been so handy because we're also using this to triage. Like, hey, we're sending a bunch of messages to this third-party system. We reach out to that third-party group, and we say, “Hey, we sent you all of this. This is how it went. Let us know how it went on your end.” And then, we can have a more collaborative discussion around what happened on their end versus what happened on our end, and then we can make tweaks to each system.
So overall, it felt like more of that run-of-the-mill task where you're going to write a Rake task, you're going to parse a CSV. But something about the reporting really resonated with me because in the past, when I've written Rake tasks, I've leaned more on the idea that this is temporary, so it's okay if it's not great. But the reporting has been so crucial that from now on, I always want reporting from any Rake task that I run, and I want to know what happened. And then I also want to be able to rerun it. And I'm very wary of any time that someone says, “Oh, this is temporary,” because then I also think that leads to interesting discussions around testing. Because initially, when we started this, we were under some pressure. Hey, that goes back to my song. We were under some pressure for writing this particular task. And then the question came up: do we want to test it? And to be frank, testing a Rake task isn't great; it’s not fun, which is one of the reasons we get out of a Rake task as quickly as possible and put it into a class. So that was also one of the drivers, one of the conversations that went against, like, oh, this is temporary. So it's okay if it's not the best code. It's okay if it's not tested. And then I was more of an advocate for, like, well, I don't feel good about this. And I'm rerunning the Rake task every time I want to confirm that the changes that I've made are correct. And so once I hit that manual labor point where I'm like, okay, I am testing this; I just don't have automated tests for it, that's when I actually started adding test coverage around it.
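[A sketch of the shape Steph is describing: get out of the Rake task quickly, put the work into a class you can test, make reruns safe, and report what happened. All of the names here are hypothetical:

require "csv"

class ContactImport
  Report = Struct.new(:enqueued, :skipped, :invalid_rows)

  def initialize(io)
    @io = io
  end

  def run
    report = Report.new(0, 0, [])

    CSV.new(@io, headers: true).each do |row|
      if row["email"].blank?
        report.invalid_rows << row.to_h                        # capture bad input for the report
      elsif ProcessedContact.exists?(email: row["email"])      # already handled: safe to rerun
        report.skipped += 1
      else
        SyncContactJob.perform_later(row.to_h)
        report.enqueued += 1
      end
    end

    report
  end
end

# The Rake task stays thin and just prints the report:
#   task contacts: :environment do
#     report = ContactImport.new($stdin).run
#     puts "enqueued=#{report.enqueued} skipped=#{report.skipped} invalid=#{report.invalid_rows.count}"
#   end]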
CHRIS: I'm so excited that we have transcripts, particularly for that last minute that you were just talking about, because I feel like that was a mini master class in software development. And more generally, there's been almost like a poetic something to this: the two different topics that you’ve brought up today are the sort of mundane, very real things that actual software development is made almost entirely of. It's not often that we're just starting with a greenfield app and building a new thing. I happen to be doing that this week, but it's rare. It's going to be over very soon. And then I'm going to be in the world of, oh, we have to backfill a bunch of data. How are we going to do that? Or we have this portion of the code that, frankly, we should have been testing more, but we didn't. How do we deal with that? And these murky, gray areas where there isn't a clear answer and you have to go with intuition, and you have to...a bunch of the things that you just listed are these good heuristics that you have around how you think about software. I'm just really excited for the transcript of that because that was awesome.
STEPH: I'm so glad you enjoyed it because I think it's not until right now where I'm processing this and talking about it with you that it is...I was trying to think earlier, like, why is this so interesting? Why am I so excited to bring this here to this conversation? And I think it is for the reasons exactly that you said, that it does feel like one of these...this is a mundane task. We write a task; we process some things; we send some data. We do that all the time. But then it's all the other bits around it, and the other ways that we've been bitten, and how we avoid those scenarios, and then how we identify a risk like when someone says, “Oh, it's temporary. It's fine.” That part, I think, is always the very interesting aspect of writing software.
CHRIS: Do you consider this sort of stuff the distraction from the work or actually the work? And in my experience, this makes up a lot of the work. And treating it like what you were saying about testing like, “Yeah, that thing where you're telling me that it's going to be temporary and we probably don't need to test it, I've been told that before,” and I just want to spot-check that real quick. Or what you said of the when I was manually testing, and I crossed a threshold where I'd done that enough, that now adding a test harness around that totally makes sense. It's worth the investment at that point. Those little heuristics that we build up over time are the things that are hard to get. And so, yeah, I love that conversation.
STEPH: I really like how you also asked and then responded to that question around is this distraction, or is this the work? And I am wholeheartedly with you that this is the work. This is the part of the work that I do find interesting, and knowing when to make those trade-offs, and when you've hit a decision point, and which direction you're going to go, and being able to recognize something that otherwise could have been a fire. It could have been much worse in terms of if we'd built a task that wasn't robust. Because of course, then the second time that I ran it, you know, emphasis on the second time that I ran it because we needed to do it again to process more data. It erred halfway through, and I panicked in the moment. But then I was like, oh yeah, this is fine. We planned for this. This was in the game plan. So it goes back to that we want the calm execution. We want to plan so we are back in that calm state. We want calm software. And this feels very in line with how do we make this more calm?
CHRIS: I love that theme that you're bringing up there, which I think is a theme that we've touched on a bunch of times. I think we actually have an episode called Seeking Calm. And I think that little title there, as much as I love the nonsense titles that we have for most of the episodes, that one I think really captures the theme that a lot of what we talk about is in orbit around yeah, we want it to be calm. We don't want things to be on fire every day. And what does it look like to build software with that in mind?
STEPH: Yeah. I also love that theme. And I like that it's something that we have surfaced and then really stuck to because it resonates deeply with me. But that's pretty much all I have for my Rake task adventure. What else is new in your world?
CHRIS: Well, I have one more hopefully quick thing. I'm going to try and boil this down to its essence, but I ran into, let's call it, a subtlety. It's not an issue. It wasn't a bug per se. But looping back to the previous episode that you and I recorded together, we talked a bunch about multi-step forms, which was a great conversation in and of itself. But I eventually completed the feature that I was working on and put it up for acceptance. And the product manager who was reviewing it highlighted a couple of different things. They recorded a video, which, as an aside, I love as a way to do acceptance and show what's going on and talk through it. There were a couple of smaller issues, which I was able to resolve very quickly, but there was one more interesting one that I've extracted as future work because it was too complex to try and solve in the moment. But basically, what's happening is imagine that we have a two-step form. So there's the first page of the form. The first form that you see is for your name. So it's just an input that says, “Name,” and you fill in your name and then you hit continue. That posts to the server. We save off that data. And then, we redirect you to the next page on the form, which is, say, phone number. So two steps. We start with name; we go to phone number. Now imagine you type in your name, you hit continue, and everything processes correctly. You end up on the phone number page, but you hit back. What do you think happens?
STEPH: I would expect to go back to the name field and probably expect my name to be populated but would also be fine if it's not.
CHRIS: I like that you would be fine with the fact that it's not, if it's not, because it's not is the answer. And what's unfortunate is so if someone goes back, they will see the unpopulated form, so not filled out. But if they reload at that point, we will serialize down the value and pre-fill the input with their saved data. And so that inconsistency is not great. It's all the more unfortunate because as I tried to resolve it, I'm like, oh, okay, this feels like a solvable thing. I just need to tell Chrome, “Hey, if someone hits the back button, do a better thing than what you're doing.” I needed a way to instruct Chrome or whichever browser because this should be a standards-level thing. And there are things in the HTTP spec about this. So there's the Cache-Control Header is one of the headers that you can send down with a response. And there's a bunch of different values that can be in there, no-cache, no-store. There's also the…I want to say it's the max-age, or I think it's Expires. That's a different header. But you can set it to have an expiry that's just already expired. There's also a Pragma, which you can say no-cache. Some of these are standard. Some of these are not standard. Chrome ignores all of them. Chrome’s just like, “Nevermind.” So the idea is that those headers are intended to inform intermediate proxies.
Say you have a caching layer, so you're using Fastly or CloudFront or something like that. When that service fetches the page from your backend, from your actual, say, Rails app, then it will look at that header and say, “Should I hold onto this for a little while or not? Is it public, or is it private? What should I do as an intermediate caching proxy?” Ideally, Chrome would also look at those and say, like…there should be a version of me being able to tell Chrome, “Listen, if someone hits the back button, please go to the server and ask for it.” Like, I'll take the second of latency that that introduces in navigating back because I always want to show them the correct data. Unfortunately, I have not found a way to do that. There's a bunch of things on Stack Overflow and other places of JavaScript solutions where I can listen to the window.popstate event and then force location.reload. But that feels like a pile of hacks that I don't want to get into. It feels like it will be very inconsistent between browsers. So I am still searching for a solution. But I would like to figure something out here.
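[For reference, the header experiments Chris is describing look roughly like this in a Rails controller. As he says, browsers may keep serving their back/forward cache regardless, so treat this as the attempt rather than the fix; the controller name is made up:

class StepsController < ApplicationController
  def show
    # Ask caches (and, ideally, the browser) not to store this response.
    response.headers["Cache-Control"] = "no-cache, no-store, must-revalidate"
    response.headers["Pragma"]        = "no-cache"   # non-standard, but widely recognized
    response.headers["Expires"]       = "0"          # an already-expired expiry
  end
end]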
As a more pointed version of this to try and explain an example where this could happen, imagine that you've got the header of your application, and in it, you have a sign-out button. And so that sign out is going to delete to the session's endpoint. So you're deleting your session. And after that, you get redirected to the login form. If you then hit back, you will be taken back to the browser’s cached version of the previous logged-in state page that you were at. This is probably fine in a lot of cases. If you reload, you can't do any nefarious action at that point because you are logged out. But you are seeing potentially sensitive information. So imagine that you log in in a cafe, you go through Gmail, or whatever, or your bank, then you log out, you walk away. If you leave that page up and someone hits back, they can now see what was on the page. And part of that particular version, I read a bunch of backstory about that on the Inertia repo because someone posted this as an issue against Inertia as a framework. And the Inertia team...and I really love how they handle these sorts of things. So they were very kind, very welcoming to the issue but also said, “Actually, we're doing...like, this isn't us. But let's talk about it,” and gave a ton of detail and went through the HTTP spec. And it's a fantastic issue as a read. It's like a fun bedtime reading sort of thing to learn about how the internet works. But the Inertia crew really, really cares about being spec-compliant and doing the right thing. So, unfortunately, this is outside of their purview as well. But yeah, I don't have a solution, and it makes me sad.
STEPH: I liked that second example that you provided because I feel like I see that one more commonly when I'm on an application, and I don't know why. But I hit back, and then it shows that I'm signed in, and I'm like, that's a lie, I'm not signed in. I also really appreciate how Inertia is responding so kindly and welcoming to folks and then providing such thoughtful responses. That sounds immensely helpful. I don't know, yeah, I am also interested in this. It's something that I haven't worked with in a while, so I don't have any grand ideas at the moment. So I'm also curious if other people have run into this and how they've approached it.
CHRIS: Yeah. If we're being honest, partly I wanted to share this with you, but also I wanted to say this into a microphone, and then hopefully someone out there on the internet knows an answer. I've tried, I think, all of the normal things, all of the different variations of headers. I haven't actually poked at the JavaScript things yet, but that's probably the direction I'm going. But if anyone out there has an idea, I would absolutely love it. I think in my mind, the ideal version of this is if I'm making GET requests and I'm clicking around on a page, it's perfectly fine for Chrome to use its cached version of the previous page because, sure, that's fine. It may actually be stale just because it's been a few seconds and something's changed on the server, but I'm willing to accept that. But if I've posted, or patched, or deleted, or done any action that by definition should be changing data on the server, then I would love for a way to invalidate Chrome's back cache, so its version of the pages that it's restoring when I'm hitting back. I'd love that as the heuristic to get to. I don't know if I can get there. My sense says Chrome’s like, “No, I want to go fast. That's all I care about.” [chuckles] I’m like, all right. Well, I get that vibe but --
STEPH: Yeah, that’s a nice, succinct way to say that if I've changed data, then I want to invalidate that browser cache, so then we don't show them the stale, empty form, and we actually show them the name that they entered on the form.
CHRIS: As we know, though, cache invalidation is one of the very easy things to do in software development. So I'm sure my naive, quick idea is very easy to implement and will have no edge cases of its own.
STEPH: Well, this will be our parallel Pixar movie. We have one that we highlighted earlier, and this will be the other one, The Cache Buster. I'm not great with titles. [laughs] This will be our other Pixar movie.
CHRIS: Buster the Lonely Cache. Yep.
STEPH: All right. Well, in parallel, we'll work on Buster the Lonely Cache. Is that the name of this?
CHRIS: Yep.
STEPH: Cool. We'll work on that script. And in the meantime, I'll also think about it if I encounter this or come up with some ideas and share them with you. And then also if other people have any ideas, that'd be fantastic.
CHRIS: That would be fantastic.
STEPH: Yes. Please write in to help Buster with the lonely cache, which, wait; I don't get it. Why is it the lonely cache?
CHRIS: Because the cache has been busted and evicted. So he's got no friends. There's nothing...There's no data left. I don't know.
STEPH: [laughs]
CHRIS: I came up with it real quick. I don't stand by it. It's not a great idea, but we'll workshop it. It'll be fine.
STEPH: That's true. Yeah, we'll go through it. I'm asking too many questions for a very quick creative. We're in the creative space, not the critical space. But please write in to help Buster figure out [laughs] the lonely cache or how to bust the cache. Oh, goodness. I'm done with my jokes for today. I'll try to stop.
CHRIS: I believe that's a perfect note. Shall we wrap up?
STEPH: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed on Twitter. And I'm @christoomey.
STEPH: And I’m @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
Nate Berkopec is the author of the Complete Guide to Rails Performance, the creator of the Rails Performance Workshop, and the co-maintainer of Puma. He talks with Steph about being known as "The Rails Speed Guy," and how he ended up with that title, publishing content, working on workshops, and also contributing to open source projects. (You could say he's kind of a busy guy!)
Transcript:
STEPH: All right. I'll kick us off with our fancy intro. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I’m Steph Viccari. And this week, Chris is taking a break. But while he's away, I'm joined by Nate Berkopec, who is the owner of Speedshop, a Ruby on Rails performance consultancy. And, Nate, in addition to running a consultancy, you're the co-maintainer of Puma. You're also an author as you wrote a book called The Complete Guide to Rails Performance. And you run the workshop called The Rails Performance Workshop. So, Nate, I'm sensing a theme here.
NATE: Yeah, make code go fast.
STEPH: And you've been doing that for quite a while, haven't you?
NATE: Yeah. It's pretty much been since 2015, or so I think. It all started when I actually wrote a blog post about Turbolinks that got a lot of pick up. My hot take at the time was that Turbolinks is actually a good thing. That take has since become uncontroversial, but it was quite controversial in 2015. So I got a lot of pick up on that, and I realized I liked working on performance, and people seem to want to hear about it. So I've been in that groove ever since.
STEPH: When you started down the path of really focusing on performance, were you running your own consultancy at that point, or were you working for someone else?
NATE: I would say it didn't really kick off until I actually published The Complete Guide to Rails Performance. So after that came out, which was, I think, March of 2016…I hope I’m getting that right. It wasn't until after that point when it was like, oh, I'm the Rails performance guy now. And I started getting emails inbound about that. I didn't really have any time when I was actually working on the CGRP to do that sort of thing. I just made that my full-time job to actually write, and market, and publish that. So it wasn't until after that that I was like, oh, I'm a performance consultant now. This is the lane I've driven myself into. I don't think I really had that as a strategy when I was writing the book. I wasn't like, okay, this is what I'm going to do. I'm going to build some reputation around this, and then that'll help me be a better consultant with this. But that's what ended up happening.
STEPH: I see. So it sounds like it really started more as a passion and something that you wanted to share. And it has manifested to this point where you are the speed guy.
NATE: Yeah, I think you could say that. I think when I started writing about it, I just knew...I liked it. I liked the work of performance. In a lot of ways, performance is a much more concrete discipline than a lot of other sub-disciplines of programming where I joke my job is number go down. It's very measurable, and it's very clear when you've made a difference. You can say, “Hey, this number was this, and now it's this. Look what I did.” And I always loved that concreteness of performance work. It makes it actually a lot more like a real kind of engineering discipline where I think of performance engineering as clarifying requirements and the limitations and then building a project that meets the requirements while staying within those limitations and constraints. And that's often not quite as clear for other disciplines like general feature work. It's kind of hard to say sometimes, like, did you actually make the user's life better by implementing such and such? That's more of a guess. That's more of a less clear relationship. And with performance, nobody's going to wake up ten years from today and wish that their app was slower. So we can argue about the relative importance of performance in an application, but we don't really argue about whether or not we made it faster because we can prove that.
STEPH: Yeah. That's one area where, working with different teams (as I tend to shift the clients that I'm working with every six months), we often push hard around feature work to say, “How can we measure this? How can we know that we are delivering something valuable to users?” But as you said, that's really tricky. It's hard to evaluate. And then also, when you add on the fact that if I am leaving that project in six months, then I don't have the same insights to understand how something went for that team. So I can certainly appreciate the satisfaction that comes from knowing that, yes, you are delivering a faster app. And it's very measurable, given the time that you're there, whether it's a short time or a long time that you're with that team.
NATE: Yeah, totally. My consulting engagements are often really short. I don't really do a lot of super long-term stuff, and that's usually fine because I can point to stuff and say, “Yep. This thing was at A, and now it's at B. And that's what you hired me to do, so now it's done.”
STEPH: I am curious; given that you have so many different facets where you are running your consultancy, you are also often publishing a lot of content and working on workshops and then also contributing to open source projects. What does a typical week look like for you?
NATE: Well, right now is actually a decent example. I have client work two or three days a week. And I'm actually working on a new product right now that I'm calling Sidekiq in Practice, which is a course/workshop about scaling Sidekiq from zero to 1,000 jobs per second. And I'll spend the other days of the week working on that. My content is...I always struggle with how much time to spend on blogging specifically because it takes so much time for me to come up with a post and publish that. But the newsletter that I write, which I try to write once a week, I haven't been doing so well with it lately. But I think I got 50 newsletters done in 2020 or something like that.
STEPH: Wow.
NATE: And so I do okay on the per-week basis. And it's all content I've never published anywhere else. So that actually is like 45 minutes of me sitting down on a Monday and being like rant, [chuckles] slam keyboard and rant and then hit send. And my open source work is mostly 15 minutes a day while I'm drinking morning coffee kind of stuff. So I try to spread myself around and do a lot of different stuff. And a lot of that means, I think, pulling back in terms of thinking how much you need to spend on something, especially with newsletters, email newsletters, it was very easy to overthink that and spend a lot of time revising and whatever. But some newsletter is better than no newsletter. And especially when it comes to content and marketing, I've learned that frequency and regularity is more important than each and every post is the greatest thing that's ever come out since sliced bread. So trying to build a discipline and a practice around doing that regularly is more important for me.
STEPH: I like that, some newsletter is better than no newsletter. I was listening to your chat with Brittany Martin on the Ruby on Rails podcast. And you said something very honest that I appreciated where you said, “Writing is really hard, and writing sucks.” And that made me laugh in the moment because even though I do enjoy writing, I still find it very hard to be disciplined, to sit down and make it happen. And then you go into that editor mode where you critique everything, and then you never really get it published because you are constantly fixing it. It sounds like...you've mentioned you set aside about 45 minutes on a Monday, and you crank out some work. How do you work through that inner critic? How do you get past it to the point where then you just publish?
NATE: You have to separate the steps. You have to not do editing and first drafting at the same time. And the reason why I say it sucks and it's hard is because I think a lot of people don't do a lot of regular writing and maybe get intimidated when they try to start. And they're like, “Wow, this is really hard. This is not fun.” And I'm just trying to say that's everybody's experience, and if it doesn't get any better, because it doesn't, [chuckles] there's nothing wrong with you; that's just writing, it's hard. For me, especially with the newsletter, I just have to give myself permission not to edit and to just hit send when I'm done. I try to do some spell checking, and that's it. I just let it go. I'm not going back and reading it through again and making sure that I was very clear and cogent in all my points and that there's a really good flow through that newsletter. I think it comes with a little bit of confidence in your own ideas and your own experience and knowledge, believing that that's worth sharing and that's worth somebody's time, even if it's not a perfect expression of what's in your head. Like, a 75% expression is good enough, especially in a newsletter format where it's like 500 to 700 words. And it's something that comes once a week. And maybe not every one is amazing, but some of them are, enough of them are that people stay subscribed. So I think it's a combination of separating editing and first drafting and just having enough confidence to say, “It doesn't have to be perfect every single time.”
STEPH: Yeah, I think that's something that I learned a while back to apply to my coding process where I had to separate those two steps of where I have to let the creator in me just create and write some code and make it work, and then come back to the editing process, and taking a similar approach with writing. As you may be familiar with thoughtbot, we're big advocates when it comes to sharing content and sharing things that we have learned throughout the week and different projects that we're working on. And often when people join thoughtbot, they're very excited to contribute to the blog. But it is daunting for that first post because you think it has to be this really grand novel. And it has to be something that is really going to appeal to everybody, and it's going to help everyone. And then over time, you learn it's like, oh well, actually it can be this very just small thing that I learned that maybe only helps 20 people, but it still helped those 20 people. And learning to publish more frequently versus going for those grand pieces is more favorable and often more helpful for people.
NATE: Yeah, totally. That's something that is difficult for people at first. But everything in my experience has led me to believe that frequency and regularity is just as, if not more important than the quality of any individual piece of content that I put out. So that's not to say that...I guess it's weird advice to give because people will take it too far the other way and think that means he's saying quality doesn't matter. No, of course, it does, but I think just everyone's internal biases are just way too tuned towards this thing must be perfect. I've also learned we're just really bad judges internally of what is useful and good for people. Stuff that I think is amazing and really interesting sometimes I'll put that out, and nobody cares. [chuckles] And the other stuff I put out that's just like the 45-minute banging out newsletter, people email me back and say, “This is the most helpful thing anyone’s ever read.” So that quality bias also assumes that you know what is good and actually we're not really good at that, knowing every time what our audience needs is actually really difficult.
STEPH: That's totally fair. And I have definitely run into that too, where I have something that I'm very proud of and excited to share, and I realize it relates to a very small group of people. But then there's something small that I do every day, and then I just happen to tweet about it or talk about it, and suddenly that's the thing that everybody's really excited about. So yeah, you never know. So share it all.
NATE: Yeah. And it's important to listen. I pay attention to what people get interested in from what I put out, and I will do more of that in the future.
STEPH: You mentioned earlier that you are working on another workshop focused on Sidekiq. What can you tell me about that?
NATE: So it's meant to be a guide to scaling Sidekiq from zero to 1,000 jobs per second. And it's meant to be a missing guide to all the things that happen, like the situations that can crop up operationally when you're working on an application that does a lot of work with Sidekiq. Whereas Mike's Sidekiq wiki or the docs are great about how do you do this? What does this setting mean? And the basics of just getting it running. Sidekiq in Practice is meant to be the last half of that: how do you get it to run 1,000 jobs per second in a day-to-day application? So it's the collected wisdom and collected battle scars from five years of getting called in to fix people's Sidekiq installations and very much a product of what are the actual problems that people experience, and how do you fix and deal with those? So stuff about memory and managing Sidekiq memory usage, how to think about queues. Like, what should your queue structure be? How many should you have? Like, how do you organize jobs into queues, and how do you deal with problems like some client is dropping 10,000, 20,000 jobs into a queue? And now the other jobs I put in that queue have 20,000 jobs in front of them. And now this other job I've got will take three hours to get through that queue. How do you deal with problems like that? All the stuff that people have come to me with over the years and that I've had to help them fix.
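[As a tiny illustration of the queue-structure questions Nate mentions, one common approach is to route jobs into queues by latency tolerance so that a client dumping 20,000 jobs into one queue can't starve everything else. The queue and job names here are just examples, not Nate's prescription:

class WelcomeEmailJob
  include Sidekiq::Job
  sidekiq_options queue: "within_5_minutes"   # latency-based queue name

  def perform(user_id)
    # send the email...
  end
end

class NightlyExportJob
  include Sidekiq::Job
  sidekiq_options queue: "within_8_hours"

  def perform(account_id)
    # build the export...
  end
end

# The Sidekiq process is then started with weighted queues, e.g.:
#   bundle exec sidekiq -q within_5_minutes,3 -q within_8_hours,1]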
STEPH: That sounds really great. Because yeah, I find that teams who are often in this space with Sidekiq we just let it run until there's a fire. And then suddenly, we start to care as to how it's processing, and we care about our queue structure and how many workers that we have that are pulling from that queue. So that sounds really helpful. When you're building a workshop, do you often go back to any of those customers and pull more ideas from them, or do you find that you just have enough examples from your collective work with clients that that itself creates a course?
NATE: Usually, pretty much every chapter in the workshop I've probably implemented like three-plus times, so I don't really have to go back to any individual customer. I have had some interesting stuff with my current client, Gusto. And Gusto is going through some background job reorganization right now and actually started to implement a lot of the things that I'm advocating in the workshop actually without talking to me. It was a good validation of hey, we all actually think the same here. And a lot of the solutions that they were implementing were things that I was ready to put down into those workshops. So I'd like to see those solutions implemented and succeed. So I think a lot of the stuff in here has been pretty battle-tested.
STEPH: For the Rails Performance Workshop, you started off doing those live and in person with teams, and you've since switched so that now it's a CLI-based course, correct?
NATE: That's correct. Yep.
STEPH: I love that very much. When you’ve talked about it, it does feel very appropriate in terms of developers and how we like to consume content and learn. So that is really novel and also, it seems like a really nice win for you. So then other people can take this course, but you are no longer the individual that has to deliver it to their team, that they can independently take the course and go through it on their own. Are you thinking about doing the same thing for the Sidekiq course, or what are your plans for that one?
NATE: Yeah, it's the exact same structure. So it's going to be delivered via the command line. Although I would say Sidekiq in Practice has more text components. So it's going to be a combination of a very short manual or book, and some video, and some hands-on exercises. So, an equal blend between all three of those components. And one thing I've learned from having to teach, I guess, intermediate to advanced programming concepts for the last five years now is that people learn at different paces. And one of the great things about this kind of format is you can pick it up, drop it off, and move at your own speed. Whereas a lot of times when I would do this in person, I think I would lose people halfway through because they would get stuck on something that I couldn't go back to because we only had four hours of the day. And if you deliver it in a class format, you're one person, and I've got 24 other people in this room. So it's infinitely pausable and replayable, and you can go back, or you can just skip ahead. If you've got a particular problem and you're like, hey, I just want to figure out how to fix such and such, you can do that. You can just come in and do a particular thing and then leave, and that's fine. So it's a good format that way. And I've definitely learned a lot from switching to pre-recorded and pre-prepared stuff rather than trying to do this all live in person.
STEPH: That is one of the lessons that I've learned as well from the couple of workshops that I've led is that doing them in person, there's a lot of energy. And I really enjoy that part where I get to see people respond to the content. And then I get a lot of great feedback from people about what type of questions they have, where they are getting stuck. And that part is so important to me that I always love doing them live first. But then you get to the point, as you'd mentioned, where if you have a room full of 20 people and you have two people that are stuck, how do you help them but then still keep the class going forward? And then, if you are trying to tailor this content for a wide audience…so maybe beginners could take the Rails Performance Workshop, or they could also take the Sidekiq course. But you also want the more senior engineers to get something out of it as well. It's a very challenging task to make that content scale for everyone.
NATE: Yeah. What you said there about getting feedback and learning was definitely something that I got out of doing the Rails Performance Workshop in person like three dozen times, was the ability to look over people's shoulders and see where they got stuck. Because people won't email me and say, “Hey, this thing is really confusing.” Or “It doesn't work the way you said it does for me.” But when I'm in the same room with them, I can look over their shoulder and be like, “Hey, you're stuck here.” People will not ask questions. And you can get past that in an in-person environment. Or there are even certain questions people will ask in person, but they won't take the time to sit down and email me about. So I definitely don't regret doing it in person for so long because I think I learned a lot about how to teach the material and what was important and how people...what were the problems that people would encounter and stuff like that. So that was useful. And definitely, the Rails Performance Workshop would not be in the place that it is today if I hadn't done that.
STEPH: Yeah, helping people feel comfortable asking questions is incredibly hard and something I've gone so far in the past where I've created an anonymous way for people to submit questions. So during class, even if you didn't want to ask a question in front of everybody, you could submit a question to this forum, and I would get notified. I could bring it up, and we could answer it together. And even taking that strategy, I found that people wouldn't ask questions. And I guess it circles back to that inner critic that we have that's also preventing us from sharing knowledge that we have with the world because we're always judging what we're going to share and what we're going to ask in front of our peers who we respect. So I can certainly relate to being able to look over someone's shoulder and say, “Hey, I think you're stuck. We should talk. Let me walk you through this or help you out.”
NATE: There are also weird dynamics around in-person, not necessarily in a small group setting. But I think one thing I really picked up on and learned from RailsConf 2021, which was done online, was that in-person question asking requires a certain amount of confidence and bravado that you're not...People are worried about looking stupid, and they won't ask things in a public or semi-public setting that they think might make them look dumb. And so then the people that do end up asking questions are sometimes overconfident. They don't even ask a question. They just want to show off how smart they are about a particular issue. This is more of an issue at conferences. But the quality of questions that I got in the Q&A after RailsConf this year (they did it as Discord chats) was way better. The quality of questions and discussion after my RailsConf talk was miles better than I've ever had at a conference before. Like, not even close. So I think experimenting with different formats around interaction is really good and interesting. Because it's clear there's no perfect format for everybody, and experimenting with these different settings and different methods of delivery has been very useful to me.
STEPH: Yeah, that makes a ton of sense. And I'm really glad then for those opportunities where we're discovering that certain forums will help us get more feedback and questions from people because then we can incorporate that and to future conferences where people can speak up and ask questions, and not necessarily be the one that's very confident and enjoys hearing their own voice. For the Rails Performance Workshop, what are some of the general things that you dive into for that workshop? I'm curious, what is it like to attend that workshop? Although I guess one can't attend it anymore. But what is it like to take that workshop?
NATE: Well, you still can attend it in some sense because I do corporate bookings for it. So if you want to buy 20 seats, then I can come in and basically do a Q&A every week while everybody takes the workshop. Anyway, I still do that. I have one coming up in July, actually. But my overall approach to performance is to always start with monitoring. So the course starts with goals and monitoring and understanding where you want to go and where you are when it comes to performance. So the first module of the Rails Performance Workshop is actually really a group exercise that's about what are our performance requirements and how can we set those? Both high-level and low-levels. So what is our goal for page load time? How are we going to measure that? How are we going to use that to back into lower-level metrics? What is our goal for back-end response times? What is our goal for JavaScript bundle sizes? That all flows from a higher-level metric of how fast you want the page to load or how fast you want a route to change in a React app or something, and it talks about those goals. And then where should you even start with where those numbers should be? And then how are you going to measure it? What are the browser events that matter here? What tools are available to help you to get that data? Because without measurement, you don't really have a performance practice. You just have people guessing at what stuff is faster and what is not. And I teach performance as a scientific process as science and engineering. And so, in the scientific method, we have hypotheses. We test those hypotheses, and then we learn based on those tests of our hypotheses. So that requires us to A, have a hypothesis, so like, I think that doing X makes this faster. And I talk about how you generate hypotheses using profiling, using tools that will show you where all the time goes when you do this particular operation of your software—and then measuring what happens when you do that? And that's benchmarking. So if you think that getting rid of method X or changing method X will speed up the app, benchmarking tells you did you actually speed it up or not? And there are all sorts of little finer points to making sure that that hypothesis and that experiment is tested in a valid way.
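[A small sketch of the benchmarking step Nate describes, using the benchmark-ips gem to test a hypothesis about two implementations of the same thing; the data and the methods being compared are made up:

require "benchmark/ips"

emails = Array.new(10_000) { |i| "user#{i}@example.com" }

Benchmark.ips do |x|
  x.report("map + compact") { emails.map { |e| e if e.end_with?("example.com") }.compact }
  x.report("select")        { emails.select { |e| e.end_with?("example.com") } }
  x.compare! # prints iterations per second and how many times faster one variant is
end]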
I spend a lot of time in the workshop yapping about the differences between development/local environments and production environments and which ones matter. Because what differences matter, it's not often the ones that we think about, but instead it's differences like actually in Rails apps the asset packaging and asset pipeline performs very differently in production than it does in development, works very differently. And it makes it one of the primary reasons development is slower than production, so making sure that we understand how to change those settings to more production-like settings. I talk a lot about data. It’s the other primary difference between development and production is production has a million users, and development has 10. So when you call things like User.all, that behavior is very different in production than it is locally. So having decent production-like data is another big one that I like to harp on in the workshops. So it's a process in the workshop of you just go lesson by lesson. And it's a lot of video followed up by hands-on exercises that half of them are pre-baked problems where I'm like, hey, take a look at this Turbolinks app that I've given you and look at it in DevTools. And here's what you should see. And then the other half is like, go work on your application. And here are some pull requests I think you should probably go try on your app. So it's a combination of hands-on and videos of the actual experience going through it.
STEPH: I love how you start with a smaller application that everyone can look at and then start to learn how performant is this particular application that I'm looking at? Versus trying to assess, let’s say, their own application where there may be a number of other variables that they have to consider. That sounds really nice. You'd mentioned one of the first exercises is talking about setting some of those goals and perhaps some of those benchmarks that you want to meet in terms of how fast should this page load, or how quickly should a response from the API be? Do you have a certain set of numbers for those benchmarks, or is it something that is different for each product?
NATE: Well, to some extent, Google has suddenly given us numbers to work with. So as of this month, I think, June 2021, Google has started to use what they're calling Core Web Vitals in their ranking of search results. They've always tried to say it's not a huge ranking factor, et cetera, et cetera, but it does exist. It is being used. And that data is based on Chrome user telemetry. So every time you go to a website in Chrome, it measures three metrics and sends those back to Google. And those three metrics are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). And First Input Delay and Cumulative Layout shift are more important for your single-page apps kind of stuff. It's hard to screw those up with a Golden Path Rails app that just does Turbolinks or Hotwire or whatever. But Largest Contentful Paint is an easy one to screw up. So Google's line in the sand that they've drawn is 2.5 seconds for Largest Contentful Paint. So that's saying that from clicking on your website in a Google search result, it should take 2.5 seconds for the page to paint the largest component of that new page. That's often an image or a video or a large H1 tag or something like that. And that process then will help you to...to get to 2.5 seconds in Largest Contentful Paint; there are things that have to happen along the way. We have to download and execute all JavaScript. We have to download CSS. We have to send and receive back-end responses. In the case of a simple Hotwire app, it's one back-end response. But in the case of a single-page app, you got to download the document and then maybe download several XHR fetches or whatever. So there's a chain of events that has to happen there. And you have to walk that back now from 2.5 seconds in Largest Contentful Paint. So that's the line that I'm seeing getting drawn in the sand right now with Google's Core Web Vitals. So pretty much any meaningful web application performance metric can be walked back from that.
STEPH: Okay. That's super helpful. I wasn't aware of the Core Web Vitals and that particular stat that Google is using to then rank the sites. I was going to ask, this kind of blends in nicely into when do you start caring about performance? So if you have a new application that you are just starting to get to market, based on the fact that Google is going to start ranking you right away, you do have to care some right out of the gate. But I am curious, when do you start caring more about performance, and are there certain tools and benchmarking that you want to have in place from day one versus other things that you'll say, “Well, we can wait until we have X numbers of users or other conditions before we add more profiling?”
NATE: I'd say as an approach, I teach people not to have a performance strategy of monitoring. So if your strategy is to have dashboards and look at them regularly, you're going to lose. Eventually, you're not going to look at that dashboard, or more often, you just don't understand what you're looking at. You just install New Relic or Datadog or whatever, and you don't know how to turn a dashboard into actual action. Also, it seems to just wear teams out, and there's no clear mechanism when you just have a dashboard of turning that into oh, well, this has to now be something that somebody on our team has to go work on. Contrast that with bugs, so teams usually have very defined processes around bugs. So usually, what happens is you'll get an Exception Notification through Sentry or Bugsnag or whatever your preferred Exception Notification service is. That gets read by a developer. And then you turn that into a Jira ticket or a Kanban board or whatever. And then that is where work is done and prioritized. Contrast that with performance; there’s often no clear mechanism for turning metrics into stuff that people actually work on. So understanding at your organization how that's going to work and setting up a process that automatically will turn performance issues into actual work that people get done is important.
NATE: The way that I generally teach people to do this is to focus, instead of on dashboards and monitoring, on alerts: automated thresholds that get tripped and then send somebody an email or put something on the Kanban board or whatever. It just has to be something that automatically gets fired. Different tools have different ways of doing this. Datadog has pretty much built their entire product around monitoring and what they call monitors. That's a perfectly fine way to do it, whatever your chosen performance monitoring tool, which I would say is a required thing. I don't think there's really any good excuse in 2021 for not having a performance monitoring tool. There are a million different ways to slice it. You can do it yourself with OpenTelemetry and then like statsD, I don't know, or pay someone else like everyone else does for Datadog or New Relic or AppSignal or whatever. But you got to have one installed. And then I would say you have to have some sort of automated alerting. Now that alerting means that you've also decided on thresholds. And that's the hard work that doesn't get done when your strategy is just monitoring. So it's very easy to just install a dashboard and say, “Hey, I have this average page load time dashboard. That means I'm paying attention to performance.” But if you don't have a clear answer to what number is good and what number is bad, then that dashboard cannot be turned into real action. So that's why I push monitors so hard: because they allow people to ignore performance the rest of the time, and they force you to make the decision upfront as to what number matters. So that is what I would say: install some kind of performance monitoring. I don't really care what kind.
Nowadays, I also think there's probably no excuse to not have Real User Monitoring. There's enough GDPR-compliant Real User Monitoring now that I think everyone should be using it. In industry terms, Real User Monitoring is just performance monitoring in the browser. It just uses the browser’s APIs and sends those metrics back to you or your third-party provider, so you actually are collecting back-end and front-end performance metrics. And then making decisions around what is bad and what is good. Probably everybody should just start with a page load time monitor, a Largest Contentful Paint monitor. And if you've got a single-page app, probably hooking up some stuff around route changes or whatever your app does...because you don't actually have page loads every single time you navigate. You have to instrument whatever those interactions are. So having those up and then just drawing some lines that say, “Hey, we want our React route changes to always be one second or less.” So I will set an alert that if the 95th percentile is one second or more, I'm going to get alerted. There are a lot of different ways to do that, and everybody will have different needs there. But having a handful of automated monitors is probably a place to start.
STEPH: I like how you also focus on once you have decided those thresholds and have that monitoring in place, but then how do you make it actionable? Because I have certainly been part of teams where we get those alerts, but we don't necessarily...what you just mentioned, prioritize that work to get done until we have perhaps a user complaint about it. Or we start actually having pages that are timing out and not loading, and then they get bumped up in the priority queue. So I really like that idea that if we agree upon those thresholds and then we get alerted, we treat that alert as if it is a user that is letting us know that a page is too slow and that they are unable to use our application, so then we can prioritize that work.
NATE: And it's not all that dissimilar to bugs, really. And I think most teams have processes around correctness issues. And so, all that my strategy is really advocating for is to make performance fail loudly in the same way that most exceptions do. [chuckles] Once you get to that point, I think a lot of teams have processes around prioritization for bugs versus features and all that. And just getting performance into that conversation at least tends to make that solve itself.
STEPH: I'm curious, as you're joining teams and helping them with their performance issues, are there particular buckets or categories of performance issues that are the most common in terms of, let's say, 50% of issues are SQL-related N+1 issues? What tends to be the breakdown that you see?
NATE: So, when it comes to why something is slow in a Ruby application, I teach a method that I call DRM. And that doesn't have anything to do with actual DRM. It's just memorable because it reminds me of things I don't like. DRM stands for Database, Ruby, and Memory, in that order. So the most common issue is database, the second most common issue is issues with your Ruby code. The least common issue is memory. Specifically, I'm talking about allocation of objects, creating lots of objects. So probably 80% of your issues are in some way database-related. In Rails, probably 50% of those are N+1. And then 30% of database issues are probably what I would call unnecessary SQL. So it's not necessarily N+1, but it's a SQL query for information that you already had, or you could do in a more efficient way. So a common thing for unnecessary SQL would be people will filter an Active Record collection like ten different ways when they could have just loaded the whole collection and filtered it with Ruby in the ten different ways afterwards, and that works really well if the collection that you're loading is like 10 or 20 records. Turning that into one database query, plus a bunch of calls to Enumerable methods, is often way faster than doing that as ten separate database queries. Also, that tends to be a more robust approach. This doesn't happen in most companies, but what could happen is the database is like a shared resource. It's a resource that everybody is affected by. So a performance degradation to the database is the worst possible scenario because everything is affected. But if you screw up what's happening at an individual Rails process, then only that Rails process is affected. The blast radius is tiny. It's just that one request. So doing less stuff in the database, while it can seem like, oh, that doesn't feel right, I'm supposed to do a lot of stuff in the database, actually can reduce the blast radius of performance issues because you're not doing it on this database that everyone has to have access to. There are a lot of areas of gray here. And I talk a lot in my other material about why. There's a lot of nuance here.
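[A sketch of the "unnecessary SQL" trade-off Nate describes, assuming a small hypothetical tasks association: ten filters as ten queries against the shared database, versus one query filtered in Ruby with Enumerable methods. As he says, this only pays off when the collection is genuinely small:

# Ten separate queries against the shared database:
active  = project.tasks.where(status: "active").count
overdue = project.tasks.where("due_on < ?", Date.current).count
# ...eight more variations...

# One query, then filter the small collection in Ruby:
tasks   = project.tasks.to_a
active  = tasks.count { |t| t.status == "active" }
overdue = tasks.count { |t| t.due_on && t.due_on < Date.current }]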
So database is the main stuff. Issues in how you write your Ruby code are probably the other one. Usually, that's just what I would call code that goes bump in the night. It's code that you don't know is running but actually is. Profilers are what help us figure that out. So oftentimes, I'll have someone open up a profiler on their controller action for the first time. And they're like, wait a minute, I had no idea that such and such was running during this controller action, and actually, we don't need to do that at all. So why is it here? So that's the second most common issue. And then the third issue that really doesn't actually come up all that often is object allocation, the number of objects that get created. So primarily, this is a problem in index actions or in actions that deal with big collections. In Ruby, we often get overly focused on garbage collection, but garbage collection doesn't take any time if you just don't create objects, and object creation itself takes time. So looking at code through the lens of what objects does this code create, and trying to get rid of those object allocations, can often be a pretty productive way to make stuff faster.
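For a rough sense of how to look at code through that allocation lens, here's a small sketch using only the Ruby standard library; Widget is a hypothetical model, and this isn't Nate's tooling, just one way to count the objects a block creates.

```ruby
def count_allocations
  GC.disable
  before = GC.stat(:total_allocated_objects)
  yield
  GC.stat(:total_allocated_objects) - before
ensure
  GC.enable
end

widgets = Widget.limit(1_000).to_a
puts count_allocations { widgets.map { |widget| widget.name.upcase } }
# Every upcased string (and every intermediate object) counts; big
# collections multiply small per-row allocations into measurable time.
```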
STEPH: You said a lot of amazing things there. So I'm debating on which one to follow up on. I think the one that stuck out to me the most where I have felt pain around this is you mentioned identifying code that goes bump in the night or code that is running, but it doesn't need to be run. And that is something that I've run into with applications where we have a code path that seems important, but yet I can't prove that it's being executed and exactly why it's there and what flow it's supporting. And I'm curious, do you have any tips or tricks in how you’ve helped teams identify that this code path isn't used and it's something that we can remove and then that itself will help speed up the performance of that particular endpoint?
NATE: Like, there's no performance cost to like 100 models in an application that never actually get used. There's really no performance downside to code in an app that doesn't actually ever get run. But instead, what happens is code gets added into callbacks that usually is probably the biggest offender that’s like, always do this thing after you do X. But then, two years later, you don't always need to do that thing after you do X. So the callbacks always run, but sometimes requirements change, and they don't always need to be run. So usually, it's enough to just pop the profiler now on something. And I have people look at it, and they're like, “I don't know why any of this is happening.” Like, it's usually a pretty big Eureka moment once we look at a flame graph for the first time and people understand how to read those, and they understand what they're looking at. But sometimes there's a bit of a process where especially in a bigger app where it's like, “Such and such is running, and this was an entire other team that's working on this. I have no idea what this even does.” So on bigger apps, there's going to be more learning that has to get done there. You have to learn about other parts of the application that maybe you've never learned about before. But profiling helps us to not only see what code is running but also what that relative importance is. Like, okay, maybe this one callback runs, and you don't know what it does, and it's probably unnecessary. But if it only takes 1% of the total time to run this action, that's probably less important than something that takes 20% of total time. And so profilers help us to not only just see all the code that's being run but also to know where that time goes and what time corresponds to what parts of the code.
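As a hypothetical illustration of that kind of callback (none of these names come from the episode), here's one quick way to list what actually runs on save; note that _save_callbacks is internal Rails API, so treat it purely as a debugging aid.

```ruby
class Order < ApplicationRecord
  after_save :sync_to_legacy_system # added years ago; fires on every save

  private

  def sync_to_legacy_system
    LegacySync.push(self) # requirements changed; often no longer needed
  end
end

# Introspect the save callback chain from a console:
Order._save_callbacks.map { |callback| [callback.kind, callback.filter] }
# => [[:after, :sync_to_legacy_system], ...]
```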
STEPH: Yeah, that's often the code that makes me the most nervous: code that I suspect may or may not be running, but I don't understand why it's there. Then it's figuring out if it can be removed, and figuring out ways to perhaps even log when a call is being made to that code to determine if it's truly in use or at least supported by a code path that a user is hitting. You have a blog post that I read recently that I really appreciated that talks about essentially gaming benchmarks, where you talk about the importance of having context around them. So if someone says, “I've improved something where it is now 10% faster,” it's like, well, what is that 10% relative to? And if it's a tool that other people are using, what does that mean for them? Or did you improve something that was already very fast, and you made it 10% faster? Was that a really valuable use of your time?
NATE: Yeah. You know, something that I read recently that made me think of that again was this Hacker News post that went viral. That was like, how I optimize an AWS EC2 instance to take 1.5 million requests per second on my JSON API. And out of the box, it was like 500 requests per second, and then he got it to 1.5 million. And the whole article was presented with relative numbers. So it was like, “I made this change, and things got 33% faster. And if you do the whole thing right, 500 to 1.5 million requests per second, it's like my app is three times faster now,” or whatever. And that's true, but it would probably be more accurate to say, “I've taken three-millionth of a second out of every request in my app.” That's two ways of saying the same thing because latency and throughput are just related that way. But it's probably more accurate and more useful to say the absolute number, but it doesn't make for great blog posts, so that doesn't tend to get said. The kinds of improvements that were discussed in this article were really, really low-level stuff. That was like if you turn off...I think it was like turn off iptables or something like that. And it's like, that shaves a microsecond off of every time we make a syscall or something. And that is useful if your performance goal is to serve 1.5 million requests per second Hello World responses off of my EC2 instance, which is what this person admittedly was doing. But there's a tendency to walk that back to if I do all things in this article, my application will be three times faster. And that's just not what the evidence says. It's not what you were told. So there's just a tendency to use relative numbers when absolute numbers would be more useful to giving you the context of like, oh, well, this will improve my app or it won't. We get this a lot in Puma. We get benchmarks that are like, hey, this thing is going to help us to do 50,000 requests per second in Puma instead of 10,000. And another way of saying that is you took a couple of nanoseconds off of the overhead of every single request to Puma. And most Puma applications have a hundred millisecond response time. So it's like, yeah, I guess it's cool that you took a nanosecond off, and I’m sure it's going to help us have cool benchmarks, but none of our users are going to care. No one that's used Puma is going to care that their requests are one nanosecond faster now. So what did we really gain here?
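Here's the latency/throughput arithmetic written out, with purely hypothetical numbers rather than the article's, to show how a relative claim translates into an absolute per-request saving.

```ruby
before = 500_000   # requests per second
after  = 1_500_000

saved_per_request = (1.0 / before) - (1.0 / after)
# => roughly 1.3 microseconds saved per request

# "3x the throughput" and "about a microsecond per request" describe the
# same change; next to a typical 100 ms Rails response time, the absolute
# number shows how little most applications would feel it.
```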
STEPH: Yeah, it makes sense that people would want to share those more...I want to call them sparkly stats and something that catches your attention, but they're not necessarily something that's going to translate to us in the way that we hoped that they will in terms of it's not going to speed up our app 30% or have those same rewards or benefits. Speaking of Puma, how is it being a co-maintainer of Puma? And how do you balance that role with all of your other work?
NATE: Actually, it doesn't take all that much of my time. I try to spend about 15 minutes a day on it. And that's really possible because of the philosophy I have around open-source maintenance. I think that open source projects are fundamentally about collaboration and about sharing our hard-fought extractions and fixes and knowledge together. And it's not about a single super contributor or super maintainer who is just out of the goodness of their heart releasing all of their incredible work and time into the public domain or into a free software license. Puma is a pretty popular piece of Ruby software, so a lot of people use it. And I have things on my back burner of if I ever got 20 hours to work on Puma, here’s stuff I would do. But there are a lot of other people that have more time than me to work on Puma. And they're just as smart, and they have other tools they've got in their locker that I don't have. And I realized that it was more important that I actually find ways to recruit and then unblock those people than it was for me to devote as much time as I could to Puma. And so my work on Puma now is really just more like management than anything else. It's more trying to recruit new contributors and trying to give them what they need to help Puma. And contributing to open source is a really fraught experience for a lot of people, especially their first time. And I think we should also be really conscious of that. Like, 95% of software developers have really never contributed to open source in a meaningful way. And that's a huge talent pool of people that could be helping us that aren't. So I'm less concerned about the problems of the 5% that are currently contributing than I am about why there are 95% of us that don't do anything. So that's what gets me excited to work on Puma now, is trying to change that ratio.
STEPH: I really like that mindset of where you are there to provide guidance but then essentially help unblock others as they're making contributions to the project but then still be there to have the history and full context and also provide a path forward of a good direction for Puma to head. In regards to encouraging more people to contribute to open source projects, I've often heard people say how challenging that is, where they have an open-source project that they would really love people to contribute to but finding people is really hard or just letting people know that they're interested in contributions. Have you had any strategies that have been successful for you in encouraging people to contribute?
NATE: Yeah. So first thing, the easiest thing is we have a contributing.md file. So that's something I think more projects should adopt is have an actual file in your project that says everything about how to contribute. Like, what kinds of contributions do you want? Different projects have different things that they want. Like, Rails doesn't want to refactor PRs. Don't send a refactor PR to Rails because they'll reject it. Puma, I'm happy to accept those. So letting people know like, “Hey, here's how we work here. Here's the community we're creating, and here's how it works. Here's how to get involved.” And I think of it as hanging out the shingle and saying, “Yes, I want your contributions. Here's how to do it.” That alone puts you a step above other projects.
The second thing I would say is you need to have contributor-only communication channels. So we have Matrix chat. So Matrix is like this successor to IRC. So we have a chat channel basically, but it's like contributors only. I don't enforce that, but I just don't want support requests in there. I don't want people coming in there and being like, “My Puma config doesn't work.” And instead, it's just for people that want to contribute to Puma, and that want to help out. If you have a question and come in there, anyone can answer it.
And then finally, another thing that I've had success with is doing one-on-one stuff. So I have a Calendly invite, which I think is in contributing.md now, where you can just book 30 minutes with me anytime about contributing to Puma. And I will get on a Zoom call with you and talk about what your concerns are and where I think you can help. And I give my time away that way. The way I see it is, if I do that 20 times and I create one super contributor to Puma, that is worth more than me spending 10 hours on Puma because that person can contribute 100, 200, 1,000 hours over their lifetime of contributing to Puma. So that's actually a much higher-leverage contribution, really, from my perspective. It's helping other people contribute more.
STEPH: Yeah, that's huge to offer people to say, “Hey, you can book time with me, and I will walk you through and let you know where you can start making an impactful contribution right away,” or “Here are some areas that I think you'd be interested, to begin with.” That seems like such a nice onboarding for someone who says, “I'm interested, but I'm nervous,” or “I'm just not sure about where to get started.” Also, I love your complaint department voice for the person who their Puma config doesn't work. That was delightful. [chuckles]
NATE: I think it's a little bit part of my open-source philosophy that, especially at a certain scale like Puma is at that we really kind of over-prioritize users. And I'm not really here to do support; I'm here to make the project better. And users don't actually contribute to open source projects. Users use the thing, and that's great. That's the whole reason we're open-sourcing is so more people use it. But it's important not to prioritize that over people who want to make the project better. And I think a lot of times; people get caught up in this almost clout chasing of getting the most GitHub stars that they think they need and users they think they need. And you don't get paid for having users, and the product doesn't get any better either. So I don't prioritize users. I prioritize the quality of the project and getting contributors. And that will create a better project, which will then create more users. So I think it's easy to get sidetracked by people that ask for your time when they're not giving anything back to the project in return. And especially at Puma’s scale, we have enough people that want my time or the time of other maintainers at Puma so that they can contribute to the project. And putting user support requests ahead of that is not good for the project. It's not the biggest, long-term value increase we could be making, so I don't prioritize them anymore.
STEPH: Yep. That sounds like more the pursuit of sparkly stats and looking for all those GitHub stars or all of those likes. Well, Nate, if you're game, I have two listener questions that I'd like to run by you because I shared with some folks that you are going to be on The Bike Shed today. And they're very excited and have two questions that they'd like me to run by you. How does that sound?
NATE: Yeah, all right.
STEPH: So the first question is, are there any paradigms or trends in Rails that inherently hurt performance?
NATE: Yeah. I get this question a lot, and I will preface it with saying that I'm the performance guy, and I'm not the software design guy. And I get a lot of questions about does such and such software design...how does that impact performance? And usually, there's like a way to do anything in a performant way. And I'm just here to help people to find the performant way and not to prescribe “You must always do X, Y, or Z,” or “ActiveRecord is bad. Never use it.” That's not my job here. And in my experience, there's a fast way to do almost anything.
Now, one thing that I think is dying, I guess, one common mistake that is clearly wrong, is to not do any form of server-side rendering in a web application. So I am anti-client-side app. But there are ways to do that and to do it quickly. But rendering a basically blank document, which is what most of these applications will do when they're using Rails as a back-end...you'll serve this basically blank document or a document with maybe some chrome in it. And then the client-side app has to execute and compile JavaScript, make XHR requests, and then render the page. That is just by definition slower than serving somebody a server-side rendered page. Now, I am 100% agnostic on how you want to generate that server-side rendering. There are some people that are working on better ways to do that with Rails and client-side apps. Or you could just go the Hotwire Turbolinks way, which is more progressive enhancement, where the back-end is always just serving the server-side rendered response, and then you do some JavaScript on top of that.
So I think five years from now, nobody will be doing this approach of serving blank documents and then booting client-side apps into that. Or at least it will be seen as outdated enough that you should never design a project that way anymore. It's one of those few things where it's like, yeah, just by definition, you're adding more steps into a rendering flow. That means, by necessity, it has to be slower. So I think everybody should be thinking about server-side rendering in their project. Again, I’m totally agnostic on how you want to implement that. With React, whatever front-end flavor of the month you want to go with, there's plenty of ways to do that, but I just think you have to be prioritizing that now.
STEPH: All right. Well, I like that five-year projection of where we're headed. Just to touch on a bit of what you're saying about favoring server-rendered HTML: I have found that it's often the admin side where people will still bring in a lot of JavaScript rendering, and that's probably not a space worth over-optimizing. We do want our admins to have a great experience with our product, but since they're not necessarily our end users, it doesn't need to be anything that is over the top or fancy or that uses a lot of JavaScript. Instead, we can start simple. And there are a number of times that I've been on projects where we walked the admin back to be more server-rendered because someone was very excited to make the admin very splashy and quick but then couldn't keep up with the requests, since they had to prioritize the user experience first. So the fancy admin got left out in the cold, and it ended up being just sort of a poor experience.
NATE: Yes. Shopify famously walked back their admin from I think it was Backbone to Turbolinks. And I think that that has now moved back to React is my understanding. But Shopify is a huge company, so they have plenty of time and resources to be able to do that. But I just remember that happening at the time where I was like, oh wow, they just rolled the whole thing back to Turbolinks again. And now, with the consolidation that's gone on in the React world, it's a little bit easier to pipe a server-side rendering into a React app. Whereas with Backbone, it was like no one knew what you were doing. So there was less knowledge about how to server-side render this stuff. Now it doesn't seem to be so much of a problem. But yeah, I mean, Rails is really good at CRUD apps, and admin is like 99% CRUD. And adhering as closely as possible to the Rails Golden Path there in an admin seems to be the most productive way to work on that kind of feature.
STEPH: All right. Ready for your second question?
NATE: Yes.
STEPH: Okay. This one's a bit more in-depth. They also mentioned a particular project name, so I am going to swap it out with a different name. So on project cinnamon roll, they found a really gnarly, time-consuming API endpoint that's getting hammered. On a first pass, they addressed a couple of N+1 issues and tuned the performance, and felt pretty confident that they had addressed the issue. But it was still fairly slow. So then they took some additional incremental steps. They swapped out to use Oj for serialization, which shaved off an additional 10%, but it was still slow. They also went the route of going straight to the Rails cache with a one-minute expiration. That way, they could avoid mucking with cache busting because they confirmed with the client that data could be slightly stale. And this was great. It worked out well. It dropped their average response time down to less than 70 milliseconds. With all that said, that journey took a few hours over a few days and multiple production deploys. Had they gone straight to the cache, they would have had a 15-minute fix with a single deploy. So this person's wondering, are there any other examples like that where, rather than taking these incremental, seemingly obvious performance wins, you'd want to be much more direct with your path?
NATE: I guess I'd say that profiling can help you to understand and form better hypotheses about what will make things faster and what won't. Because a profiler can't really lie to you about where time goes: either you spent 20% of your time in this method, or you didn't. So I don't spend any time in any of my material talking about what JSON serializer you use, because that's really never anybody's bottleneck. It's never a huge proportion of people's total percentage of time. And I know that because I've looked at enough profiles to know the issues are usually in other places. So I would say that if the hypotheses you're generating are not working, it's because you're not generating good enough hypotheses, and profiling is the place to do that. So having profilers running in production, profilers that you can access on production servers as a user, is probably the biggest level up that most teams could make to generating those hypotheses, because you'll have real production data, real production servers, a real production environment. And it's pretty common now that pretty much every team that I work with either has that already, or we work on implementing it. It's something that I've seen in production at GitHub and Shopify. You can do it yourself with rack-mini-profiler. It's all about setting up the authorization, just making sure that only authorized users get to see every single SQL query generated in the flame graph and all that. But other than that, there's no reason you shouldn't do it. So I would say that if you're not generating the right hypotheses, or if the last hypothesis out of 10 is the one that works, you need better hypotheses, and the best way to get there is better profiling.
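For reference, gating rack-mini-profiler behind authorization usually looks something like the sketch below. The exact option names have changed across gem versions (older releases used :whitelist rather than :allowlist), so check the gem's README for your version; current_user and admin? are placeholders for whatever auth you use.

```ruby
# config/initializers/rack_mini_profiler.rb
Rack::MiniProfiler.config.authorization_mode = :allowlist

# app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_action :authorize_mini_profiler

  private

  def authorize_mini_profiler
    # Only staff get the profiler badge and flame graphs on production pages.
    Rack::MiniProfiler.authorize_request if current_user&.admin?
  end
end
```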
STEPH: Okay, better profiling. And yeah, it sounds like there's also a bit of experience in there, in terms of things that you're used to seeing and have learned usually aren't the thing you want to improve. Like you mentioned, spending time on how you're serializing your JSON is not somewhere that you would look. But then there are other areas where you've gained enough experience to know they're likely more beneficial to focus on to form that hypothesis.
NATE: Yeah, that's a long way of saying experience pays off. I've had six years of doing this every single day. So I'm going to be pretty good at...that's what I get paid for. [laughs] So if I wasn't very good at that, I probably wouldn't be making any money at it.
STEPH: [laughs] All right. Well, thanks, Nate, so much for coming on the show today and talking so much about performance. On that note, I think it's a good place for us to wrap up. If people are interested in following along with what you're working on and they want to keep up with your latest and greatest workshops that are coming out, where can they find you on the internet?
NATE: speedshop.co is my site. @nateberkopec on Twitter. And speedshop.co has a link to my newsletter, which is where I'm actively thinking every week and publishing stuff too. So if you want to get the drip of news and thoughts, that's probably the best place to go.
STEPH: Perfect. All right. Well, thank you so much.
NATE: No problem.
STEPH: The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or a review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us @bikeshed on Twitter. And I'm @christoomey.
STEPH: And I’m @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
Together: Bye.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
After the last episode where database switching was discussed, a number of listeners reached out with thoughts. In particular, one listener gave a reproducible example of how to make things better. Chris talks about why he always moves errors to the left, and Steph gives a hot take where she admits that she is not a fan of hackathons and explains why.
Steph and Chris also share exciting Bike Shed show news in that we now have transcripts for each episode, and tackle another listener question asking, "How do you properly implement a multi-step form in a boring Rails way?” Chris talks about his experiences with multi-step forms and gives his own hot take on refactoring: he doesn't until he feels pain!
Transcript:
CHRIS: Happy Friday or whatever day it happens to be in your future situation.
STEPH: Happy day. [chuckles]
CHRIS: Happy day or night. I'm sorry, I'm done. [laughter]
STEPH: Shut up. [laughs]
Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I’m Steph Viccari.
CHRIS: And I’m Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey, Chris, happy Friday. How's your week been?
CHRIS: Happy Friday to you as well. My week's been good. It's been busy. I am taking next week off for a quick vacation. So it’s that…I think I've talked about this every time before I go on a vacation on the podcast, that focusing lens that going on vacation gives you. I want to make sure everything's buttoned up and ready to hand off, and I'm not going to be blocking anyone. And so, I always like the clarity that that brings. Because a lot of times I can look at well, there are infinity things to do, how do I pick? And now I'm like, no, but really, if I'm going to be gone for a week, I must pick. And so yeah, I'm now very excited to lean into vacation mode and relax for a bit.
STEPH: Yeah, that's awesome. I hear you. I always go into that same mode pre-vacation.
CHRIS: But in tech news, after the most recent episode that was released where we talked about the database switching stuff, a number of listeners were very kind and reached out with some thoughts. In particular, Dan Ott is one listener who reached out not only with some generic thoughts, but he also gave a reproducible example of how to make things slightly better. So the particular thing that a few folks homed in on was the situation I was describing where, in production, we can occasionally run into these ActiveRecord read-only errors, which is a case where you have a GET request that happens to try to create or update a record. And as a result, you're going to get this ActiveRecord read-only error because you're using the follower database, which has a read-only connection. All of that is fine, but ideally, we would want to catch those before production. We want to catch them in development. And broadly, the issue that we have here is that in production, our system is running in a different way. It's running with two different database connections, one for read-only, one for writing, and that's different than in development, where we're running with a single connection.
As an interesting thing, a lot of the stuff that I see on the internet is about using SQLite in development and then Postgres in production. And so that's an example of development production parity that we've really...I think thoughtbot is definitely a place that I internalize this very strongly. But you've got to have the same database, and especially because it's relatively straightforward to run Postgres locally, I'm always going to be running the same version of the database locally as in production. But in this case, I'm now getting this differentiation. And so what Dan and a handful of other folks highlighted was you can actually reproduce this functionality in development mode with a fun little trick where you end up creating a secondary connection to your development database, but you mark it as replica:true. And so, by doing that, Rails will establish a read-only connection. And then, all of the behavior that you configure for production can also be run in development. So now, as you're building out a new feature, and if you happen to implement a GET request that does some side effect in the database, that'll blow up in development as opposed to production, which is very exciting.
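The general shape of that trick, as a generic sketch rather than Dan's actual gist, is a second database.yml entry that points at the same development database but is marked replica: true, plus the same read/write-splitting configuration production uses.

```ruby
# config/database.yml (sketch):
#   development:
#     primary:
#       database: my_app_development
#     primary_replica:
#       database: my_app_development
#       replica: true

# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  connects_to database: { writing: :primary, reading: :primary_replica }
end

# config/environments/development.rb: mirror production's automatic
# connection switching so GET requests use the read-only connection.
config.active_record.database_selector = { delay: 2.seconds }
config.active_record.database_resolver = ActiveRecord::Middleware::DatabaseSelector::Resolver
config.active_record.database_resolver_context = ActiveRecord::Middleware::DatabaseSelector::Resolver::Session
```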
STEPH: Yeah, that’s awesome. I love that Dan reached out and shared this example with us. I actually haven't read through all of the details just yet. In fact, I just opened it up, and I started going through it, and there's a lot of really...it looks like a lot of great notes here and a really nice example that walks you through how to have that production parity locally. So this is really neat. I appreciate Dan sending this to us.
CHRIS: Yeah, this is a wonderful little artifact, actually, that's interesting just in and of itself. We'll certainly include a link in the show notes to the gist that Dan shared. What's interesting...I think I knew of this, but I've never actually seen it before. This is a single-file Rails application, which is a very novel concept. It's got a bundler/inline call at the very top, and then there's an inline gemfile block, and then a set of requires to pull in the relevant Rails stuff. And then it configures the database connection, configures a single controller or actually a handful of controllers, it looks like, and then it renders inline HTML. And so it has all of the pieces. And I didn't realize that at first, but then I pulled it down and I just ran it locally. So it's just ruby and then the filename, and suddenly I had a reproducible Rails app. I believe this is used in reporting issues to Rails so you can get the minimum reproducible test case. And the reason this works, I think, is that the Rails core team has, over time, pushed on any of the edges that wouldn't have worked and made it so that this is possible. But it's a really neat little thing where it's this self-contained example. And so running this file just via Ruby does all of this stuff, installs everything that's necessary. And then you can click around in the very minimal HTML page that it provides and see the examples of the edges that it's hitting. And again, this is in development mode, so it's pushing on that. But yeah, it's both a really interesting tip as to how to work with this and a really interesting way to communicate that tip, so double points to Gryffindor, aka Dan Ott.
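If you've never seen the single-file pattern, here's a generic skeleton in the same spirit; it's not Dan's gist, just the kind of structure the Rails bug report templates use, with a request test in place of the clickable pages Dan's version served.

```ruby
require "bundler/inline"

gemfile(true) do
  source "https://rubygems.org"
  gem "rails"
  gem "rack-test"
end

require "logger"
require "action_controller/railtie"

class App < Rails::Application
  config.root = __dir__
  config.hosts << "example.org"
  config.eager_load = false
  config.secret_key_base = "secret_key_base"
  config.logger = Logger.new($stdout)
end

class PagesController < ActionController::Base
  def index
    render html: "<h1>Hello from a single-file Rails app</h1>".html_safe
  end
end

Rails.application.initialize!

Rails.application.routes.draw do
  root to: "pages#index"
end

require "minitest/autorun"
require "rack/test"

class SmokeTest < Minitest::Test
  include Rack::Test::Methods

  def app
    Rails.application
  end

  def test_root_renders
    get "/"
    assert last_response.ok?
  end
end
```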
STEPH: Double points to Gryffindor. I love it.
CHRIS: I’m cool. [laughs]
STEPH: That's very charming. [laughs] I've never seen anything like this either, in terms of one file that then can reproduce and run in a Rails app. Agreed, double points to Gryffindor, aka Dan.
CHRIS: Aka Dan.
STEPH: [chuckles] I hope Dan’s a Harry Potter fan.
CHRIS: I hope so. And I hope he's a Gryffindor, who knows? Maybe Ravenclaw. It's really up in the air. But the other thing that is interesting that I haven't yet figured out here is this works for development mode. I've tested it in development. It's great. I was able to remove the fix line that I had in my code where I had one of these breaking controller actions and run with this configuration in development mode. And then boom, it blew up in development, and I was like, yay, this is great. Move those errors to the left, as they say. But I realized there are some other edge cases, known ones actually. Another developer on the team mentioned something where he knows of a place that this is happening, but that code path isn't running right now just because it's a seasonal thing within the app. And I was like, oh, that's really interesting. I wish there were a way to test all the behavior. Oh, tests, that's what I need here. And so I tried to configure this in test mode, but I wasn't exactly clear on what was failing. But at a minimum, I know that the tests run in transaction, so I think that might make this more complicated because if you have two connections to the same database, but you have transactions, I feel like that might be conflating things, and it wouldn't necessarily map perfectly in. But if we could get that, that would be really great. Moving forward, any new development the development configuration will cover what I need. But retroactively, as I'm introducing the database switching to the application, it would be great if the test suite were a way to find these edge cases. So that's still an open question in my mind. But overall, the development fix is such a nice little addition to this world. And again, thank you so much to Dan for sending this in.
STEPH: Yeah, I agree; having this in tests would be wonderful. I am intrigued not having read through the full example that's been provided. But I'm wondering if this is one of those we default to read-only mode, although that feels like too much because we're often creating data for each test. So maybe we default to...yeah, you have to have both because you have to have your test set up where you're going to write data. So you can't default to just being in read-only mode. But then say you want to run a controller action or something else in a read-only mode. So then you would have to change your database connection for that action, and that sounds complicated. You also said something else I'm intrigued by. You said, “Move errors to the left.”
CHRIS: Yes. Now that you're asking me, I'm trying to remember the exact context. But it's the idea that there are different phases in your development and eventually getting to production life cycle, and so a bug that a user sees that's all the way to the right. That's as far along the development pipeline as it can be, and that's the worst case. You don't want a user to see a bug. So QA would be a step right before that. And if you can catch it in QA, you've moved it to the left, which is a good thing. But even better than that would be to catch it in your automated tests, and maybe even better than that would be static analysis that's running in your editor, and maybe even better than that is a type system or something like that. So the idea of moving to the left is to push those errors or when you're catching the errors closer to the point where you're actually introducing them. And that's just a general theme that I like or a Beyoncé song.
STEPH: I was just going to say, all right, move over, Beyoncé. There's another phrase in town, moving to the left. [laughs]
CHRIS: I'm really going for a lot of topical pop culture references today. That's what I'm about.
STEPH: We’ve got Harry Potter, Beyoncé. We've got to pull out one more at some point.
CHRIS: We'll see. I don't want to stretch myself too thin right before vacation. But yeah, thanks again to Dan and the handful of other folks that reached out either on Twitter or via email to point me in the right direction on the database switching stuff. At some point, I should definitely do a write-up on this because I've now collected together just about enough information that it feels like it's worthy of a blog post, or at least that's the story in the back of my head. I got to cross a certain threshold before I'm probably going to write a blog post. But yeah, that's a bit of what's up in my world. What's going on in your world?
STEPH: I love it. You're saying write a blog post into the mic, so then that way you know it's going to encourage you to write it later.
CHRIS: That’s the trick right there. [chuckles]
STEPH: Let's see, today's been a lovely day. It's been a lovely week in general. Today is especially lovely because it is thoughtbot’s Summit, and Summit is where we all gather. We do this once a year. So the whole team, all of us across all of our...I was going to say offices but now just across all of our home offices. And we get together, and we have a day filled with events, and we usually have a wonderful team that helps organize a bunch of events that then we get together for. So a number of those fun events are like paired chats, which is one of my favorites because I often talk to people that I haven't talked to in a long time or perhaps people that I haven't even met yet that have just recently joined the company. We also have lightning talks, and I know I'm very biased, but I think we have some of the best lightning talks. They are just hilarious. So I love our lightning talks. We're also doing escape rooms. Oh, speaking of which, there's a Harry Potter-themed murder mystery that's happening. We have Nintendo Switch parties and a professional tarot card reading, which I've never done, but I'm actually doing that later today after we're done chatting.
CHRIS: Wow, that is an adventurous day. And I like that it's fun, and it's connecting people and getting to know your teammates and all those nice things.
STEPH: Very much. I also have a hot take. I don't know if I've shared this with you, so I'm going to share it here with you on the mic in regards to this. So previous years, for Summit, we used to have more coding projects, too. They were often opt-in, but that's something that happened. And specifically, we have Ralphapalooza, which is our hackathon. And it recently came up where a number of us were talking about Ralphapalooza, and I have come to the potentially contentious point of view that I don't like hackathons. I'm not a fan of them.
CHRIS: Hot take. I like that you led in calling it a hot take, and then you provided said hot take, so I have to respond as if it's a very hot take.
STEPH: That's true. Maybe it's not a hot take. Maybe people disagree. What do you think? Do you like hackathons?
CHRIS: I have enjoyed them in the past. But I will say, particularly within the context of Summit or Ralphapalooza, I always felt a ton of pressure. It's so hard to right-size a project to that space, to that amount of time. You want to do something that's not trivial. You want to do something that at the end of it you're like, oh cool, I did that. Either it's like a novel thing that you're creating, or you're learning something new or whatever, but it's so hard to really do something meaningful in that amount of time. And often, people are shooting for the moon, and then they're just like, “Ah, so it's just a blank page right now. But behind the scenes, there's a machine learning algorithm that is generating the blank page. And we think with enough inputs to the model that it'll…” and it is actually super interesting work they did. But there's the wonderful pressure at the end to present, which I think is really useful. I like constraints. I like the presentations; they’re always enjoyable, even in a case where it's like, this project did not go well, let's talk about that. That's even fun. But it really is so hard to get right. I've never gone to a hackathon outside of thoughtbot, so I can't speak to that, but I know that I have heard folks having a negative opinion of them. And I don't know that I'm quite at the hot take level that you are, but it's complicated, if nothing else. It's a lot of fun sometimes. I particularly remember the Elm project that you and I worked on. Well, we worked in the same group. We didn't actually work together, but same idea. That was a lot of fun. I liked that.
STEPH: That’s a good point. Even within the context of Ralphapalooza, our hackathons are more...I’m going to use the word sustainable because they're nine-to-five hackathons where we are showing up; we are putting in the work. There is pressure, and we do want to present. But it's not one of those stay up all night and completely leave your family for a day or two to hack on some code. [laughs] Sorry, I'm throwing some shade right now. But even with that sustainable approach, I've always felt so much pressure. I enjoyed that green space and then getting to collaborate with people I don't typically collaborate with, but it still felt like there was a lot of pressure there, especially that presentation mode always made me nervous. Even if it is welcoming to say, “Hey, this didn't go well,” that doesn't necessarily feel great to present unless you are comfortable presenting that scenario. And I also really look forward to these company events as a way to connect and have some downtime and to just relax because then the rest of our days are often more stressful. So I want more company time for me to connect with colleagues but then also feel relaxed. So I was always, in the beginning, I was like, yeah, Ralphapalooza, woo, let’s go. And now I'm just like, nah, I'm good. I'd really just want a chill day with my colleagues.
CHRIS: Is there an option to go for a walk with friends? Because if so, I will be taking door number two.
STEPH: Cool. Well, I feel better having gotten that out into the ether now. But switching just a bit, there is something that I'm very excited about where we now have transcripts for each episode. This is something that you and I have been very excited about for a while and wanted to make happen but just weren't able to, but we now have them. And so people may have noticed them as we're adding them to the show notes. And I'm just so excited for a number of reasons, one, because there are a number of times that I have really wanted to search the shows or an episode for a particular topic and couldn't do so. So I'm just sitting there listening, trying to find a particular topic. There's also the fact that it will make the episodes more accessible. So for anyone that is hearing impaired or maybe if English isn't their first language, having it written down can make the episode more accessible. And there’s the massive SEO boost that's always a win. And then I don't know if this is going to happen, but I'm excited that transcripts may help us repurpose content because there's a number of our topics that I would love to see turned into blog posts, and I think having the transcript will make that easier.
CHRIS: Yeah. I'm equally super excited about the addition of transcripts, and across the board, SEO is cool, I think. Yeah, that sounds nice. Being able to reuse the content is very interesting to me because this is definitely my preferred medium. I find that I can just show up on the microphone, and it turns out I have opinions about a lot of stuff, but trying to write a blog post is incredibly difficult for me. The small handful of good things that we might have collectively said over the years, if we can turn those into more stuff, that sounds great. And honestly, just the ability to search for and find older episodes now, based on, like, I know we talked about inbox zero, I remember that was an episode, but I don't know which one; now that's searchable, and that's a thing that we can find. I actually still use the Upcase search for...I know I said something. I know there was a weekly iteration where I talked about some topic. And I built the search on Upcase for me as the primary user because I'm often referencing content on Upcase, and I want to be able to find it more easily, so I made the search. I also put a SQL injection vulnerability into the search in my first implementation so, go me. But then I got rid of it shortly after.
STEPH: I love when people bring that energy of “I introduced this issue, go me,” because I find that very fun and also just very healthy in terms of we're going to make mistakes. And I have noticed a number of times at thoughtbot standup that whenever we make a mistake, or it’s like, I accidentally sent out real emails on production for a job that I thought I was testing on staging. Sharing those mistakes in a very positive light is a very honest way to approach it. So I just had to comment on that because I'm a big fan of that.
CHRIS: I'm glad you enjoyed my framing of it. I really enjoy that type of approach or way to communicate, although I think it is a delicate line. Like, I don't want to celebrate these sorts of things because an SQL injection vulnerability is a non-trivial thing. It shows up in tons of applications, and we need to take security seriously and all of that sort of stuff. But the part of that type of thinking or communication that I think is good is the psychological safety. If we're scared of admitting that we introduced a bug, that's bad. That's going to lead to worse outcomes longer term. And so having that shared, open communication style of like, yep, that happened yesterday. And there should be a certain amount of contrition in this where it's like, I feel bad that I did that. I even feel worse because when it happened, I recognized that it happened, and then I tried to exploit it in development mode to prove it to myself, and I couldn't exploit it. So I was like, I feel doubly bad as a programmer today. I both introduced a bug, and I'm not even smart enough to exploit it. But I know that an uber-leet hacker out there could, and so I've got to fix it. But that sort of story is part of the game. It's a delicate equilibrium, but having the ability to talk about that and having a group that can have a conversation about it, I do think that's very important.
STEPH: Yeah, well said. I do think there's an important balance to strike there. Pivoting just a bit, we have a listener question, and this question comes from Benoit. Benoit wrote in to the show, “How do you properly implement a multi-step form in a boring Rails way?” I'm very interested in this question because I am working on a project that has a multi-step form. There are probably about maybe six, seven steps, and those steps can change based on different configurations. And our form is not implemented in a boring way at all. It's a very intricate, confusing design, I would say, which I think is fairly common when it comes to multi-step forms. I'm curious, what experience do you have with multi-step forms, and what's your general feeling with them?
CHRIS: Well, I happen to be working on one right now. So generally, I don't have an oh, I got this, I know the answer. This is one of those that I'm like; I feel like each time I reinvent it a little bit. But the version that I'm working on right now is an onboarding flow. So we create a user record, which at this point I only have email associated with, and then from there, when a user lands, they need to provide a bunch of profile information, and it is a requirement. They have to fill it out. We need to have all of it before we can actually start doing the real stuff of the application. And so, the way that I've ended up modeling it is interesting. I'm going to use the word Interesting. I think I like it, but I'm not sure. So I have this model; let’s call it a profile that we're going to associate with the user. And the profile has a bunch of fields: first name, last name, address, phone number, and a handful of other things. And again, I need to have these pieces of information. So I want those to be non-nullable columns. But as someone is walking through this form, I'm not going to have all the information. So there's going to be a progression. We'll get first name, then we get last name, and then we get the next piece of information. So I need a nullable storage, but I don't want to just put it into the session or something like that, which I think would be an option. So what I've done is I've introduced a secondary model. So this is a full ApplicationRecord database-backed model called partial profile. And it is almost identically the same interface as the profile, but each field is nullable. There's also a slight difference in that the profile field has an additional status column that talks about once we've gone through all of this, we can add some status and track other things. But yeah, that main difference of in the profile, everything is non-nullable, and the partial profile is nullable.
So then there's a workflowy object, a command object, as I like to have in my systems these days that handles the once they've gathered all their information, turn the partial profile into a profile, send it out to an external system that does some verification and some other lookups and things like that. And then, based on the status of that, mark the status of the profile. But one of the things that I was able to do is make that transition from partial profile to full profile. I'm doing that within a transaction. So if at any point anything fails within all of this, I can roll the whole thing back, and I'll be back to only having the partial profile, which was a very important thing. I would not want to have a partial profile and a profile because that's a bad state. But a lot of this for me is about data modeling and wanting to tell truths with the database and constrain what are the valid states of my application? So one solution would be to just have a profile model that has nullable columns for all of these fields. But man, do I hate that answer. So I went what feels like an extreme take of having two fundamentally different models, but that's where it's actually working out well. I'm able to share validations across them. So as new data is added, I can conditionally validate as new things are shared, and I'm able to share that via concern in the two models. So it's progressively getting more constrained as I add data to this thing. And then, in the background, there is a single controller that skips through all of the steps and has an update action that just keeps pushing data into this partial profile until, eventually, it becomes a profile. So that's focused specifically on the data model stuff. I think there are other aspects of a more workflowy type thing in Rails, but that's our thing. What do you think, good idea, bad idea, terrible idea?
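As a rough sketch of that shape (the names PartialProfile, Profile, ProfileValidations, and CompleteOnboarding are illustrative, not the actual application's code), the two models plus the transactional command object might look like this:

```ruby
class PartialProfile < ApplicationRecord
  belongs_to :user
  include ProfileValidations # shared, progressively applied validations
  # Every column is nullable; this record accretes data step by step.
end

class Profile < ApplicationRecord
  belongs_to :user
  include ProfileValidations
  # first_name, last_name, address, etc. are NOT NULL at the database level.
end

class CompleteOnboarding
  def initialize(partial_profile)
    @partial_profile = partial_profile
  end

  def call
    ApplicationRecord.transaction do
      profile = Profile.create!(profile_attributes)
      @partial_profile.destroy!
      Verification.run!(profile) # any failure raises and rolls everything back
      profile
    end
  end

  private

  def profile_attributes
    @partial_profile.attributes.except("id", "created_at", "updated_at")
  end
end
```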
STEPH: [chuckles] One, I love that you have this concrete example because I have some higher-level ideas around this particular question, but I didn't have a great example that I wanted to share. So I love that you have that, that we can talk through. I really, really like how you have found a way to represent the fact that each valid state of your application as you refer to it….so you have this concept of someone's going through the flow and their address can be nullable at this stage, but by the end of this flow, it shouldn't be nullable anymore. So you have that concept of a partial profile, and then it gets converted into a profile. I am intrigued by the fact that it's one controller because that is where I am feeling pain with the multi-step form that I'm working on where we have one very large controller that handles this entire...I'm going to call it a wizard since that's how it's referred to, and there are seven or eight different steps in this wizard. And the job of this controller is each time someone goes to a new step; this controller is trying to figure out okay; what step are you on based on the parameters that you have, based on some of the model attributes that are set? What step are you on, and what should we show you? And that has led to a very large method and then also complex, lots of conditional-based code. And instead, I would really like to flip that question around or essentially remove the what step am I on? And instead, ask what step is next? So instead, take the approach that each step of the form should have a one-to-one mapping to another controller. And that can get really hard because we're often conditioned to the idea that we should have a one-to-one correlation between each controller and an ActiveRecord model, but that's not necessarily what happens in our form.
You have the concept of a partial profile versus being able to map to a full profile. So I am very much in favor of the idea of trying to map each step of the form to a controller. So that, to me, makes the code more boring. It makes it more understandable. I can see what's happening for each step. But then it's not boring in terms that it requires creativity to say, okay, I don't have a perfect ActiveRecord model that maps to this controller, but what resourceful controller can I make instead? What is the domain object that I can put here instead? Maybe it's an ActiveModel object instead. So that way, we can apply ActiveRecord-like behavior to plain old Ruby objects, or maybe it's using a form object. That way, we can still validate all the fields that the user is providing to us, but that doesn't necessarily map directly to a full profile just yet. So I really like all the things that you've said. But I am intrigued by the approach of using a single controller. How's that feeling so far?
CHRIS: That part is actually feeling fine. So a couple of things you said in there stand out to me, one, where it's a very big controller. That is something that I would definitely avoid. And so, I have extracted other pieces. There is an object that I created, which at this point is just in-app models because I didn't know where else to put it, but it's called onboarding. And so the workflow that I'm trying to introduce, the resource maybe is what we would call it, is the idea of onboarding, but it's not an ActiveRecord level thing. At the ActiveRecord level, I have a profile and a partial profile, and then there's an account, and there's also a user. There are four different database level models that I want to think about. But fundamentally, from a user perspective, we're talking about onboarding. And so I have an object that is called onboarding, and it contains the logic around given the data that we have now, what step comes next? Is this a valid step? Should the user go back? Et cetera, et cetera. So that extraction is one piece that definitely makes sense. Also, thus far, mine is relatively straightforward in terms of I get data in, and I just need to update my partial profile record each time. So the update action is very straightforward. But I've done different versions of this where there are more complex things that happen. And so what I've done is basically make a splat route. So it's like onboarding/ and then the step name and that gets posted or gets put, I guess, along with everything else for the update. And so now the update says, “Well, if I'm updating for this, then handle it this way; otherwise, just update the profile record.” And so then I can extract maybe another command object that handles like, “Oh, when we're doing the address stuff, we actually have to do a little bit of a lookup and a cross-reference and some other things, but everything else is just throwing data into a database record.” And so that's another place where I would probably make an extraction, which is this specialized case of handling the update of the address is special. So I want to extract that, be able to test around that, et cetera. But fundamentally, the controller thing actually works out pretty well. The single controller with those sorts of extractions has worked out well for me.
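For what it's worth, the plain-Ruby onboarding object Chris describes might look roughly like this; the step names and the missing-data checks are illustrative, not the real app's.

```ruby
class Onboarding
  STEPS = %w[name address phone].freeze

  def initialize(partial_profile)
    @partial_profile = partial_profile
  end

  # The first step whose data hasn't been collected yet.
  def current_step
    STEPS.find { |step| value_for(step).blank? } || STEPS.last
  end

  def next_step(step)
    STEPS[STEPS.index(step) + 1]
  end

  def previous_step(step)
    index = STEPS.index(step)
    index.positive? ? STEPS[index - 1] : nil
  end

  private

  def value_for(step)
    attribute = { "name" => :first_name, "address" => :address, "phone" => :phone_number }.fetch(step)
    @partial_profile.public_send(attribute)
  end
end
```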
STEPH: Okay, cool. Yeah, I can see how depending on how complex your multi-step form is, having it all in one controller and then extracting those smaller objects to then handle each step makes a lot of sense and feels very friendly to read, and is very testable. For the form that I'm working in, there are enough steps and enough complexity. I'd really love to break it out. In fact, that's something that we're working on right now is taking each of those chunks, each state of the form, and introducing a controller for it. So let's say if you are filling out an appointment and we need to get your consent for something, then we actually have a consent controller that's going to handle that part, that portion of it. And I'd be intrigued for your form if things got complicated enough that it’s the concept of onboarding or a wizard that leads us to having one controller because then we think of this one concept. But there are often four or five concepts that are then hiding within that general idea of an onboarding flow. So then maybe you get to the point that you have an onboarding address or something like that. So then you could break it out into something that still feels RESTful but then lets you have that very boring controller that does just enough and essentially behaves like a bi-directional linked list. So it knows, based on the route, it knows the step that it's on, and it knows where to go back, and then it knows the next step to go forward. And then that's all it's responsible for, so it doesn't have to also figure out what step am I currently on?
CHRIS: I like the bi-directional linked list, dropping knowledge bombs right there.
STEPH: Pew-pew.
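A rough sketch of that "bi-directional linked list" idea: a small plain-Ruby object that knows the ordered list of steps and can answer what comes before and after the current one. The step names here are invented for the example.

```ruby
# Knows the ordered sequence of form steps and how to move forward and back.
class OnboardingStep
  STEPS = %w[name address consent confirmation].freeze

  def initialize(current)
    @index = STEPS.index(current) || raise(ArgumentError, "unknown step: #{current}")
  end

  def previous_step
    STEPS[@index - 1] unless first?
  end

  def next_step
    STEPS[@index + 1] unless last?
  end

  def first?
    @index.zero?
  end

  def last?
    @index == STEPS.length - 1
  end
end

OnboardingStep.new("address").next_step     # => "consent"
OnboardingStep.new("address").previous_step # => "name"
```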
CHRIS: It's interesting. I don't necessarily feel...right now; I don't feel that pressure. I feel fine with the shape of the singular controller. This is perhaps not necessarily even a good thing, but I think my bias is always to think a lot about the URL structure and really strongly embrace the user point of view. I'm going through the workflow. I don't care if I'm picking from a calendar and setting up a date versus filling in an address field or how you're storing those on the back end; that’s your job, developer people. And I try as hard as I can to put myself in that mindset. And so the idea that there's this sequential thing that knows how to go back and forward and shows like, show which page we're on, that feels like it belongs in one controller in my mind, or I guess I'm fine with it being in one controller. And splitting it out feels almost more complicated in that I then need to share some of that logic across them, which is very doable by extracting some object that contains a logic of what goes back, what goes forward. But I think I like to align URL structure to how many controllers as opposed to anything else. And because I'm keeping a consistent URL structure where it's /onboarding/name /onboarding/address, and I'm stepping through in that way for all of those things, then it makes sense to me that those go to my onboardings controller. But I'm interested to see if I start to feel pain somewhere down the road because I expect this onboarding to get more complicated as time goes on. And will I bump my head on the ceiling? Probably. It seems likely. But for now, I'm liking it.
STEPH: Yeah, it certainly makes sense. It's one of those areas that you want to start small and then build out as it feels reasonable. But in regard to the URLs, I'm with you, where I very much want there to be a clean, nice URL for the user to see. And then we handle out any of those details on the back end since that is our work to do. But I am still envisioning that there is a clean URL. So it may be you have an onboarding/address and then onboarding/consent, borrowing from my previous example, but then that maps to where you have an onboarding namespaced controller that is then for an address or for consent. So you don't necessarily have an object that's having to be passed along that stores the state and the next step that the person is on. But that way, you do definitively know from the route okay, I am on this step. And so then that's how you get away from that question of what step am I on? Because that's already given to us based on the URL and then the controller. So then you only have to care about validating the input that's provided on that page, but then also being able to calculate dynamically okay, if this person needs to go back, what's the previous step and if they go forward, what's the next step?
CHRIS: What you're saying totally makes sense. And I'm now worried that I'm going to wake up a few days from now and look at my controllers and be like, I hate this. Why did I ever do this? I think the hesitation that I had, and this feels like a terrible reason, but in terms of what the config/routes.rb setup would be for this, it's namespace onboardings. And then inside that namespace would be a bunch of singular resources, so like, resource address, resource blah, blah. And I don't know why, but I don't like that. I don't like that. I don't like that. Now that I'm saying it out loud, I'm like, yeah, that actually would be a pretty clean mapping. And right now, I have implicitly what those available routes are but not explicitly. It also feels like there would be a real explosion of controllers there because there's a bunch of steps, and growing, in this controller or in this namespace. And they're all going to do the same thing, in my case at least, of just adding data in. But that's not a reason to not make...like, controllers are cheap; I should make controllers so, hmm.
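Loosely sketching what Chris describes (with invented step names): a namespace in config/routes.rb with one singular resource per step, and a correspondingly boring controller per step that only knows how to save its own data and where to send the user next. This is an illustration of the shape being discussed, not code from either host's project.

```ruby
# config/routes.rb -- the "boring" per-step version: a namespace with one
# singular resource (and one small controller) per step of the form
namespace :onboarding do
  resource :name, only: [:show, :update]
  resource :address, only: [:show, :update]
  resource :consent, only: [:show, :update]
end

# app/controllers/onboarding/addresses_controller.rb
module Onboarding
  class AddressesController < ApplicationController
    def update
      current_user.partial_profile.update!(address_params)
      # This controller only has to know its own next step.
      redirect_to onboarding_consent_path
    end

    private

    def address_params
      params.require(:partial_profile).permit(:street, :city, :postal_code)
    end
  end
end
```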
STEPH: Yeah. So I think that's the part in my mind that maps to the boring part is because we are creating controllers. There's maybe an explosion of them, and it's boring. Like, the controllers don't do very much. And then that feels a little bit wrong to us because we're like, okay, I created this controller, it does very little. So maybe I should actually group this logic somewhere else. But I think that is the heart of it and how you stay boring is where you have just that code be so simple that it almost feels wrong.
CHRIS: That right there, that sound bite that we just had, that was a knowledge bomb drop, and I liked it. Now I've got to go back and refactor to the form that you're talking about because I am sold.
STEPH: Oh, I'm glad you like it. I am intrigued if you do refactor then what that would look like and how it feels. But I also totally understand you're busy, so if you don't, that's cool too, no pressure.
CHRIS: My honest answer is that I almost certainly won't refactor until I feel the pain. It's one of those things where like, okay, maybe I've now decided that this code is not the best, but the time to refactor it isn't when that code is just humming along working fine. It's a general thing that I think we share in terms of how we think about it. But broadly speaking, I'm not a fan of preemptive refactoring. I'm a fan of refactoring just in time, or as we're feeling the pain. The counterpoint to that is let's not extract tech debt tickets, because then they turn into preemptive refactoring again. It's like, ah, I'm not really feeling...I'm not in there right now. But the version of the code that I have now is probably fine. I don't think it's a problem, although I am convinced now of the boring way. I want to go back to the boring way, but it will feel like it's worth changing down the road when I feel any pressure in that system or need to revisit it. So it's like that. That's how I think about that sort of thing.
STEPH: Yeah, I wholeheartedly agree. It's one of those if you refactor...if this is a side project, if you want to refactor just for testing new software theories and then reflecting on what that new refactor looks like, that's awesome. In terms of any other refactors, then I wholeheartedly favor waiting until you feel that pain and it feels like the right thing to do; otherwise, it's unnecessary code churn. And while I strongly believe in experiments, I don't believe in putting teams through those personal experiments.
CHRIS: More hot takes from Steph. I like it.
STEPH: Circling back just a bit and talking about having one controller for each step of the form, that part I struggle with it frankly because it is hard to think about this is a concept, but what do I call this? Because it doesn't necessarily map to something necessarily in my database. There's a really great talk by Derek Prior that’s called In Relentless Pursuit of REST, where Derek does a great job of providing some inspiration around how to create routes that don't necessarily feel like they could be RESTful, or maybe they're following that more RPC format. And he does a great job of then turning around and saying, “Well, this is how we could think about, or this is how we could shift our thinking in turning this into a more RESTful route.” So then it does map to something that's meaningful in our domain. Because we have thoughtfully, or likely very thoughtfully, grouped this form together in a meaningful way to the user. So then that's inspiration right there to give us a way to name this thing because we are showing it to the user in a meaningful way. So then that means we can also give it a meaningful name. That’s all I got on multi-step forms. [laughs]
CHRIS: That feels like it was a lot. We've covered data models. We’ve covered controller structures. We fundamentally reoriented my thinking on the matter. I feel like we covered it.
STEPH: Yeah, I agree. Well, Benoit, thank you for sending in this question. I hope you found our discussion very helpful. And on that note, shall we wrap up?
CHRIS: Let's wrap up.
STEPH: The show notes for this episode can be found at bikeshed.fm.
CHRIS: This show is produced and edited by Mandy Moore.
STEPH: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or review in iTunes as it helps other people find the show.
CHRIS: If you have any feedback for this or any of our other episodes, you can reach us at @bikeshed on Twitter. And I'm @christoomey.
STEPH: And I'm at @SViccari.
CHRIS: Or you can email us at [email protected]. Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Byeeeee.
Announcer: This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
On this week's episode, Steph and Chris respond to a listener question about how to know if we're improving as developers. They discuss the heuristics they think about when it comes to improving, how they've helped the teams they've worked with plan for and measure their growth, and some specific tips for improving.
Transcript
CHRIS: There's something intriguing about the fact that we're having this conversation, but the thing that's recorded just starts at this arbitrary point in time, and it's usually us rambling about golden roads. But, I don't know; there's something existential about that.
STEPH: It's usually when someone says something very funny or starts singing [laughs], and then that's when we immediately: record, record!
CHRIS: I've never sung on the mic. That doesn't sound like a thing I would do.
STEPH: [laughs]
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey.
STEPH: And I'm Steph Viccari.
CHRIS: And together, we're here to share a bit of what we've learned along the way. So Steph, how's your week going?
STEPH: Hey Chris, it's going really well. Normally I'm always like, wow, it's been such an exciting week, and it's been a pretty calm, chill week. It's been lovely.
CHRIS: That sounds nice actually in contrast to the "Well, it's been a week," that sort of intro of "I don't know, it's been fine. It can be really nice."
STEPH: By the time we get to this moment of the week, I either have stuff that I'm so excited to talk about and have a little bit of a therapy session with you or share something new that I've learned. I agree; it's nice to be like, yeah, it's been smooth sailing this whole week. In fact, it was smooth sailing enough that I decided to take on something that I've been meaning to tackle for a while but have just been avoiding it because I have strong feelings about this, which you know but we haven't talked about yet. But it comes down to managing emails and how many emails one should have that are either unread or just existing. And I fall into the category where I am less scrupulous about how many unread or unmanaged emails I have. But I decided that I'd had enough. So I used a really nice filter in Gmail where I said I want all emails that are before 2021 and also don't have a user label, so it's has:nouserlabels because then I know those are all the emails that I haven't labeled or assigned to a particular...I want to say folder, but they're not truly folders; they just look like folders. So they're essentially untriaged or just emails that I've left hanging out in the ether. And then I just started deleting, and I got rid of all of those that hadn't been organized up until that point. And I was just like yep, you know if I haven't looked at it, it's that old, and I haven't given it a label by this point, I'm just going to move on. If it's important, it will bubble back up. And I feel really good about it.
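The exact search isn't quoted in the episode, but based on Stephanie's description, the Gmail query would look something like this in the search box (with whatever cutoff date you prefer):

```
before:2021/01/01 has:nouserlabels
```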
CHRIS: Wow, that is -- I like how you backed me into a corner. Obviously, I'm on the other side where I'm fastidiously managing my email, which I am, but you backed me into that corner here. So, yeah, that's true. Although the approach that you're taking of just deleting all the old email that's a different one than I would have taken [chuckles] so, I like it. It's the nuclear option.
STEPH: Okay, so now I need to qualify. When you delete an email, initially, I'm thinking it's going to trash, and so it's still technically there if I need to retrieve it and go back and find it. But you just said nuclear option, so maybe they're actually getting deleted.
CHRIS: They're going into the trash for 30 days, I think, is the timeline. But after that, they will actually delete them. The archive is supposed to be the place where you put stuff you don't want to see anymore. But did you archive or delete?
STEPH: Oh, I deleted.
CHRIS: Oh, wow. Yeah. All right, you went for it. [laughter]
STEPH: Yeah, and that's cool. And it's in trash. So I basically have a 30-day window where I'm like, oh, I made a mistake, and I need to search for something and find something and bring it back into my world; I can find it. If I haven't searched for it by then in 30 days, then I say, you know, thanks for the email, goodbye. [chuckles] And it'll come back if it needs to.
CHRIS: I like the approach. It would not be my approach, but I like the commitment to the cause. Although you still have...how many emails are still in your inbox now?
STEPH: Why do we have to play the numbers game?
CHRIS: [laughs]
STEPH: Can't we just talk about the progress that I have made?
CHRIS: What wonderful progress you've made, Steph. [laughter] Like, it doesn't matter what I think. What do you think about this? Are you happy with this? Does this make you feel more joy when you look into your email in the Marie Kondo sense?
STEPH: It does. I am excited that I went ahead and cleared all this because it just felt like cruft. So I have taken what may be a very contentious approach to my email, where I treat it as this searchable space. So as things come in, I triage them, and I will label them, I will star them. I will either snooze them to make sure I don't miss the highly actionable emails or something that's very important to me to act on quickly. But for the most part, then a lot of stuff will sit in that inbox area. So it becomes like this junk drawer. It's a very searchable junk drawer, thanks to Google. They've done a great job with that. And it feels nice to clear out that junk drawer. But I do have such an aversion to that very strong email inbox zero. I respect the heck out of it, but I have an aversion; I think from prior jobs where I was on a team, and we could easily get like 800 emails a day. My day all day was just triaging and responding to emails and writing emails. And so I think that just left a really bitter experience where now I just don't want to have to live that life where I'm constantly catering to what's in my inbox.
CHRIS: That's so many emails.
STEPH: It was so many emails. We were a team. It was a team inbox. So there were three of us managing this inbox. So if someone stepped away or if someone was away on vacation, we all had access to the same emails. But still, it was a lot of emails.
CHRIS: Yeah, inbox zero in a shared inbox that is a level that I have not gotten to but getting to inbox zero and actually maintaining that is very much a labor of love and something that I've had to invest in. And it's probably not worth it for most people. You could convince me that it is not worth it for me, that the effort I'm putting in is too much effort for not enough reward. Well, it's one of those things where I find the framing that it puts on it, like, okay, I need to process my email and get it to zero at least once a day. Having that lens makes me think about email in a different way. I unsubscribe from absolutely everything. The only things that are allowed to come into my email are things that I will act on that actually deserve my attention, and so it forces that, which I really like. And then it forces me to think about things. I have a tendency to really hold off on decisions. So I'm like, ah, okay. I can go see friends on Saturday or I can do something else. Friends like actual humans, not the TV show, although for the past year, it's definitely more of the TV show than the real people. But let's say there's a potential thing that I could do on the weekend and I have to decide on that. I have a real tendency to drag my feet and to wait for some magical information from the universe to help this decision be obvious to me. But it's never going to be obvious, and at some point, I just need to pick. And so for inbox zero, one of the things that comes out of it for me is that pressure and just forcing me to be like, dude, there's no perfect answer here, just pick something. You got to just pick something and not wasting multiple cycles rethinking the same decision over and over because that's my natural tendency. So in a way, it's, I don't know, almost like a meditative practice sort of thing. There's utility there for me, but it is an effort, and it's, again, arguably not worth it. Still, I do it. I like it. I'm a fan. I think it's worth it.
STEPH: I like how you argued both sides. I'm with you. I think it depends on the value that you get out of it. And then, as long as you are effective with whichever strategy you take, then that's really what matters. And I do appreciate the lens that it applies where if you are getting to inbox zero every day, then you are going to be very strict about who can send you emails about notifications that you're going to receive because you are trying to reduce the work that then you have to get to inbox zero. So I do very much admire that because there are probably -- I'm wasting a couple of minutes each day deleting notifications from chats or stuff that I know I'm not necessarily directly involved in and don't need action from me. And then I do get frustrated when I can't adjust those notification settings for that particular application, and I'm just subscribed to all of it. So some of it I feel like I can't change, and then some of it, I probably am wasting a few minutes. So I think there's totally value in both approaches. And I'm also saying that to try to justify my approach of my searchable inbox. [laughs]
CHRIS: There are absolutely reasons to go either way. And also, to come back to what I was saying a minute ago, it may have sounded like I'm a person who's just on top of this. I may have given that impression briefly. I think the only time this has actually worked in my life is when Gmail introduced snooze both in the mobile app and on the desktop. So this is sometime after Google's inbox product came out, and that was eventually shut down. So it's relatively recent because, man, I just snooze everything. That is the actual secret to achieving inbox zero, just to reach the end of the day and be like, nah, and just send all the emails to future me. And then future me wakes up and is like, "You know, it's first thing in the morning. I got a nice cup of coffee, and this is what you're going to do to me, past me?" So there's a little bit of internal strife there within my one human. But yeah, the snoozing is actually incredibly useful and probably the only way that I actually get things done and the same within any task management system that I have; maybe future me will do this.
STEPH: I think you and I both subscribed to the that's a future me problem. We just do it in very different ways. But switching gears a bit, how's your week been?
CHRIS: It's been good, pretty normal, doing some coding, normal developer things. Actually, there's one tool that I was revisiting this week that I'm not sure that we've actually talked about on the show before, but it's Rails Autoscale. Have you used that before?
STEPH: I don't think I have. It sounds very familiar, but I don't think I've used it.
CHRIS: It's a very nice, straightforward Heroku add-on that does exactly what you want it to do. It monitors your web and worker dynos and will scale up. But it uses a different heuristic than -- So Heroku has built-in autoscaling, but theirs is based on response time, which is, I think, a little bit laggier of a metric. Like if your response time has gotten bad, then you're already in trouble, whereas Rails Autoscale uses queue time. So how long is a request waiting -- I think it's at the Heroku router -- before it goes onto the dyno that's actually going to process the request? So I think that's what they're monitoring. I may be wrong on that. But from the website, they're looking at that, and you can configure it. They actually have a really nice configuration dashboard where you configure between this range, so one to five dynos at most, and scale in this way up and in this way down. So like, how long should it wait? What's the threshold of queue time? Those sorts of things. So they have a default like just do the smart thing for me, and then they give you more control if your app happens to have a different shape of data, which is all really nice. And then I've been using that for a while, but I recently this week actually just turned on the worker side. And so now the workers will autoscale up and down as the Sidekiq queue -- I think for the Sidekiq side, it's also the queue time, so how long a job sits in the queue before getting picked up. And there are some extra niceties. It can actually infer the different queue names that you have. So if you have a critical, and then a mailer, and then a general as the three queues that Sidekiq is managing, you really want critical to not back up. So you can tell it to watch that one but ignore the normal one and only use -- Like, when critical is actually getting backed up, and all the other stuff is taken over then -- Again, it's got nice knobs and things, but mostly you can just say, "Turn it on and do the normal thing," and it'll do a very smart thing.
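To illustrate the queue-time heuristic Chris mentions: Heroku's router stamps each request with an X-Request-Start header, so a small piece of Rack middleware can compute how long a request sat at the router before a dyno started processing it, which is the number a queue-time autoscaler watches. This is a hypothetical sketch of the metric itself, not Rails Autoscale's actual code or API.

```ruby
# A hypothetical Rack middleware that logs request queue time: the gap between
# when the router accepted the request and when the dyno began processing it.
class QueueTimeReporter
  def initialize(app)
    @app = app
  end

  def call(env)
    # Heroku's router sets X-Request-Start to the arrival time in milliseconds
    # since the epoch (some platforms prefix the value with "t=").
    if (stamp = env["HTTP_X_REQUEST_START"])
      started_ms = stamp.sub(/\At=/, "").to_f
      queue_time_ms = (Time.now.to_f * 1000) - started_ms
      Rails.logger.info("request queue time: #{queue_time_ms.round}ms")
      # An autoscaler aggregates this number, adds dynos while it stays above a
      # configured threshold, and scales back down once it recovers.
    end

    @app.call(env)
  end
end
```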
STEPH: That does sound really helpful. Just to revisit, so Heroku for autoscaling, when you turn that on, I think Heroku does it based on response times. So if you get into a specific percentile, then Heroku is going to scale up for you to then bring down that response time. But it sounds like with this tool, with Rails autoscaling, then you have additional knobs like the Sidekiq timing that you'd referenced. Are there some other knobs that you found really helpful?
CHRIS: Basically, there are two different sides of it. So web and background jobs are going to be handled differently within this tool, and you can actually turn them on or off individually, and you can also, within them, the configurations are specific to that type of thing. So for the web side, you have different values that you can set as the thresholds than you do on the Sidekiq side. Overall, the queue name only makes sense on the Sidekiq side, whereas on the web side, it's just like the web requests all of them 'Please make sure they're not spending too much time waiting for a dyno to actually start processing them.' But yeah, again, it's just a very straightforward tool that does the thing that it says on the tin. I enjoy it. It's one of those simple additions where it's like, yeah, I think I'm happy to pay for this because you're just going to save me a bunch of money every month, in theory. And actually, that side of it is certainly interesting, but more of my app will be responsive if there is any spike in traffic. There's still plenty of other performance things under the hood that I need to make better, but it was nice to just turn those on and be like, yeah, okay. I think everything's going to run a little better now. That seems nice. But yeah, otherwise, for me, a very straightforward week.
So I think actually shifting gears again, we have a listener question that we wanted to chat about. And this is one that both of us got very interested to chat about because there's a lot to this topic, but I'm happy to read it here. So the overall topic is improving as a developer, and the question goes, "How do you know you're improving as developers? Is your improvement consistent? Are there regressions? I find myself having very different views about code than I did even a year ago. In some cases, I write code now in a way that I would have criticized not too long ago. For example, I started writing a lot more comments. I used to think a well-named variable obviated the need for comments. While it feels like I'm improving, I have no way of measuring the improvement. It's only a gut feeling. Thanks. Love the show." And this comes from Tom. Thank you, Tom. Glad you enjoy the show. So, Steph, are you improving as a developer?
STEPH: I love this question. Thanks, Tom, for sending it in because it is one that I think about but haven't really verbalized, and so I'm really excited to dive into this. So am I improving as a developer? It comes down to, I mean, we first have to talk through definitions. Like, what does it mean to become a better developer? And then, we can talk through metrics and understanding how we're getting there. I also love the other questions, which I know we'll get to. I'm just excited. But are there any regressions? And also, in my mind, they already answered their own question. But I'm getting ahead of myself. So let me actually back up. So how do you know you're improving as a developer? There are a couple of areas that come to mind. And for me, these are probably more in that space of they still have a little bit of a gut feeling to them, but I'm going to try hard to walk that back into a more measurable state. So one of them could be that you're becoming more comfortable with the work that you're doing, so if you are implementing a new email flow or running task on production or writing tests that become second nature, those types of activities are starting to feel more comfortable. To me, that is already a sign of progress, that you are getting more comfortable in that area. It could be that time estimates are becoming more accurate. So perhaps, in the beginning, they're incredibly -- like, you don't have any idea. But as you are gaining experience and you're improving as a developer, you can provide more accurate estimates.
I also like to use the metric of how many people are coming to you for help, not necessarily in hard numbers, but I tend to notice when someone on a team is the person that everybody else goes to for help, maybe it's just on a specific topic, maybe it's for the application in general. But I take that as a sign that someone is becoming very knowledgeable in the area, and that way, they're showing that they're improving as a developer, and other people are noticing that and then going to them for help. Those are a couple of the ones that I have. I have some more, but I'd love to hear your thoughts.
CHRIS: I think if nothing else, starting with how would we even measure this? Because I do agree it's going to be a bit loose. Unfortunately, I don't believe that there are metrics that we can use for this. So the idea of how many thousand lines of codes do you write a month? Like, that's certainly not the one I want to go with. Or, how many pull requests? Anything like that is going to get gamified too quickly. And so it's really hard to actually define truly quantifiable metrics. I have three in mind that scale the feedback loop length of time. So the first is just speed. Like, how quickly are you able to do the same tasks? So I need to build out a page in Rails. I need a route; I need a controller. I need a feature spec, those sort of things. Those tasks that come up over and over: are you getting faster with those? That's a way to measure. And there's an adage that I think comes from biking, professional cycling, that it never gets any easier; you just go faster. And so the idea is you're doing the same work over time, but you just get a little bit faster, and you're always trying that edge of your capabilities. And so that idea of it never gets any easier, but you are getting faster. I like that framing. We should be doing the same work. We should never get too good for building a crud app. That's my official stance on the matter; thank you very much. But yeah, so that's speed. I think that is a meaningful thing to keep an eye on and your ability to actually deliver features in a timely fashion.
The next one would be how robust are the things that you're building? What's the bug count? How regularly do you have to revisit something that you've built to change it, to tweak it either because it doesn't exactly match the intent of the feature that you're developing or because there's an actual bug in it? It turns out this thing that we do is very hard. There are so many moving pieces and getting the design right and getting the functionality just right and handling user input, man, that's tricky. Users will just send anything. And so that core idea of robustness that's going to be more on a week scale sort of thing. So there's a little bit of latency in that measure, whereas speed that's a pretty direct measure.
The third one is…I don't know how to frame this, but the idea of being able to revisit your code either yourself or someone else. So if you've written some code, you tried to solve a problem; you tried to encode whatever knowledge you had at the given time in the code. And then when you come back three months later, how easy is it to revisit that code, to change it, to extend it either for yourself (because at that point you've forgotten everything) or for someone else on the team? And so the more that you're writing code that is very easy to extend, that is very easy to revisit and reload that context into your head, how closely the code maps to the actual domain context I think that's a measure as well that I'm really interested in, but there's the most lag in that one. It's like, yeah, months later, did you do a good job? And so the more time you spend, the more you'll have a measure of that, but that's definitely the laggiest of the measures that I have in mind.
STEPH: I love that adage that you shared that it never gets easier, but you get faster. That feels so relevant. I really like that. And then I hadn't considered the robustness. That's a really nice one, too, in terms of how often do you have to go back and revisit issues that you've added?
CHRIS: You just write code without bugs; that's why you don't think about it.
STEPH: [laughs] Oh, if only that were true.
CHRIS: Yeah, if only that were true of any of us.
STEPH: To keep adding to the list, there are a couple more that come to mind too. I'd mentioned the idea that certain tasks become easier. There's also the capability or the level of comfort in taking on that new, big, scary, unknown task. So there is something on the team's board where you're like, I have no idea how to do that, but I have confidence that I can figure it out. I think that is a really big sign that you are growing as a developer because you understand the tools that'll get you to that successful point. And maybe that means persuading someone else to help you; maybe it means looking elsewhere for resources. But you at least know how to get there, which then follows up on your ability to unblock yourself. So if you are in that state of I just don't know what to do next, maybe it's Googling, or maybe it is reaching out for help, but either way, you keep something moving forward instead of just letting it sit there.
Another area that I've seen myself and other people grow as developers is our ability to reason about quality and speed. It's something that I feel you, and I talk about pretty often here on the show, but it comes down to our ability to not just write code but then to also make good decisions on behalf of the company that we are working for and the team that we're working with and understanding what matters in terms of what features really need to be part of this MVP? Where can we make compromises? And then figuring out where can we make compromises to get this out to market? But what's really important then for circling back to your idea of revisiting the code, we want code that we can still come back and trust and then easily maintain and make updates to. And then I feel like I'm rambling, but I have a couple more. Shall I keep going?
CHRIS: Keep going. Those are great.
STEPH: All right. So for the others, there's an increase in responsibilities that I notice. So, in addition to people coming to you more often for help, then it could be that you are receiving more responsibilities. Maybe you are taking on specific ownership of the codebase or a particular part of the team processes. Then that also shows that you are improving and that people would like you to take leadership or ownership of certain areas. And then this one, I am throwing it in here, but your ability to run a meeting. Because I think that's an important part of being a good developer is to also be able to run a meeting with your colleagues and for that to be a productive meeting.
CHRIS: Cool. I like that one. I think I want to build on that because I think the core idea of being able to run a meeting well is communication. And I think there's one level of doing this job where it's just about doing the job. It's just about writing the code, maybe some amount of translating a specification or a ticket or whatever it is into the actual code that you need to write. But then how well can you communicate back out? How well when someone in project management says, "Hey, we want to build an aggregated search across the system that searches across our users, and our accounts, and our products, and our orders, and our everything." And you're like, "Okay. We can do that, but it will be hard. And let's talk about the trade-offs inherent in that and the different approaches and why we might pick one versus the other," being able to have that conversation requires a depth of knowledge in the technical but then also being able to understand the business needs and communicate across that boundary. And I think that's definitely an axis on which I enjoy pushing on as I'm continuing to work as a developer.
STEPH: Yeah, I'm with you. And I think being a consultant and working at thoughtbot heavily influences my concept of improving as a developer because as developers, it's not just our job to write code but to also be able to communicate and help make good decisions for the team and then collaborate with everyone else in the company versus just implement certain features as they come down the pipeline. So communication is incredibly important. And so I love that that's one of the areas that you highlighted.
CHRIS: Actually speaking of the communication thing, there's obviously the very human-centric part of that, but there's, I think, another facet of technical communication that is API design. When you're writing your code, what do you choose to expose and make accessible to collaborators? And I don't just mean API in terms of a REST API that people are hitting, but I mean a class that you have in your system. What are the private methods, and what are the public methods? And how do you think about the shape of it? What data do you expose? What do you not expose? And that can be really impactful because it affects how you can change things over time. The more that you hide, the more you can change. But then, if you don't allow your collaborators to access the bits that they need to be able to work with your system, that's an interesting one that comes to mind. It also aligns with, I don't think you were saying this exactly, but the idea of taking on more amorphous projects. So like, are you working within a system and adding a new feature, or are you designing a system? Are you architecting? The word architect, that role, can sometimes be complicated within organizations, but that idea of I'm starting fresh, and I'm building a system that others will then work within -- I think this idea of API design becomes really interesting in that context. What shape do you give to the system that we're working within, and what affordances? And all of that. And that's a very hard thing to get right. So it comes from experience of being like, I used some stuff in the past, and I hated it, so when I am the architect, I will build it better. And then you try, and you fail, and you're like, well, okay, but now I've learned. And then you try it, and then you fail for different reasons. But the seventh time you try, it may be just that time you get the public API just right on the first go.
STEPH: Seventh time's a charm. That's how that goes, right?
CHRIS: That is my understanding, yes.
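As a small, made-up illustration of the API design question Chris raises: what a class exposes publicly versus keeps private determines how freely you can change it later without touching collaborators. The class and method names here are invented for the example.

```ruby
# Callers can only ask whether an order ships free; the threshold and the math
# behind it stay private, so they can change without breaking collaborators.
class Order
  FREE_SHIPPING_THRESHOLD_CENTS = 50_00

  def initialize(line_items)
    @line_items = line_items
  end

  def free_shipping?
    subtotal_cents >= FREE_SHIPPING_THRESHOLD_CENTS
  end

  private

  # Internal detail: collaborators never see the raw calculation, so discounts
  # or taxes could be folded in later without changing the public API.
  def subtotal_cents
    @line_items.sum { |item| item.fetch(:unit_price_cents) * item.fetch(:quantity) }
  end
end

order = Order.new([{ unit_price_cents: 30_00, quantity: 2 }])
order.free_shipping? # => true
```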
STEPH: I think something that is related to the idea of are you working in a structured space versus working in a new space and then how you develop that API for other people to work with. And then how do you identify when to write a test and what to test? That's another area that you were just making me think of is that I can tell when someone has experience with testing because they know what to test and what feels important to test. And essentially, it comes down to can I deploy with confidence? But there are a lot of times, especially if you're new to testing, that you're going to test everything, and you're going to have a lot of probably useless slow tests. But over time, you will start to realize what's really important. And I think that's one of the areas where then it does start to get harder to measure yourself as a developer because all of our jobs are different, and we work with different tech stacks, and we all have our unique responsibilities and goals. So it may be hard to say specifically like, "Oh, you're really good at X, Y, and Z, and that's how you know that you're improving as a developer." But I have more thoughts on that, which we'll get to in a moment where Tom mentioned that they don't have a way of measuring improvement. Shall I go ahead and jump ahead to I have no way of measuring that improvement, or shall we talk about regressions next?
CHRIS: I'm interested in your thoughts on the regressions question because it's not something that I've really thought about. But now that he's asked the question, I'm thinking about it. So yeah, what are your thoughts on that?
STEPH: My very quick answer is yes, [laughs] that there are regressions mainly because I respect that our brain can only make so much knowledge readily available to us, and then everything else goes into long-term storage. We can access it at some point, but it takes additional time, or maybe it takes some practice to recall that skill. So I do think there are regressions, and I think that's totally fine that we should be focused on what is serving us most at the moment and be okay with letting go of some of those other skills until we need to refine them again.
CHRIS: Yeah. I think there's definitely a truth to true knowledge and experience with, say, a framework or a language that can fade. So if I spend a lot of time away from JavaScript, and then I come back, I'm going to hit my head on a few low ceilings every once in a while for the first couple of days or weeks or whatever it is. It was interesting actually that Tom highlighted the idea of he used to not write comments, and now he writes more comments, and so that transition -- I think we've talked about comments enough so our general thinking on it. But I think it's totally reasonable for there to be a pendulum swing, and maybe there's a slight overcorrection. And you read some blog posts that tell you the truth of the world, and suddenly, absolutely no comments ever that's the rule. And then, later on, you're like, you know, I could really use a comment here. And so you go that way, and then you decide you know what? Comments are good, and you start writing a bunch of them. And so it's sort of weaving back and forth. Ideally, you're honing in on your own personal truth about comments. But that's just an interesting example to me because I certainly wouldn't consider that one a regression.
But then there's the bigger story of like, how do we approach building software? Ideally, that's what this podcast does at its best. We're not really a podcast about Rails or JavaScript or whatever it is we're talking about that week, but we're talking about how to build software well. And I think those core ideas feel like they're more permanent for me, or I feel like I'm changing those less. If anything, I feel like I'm ratcheting in on what I believe about good software. And there are some core ideas that I'm just refining over time, not done by any means, but it's that I don't feel like I'm fundamentally reevaluating those core ideas. Whereas I am picking up a new language and approaching a new framework and taking a different approach to what tools I'm using, that sort of thing.
STEPH: Yeah, I agree. The core concepts definitely feel more important and more applicable to all the future situations that we're going to be in. So those skills that may fall into the regression category feel appropriate because we are focused on the bigger picture versus how well do I remember this regex library or something that won't serve us as well? So I agree. I am often focused more on how can I take this lesson and then apply it to other tech stacks or other teams and keep that with me? And I don't want that to regress. But it's okay if those other smaller, easily Google-able skills fall to the side. [laughs]
CHRIS: Wait, are you implying that you can't write regexes just off the top of your head or what's…?
STEPH: I don't think I could write any regexes off the top of my head. [laughter]
CHRIS: Fair. All right. You just go to rubular.com, hit enter, and then we iterate.
STEPH: Oh yeah. I don't want to use up valuable space for maintaining that sort of information. Rubular has it for me. I'm just going to go there.
CHRIS: I mean, as long as you have the index of the places you go on the internet to find the truth, then you don't need to store that truth.
STEPH: A moment ago, you mentioned where Tom highlights that they have different views about code that they wrote, even code that they wrote just like a year ago. And to me, that's a sign of growth in terms that you can look back on code that you have written and be like, well, maybe this would be different, or maybe this is still a good idea, but the fact that you are changing and then reevaluating, I think that is awesome because otherwise, if we aren't able to do that, then that is just a sign of being stagnant to me. We are sticking to the knowledge that we had a year ago, and we haven't grown since then versus that already shows that they have taken in new knowledge. So then that way, they can assess should I be adding comments? When should I add comments? Maybe I should swing away from that idea of this is a hard line of don't ever do this. I think I just have to mention it because there is one that I always feel so deeply about, DRY. DRY is the concept that gives me the most grief in terms that people just overuse it to the point that they do make code very hard to change. All right, that's my bit. I'll get off my pedestal. But DRY and comments are two things [chuckles] that both have their places.
CHRIS: I don't know if your experience was similar, but around DRY, I definitely have had the pendulum swing of how I feel about it. And I think again, that honing in thing. But initially, I think I read The Pragmatic Programmers, and they told me that DRY is important. And then I was like, absolutely, there will be no duplication anywhere, and then I felt some pain from that. And I've been in other systems and experienced places where people did remove duplication. I was like, oh, maybe it would have been better, and so I slowly got out of that mindset. But now I'm just in the place of like, I don't know, copy and paste not now, there was a period where I was like, just copy and paste everything. And then I was like, all right, I think there's a subtle line. There's a perfect amount of duplication, and that's the goal is to figure out that just perfect level. But for me, it really has been that evolution, and I was on one side, and then I was on the other side, and then I'm honing back in. And now I have my personal truth about duplication.
STEPH: Oh, me too. And I feel like I can be a little more negative about it because I was in the same spot. Because it's a rule, it's a rule that you can apply that when you are new to software development, there aren't that many rules that are so easy to apply to your codebase, but DRY is one of them. You can say, oh, that is duplication. I know exactly what that is, and I can extract it. And then it takes time for you to realize, okay, I can identify it, but just because it's there, it doesn't mean it's a bad thing. Perfect duplication, I like it.
CHRIS: Coming back to the idea of when we look back on our code six months, a year later, something like that, I think I believe the statement that we should always look back on our code and be like, oh, what was I doing there? But I think that arc should change over time. So early on in my career, six months later, I look back at my code, and I'm like, oh, goodness, what was happening there? I was very much a self-taught or blog internet-taught programmer just working on my own. I had no one else to talk to. So the stuff that I wrote early on was not good is how I will describe it. And then I got better, and then I got better, and I hope that I'm still getting better. And it's something that probably draws me to software development is I feel like there's always room to get a little bit better. Again, even back to that adage of it doesn't get any easier; you just go faster. Like, that's a version of getting better in my mind. So I hope that I can continue to feel that improvement and that ratcheting up. But I also hope that that arc is leveling off. There is an asymptotic approach to "good software developer." People in the audience, you can't see my air quotes, but I made air quotes there around good software developer. But that idea of I shouldn't look back probably this far into my career and look back at code from three months ago and be like, that's awful. That dude should be fired. I hope I'm not there. And so if you're measuring over time, what does your three months ago look back feel like? Oh, I feel like it's a little better. Still, you should look back and be like, oh, I probably would do that a little bit different given what I know now, what I've learned, but less so, I think. I don't know, what do you think about that?
STEPH: Yeah, that makes sense. And I'm also realizing I haven't looked back at my code that much since I am changing projects, and then I don't always have the opportunity to go back to that project and then revisit some of the code. But I do agree with the idea that if you're looking back at code that you've written a couple of months ago that you can see areas that you would improve, but I agree that you wouldn't want it to be something drastic. Like, you wouldn't want to see something that was more of an obvious security hole or performance issue. I think there are maybe certain metrics that I would use. I think they can still happen for sure because we're always learning, but there's also -- I may be taking this in a slightly different direction than you meant, but there's also a kindness filter that I also want us to apply to ourselves where if you're looking back three months ago to six years ago and you're like, oh, that's some rough code, Stephanie. But it's also like, yeah, but that code got me to where I am today, and I'm continuing to progress. So I appreciate who I was in the past, and I have continued to progress to who I am today and then who I will be.
CHRIS: What a wonderfully positive lens to put on it. Actually, that makes me think of one of -- We may be getting into rant territory here, but we talk a lot about imposter syndrome in the software development world. And I think there's a lot of utility because this is something that almost everyone experiences. But I think there's a corollary to it that we should talk about, which is a lot of people are coming into this industry, and they're like one year in, and the expectation that one year into a career that -- The thing that we do is not easy as far as I can tell. I haven't figured out how to make it easy. And the expectation that someone's going to be an expert that early on is just completely unreasonable in my mind. In my previous career, I was a mechanical engineer, and I went to school for four years. I actually went to school for five years, not because I was bad at school, but because I went to a place that had a co-op. And so I had both three different six months experiences working and four years of classroom education before I even got any job. And then I started doing things, and that's normal in that world. Whereas in the development world, it is so accessible, and I really feel like that's an absolutely wonderful thing. But the counterpoint of that is folks can jump into this career path very early on in their learning, and the expectation that they can immediately become experts or even in the short order I don't think is realistic. I think sometimes, when we talk about imposter syndrome, we may do a disservice. Like, it's not imposter syndrome. You're just new, and that's totally fine. And I hope you're working in an organization that is supportive of that and that has space for that and can help you grow in a purposeful way. In my mind, it's not realistic to expect everyone to be an expert a year in—end rant.
STEPH: Well, I would love to plus-one your rant and add to it a little bit because I completely agree. I also love the phrasing that you just said where it's not that you have imposter syndrome; it's just that you are new and that team should be supportive of people that are new and helping them grow and level up. I also think that's true for senior developers in terms that you are very good at certain skills, but there's always going to be some area of the web or some area of software development that you are new to, and that is also not imposter syndrome. But it's fine to assess your own skills and say, "That's something that I don't know how to do." And sometimes, I think that gets labeled as imposter syndrome, but it's not. It's someone just being genuine and reflecting on their current skills and saying, "I am good at a lot of stuff, but I don't know this one, and I am new to this area." And I think that's an important distinction to make because I still want -- even if you are not new in the sense that you are new to being a software engineer, but you still have that space to be new to something.
CHRIS: Yeah, it's an interesting, constantly evolving space. And so giving ourselves a little bit of permission to be beginners on various topics, and for me, that's been a continual experience. I think being a consultant, being a freelancer, that impacts it a little bit. But nonetheless, even when I go into organizations, I'm like, oh, here's a technology that only came out two years ago. That's pretty fresh. And so it's really hard to be an expert on something that's that new.
STEPH: Yeah. I think being new to a team has its own superpower. I don't know if we've talked about that before; if we haven't, we should talk about it, but I won't do that now. But being new is its own superpower. But I do want to pivot back to where Tom mentioned that I have no way of measuring that improvement. And I think that's a really great thing to recognize that you're not sure how to measure something. And my very first honest suggestion if you are feeling that way is to go ask your manager and ask them how they are measuring your improvement because that is their job: to understand where you're at, to understand your path as a developer on the team, and then to help you set goals.
So since I'm a manager at thoughtbot, I'll go first, and I can share some ways that I help my team measure their own improvement. So one of the ways is that each time that we meet to discuss work, I listen to their challenges, and I take notes; I'm a heavy note-taker. And so once I have all those notes, then I can see are there any particular challenges that resurface? Are there any patterns, any areas where they continuously get stuck on? Or are they actually gaining confidence, and maybe something that would have given them trouble a couple of weeks ago is suddenly no big deal? And then I also see if they're able to unblock themselves. So a lot of what I do is far more listening, and I'm happy to then provide suggestions. But I am often just a space for someone to share what they are thinking, what they're going through, and then to walk through ideas and then provide suggestions if they would like some, and then they choose a suggestion that works best for them. And then we can revisit how did it go? So their ability to unblock themselves is also something that I'm looking for in terms of growth. And then together, we also set goals together, and then we measure that progress together. So it's all very transparent. And what areas would you like to improve, and then what areas would it be helpful for thoughtbot or as a consultant for you to improve? And then if I am fortunate enough to be on a project with them and see how they reason about quality and speed, how they communicate the type of features they're most comfortable to work on, and which tasks are more challenging for them, I also look to see do people enjoy working with them? That's a big area of growth and reflects communication, and reliability, and trust. And those are important areas for us to grow as developers. So those are some of the areas that I look to when I'm helping someone else measure their own improvement.
CHRIS: I really like that, the structured framing of it, and the way that you're able to give feedback and have that as a constant, continuous way to evaluate, define, measure, and then try and drive towards it. Flipping things around, I want to offer a slightly different thing, which isn't necessarily specifically in the question, but I think it's very close to the question of how do we actually improve as developers? What are the specific things that we can try and do? I'm going to offer a handful of ideas. I'd be super interested to hear what your ideas are. But one of the things that has been really valuable for me is exploring different languages and frameworks. I, without fail, find something in every new language or framework that I then bring back to the core things that I'm working with. And I've continued to work with Rails basically throughout my career, but everything else that I'm doing has informed the way that I work with Rails and the way that I think about building code. As specific examples, functional programming is a really interesting frame of mind, and Elm as a language is such a wonderful, gentle, friendly, fun introduction to functional programming because functional programming can get very abstract very easily. I've also worked with Haskell and Scala and other languages like that, and I find them much more difficult to work with. But Elm has a set of constraints and a user-centric approach that is just absolutely wonderful. So even if you never plan to build a production Elm application, I recommend Elm to absolutely everyone.
In terms of frameworks, depending on what you're using, maybe try and find the thing that's the exact opposite. If you're in the JavaScript space, I highly recommend Svelte. I think it's been very informative to me and altered a number of my opinions. A lot of those opinions were formed by React. And it's been interesting to observe my own thinking evolve in that space. But yeah, I think exploring, trying out, -- Have you ever used Lisp? Personally, I haven't, but that's one of the things that's on my list of that seems like it's got some different ideas in it. I wonder what I would learn from that. And so continually pushing on those edges and then bringing that back to the core work you're doing that's one of my favorite things.
Another is… It's actually two-fold here. Teaching is one, and I don't mean that in the grand sense; you don't have to be an instructor at a bootcamp or anything like that but even just within your organization trying to host a lunch and learn and teach a concept. Without fail, you have to understand something all the better to be able to teach it. Or as you try and teach something, someone may ask you a question that just shakes the foundation of what you know, and you're like, wow, I hadn't thought about it that way. And so teaching for me has just been this absolutely incredible forcing function for understanding something and being able to communicate about it again, that being one of the core things that I'm thinking about. And then the other facet sort of a related idea is pairing, pair with another developer, pair with a developer who is more senior than you on the team, pair with someone who is more junior than you, pair with someone who's at the same level, pair with the designer, pair with the developer, pair with a product manager, pair with everyone. I cannot get enough pairing. Well, I can, actually. I read a blog post recently about 100% pairing, and I've never gotten anywhere close to that number. But I think a better way to put it is I think pairing applies in so many more contexts than people may traditionally think of it. People sometimes like to compartmentalize and like, pairing is great for big architecture design, but that's about it. And my stance would be pairing is actually great at everything. It is very high bandwidth. It is exhausting, but I have found immense value in every pairing session I've ever had. So, yeah, those are some loose thoughts off the top of my head. Do you have any how to get better protips?
STEPH: Yeah, that's a wonderful list. And I'm not sure if this exactly applies because it's been a while since I have seen this talk, but there is a wonderful talk by Sandi Metz. I mean, all of her talks are wonderful, but this one is Go Ahead, Make a Mess. And I believe that Sandi refers to or highlights the idea of trying something new and then reflecting on how did it go? And that was one of the areas that I learned early on, one of the ways to help me progress quickly as a developer. Outside of the suggestions that you've already shared around lots of pairing that was one of the ways that I leveled up quickly is to iterate quickly. So I used to really focus on the code that I was writing, and I thought it needed to be perfect before my colleagues could review it. But then I realized that the sooner that I would push something out for feedback, then the faster I would get other more experienced developers' input, and then that helped me learn at an accelerated rate and then also ship more frequently. So I'd also encourage you to just go ahead and iterate quickly. We talk about with software in general, we want to iterate on the code that we are pushing up for other people to look at and then give us feedback on and then reflect on how did it go? What did we learn? What are some areas that we can improve? I feel like that self-evaluation is huge, and it's something that I know that I frankly don't do enough because one, it also prompts us to appreciate the progress that we have made but then also highlights areas where I feel strong in this area, but these are other areas that I want to work on.
CHRIS: While we're on the topic of talks that have been impactful in our journeys of leveling up as developers, I want to quickly list three that just always come to mind for me: Avdi Grimm's Confident Code, Katrina Owen's Therapeutic Refactoring, and Ben Orenstein's Refactoring from Good to Great. There's a theme if you look across those three talks. They're all about refactoring, which is interesting. That tells you some stories about what I believe about how good software is made. It's not made; it's refactored. That's my official belief, but yeah.
STEPH: Love it. That's also another great list. [laughs] For additional ways to level up, there are some very specific areas where it could be maybe do code katas or code exercises, or maybe you subscribe to certain newsletters, stay up to date with a language, new features that are being released. But outside of those very specific things, and if folks find this helpful, then maybe you and I can make a fun list, and then we could share that on Twitter as well. But I always go back to the idea of regardless of what level you're at in your career is to think about your specific goals, maybe if you are new to a team and you're new to software development, then maybe you just have very incremental goals of like, I want to learn how to write a test, or I want to learn how to get better at PR review or something very specific. But to have real growth, I think you have to first consider where it is that you want to go and then figure out a way to measure to get there. Circling back to some of the ways that I help my teammates measure that growth, that's one of the things that we talk about. If someone says, "Well, I want to get better at PR review," I'm like, "Great. What does that mean to you? Like, how do you get better at PR review? How can we actually measure this and make it something actionable versus just having this vague feeling of am I better?" I think I've ended up taking this a bit more broad as you were providing more specific examples on how to level up. But I like the examples that you've already provided around education and then trying something outside of your comfort zone. So what's coming to mind are more of those broad strategies of goal setting.
CHRIS: I think generally, you need that combination. You need how do I set the measure? How do I think about improvement? And then also ideally a handful of tactics that you can try out. So hopefully, we provided a nice balanced summary here in this episode. And hopefully, Tom, if you're listening, you have gotten some useful things out of this conversation.
STEPH: Yeah, this was fun. We managed to take this topic and make a whole episode out of this. So thanks, Tom, for sending in such a great topic.
CHRIS: Frankly, when I saw the topic, I was certain this was going to happen. [chuckles] This was an obvious one that was going to fill up the time for us. But yeah, with that, I think we've probably covered plenty here. Should we wrap up?
STEPH: I'm sure there's more, but sure, let's wrap up.
CHRIS: The show notes for this episode can be found at bikeshed.fm.
STEPH: This show is produced and edited by Mandy Moore.
CHRIS: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes, as it really helps other folks find the show.
STEPH: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed or reach me @SViccari on Twitter.
CHRIS: And I'm @christoomey.
STEPH: Or [email protected] via email.
CHRIS: Thanks so much for listening to The Bike Shed, and we'll see you next week.
Both: Byeeeeeee.
Announcer: This podcast was brought to you by thoughtbot. Thoughtbot is your expert design and development partner. Let's make your product and team a success.
On this week's episode, Chris and Steph share a speedy step to restart your Rails server and chat about accessibility improvements and favorite a11y tools. They also dive into a tale of database switching and delight in a new Rails query method that returns orphaned records.
Transcript:
STEPH: People put microphones in front of us. That is their fault, not ours. We just show up. Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. Hey Chris, happy Friday.
CHRIS: Happy Friday.
STEPH: How's your week been?
CHRIS: It's been great. I did something that is wildly overdue, but I got a new chair, and I'm one day in. But it's also a very familiar chair because it's basically the same -- I think it's the same model as we had at the thoughtbot office. And it's nice to have a chair that is reasonable. And I think my old chair was maybe ten years old or something, deeply embarrassing and absurd like that for such a critical piece of infrastructure in my house.
STEPH: I mean, I guess depending on if it's a good chair. I don't know what the lifespan is of a good chair. [laughs]
CHRIS: I would not describe it as such.
STEPH: [laughs]
CHRIS: I think it was like $100 at Staples. It was a fine chair. It served me well for many years. I'm very slow and cautious with what I consider to be large-scale purchases. I hate the idea of having a thing that I've spent a bunch of money on, but I don't actually like. And these are very solvable problems. But I just tend to drag my feet and over-research and do all those sorts of things. And so finally I was just like, nope, we're going to get a chair, got a chair. Cool. Now I have a chair, and it's good. It's got all of the adjustments, which is what makes it very nice. I'd say Steelcase Leap is the model for anyone that's interested.
STEPH: That's funny. I tend to do the same thing. I tend to drag my feet until I get desperate enough that then I'm forced to make a decision and buy something. I do have an oddly specific question. Do you like chairs with or without the arms?
CHRIS: Oh, with the arms.
STEPH: Really?
CHRIS: Yeah.
STEPH: I am team, no arms.
CHRIS: Where do your arms go if there are no arms to put on the chair?
STEPH: They're always on my lap or on my keyboard. So I just don't rest them on the armrest.
CHRIS: Interesting. I feel like that would put -- I've definitely had small bouts of RSI strain fatigue in my forearms. And so I'm very purposeful with how I'm bracing my wrists. I have a little wrist rest that I put my hands on when I'm using my keyboard because the keyboard is slightly raised up because I have a nonsense mechanical keyboard, of course.
STEPH: Delightful, not nonsense.
CHRIS: Yeah, I love it. I would never trade that in, but I have to make it work and not actually sacrifice my body for a clackety keyboard. [chuckles] But yeah, I think I need some more support for my arms; otherwise, there's too much pressure on my wrists, and things are breaking at weird angles, and that's been my experience. I'm intrigued by the free-flying no arms on the chair approach that you're talking about. This particular model has nine degrees of freedom on the armrest. So I'm able to bring them in and forward and at the exact right height so that they perfectly meet my arm where it would naturally be, and that seems good. That seems like the thing that I want.
STEPH: That makes a lot of sense. But yeah, I'm team no arms. Every time I have them, I can't get them at the right comfortable spot. And I like the freedom of where I can quickly get up and out of my chair and not have arms in the way, which sounds like a very small improvement in my life, but yet it's what I want.
CHRIS: I just like the idea of you sitting there and being like, I need to be able to make a quick escape at any moment; who knows what's going to happen? And I need to be able to run the other way.
STEPH: If there's a gnarly bug, I got to be able to run. I can run away as quickly as possible. [laughs]
CHRIS: But in other news, so yeah, new chair that's great. I also recently embraced something in the Rails world that I have known about I think for forever for the entire time that I've worked in Rails, but I've never really used it, which is the tmp/restart.txt file, which my understanding of it is if you touch that file, or if that file exists, Rails will recognize that and will restart the server in development mode. And I think I've always known about this, but I've never used it. And I recognized recently that either I was trying to use a gem that I'd added to the Gemfile, but my server didn't know about it. So I was going to do the thing that I normally do, which is kill the server and then restart the server so CTRL+C and then CTRL+P in my terminal and hit enter, and then wait a bunch of minutes and get distracted, all of the bad things there. And I was like, wait; I remember that there's a thing here. And I don't know why I haven't been doing this for years. It's so much better. I actually went the one step further, and I configured a tmux binding so that tmux prefix and then R will touch tmp/restart in the local directory of the tmux session. That's been very nice, I will say. So I keep moving between branches. And I have environment variables that I need to reload or config initializers that I've made a change to, and I want to load that in. Or a gem that I've added to the Gemfile and I've now installed, but the server doesn't know about. All of these are just so quick now. And why wasn't I doing this the whole time?
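For anyone who wants to set up something similar, here is a rough sketch of the binding Chris describes; the key and exact command are guesses rather than his actual config, and in a default Rails app it's Puma's tmp_restart plugin (enabled in the generated config/puma.rb) that watches the file.

```
# ~/.tmux.conf -- guessed binding: prefix + R touches tmp/restart.txt in the
# current pane's directory, prompting the Rails (Puma) server to restart.
# Requires a tmux version that expands #{...} formats in run-shell.
bind-key R run-shell "touch #{pane_current_path}/tmp/restart.txt"
```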
STEPH: I saw that you mentioned this on Twitter a couple of days ago, and I was so excited. But at the moment, I bookmarked it for later, but I didn't have time to actually really check it out. And I'm so glad you're bringing it up because I actually just tried it while we're chatting. So I started up my Rails server, and then I did the touch tmp/restart, and this is amazing. This is awesome. I'm very excited.
CHRIS: It just does the thing.
STEPH: It just does the thing.
CHRIS: Yeah, it's so nice. [laughter]
STEPH: Yeah, this is fabulous, almost as good as the pending migrations button. Not quite because that's a very special button, but this is also up there.
CHRIS: It's a very, very good button. [laughs] I really got very enthusiastic about that button, didn't I? But I stand by it. It's a very good button, and this is a very good file. But this file has existed for so much longer, this workflow. And so many times, I have restarted the server and have been annoyed that I had to do it. And my brain just had this answer available. I didn't read a blog post and relearn this thing. I've always known it. And it was this one particular time that my brain was like, "Hey, you know how we're always annoyed by having to restart the server? You know there's another answer, right? I know that you know it because I'm your brain, and I'm telling you this." [chuckles] This is my weird internal monologue. So I'm very happy to be on the other side of that and to share that with as many people as possible who may be, like me, know about this but haven't actually leaned into it, small things that make the Rails world very nice.
STEPH: Well, I'm glad you internalized it and then surfaced it because this is not something that I had heard of before. So I'm very appreciative of it. This is going to be great.
CHRIS: Happy to share the wealth. But yeah, that's some of the stuff that's been up in my world. What's been going on in your world?
STEPH: It's been a rather busy week. Most of that week has been focused on improving the accessibility of existing pages and forms, which is an area that I don't get to spend a lot of time in, but each time I do, I really would like to be a pro when it comes to accessibility. Well, that's probably a long journey to become a pro. I would like to become more knowledgeable in terms of accessibility because it is so important. And while working specifically on these accessibility tickets and improvements, I've discovered a few helpful tools that I figured I'd share here. So one of the tools that I've started using is a color contrast tool. It's created by WebAIM, or Web Accessibility In Mind. And a number of our headers in our application have a white font that's on a background color, and we were getting warnings that this isn't very accessible and that there's not enough contrast. So with the Contrast Checker, you can provide the foreground color and the background color, and then it's going to tell you that contrast ratio. So if you're wondering, well, what's a good ratio? That's a great question. And the W3C's Web Content Accessibility Guidelines (WCAG) recommend a contrast ratio of 4.5:1 for normal text and 3:1 for larger text. Larger text is anything that's around 18pt (roughly 24px), or 14pt bold, or larger. So the color contrast tool has been really helpful because then it's been very easy: we give it the blue that we're using, and then we can just darken it a bit to improve that contrast. And then we apply that everywhere throughout the app. The other tool that I've been using that I'm really excited about is a browser extension called the IBM Equal Access Accessibility Checker. Is that something you've heard of or used before?
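For the curious, the number that checker reports comes straight from WCAG's relative luminance formula. Here is a small Ruby sketch of that math; the helper names are ours for illustration and aren't part of WebAIM's tool or any library mentioned here.

```ruby
# WCAG 2.x contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 is the relative
# luminance of the lighter color and L2 that of the darker one.
def linearize(channel)
  c = channel / 255.0
  c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055)**2.4
end

def relative_luminance(red, green, blue)
  0.2126 * linearize(red) + 0.7152 * linearize(green) + 0.0722 * linearize(blue)
end

def contrast_ratio(foreground, background)
  lighter, darker = [relative_luminance(*foreground), relative_luminance(*background)].sort.reverse
  (lighter + 0.05) / (darker + 0.05)
end

# White text on a medium blue comes out around 4.0, just short of the 4.5:1
# target, which is why darkening the blue a bit clears the warning.
contrast_ratio([255, 255, 255], [0, 122, 255]) # => ~4.0
```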
CHRIS: I have not heard of that.
STEPH: I would love to know what you currently use for accessibility, and I'll circle back to that in just a moment. But for this particular browser extension, I'm pretty sure they have it for multiple browsers. I'm using Chrome. So I've installed the Chrome extension. Once you have it installed, you can open up the browser console and then tell it to scan the page that you're on. And then it generates a really helpful report that has all the high-level offenses, which are called violations. It also has warnings and recommendations. And then if you click on a specific issue, then the right-hand area shows a detailed description of the offending HTML, what's wrong, why it's important, which I really appreciate that part, and then a couple of examples of how to fix it. So it's been a really nice way as we are working to improve the accessibility of a form. We actually have feedback to know that we are making progress and that we are improving the accessibility of that particular page. And then circling back, I'm curious, do you have any particular tools that you use when it comes to improving accessibility or any standards that you tend to follow?
CHRIS: Yeah, this is a very apropos question. I'm working on a new project now, and accessibility is definitely something that I want to consider on every project, but it's all the more so important for this particular project, or it's something that we're, as a team, collectively really embracing early on and wanting it to be a core focus of how we're building out the application. That said, I will say that I'm accessibility aware but far from an expert and still very much learning. But some of the things that I have used are the axe DevTools. I forget what the acronym actually stands for there, but we can certainly include a link in the show notes. But those are DevTools that allow you to, I think, do some color contrast checking actually in the browser just right there, which is really nice. There's also AccessLint, which is a project that scans pull requests and, where possible, does static analysis of the HTML. And that's actually by some former thoughtboters. So it's always nice to have that in the reference.
There's actually a new tool that I've been looking at. I haven't actually tried it out yet, but it's from a company called Assistiv Labs, Assistiv without an E interestingly at the end. But their tool is, as far as I can tell, it allows you to use screen readers and other tools but across various platforms so that you sort of turn on -- It's very similar to if you've ever used an emulated Internet Explorer session because you're working on not an Internet Explorer machine, but you want to make sure your site works in Internet Explorer, same sort of idea, I believe. But it allows you to do the same approach for accessibility. So using a screen reader or using what the native accessibility technologies are on various platforms and being able to test across a wide range of things. So that's definitely one that I'm going to be exploring more in the near future.
And beyond that, there are a handful of static analysis-based tools that I've used. So Svelte actually has some built-in stuff around accessibility. Because they are a compiler, they can do some really nice things there, and I really appreciate that that is a fundamental concern that they've built into the language, and the framework, and the compiler, and all of that. And I've also used ESLint A11y, which is the acronymnified version of the word accessibility. But that again, static analysis, so it can only go so far. And unfortunately, accessibility is one of those things that's hard to get at from a static analysis point of view, but it's still better than nothing. And it allows you to have a first line of defense at the code as you're authoring it. So that's a smattering of things. I've used some of them. I'm interested in others of them. But this is definitely an area that I'm going to be exploring a bunch more in the near future.
STEPH: I like that you brought in the static analysis tools because that's the other thing that's been on my mind as we're making these accessibility improvements; that's been great. And we can run this particular browser extension to then check for warnings or issues on the page but then looking out for regressions is on my mind. Or as we're introducing new pages and new forms, how do we make sure that those are up to standard if someone forgets to run that extension? So I really like the idea of -- There's AccessLint that you mentioned, which will then scan PRs for accessibility improvements. That sounds really great. I'm also intrigued if there's a way to also -- I don't know if maybe tests are a good way to also look for any sort of regressions in terms of changes that we've made to a page. I don't know what those tests would look like. So I'll have to think on that some more, but I think some people at thoughtbot have thought about it.
CHRIS: My understanding is the testing library suite of testing frameworks, so it's like testing library React, testing library, et cetera. It's primarily used in the JavaScript world, although there is Cypress, which is more of a browser-level automation. But it fundamentally works from not exactly an accessibility but a -- It doesn't allow you to do DOM selectors. It really tries to hide that. And it says, "No, no, no. You're not going to be digging in and finding the class name of this thing because guess what? A user of your application can't do that." What we want are – Typically, it's like find by label or find by things that are accessibility available or just generally available to users of your application. So whether it's users that are just clicking around or if they're using any sort of assistive technology, the testing library framework forces you in that direction. You can't write a test if your code is inaccessible tends to be the way it plays out, and it really nudges you in that direction. So it's one of the things that I really love about that. And I actually miss it when I'm working in a Capybara test suite because, as far as I know, there is not a Capybara testing library variant of it. And really, at the end of the day, it's just a bunch of functions to allow you to select within the context of the page. But again, it does it from that standpoint, and I'm all about that.
STEPH: Yeah, that's really nice. That's a good point. Yeah, I don't think Capybara has that explicitly. I know that you have to use specific parameters. Like, if you want to access something on the page that is hidden, that's not something you can just do easily. You have to specify: I'm looking for an element that is hidden on the page. But otherwise, I don't think it goes out of its way to prevent you from doing that. There is an article that this conversation about accessibility made me think of. There's a really fun blog post written by Eric Bailey, who has been or who is a champion of accessibility at thoughtbot and has written a lot of great content around making the web more accessible. And in addition to publishing with the thoughtbot blog post, he has written for a number of publications. And the article that comes to mind that he published on the thoughtbot blog posts is An Introduction to macOS Head Pointer, and we'll link to it in the show notes. But he does a great job walking through what the head pointer is on macOS and then how to use it. And he uses his eyebrows to essentially move the mouse and then click on certain buttons or click on certain links on the screen. And it's incredible. So if you need a little bit of accessibility and joy in your life, I highly recommend checking out that article.
CHRIS: Yeah. Eric has absolutely just been such a fantastic champion of accessibility. And he's definitely someone that I think of constantly as being -- I think he's involved with the Accessibility Project. He writes on CSS Tricks. He's around the internet just being the hero we need because accessibility is such a critical thing. And I'm a deep believer in the idea that accessible applications are better for everyone. And I so appreciate the efforts that he's putting in out there. Thanks, Eric.
STEPH: Thanks, Eric. And then, on a slightly separate note, I have a slight complaint that I'd like to file. And this one is with Rails specifically. And I'm filing this complaint with the understanding that I'm also very spoiled in terms of Rails does so much, and I'm very appreciative of how much Rails does for me and for us. But specifically, while working on accessibility for a date of birth form field, so it's a form field with three different selects, so you have your month, day, and year. And while creating this, there's a very helpful Rails method that's called date_select, where then you can generate all three of those select fields. And you can even specify the order in which you want them generated, but this particular function doesn't have a way to make it accessible. So you can't generate a label for each of those select dropdowns. There's no parameter. There's nothing you can pass through. It doesn't automatically generate it for you.
So I was in a spot where I was updating a form that's using the Rails date_select. I can't use date_select and make an accessible dropdown selection for date of birth. So instead, what I had to do is I had to split it out. I had to move away from using date_select, and instead, I'm using select_month and then select_day and select_year because from there, I then can pass in; in my case, I'm using aria-label to provide a label because I don't actually want the label to show up on a screen, which could be another accessibility concern because we do have the birth date label for those three sections. But then we still want at least each text field to have a label, even if it's only visible to screen readers. So then that way, if someone is selecting from year, they understand they're selecting from year or for month they're selecting from month. So by using select_year and select_day and select_month, I could specify the aria-label as month, day, or year, but I couldn't do that with the date_select. And I just realized that there's probably a number of date of birth forms out there that aren't accessible because us Rails developers are leveraging this existing method. So it just seems like a really good opportunity to improve date_select to be able to pass in a label or generate one automatically.
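In code, the change Steph describes looks roughly like this. The object, field names, and labels are illustrative guesses rather than her actual code, and the multiparameter field naming (the "(2i)"-style suffixes) is exactly the under-documented part she mentions having to work out.

```erb
<%# Before: date_select renders three selects but offers no per-select label. %>
<%= date_select :user, :birth_date, order: [:month, :day, :year] %>

<%# After: build each select separately so each one can carry an aria-label. %>
<%= select_month(@user.birth_date,
                 { prefix: "user", field_name: "birth_date(2i)" },
                 "aria-label" => "Month") %>
<%= select_day(@user.birth_date,
               { prefix: "user", field_name: "birth_date(3i)" },
               "aria-label" => "Day") %>
<%= select_year(@user.birth_date,
                { prefix: "user", field_name: "birth_date(1i)",
                  start_year: Date.current.year, end_year: 1920 },
                "aria-label" => "Year") %>
```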
CHRIS: Wow. I'm surprised that's the state of the art that we're currently at. I really wonder if there have been conversations or if there are fundamental limitations because I'd be surprised if such a core piece of the Rails world someone hadn't brought this up in the issues. What's the story there? Because I'm guessing there's a story there. Although flipping it around, I wonder -- I've never loved that input sequence; as an aside, like three different selects, that's not how I think of my birthday. My birthday is one thing. It's not three things that we smash together. But I wonder are we at a point now where IE 11 usage is so small that we can use a native date_select input and then have a polyfill -- And then I start to trail off because I don't know what the story is for. Like, I think Safari doesn't do a great job, and I forget where it's at right now. And what about mobile Safari? And wouldn't it be nice if everything was just easy and everybody kept up? [laughter] But that's an aside. But yeah, that's part of my question here, is like, can we just not use that thing at all? Like, the three select dropdown version of picking a date of birth because, man, that's my least favorite way to do it.
STEPH: Yeah. I'm with you. I'm also curious if there is a story behind this and also if anyone has a different opinion, and I'd love to hear it. Because this has been my experience in digging through the docs is I would date_select, and I could not find a way to pass in a label or have one generated to make it accessible. So then that prompted me to use the three different methods, which, by the way, is fine. It made me stop and pause to think this is the method that most people recommend the usage of in terms of creating those three different select fields for a date of birth or for any particular date that you're supplying; it does not have to be a date of birth. So it also surprised me that then we couldn't make it accessible. So yeah, I was a bit miffed in the moment. [laughter] I had to walk myself back and be like, well, if I want to make the world a better place, I should help make the world a better place. And that started with changing the code in this codebase. But then also it means looking into Rails to see if there's an improvement that I could help with there.
CHRIS: This is what we do: we take our moments of miffed, and we turn them into positive action in the world. This is what we want to see. [chuckles]
STEPH: I figure the least I can do is share a blog post or something on Twitter that shows what it was before and then using the new date_select functions because that is reasonable, although working with a form is a bit different. It got a little tricky there in terms of making sure that each value for each select field is still being passed within the expected nested parameter. And some of that was available in the public API for select_year and select_day, but it's not as well documented. So I'm like, well, this seems to be intentionally public, but it's not documented, so I feel a little nervous about using this. Yeah, that's it. I just wanted to share my annoyance with Rails [laughs] or the fact that it made me work so hard to have a date of birth field.
CHRIS: You joke, but that's a lot of why we use Rails is because we want these common regular things that we're doing to be as easy as possible, to require as little code on our part as possible but also this sort of thing like there's a lot of subtlety and stuff. Accessibility is one of those things that I want a framework that has security, and accessibility, and ease of use, and all of these things just baked in, so I don't have to think about it every time. It turns out having a date of birth, or generically any date field, is going to come up in web applications a lot, it turns out. And so having all of that stuff covered is frankly what I expect of a framework like Rails. So I'm totally on board with your being miffed here.
STEPH: Yeah. Those are all really valid points. So I'm with you. What else has been up in your week?
CHRIS: Well, we've been leading up to this, I think, for many weeks. I did a Rails 6.0 upgrade a while back, and a big reason for that was partly just to get on the current version of Rails but also because I wanted to open the door to database switching, and finally, this week, I tackled it. And let's tell a tale because there was a bit of an adventure, if we're being honest. Fundamentally, all the stuff there makes sense. I'm happy with the end configuration, but there was a surprising amount of back and forth. I broke the app more times than I want to actually announce on a podcast, but I broke it only for a brief period of time. It's fine. It's fine. Everybody's fine. [laughs] I feel a little bad about it, but these things happen. But yeah, it was interesting, is how I'll describe it.
So fundamentally, Rails just has nice configuration for it. So at a high level, you're introducing your config/database.yml. Instead of it just being production is this URL, you now say primary is this replica or follower, whatever you want to name it is this. So you have now two configurations nested within your production config. And then in your ApplicationRecord, you inform Rails that it connects_to, and then you define a Hash for writing goes to the primary, reading goes to the follower. And you have to sync those up with the thing you just wrote in the config/database.yml but fundamentally, that kind of works. That makes it possible in your application to now switch your database connection. The real magic comes in the config environment production file. And in that, you specify that you want Rails to use a database resolver that says GET requests go to the replica, and anything that is not a GET request goes to the primary. So anytime you're writing data, anytime you're changing data within the system, that's going to go to the primary.
And there's also a configuration that, as far as I can tell, gives a session affinity. So for the next two seconds after that, even if you make a GET request subsequently right following it, so you make a write, you POST, and then immediately after that, you do a GET. Like, you create an object, and then you get redirected to the show page for that object, Rails will continue to go to the primary. I think it's probably using a cookie or something to that effect, but you can configure that time span. So you can say like, "Actually, we see that our follower lags behind a little bit more, so let's give it a five-second timeout where all requests for that user will then go to the primary." But otherwise, once that timeout clears, then you're going to switch back, and you're going to go to the follower, and all GET requests will happen to the follower. And that's the story. You have to configure that, and then it works.
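For reference, the pieces Chris walks through look roughly like this in a Rails 6 app. The role and database names are illustrative; config/database.yml would also gain a nested production block with a primary entry and a follower entry flagged replica: true.

```ruby
# app/models/application_record.rb
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Writes use the primary; reads use the follower defined in database.yml.
  connects_to database: { writing: :primary, reading: :follower }
end

# config/environments/production.rb
Rails.application.configure do
  # GET and HEAD requests resolve to the reading role, everything else to
  # writing, and a session that just wrote sticks to the primary for 2 seconds.
  config.active_record.database_selector = { delay: 2.seconds }
  config.active_record.database_resolver =
    ActiveRecord::Middleware::DatabaseSelector::Resolver
  config.active_record.database_resolver_context =
    ActiveRecord::Middleware::DatabaseSelector::Resolver::Session
end
```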
STEPH: I always love when you start these out with "I have a tale to tell." I very much enjoy these adventures. And you also answered my question in regards to if you immediately just created something, but then you do a fetch that's very close to after you just created it and how that gets rendered. So that was perfect.
CHRIS: Frankly, the core configuration is very straightforward, and it's very much in line with what we were just talking about of; this is what I want from Rails: make this thing very easy, hide the details behind the scenes. But as I said, there's a bit of a tale here. So that was the base configuration. It sort of worked but then immediately upon deploying it to production -- So we deployed it to staging first just to test it out. Staging was fine, as is often the case. Increasingly, I'm leaning into Charity Majors' idea of you got to test in production. You're testing in production even if you say you aren't. So once it got to production, we started seeing a bunch of errors raised or a handful of errors. And they were related to a handful of controller actions, which are GET requests, so they're either show or index, but in them, they were creating, or they were trying to create data. And so we were getting an error that was read-only connection error or something to that effect, ActiveRecord read-only, I think, was the error class. And that makes sense because I told it, "Hey, whenever you get a GET request, you're going to use that follower." But the follower is a read-only database connection because it's a follower, and so it was erroring.
It was interesting because when this happened, I was like, wait, what? And then I looked into it. And it's frankly fine at all the levels. It is okay to create a record in a GET request as long as that creation is idempotent. You create if it doesn't exist, and then from there on, you use that same one. That still fits within the HTTP rules of idempotency, and everybody's fine with that, except for the database connection. Thankfully, this is relatively easy to work around. You just need to explicitly within that controller action say, "Use the write database, use the primary." And the way I implemented that, I wrote a method within ApplicationRecord that was with write DB connection, and then it takes a block, and you yield to that block. It's basically just proxying to another similar thing. And it's very similar to wrapping something in a transaction; it sort of feels like that. It's saying just for this point in time, switch over and use the primary because I know that I'm going to be having some side effect here.
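A sketch of that helper might look like the following. The method name and the controller body are stand-ins made up for illustration (the episode doesn't give the real ones), and the production guard shown here is the refinement Chris describes a moment later.

```ruby
class ApplicationRecord < ActiveRecord::Base
  self.abstract_class = true

  # Explicitly run a block against the writable primary, for the rare GET
  # request that performs an idempotent create. Outside production there is
  # only one database, so we just yield.
  def self.with_write_db_connection(&block)
    if Rails.env.production?
      connected_to(role: :writing, &block)
    else
      yield
    end
  end
end

# In one of the offending controller actions:
def show
  @token = ApplicationRecord.with_write_db_connection do
    current_user.api_tokens.first_or_create!
  end
end
```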
STEPH: Wow. That's so fun. I'm sure it was not fun for you. But as me hearing the story later, that's fun in regards to I hadn't thought about that idea of you're telling all the GETs you can only go to the read, and now you're also trying to create. I am feeling nervous in terms of local development. So if you're working on a new controller and if you have a fetch or GET action, but you're also creating something, you haven't seen another controller that is demonstrating that strategy that needs to be used. Is it just going to work locally? I imagine it does because it was working for the other code that you were running that didn't yet have that strategy in place. So I'm feeling nervous in terms of someone could easily miss that.
CHRIS: I think there are a couple of different questions in what you just said. So let me try and answer all of the ones that I think I heard. So for local development, your config/database.yml is still going to be the same as it was. So you're just connecting to your database_name_development. There's only one of them; there's no primary follower. So this is a case where you have a discrepancy between production and development, which is always interesting. And maybe that's something to poke at because ideally, I want as little gap there as possible. But this is one of those cases where I'm like, eh, I don't think I'm going to run two databases locally and have one be a follower. That feels like too much to manage. Under the hood with that write DB connection method that I talked about where you want to explicitly opt-in, in the case that we're in development, I just yield directly to the block. So instead of doing the actual database switching at that point, the method is basically saying, "If we're in production, then switch to the primary and yield, and if we're not in production, then just yield." And so it'll just run that code, and it'll connect to the only database. More generally, I have the connects_to configuration; I wrapped that. So that's in ApplicationRecord where you're saying, "Hey, connect to these databases based on this logic," that is wrapped in an if-we're-in-production check as well. And the same thing in the top-level configuration that says -- We're getting ahead of ourselves in the story because this is the end state that I got to. It's not where I started, and I screwed some stuff up in here, but basically, all of the different configuration points, my end result was to wrap them in a check that we are in production.
STEPH: Okay. Sorry if I rushed your story. I was already thinking ahead to how could we accidentally goof this up? That makes a lot of sense for the method that's with write DB connection, that method is going to check if we're in production, and then we can use a primary follower strategy; otherwise, just use the database that we know of. So that helps a lot in answering those questions. And then we can pause and then get to my question later. But my other question that I'm curious about is what helps prevent the team from making this mistake in terms of where we're adding a new controller, we add a new GET action, and we are also creating data, but then someone doesn't know to add that strategy that says, "Hey, you are allowed to go to the primary to also get data but also to write data too." And I'll let you take it away.
CHRIS: I don't know that I have a great answer to that one if we're being honest. As I saw this, it was very easy to find -- I think there were three controller actions that had this behavior in the system that I was working on. They all threw errors. It was very easy to just wrap them in this extra method and fix that, and then we're good, and I haven't seen that error again. As for preventing new instances of this behavior, I don't have a good answer other than potentially you share this information within the team and then PR review. Ideally, someone's like, "Oh, this is one of those things you've got to wrap it in the fancy database switching logic." Potentially, and I don't actually think this would be possible, but there's a chance that RuboCop or other static analysis type thing could look inside any index or show action and say, "I see a create or an update or any of the methods." But again, Rails is so hard to do static analysis on that I would be surprised if it were actually feasible to do that in a trustworthy way, probably worth a poke because this is the sort of thing that can easily sneak out. But potentially, my answer is, well, it'll blow up pretty loudly the first time you do it. And then you'll just fix it after that, which is not a great answer. I'm open to that being a mediocre answer at best.
STEPH: [chuckles] Yeah. That's a fair answer. Just because I pose a question, I don't know if there necessarily is a great answer to it right away. And disseminating that information to the team to then having the team be able to point that out also sounds very reasonable but then still hashes that danger of someone overlooking it. The static analysis is an interesting idea, sort of like strong migrations. As you're introducing a new migration, strong migrations will do a wonderful job of showing you concerns that it has with the migration that you've added. And this is all just theoretical dreams and hopes because, yeah, that would help prevent some of those scenarios.
CHRIS: It's interesting now that this is the second time we've discussed static analysis in this very episode. Clearly, it's a thing that I want more of in my world, and yet I work in languages like Ruby that are notoriously difficult to perform static analysis on.
STEPH: I had a moment today writing a method that was currently just returning a string each time but then I was about to update that method. I was looking for a way like, well, maybe I don't always want a string. Maybe I actually want a Boolean here. But in the other case, I want a string. And the person I was pairing with they're like, "You could return -- [inaudible 29:31] Boolean in one case and then a string in the other case. Like, this is Ruby." [laughs] I was like, true, but I feel bad about it, and I don't love it. And we just had a phone conversation around that. If you're in the Ruby world following the more functional programming or type strictness and where you're returning specific types or trying to return a consistent type, it's ideal. But then also in Ruby, it's like it's Ruby, so sometimes you can finagle the rules a little bit.
CHRIS: YOLO, as they say.
STEPH: [laughs]
CHRIS: Yeah, I'm definitely interested to see where projects like Sorbet and...I forgot what the core Ruby typed thing in Ruby 3.0 is called, but either of those. I'm really intrigued to see where they go and how the Ruby community either adopts or doesn't. I wouldn't be surprised if that were part of the outcome there. I've been impressed with the adoption of TypeScript and JavaScript, which is also a very, very free language, not quite to the degree that Ruby is. But yeah, it remains to be seen what will happen on those fronts.
But continuing back to our saga, so we've now had the read-only error, we've fixed those, just wrapped them in blocks, and said, "Explicitly connect to the primary." So the next thing that I did after that, I realized that my configuration was a little bit flimsy is probably the best word to describe it. I was explicitly creating a new environment variable with the URL, the Postgres URL of the follower. And so I was using that environment variable to define where the URL like the Postgres URL of the follower database -- But I realized if Heroku comes in and does any maintenance on that Postgres instance, it's possible that the AWS IP address or other details of it will actually change and so that Postgres URL will no longer be valid. So that's one of the things that I rely on Heroku for, is to maintain my databases for me. But they will update, say, the DATABASE_URL environment variable if they change out your database. But now, I had broken that consistency. And so I'd set us up for somewhere down the road this will break, and I realized that because Heroku reached out and said, "Hey, your follower database needs maintenance." And I was like, oh, no. So, I tried to get it from -- It turned out, in this case, it didn't actually change. They were able to swap it out in place, but I wanted to add a little bit of robustness around that.
And so I actually reached out, and Dan Croak, former CMO of thoughtbot, actually had written a wonderful blog post about how to configure this and particularly how to configure it in the context of Heroku. And he described how to use the Heroku naming scheme for the environment variables. They happen to have colors in them. So it's like Heroku Postgres cyan URL or orange URL or purple URL. And so he defined a scheme where you set an environment variable that describes the color, and then it can infer the database URL environment variable from that. And then went the one step further to say, "If that color environment variable is set, then treat as if we are configured for database switching. But if it is not set, even if we're in production, pretend like we don't have database switching," which that was another nice feature that I hadn't built in the first place. When I first configured this, I just said, "Production gets database switching. And if we're in production, then database switching is true," but that's actually not something that I want. I want to be able to say, "Upgrade our follower," at some point or do other things like that. And so I don't want to be locked into database switching on production. So that was a handful of nice configurations that I wanted to get to.
Unfortunately, when I tried to deploy that switch, man, did it break. It broke, and then I was like, oh, I see I did something wrong there. So then I tested again on staging. Staging was fine. And then I went to production, and it broke again. And this happened like three times in one day. I felt like a terrible programmer. I had no idea what I was doing. Turns out that staging and production had different environment config files, and so their configurations were fundamentally different. They also had a different configuration for the database level. So one of the things I did as part of this was to clean those up and unify them so that staging was production with some environment variables to config it, but identically production, which is definitely a thing that I believe in, and I want basically all the time. I don't think we should have a distinct staging environment config that is wildly different. It should only vary in very small ways, basically just variables that say, "This is where the database is for staging," but otherwise be exactly configured as production. So I eventually got on the other side of that, fixed everything, have a nicely Heroku-fied color-based environment variable scheme, which is a bit of a Rube Goldberg machine, but it works. And I was able to hide that config in one place. And then everything else just says, "If there is a database follower URL defined, then use it." But yeah, so that was the last hard, weird bit of it.
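The color-based lookup he lands on could be sketched like this; FOLLOWER_DATABASE_COLOR is an invented stand-in for whatever variable the app actually uses, built on the HEROKU_POSTGRESQL_<COLOR>_URL naming that Dan Croak's post describes.

```ruby
# Resolve the follower's URL from a single color variable so that Heroku can
# swap the underlying HEROKU_POSTGRESQL_<COLOR>_URL during maintenance
# without breaking the app's configuration.
def follower_database_url
  color = ENV["FOLLOWER_DATABASE_COLOR"] # e.g. "CYAN"
  ENV["HEROKU_POSTGRESQL_#{color}_URL"] if color
end

# Database switching is opt-in: no color variable means no follower, even in
# production.
def database_switching_enabled?
  Rails.env.production? && !follower_database_url.nil?
end
```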
And then the only other thing that I did was I realized that this configuration was telling the Rails server how to behave, but there are also background jobs. And this application actually happens to have a ton of background job traffic. And so I did a quick check of those, and there were a handful of background jobs that were read-only. A lot of them were actually sending data to external systems, so to analytics or other email marketing or things like that. And so constantly, as users are doing anything in the application, there are jobs that are queued that aggregate some information, maybe calculate some statistics, and then push it to another system. But those are purely read-only when those jobs execute. And so I was able to add another configuration which said, "Use the read-only connection," and configured that to wrap those particular Sidekiq jobs. And with that, I think I have a working database switching configuration that will hopefully give us a lot of headroom in the future. That's the idea, that's the dream, but we will see.
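One way that wrapping could look, again as an assumed shape rather than the actual code from the episode (the job and client names are invented):

```ruby
class PushAnalyticsJob
  include Sidekiq::Worker

  def perform(user_id)
    # This job only reads from our database; the "write" is a call out to an
    # external analytics service, so the whole thing can run on the follower.
    ApplicationRecord.connected_to(role: :reading) do
      user = User.find(user_id)
      AnalyticsClient.push(user_id: user.id, stats: user.aggregate_stats)
    end
  end
end
```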
STEPH: That is quite the saga between having GET requests that create data and then also the environment inconsistencies, which is a nice win that then you're able to improve that to make those environments more consistent. And then the background jobs, yeah, that's something that I had not considered until you just brought it up, and then being able to opt-out of the database switching sounds really nice. In regards to moving in this direction, you're saying gave you a lot of headroom for this; when it comes to monitoring performance, is there anything in place to let you know how it's doing?
CHRIS: I love that I knew that this was going to be your question. I love that this is your question because it's a very good question. And unfortunately, in this case, it's actually somewhat unsatisfying. So as is my typical answer for this, we're using Scout as the application performance monitoring tool on this. And I was able to go in and monitor what it looked like a week ago, what it looked like after I made the change, and it was a little better. And that's all I can say about it. But that's fine. The idea with this, and at least in the way I was thinking about it, is this should get better at the margins. On the days where we have a high spike in traffic, those are the days where the database is actually working hard. It shouldn't make the normal throughput of the application that much higher in the regular case; it's for those outlier instances. To that end, though, I did analyze it. And so the average response time got 2% to 3% better in that week-by-week comparison, which was fine. The 95th percentile response time, so starting to get out to those margins, starting to get to the long tail of where stuff gets -- a couple of requests came in at the same time, and the application had to try a little harder, those got 8% to 9% better. That shape of improvement, where for most requests nothing really changed, but some of the requests that used to be a little bit slower got a little bit better. That's the shape of what I would hope to see here. And it remains to be seen. This application has particular traffic patterns where they'll encourage a lot of users to be using the app at the same time. And historically, those have been somewhat problematic, and we've had to really work to shore up the performance in those cases. That's where I'm really interested to see how this goes. It would be hard to replicate those traffic patterns at this point. So I don't have a good way to really stress test this, but my hope is that for those cases, things will just hum along and be happy.
STEPH: That makes a lot of sense and something that would be hard to measure, but the fact that you already see a little bit of improvements that's encouraging.
CHRIS: But yeah, certainly, if I get a chance to see what that looks like in the near term, I will respond back and let you know how this has played out. But overall, now the configuration seems pretty stable. I think we're in a good spot. Hopefully, we won't have to do too much proactive management around this. And ideally, it just buys us a little bit of headroom. So that is certainly nice. But with that, with your wonderful question getting to the heart of the issue, I think that wraps up the saga of the database switching.
STEPH: Well, I appreciate you sharing that saga. That's really helpful. I've been very excited to hear about how this goes because I haven't gotten to work on a project that's going to use database switching just yet. And now I know all the inside baseball. I'm trying to use sports metaphors here as to how to do this for when I get to work with database switching.
CHRIS: Sports de force.
STEPH: Along the lines of new stuff, there is something I'm excited about. So in juxtaposition to my earlier statement or my earlier grievance where friends don't let friends use date_select in regards to trying to keep the web accessible, I do have some praise for something that's being added in Rails 6.1 that I'm excited about. And it's a really nice method. It's a query method that can be used to find orphan records. So if I'm writing a query that is then looking for some of these missing records, so if I have my table -- I didn't come with a great example today, so let's just say we have like table A and then we're going to left_joins on table B. And then we're going to look for where the ID for table B is nil, so then that way we find where we don't have that association, where it's missing. And so left_joins does this for us nicely. And then I always have to think about it a little bit where I'm like, okay, I want everything from table A, and I don't want to exclude anything in table B if there's not a match on the two. And so then I can find missing records that way or orphaned records that way. The method that's being introduced or has been introduced in Rails 6.1, so anyone that's on that new-new, there is the missing method. So you could do tableA.where.missing and then provide the association name. So there's a really nice blog post that highlights exactly how this method works, so I'll use the example that they have. So for where job listings are missing a manager, you could do JobListing.where.missing(:manager), and then it's going to perform that left_join for you. And it's going to look for where the ID is nil. And I love it. It's really nice.
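Side by side, the old and new versions of that query (using the JobListing example from the blog post) look like this:

```ruby
# Before Rails 6.1: hand-roll the left join and filter on the missing id.
JobListing.left_joins(:manager).where(managers: { id: nil })

# Rails 6.1: where.missing builds the same LEFT OUTER JOIN ... WHERE ... IS NULL.
JobListing.where.missing(:manager)
```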
CHRIS: That sounds excellent. That's definitely one of those things that I would have to sit down and squint my eyes and think very hard about, really anything involving left_joins or otherwise. Any joins always make me have to think, and so having Rails embrace that a little bit more nicely sounds delightful.
STEPH: Yeah, it sounds like a nicety that's been added on top of Rails so that way we don't have to think quite as hard any time we want to find these orphaned records, because we know that we can use this new missing method.
CHRIS: On the one hand, I feel bad saying, "I don't want to think that hard." On the other hand, that's literally our job is to make it so that we encode the thinking into the code, and then the machines do it for us. So it's kind of the game, but I still feel kind of bad. [laughs]
STEPH: Well, it's more thinking about the new stuff, right? Like, if it's something that I've done repetitively, finding orphan records is something I've done several times, but I do it so infrequently that then each time I come back to it, I'm like, oh, I know how to do this, but I have to dig up the knowledge. How to do it is that part that I want to optimize. So I feel less bad in terms of saying, "I don't want to think about it," because I've thought about it before. I just don't want to think about it again.
CHRIS: I like it. That's a good framing. I've thought about this before. Don't make me think about it again. [chuckles]
STEPH: Exactly. On that note, shall we wrap up?
CHRIS: Let's wrap up.
STEPH: The show notes for this episode can be found at bikeshed.fm.
CHRIS: The show is produced and edited by Mandy Moore.
STEPH: Thanks, Mandy. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes as it really helps other people find the show.
CHRIS: If you have feedback for this or any of our other episodes, you can reach us at @bikeshed. And you can reach me @christoomey.
STEPH: And I'm @SViccari.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
On this week's episode, Steph and Chris are joined by fellow thoughtbotter, Joël Quenneville, to discuss all things debugging. Joël is helping publish a weekly debugging blog series and in this conversation they discuss how the series got started, technology agnostic debugging strategies, writing less bug-prone software, and speculate if Joël moonlights as a hockey coach.
Transcript:
STEPH: All right. And then who will be editing this episode will be Mandy. So as we run into blunders, which we never do, but if we do, then we can talk to Mandy and ask her to edit things for us. So I will try very hard to do that because I will likely still talk to Thom. [chuckles]
CHRIS: Hello, Mandy. It is a pleasure to meet you. In the last recording that will be going through you, I was referring to you indirectly as our next producer. But now that we know your name, I'm so excited to have you on the team and to know who is on the other side of these, hopefully not too nonsensical recordings. So pleasure to meet you.
STEPH: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Steph Viccari.
CHRIS: And I'm Chris Toomey.
STEPH: And together, we're here to share a bit of what we've learned along the way. So, hey Chris, today is an extra special day as we have a guest today. Joining us is Joël Quenneville, a thoughtbot developer extraordinaire and a previous Bike Shed guest. Welcome, Joël!
JOËL: Hi.
CHRIS: It's a pleasure to have you.
JOËL: It's good to be on the show.
STEPH: So, Joël, you and I are on the same client project. And during the past few weeks, we have encountered some very challenging bugs. In fact, if I look back at my developer journal, the last five or so of my entries begin with a very cheesy mystery title that's like the case of the missing data or Harry Potter and the chamber of errors. But in addition to spending your time bug hunting with me on this project, I understand that you and other thoughtboters, Jesse Bailey and Louis Antonopoulos, have been publishing weekly blog posts specifically about debugging.
JOËL: Yes, that's correct. This has been a project that we've been doing for a little while. We spent about three months doing research, conducting a bunch of interviews, and gathering a lot of material on debugging. And then, starting in April and through the middle of the summer, we are publishing an article every week on the topic of debugging and exploring different aspects of it.
STEPH: I'm really curious; what prompted y'all to start talking about debugging? I'm really excited to talk about the specific topics that are included in the series. And then also you mentioned all the research. I'd love to know how you went about that research. But before we go there, what prompted this conversation and then led to the creation of the series?
JOËL: Within thoughtbot, we spend a lot of time trying to improve ourselves, improve the broader community as well. And there was a conversation that got started about how we could get better at debugging. It's a thing that, as developers, we do all the time, and it's not something that's often taught explicitly. For many of us, our experience with debugging has been very much just learning on the job and picking it up by osmosis and trial and error. And when you're more junior, a lot of that is just random stuff and changing random lines of code and maybe copying something from the internet and hoping things go well. And as you build experience, you tend to start becoming a little bit more methodical because you've seen what works and what doesn't. And so we wondered within thoughtbot, we have all these different people who've come up with their own experience. Can we meld that together and share the summary of all thoughtbot debugging knowledge combined and help everybody level up? Initially, we were wondering could this be just an internal workshop or something where we get together and exchange on the different ways that we've learned or different techniques that each of us uses? But then this evolved into a project that we wanted to share not just within thoughtbot but with the wider world. And so its final form, or at least its current form, has been a blog post series that's going to run over the course of three to four months.
STEPH: I appreciate how you've taken the opportunity to take all of this knowledge and then turn it outward-facing, so it's available to the public as well. And it really wasn't until you were talking about the series, and I started reading the weekly blog posts that are being published, that I realized how little I had read about concrete strategies around debugging. I think you said it very well earlier in terms of it being something that you pick up on the job, and you learn different strategies as you go. It's really hard to know exactly how to go about debugging unless you've had experience with that type of bug before.
CHRIS: I've been also reading the blog posts as they come out. And I'm similarly very grateful for both the general theme of thoughtbot of hey, let's take this thing and actually make it a shared resource for the world. But specific to debugging, it really is this interesting intersection of practical steps that you can take but also almost an art form. There's like, oh, I use Pry, and I put a debugger statement here, and that's a mechanical approach that we have. But then there's also the more general how do I think about code, and how do I build a model in my head and compare that to the thing that's running on my screen? And that is really so much more something that you learn in passing, or at least it typically is. And this idea of trying to be a little more purposeful and share more of that with the world is something that I absolutely love because it is both the harder aspects of the job but also probably one of the more common. I'm spending a lot of time being like, why isn't that working the way that I thought it would? I don't know, maybe 90% of my time as a developer, maybe I'm a bad developer, but it's so much of the time that I spend. And so any amount that I can get better at this or learn from others, I'm super open to that. So thank you for producing this and for sharing all the secrets.
JOËL: I liked your example of saying you drop in and you drop Pry to put a debugger at a particular location. And I think that maybe that's something that most people are familiar with and might use. But even something like that sounds like a fairly simple, basic concept. I think someone with a lot of experience, if you're pairing with them, you might ask, "Why did you put the Pry here and not there?" And that might make the difference between spending all day versus spending half an hour to find the bug. And that's, I think, a lot of where the experience comes in and where being a little bit more structured, having a strategy can really make the difference in being efficient. Because oftentimes, you're using the same tools and doing roughly the same techniques, but one can be totally flailing and doing random stuff, hoping you'll get lucky with a solution. And the other one is very methodically working towards finding the problem.
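For anyone who hasn't used Pry, here is a minimal sketch of the kind of breakpoint placement Joël is describing. The method and the data it works on are invented for illustration; the only real pieces are the pry gem and the binding.pry call, which pauses execution and opens an interactive console at that exact spot so you can inspect local state before the suspect line runs.

    # Requires the pry gem (pry-byebug adds step/next/continue on top).
    require "pry"

    def total_for(order)
      subtotal = order[:line_items].sum { |item| item[:price] }
      # Pause just before the line we suspect, so we can poke at
      # `subtotal` and `order[:discount]` before the math happens.
      binding.pry
      subtotal - order[:discount]
    end

    total_for({ line_items: [{ price: 10 }, { price: 5 }], discount: 20 })

Where exactly that binding.pry goes -- before the suspect line rather than after it -- is precisely the kind of judgment call Joël is pointing at.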
CHRIS: I love that framing of the question, but I also love the idea of you ask that to someone, and they're like, "Huh? I don't actually know. But now that I think about it…" and then they sort of discover the answer. But it's very much they're operating from intuition and just like, well, obviously I put the debugger right before the line that I think has this going on. But that's not necessarily something that's top of mind. So when you ask the question, you almost help them to formalize it. So yeah, there's gold in those hills.
JOËL: Absolutely. Definitely. I think gaining self-awareness about why you do what you do is always such a rich exercise, at least in my experience.
STEPH: There's an interesting comparison here where people will ask me how to do something in Vim. And I'm like, oh, let me do it first because I need the muscle memory for it, and then I can tell you how to do it. And I feel like debugging is often the same way where I'm like, well, let me hear about the problem, and then I can work through it with you, but I can't necessarily tell you off the top of my head all the different strategies that I would apply. So I really liked the idea of becoming more self-aware as to how you would approach that so then you can provide those tips to someone else.
You mentioned earlier about the research for creating the series, and I'd love to hear more about that. Because I imagine there's so much that you could talk about in terms of debugging but then still wanting it to be helpful to people that are working in different technologies. So how did that research go? What did that look like?
JOËL: Initially, this was focused on experience within thoughtbot. So we focused a lot on more internal research. We had a survey where people just shared their debugging thoughts, tips, and tricks. And then, we also set up a bunch of interviews with multiple thoughtboters across varying levels of experience to get their thoughts on debugging, and that was incredibly rich. So we did, I think, 10 or 12 interviews within the company. We captured all of that and then synthesized the output of those interviews, the survey with all of the random tips and tricks, existing thoughtbot blog content. And then, we also pulled some external resources that we had found as well as some podcasts, some other blog posts elsewhere. And we have tried to mix all of that together to come up with 10 or 12 high-level topics that we thought were particularly valuable to talk about and then turn those into blog posts.
STEPH: That's really awesome. I love how you took that approach of interviewing different people to take that instinctual knowledge that we have around how we're debugging but then helping people put that into words and then capturing that and then sharing it. What type of questions did you ask people to help people walk through what are their debugging strategies?
JOËL: That was actually pretty fun because we came up with a list of questions ahead of time, not that it was a strict list to be adhered to, but we wanted to have something to give a little bit of commonality between the interviews and then let them go where they will. Most of them we opened up with just asking people, "What does the word debugging make you feel? What is your personal connection to debugging?" And I expected most people to be like, "Oh, dread or frustration." And that was not the case. Most people said, "Actually, I like debugging. It's a challenge. It's a puzzle. It appeals to the analytical side of me." One person mentioned that it's like the code equivalent of an escape room and that escape rooms are one of their favorite things, and so they actually really enjoyed it. Where a lot of the frustration comes in is when you have deadlines, and this bug is unexpected and therefore is taking time away from things you really need to be doing now or yesterday. And so the timeline can cause pressure, but the bug itself -- most people seem to enjoy finding the bug.
STEPH: I love so much that you just used the comparison of an escape room because I was just chatting with someone recently about how my week has been going, and I'm like, I am in an escape room. That is my job this week is to figure out how to get out of this or to understand what is happening in the system. So that's really funny. That's also very encouraging to hear that so many people have a positive association with debugging. But I certainly understand that it's more the deadline, the timeline that is then putting pressure on us to solve something quickly, but we actually enjoy the hunt is what it sounds like.
CHRIS: And in my experience, when it goes well, it can be really fun. But then there are those days where, not even under deadline pressure, it's just sort of, I lost a day to blah. I couldn't figure out what was going on. And especially when it turns out to be something relatively simple, that feeling is somewhat crushing. But when there's this like, oh no, things are not working the way I expected, and then I'm able to dig in, read a bunch of Stack Overflow, put in a bunch of debugger or puts statements and crack that, that is a fantastic feeling. And there is the optimum flow level where it's just at the edge of your knowledge and skill set, and it doesn't take too long, but it's not too easy. Because, like you said, the thrill of the hunt is there. So there is a sweet spot, I think, and there are ways that it can go wrong on either side. But I definitely resonate with the idea that a medium-scale bug that I'm able to tackle -- that's a good day, actually. I feel good at the end of that day.
STEPH: There's a delightful blog post by Chelsea Troy where they share a graph that talks about the time spent debugging versus the probability of a one-line fix. And so the more hours that you've invested into fixing a bug -- Joël, I think I found this particular blog post thanks to the Debugging Series. It's like two in one of those posts. And so the more hours that you sink in finding a bug, the more likely that it's going to be just a one-line fix. And those are the ones that feel the most painful to me. It's something that is small and comes down to an incorrect assumption about the system. And those are the ones that feel good that I solved it, but I didn't really enjoy the hunt for that one. There's a specific time window in which I'm enjoying myself, which then just becomes stressful and frustrating.
JOËL: Sometimes, it almost feels like the number of lines of code for the solution needs to match the amount of effort it took to find the bug. And so if I spend a day searching for the bug, it's frustrating that it's only one line and that the solution took 30 seconds, but the search took eight hours.
CHRIS: I think my measure would be the length of the commit message associated. I'm fine with a one-line change as long as there is a couple of paragraphs of explanation of the journey that I went on and not just self-serving like, hey everyone, listen, because this was rough for me. But the look at all the things I learned, let me capture that knowledge here because there's actually a bunch that this one code line change encapsulates. But it's actually looking at the history of HTTP and the way that the different headers are handled in different browsers and, for many reasons, this one-line change. But if it's a one-line change and the commit message is like, "Oh, typo, sorry," then I feel bad. That's a bad day for me. [chuckles]
JOËL: This reminds me of the project, Steph, that you and I are on where I've definitely had several bugs that I've gone through to fix where the commit message is definitely quite a bit longer than the actual diff. And the commit message includes ASCII diagrams showing the structure of certain database tables and why I had to change a particular query. And it's weird. And you would never understand why without realizing the schema. So yeah, I definitely feel you there, Chris, where sometimes you go on a journey, and it's very important to record that for the next person.
STEPH: Yeah, your commit messages have been phenomenal. And I love all the diagrams that you've included because that has helped me have context for what exactly you understand about the system and what appears to be wrong with the system. That has been wonderful. Speaking along those lines, as we were just talking about how it can feel very ephemeral in terms of the strategies that we use for debugging, I'm really curious what strategies do y'all use for debugging? Do you have particular tools that you use? I know Joël, you are a fan of diagrams. Is there a particular tool that you use for that?
JOËL: There's a variety of tools that I'll use. Recently I've been using Monodraw, which is a tool for macOS that allows you to make diagrams that can be exported as text using various ASCII characters, which means that you can then draw an entity-relationship diagram of your database and include that in a commit message where it needs to be text. You can also export it as an image. But the fact that I can use it as text in a commit message has made it particularly valuable recently. So that's my latest go-to diagramming tool.
STEPH: That's really nice. I haven't used that, but I've been seeing you use it so extensively that I've added it to my list of things to check out very soon. Are there any other particular strategies that you use, since we're on the topic of concrete strategies, for debugging and how you approach debugging?
JOËL: I am a big fan of binary search, which sounds like a fancy computer science term. But I think, from a very practical standpoint, it's just a process of elimination. Sometimes finding where the bug is, is harder than finding where it's not. And so can I eliminate roughly half of this file or this project or whatever and know that it's not a concern, and then keep repeating that until I have a pretty small surface area in which to actually find the bug itself? And this can be as simple as just commenting out lines of code. And so, like, I'm going to comment out roughly half the lines in this method and see, can I still reproduce the bug? If so, then I know it's in the ones that haven't been commented out; repeat, repeat until I find the bug. And because you remove so much code every time, it takes relatively few steps until you find a very narrow area in which the bug likely is.
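To make that process of elimination concrete, here is a toy Ruby script; the pipeline and its steps are made up, but the move is exactly the one Joël describes: comment out roughly half the steps, re-run, and keep narrowing in on whichever half still reproduces the bad output.

    rows = [" alpha ", "alpha", " beta "]

    rows = rows.map(&:strip)                  # normalize whitespace
    rows = rows.uniq                          # remove duplicates
    # Pass 1: comment out the second half of the pipeline and re-run.
    # If the bad output disappears, the bug is in one of the two lines below;
    # if it's still there, it's in the lines above. Repeat on the guilty half.
    # rows = rows.map { |row| row[0, 3] }     # truncate -- the suspect step
    # rows = rows.map { |row| "row: #{row}" } # label
    p rows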
CHRIS: I love that. Binary search is definitely one of my favorite approaches. And I think that was a very welcoming and friendly introduction to the topic for anyone that might not be familiar. But it really is such a simple and yet effective tool for narrowing down the scope. Because when you look at an entire application, you're like, ah, something's wrong, and you start from the outside, that's overwhelming. But if you can start to narrow it down, especially, like you said, in this more methodical, purposeful approach, that is really wonderful. I think one of the tactics that I have been reaching for more and more is using minimal reproduction cases. So rather than actually working in the context of the full application -- This is especially true if I'm working on, say, a JavaScript app or Svelte is the most recent example. Svelte has a REPL on the svelte.dev website. And I find myself more and more reaching for that and just trying to very minimally reproduce code that just isolates that bug. And then I try and tease away the pieces, but now I'm left with this minimal reproduction there in an executable format, and that ends up being really useful. The same sort of thing if I'm on the Ruby side, I might actually do in a Spec just because that's a really nice way to harness execution and be like, I want to do these things. Here's the setup. And it's one of the reasons we love Specs, but I find it's actually a really great tool for setting up some data, executing the app in a certain way, and then testing it. And I find particularly with RSpec and Rails; I feel like I have good control over getting the system into a certain shape. Other applications I find that's a little more difficult, so other techniques may be necessary. But yeah, that's definitely one of the things that I've been leaning on more and more is minimal reproduction so that I can really narrow down the scope of what I'm looking at.
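As a sketch of the reproduce-it-in-a-spec idea on the Ruby side: the Invoice model and its behavior below are hypothetical, but the shape is the useful part. Set up the smallest data that triggers the problem, assert the behavior you expect, and you have a failing test that both narrows the bug down and guards against it coming back.

    # spec/models/invoice_spec.rb (Invoice is a made-up model for illustration)
    require "rails_helper"

    RSpec.describe Invoice do
      it "returns a zero total for an invoice with no line items" do
        invoice = Invoice.create!(customer_name: "Acme")

        # Fails while the hypothetical bug exists (say, total_cents raised on
        # empty invoices); passes once it's fixed, and then guards the fix.
        expect(invoice.total_cents).to eq(0)
      end
    end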
JOËL: I like that example, Chris. And that actually is a variant of probably one of my favorite approaches, which is reasoning by analogy. So you have a hard problem, and you can't figure out a way to solve it. So you find a similar easy problem, find the solution to that, and then try to backport the solution to your hard problem. Oftentimes, the easy problem is just a simplified version of your hard problem, such as stripping out all the unnecessary detail, then you can backport that. But sometimes, it's just something in a different domain that you understand more easily, and then you can take that solution and backport it. And I had a magical experience a while back. For those of you who know me, I'm a big fan of the Elm language. And I spend a lot of time in their Slack just helping out people. And someone ran into a situation with random generators, which are a concept in that language that can generate random values. And they're trying to combine a bunch of them and having really weird bugs. And I tried a bunch of different techniques to figure out what was going on, and I couldn't figure it out. But the thing that I did figure out is that random generators in this particular dimension we were trying to understand work very similarly to functions, which are a much more simple concept. And the moment I realized, oh, in this very particular way, generators and functions are the same, all of a sudden, that unlocked in my mind; wait, I can reason by analogy here because I know how to solve this problem with functions. I don't know how to solve it with random generators. So I went and solved it by saying, "I don't know how to compose a bunch of generators. I do know how to compose a bunch of functions, figure that out and then take that solution and bring it back to generators." And I followed that process through, and it worked, and it was amazing. I came in to work the next day, and I was really excited about this. And I was talking with some colleagues, and everyone's like, "You should write about that." So there's a blog post from a year or two ago where I walked through my whole process and all the different debugging strategies I used, including reasoning by analogy to solve what was a pretty tricky bug.
STEPH: I love how the typical response from talking to a thoughtboter about going through something like that is often, "Oh, you should write about it."
CHRIS: And I'll help you edit it, not just "Please go write that." And Joël, you live that to the extreme. You have been an absolutely prolific author on the thoughtbot blog. And you bring some of the images and things like that that you're talking about. You really, I think, provide such a great example of paying knowledge forward and sharing better. So if anyone wants to learn about blogging on the internet, just go follow Joël's work for a while, and you'll learn some great things.
STEPH: Yeah, that is so true. I love how much you publish, and I'm a big fan of everything that you write. One of the debugging strategies that was mentioned in the blog post that really rang true for me was talking about identifying assumptions because that is one that I typically fall into where I will read about a problem, and then I will say, "Okay, I understand exactly what problem they're running into." And once I start troubleshooting, if I'm unable to reproduce -- Because I follow a similar strategy, Chris, that you just mentioned where I will try to replicate the issue either if I'm doing it locally or ideally if I'm writing a test, so then I can write a test that fails and then I can then make that test pass. But if I'm unable to reproduce, then I'm forced to go back and say, "Okay, am I making an incorrect assumption about what's being reported?" And that has been so helpful. Like, there are just little things where I realize I'm on autopilot for where it's like, the user downloads a report, and I'm like, oh well, they mean this report. And then I find out that they actually meant something else. Also, assumptions in the codebase, and that's one that you and I, Joël, have run into so much with this past week in regards to assumptions in the codebase as to how many associations a record can have. Is it one? Is it many? It has many, but it really only wants one record. So assumptions from the perspective of when someone is reporting an issue and then also assumptions in the codebase. For the first one, I have found that, especially when I'm new to a team when someone reports an issue, I often like to hop on a quick call with them and say, "Hey, are you able to reproduce this for me? And so I can watch you, and I can understand your workflow." And I have found that typically speeds me up drastically.
JOËL: One of the things that was mentioned in the article on listing assumptions which is maybe a bold claim, but it opens by saying that all bugs are a form of miscommunication. And this might be human-to-human communication where you didn't understand the requirements or what was trying to be done. But code is also communication between us and computers. We want computers to do a thing. And if the computer doesn't do what we're telling it to, it's not doing that just to show us who's boss; it's because we didn't communicate correctly what we wanted. And so yeah, trying to better understand ourselves what we mean and our assumptions is a key part of debugging. And that's the thing that came up over and over and over in all the interviews was one, build self-awareness about the assumptions you have, and there's a bunch of different techniques for doing that. And then once you have self-awareness of what your assumptions are, never trust anything, validate, validate, validate, because yeah, you're often wrong.
CHRIS: There's an interesting parallel to that in my mind of we often end up with these systems, and it's behaving in an odd way. And so we have to build this mental map of okay; what are all of the different states and workflows that can get us into those various states? And having now debugged a handful of times in my life, I'm trying as much as possible to flip the script and go with an ounce of prevention is worth a pound of cure. And this is why one of the most common things that I say in a pull request is, "Hey, can we make that null false on that new database column? Hey, can we change this type constraint so that instead of it being a Boolean and then another attribute, it's actually a three-state enum?" and et cetera, et cetera. How can I collapse the states down so that when I'm in debugging mode, I actually can take some things as givens? Still, maybe validate from time to time, but the more I can learn to trust a type system or the database or things like that, things that are a little more trustworthy than I am or other humans, I'm increasingly loving that. And there's obviously a gentle balance there, but that's something that I've been leaning into more and more. And I think it's directly informed by the years of my life that have gone into debugging at this point. Is that accurate? That seems like a high estimate, but it's a lot. It's a bunch of weeks at a minimum.
JOËL: I felt that really strongly. I'm kind of disappointed as an industry that we default things to be nullable so often. I wish database columns were non-nullable by default, and you had to opt in to make them nullable. I wish GraphQL didn't make columns nullable by default. And I think oftentimes, when you're working in a dynamic language, you don't care about that distinction. And so you just let it go by. And let's say with GraphQL, again, I hang out a lot in the Elm Slack channel, and I spend a lot of time helping people integrate Elm in GraphQL. And when you get a schema and try to load that into Elm because it has a type system, it will read your schema and wrap everything in maybes because it's as though this field is optional, this thing is optional, this thing is optional. And then people come to the Slack, and they're like, "Why is there maybe everywhere with deeply nested -- This is a terrible mess in Elm. What went wrong?" And then I have to tell them, "It's not the Elm tool that's wrong. If your schema has this implicitly, the default thing was to make it null." And so it just looks normal, and you want to put all those exclamation marks everywhere. And a lot of time people don't believe me. And I have to say, "No, no, you're right. It really is the schema that's the problem. Please go put all these exclamation points." I'll give an example of a non-null schema, and then they try it, and they're like, "Wow, this makes such a difference."
STEPH: Joël, the defender of Elm.
JOËL: [laughs]
STEPH: And I am with you, and it is interesting. And I've been there myself, too, where there's a fear of over-restriction in terms of if I make this not null and something blows up, then that feels like a bad outcome. And so I've seen a number of projects where we let nil get through so easily because then we just always handle the nil versus having that restriction earlier and making that decision. It's like we're pushing off that decision of like, well, that'd be nil for now, and then we'll figure it out later. Versus starting with that decision upfront and saying, "No, let's go ahead and make that decision now. What do we do if this is nil?" And I agree that it would be wonderful if we had more restriction upfront and then we loosened the requirements as we find out that we need to versus starting with the loose requirements because walking that backwards is incredibly difficult.
JOËL: Yes, having to backfill nullable columns that we don't have values for in the database because now we want to restrict it, but for years we didn't collect that value. And so what do we do now? That becomes really tricky.
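A rough sketch of the kind of tightening-up Chris and Joël are describing, written as a Rails migration; the table, columns, and states are invented, and a real backfill would need more care (batching, verifying the data) than this shows.

    class TightenSubscriptionStatus < ActiveRecord::Migration[6.1]
      def up
        # Collapse a nullable boolean plus a timestamp into one explicit,
        # non-nullable three-state column so impossible combinations
        # can't be stored in the first place.
        add_column :subscriptions, :status, :string, null: false, default: "trialing"

        # Backfill existing rows before anything starts relying on the column.
        execute <<~SQL
          UPDATE subscriptions
          SET status = CASE
            WHEN cancelled_at IS NOT NULL THEN 'cancelled'
            WHEN active THEN 'active'
            ELSE 'trialing'
          END
        SQL

        # Let the database, not just the application, reject bad values.
        add_check_constraint :subscriptions,
          "status IN ('trialing', 'active', 'cancelled')",
          name: "subscriptions_status_check"
      end

      def down
        remove_check_constraint :subscriptions, name: "subscriptions_status_check"
        remove_column :subscriptions, :status
      end
    end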
STEPH: Chris, a minute ago, you were mentioning prevention, which I love so much because then we can avoid a number of these debugging discussions, although this is also delightful. But there's an opposite end of that spectrum that has taken me a while to gain comfort with, and it is when you don't know enough about the bug, and you can't reproduce it, and then you essentially have to let it go. And there are other ways that you can debug, but you can't fix it in the moment. So let's say that you have an error or something that happens every once in a while, but you can't actually find the reasons that it's happening, or you have data that's getting created in a certain state, but you don't know what in your application is creating that state. So instead of spending what could be hours or days triaging how your system got into that state, you instead perhaps add some logging around it to say, "Hey, this is the moment where we are causing this to happen," or maybe you add a constraint, so something fails very loudly whenever the system tries to put data in a particular state. And in those cases, it took me a while to become comfortable with the idea that I can't solve this today. I can't solve it now, but I can take steps to then know how this is happening. So then, in the future, we can actually prevent this or apply the fix. But initially, that always felt like a really bad outcome for a ticket that's reporting a bug where it's like, hey, I can't fix this today. I don't know exactly what's happening, but I've added some logging, or I've added something that's going to raise when this happens. So when we do get notified of it again, then we can more quickly triage and put the right fix in.
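Here is a small sketch of what that leave-a-tripwire approach can look like in a Rails model; the Shipment model, its columns, and the message are all hypothetical, and whether you log or raise depends on how disruptive the bad state actually is.

    class Shipment < ApplicationRecord
      # We can't yet reproduce how shipments end up without a warehouse,
      # so capture as much context as possible the next time it happens.
      before_save :report_missing_warehouse

      private

      def report_missing_warehouse
        return if warehouse_id.present?

        Rails.logger.error(
          "Shipment #{id || 'new'} about to be saved without a warehouse " \
          "(order: #{order_id.inspect}, changes: #{changes.inspect})"
        )
        # Or, if the team would rather stop the bad data outright while investigating:
        # raise "Shipment saved without a warehouse"
      end
    end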
CHRIS: Yeah. If anything, I feel like there can be a -- Like we were talking about earlier, there's almost an enjoyment to solving a bug where it's a puzzle. It's a thing that may capture our attention, but sometimes, actually, perhaps often, the correct answer is that's actually not the most important thing right now, or the cost is too high to try and solve it given the information that we have. It's affecting a very small number of users. Maybe we can make that experience better. It's not just a generic 500-page, but we turn it into a, "Hey, sorry, you got into a…" like a more explicit error state but defer the actual debugging because there are other things that are slightly more pressing, affecting more users, or again, we just don't have enough information. And I feel like that actually can be a difficult thing to be like, no, but I want to solve the puzzle now. This is a fun puzzle. Please let me solve the puzzle. But sometimes, we don't get to solve the puzzle today.
JOËL: One of the articles in our series that I'm really excited about takes a look at classical philosophy and various categories of reasoning identified in classical philosophy and how we can apply them to debugging to debug in a more strategic and methodical manner. And one of those categories is inductive reasoning, which is very similar to, say, the scientific method where you gather a lot of information. And then, based off of those cases, you try to come up with a pattern, have a hypothesis for it, test it. And then if you can show that it's actually the case, then you're likely correct. But of course, that depends on having enough test cases for there to actually be a pattern and for it to be statistically significant. And so for some bugs, if there's only one instance of it happening and you just say, oh, somehow a bad value made it from our front end UI into the database, and we don't know how. But it's not happening to every user. Like, there's only one use case, and we can't figure out how that happened. We don't have enough information to build up those test cases to try to find a pattern. And so that's where getting more test cases becomes really important. As Steph mentioned, logging can be really helpful here, raising whatever you need to do. Maybe it's adding a constraint or a validation to say, "Blow up the next time this happens." But once you have enough test cases, then you can start seeing patterns. And that's your inductive reasoning to solve the bug.
STEPH: Something that we touched on earlier, but that I don't think I've given enough credit to and really appreciate, is the fact that all of these strategies that are being talked about in each blog post are applicable across technology stacks, across languages. Because you really highlighted a point just a moment ago around how most of the bugs that we're working with are all bespoke. They're all special little bugs in their own special way. And so it's really hard to have a blanket strategy that then applies to each one because they are unique. And the fact that y'all are creating so much content that has general strategies that people can apply when debugging is really impressive to me. So I'm really curious: how are y'all doing that?
JOËL: When we came up with the series idea, we laid down a few principles for how we wanted the outcome to be. And one thing that came up pretty early was that it's important for this to be language agnostic. We don't want to just teach like, here are some very specific Ruby tools you can use -- which would be a very helpful article, but that's not really the kind of information that we were looking for. So we were trying to find what are some higher-level techniques and strategies that are usable throughout your career? And then secondly, we wanted to focus on finding bugs rather than how do you solve them or how do you prevent them? Chris mentioned a little bit earlier some techniques for preventing bugs from happening in the first place, and we might have a bonus episode or article on how to write bug-resistant code. But the focus of the series is what are some language agnostic ways that we can improve your search to find the root cause of a bug? And a lot of that has just been synthesis, so saying okay, here's a bunch of different things that different people told us. And we were fine with having language-specific examples in the interviews. But how can we then find what's common and what's not? One thing that I think was really interesting was talking about how different people gain information about a particular point in code, and debuggers are a pretty common way of doing this. But print line debugging is also a really common way to do that. And every language does this slightly differently. And you can even do this, not just via text, but visually. So if you're writing CSS and you are trying to figure out when a particular rule triggers, you might put a 1-pixel red border around something. And that's CSS's equivalent of a print here or console.log in JavaScript or Ruby or some other language.
So the idea is to look at everything that different people told us and then see, can we extract a general principle out of this? And we're walking a fine line: we want the series to have practical advice that people can use, but also to zoom out just a little bit so that we have some of the big picture, so that you can make some of those bigger connections and see some of the patterns that you can apply in the different situations that you run into when debugging, without necessarily getting our heads in the clouds, you know, what is debugging? And I'm sure there's a really fascinating philosophy article there, not the classical philosophy article I mentioned; that one's actually good. But you can philosophize about debugging in a way that's too abstract to be useful. So we want just a touch of the philosophy to keep it big picture while also giving very concrete, useful tips and techniques that are language agnostic that folks can use.
STEPH: That's fabulous how y'all are able to separate that thought process away from the direct, specific action that someone takes. So when someone is dropping in that console.log statement or that print statement, it's like, okay, but what thought process took you there to the point that then you were trying that action? I really like that. There's one other trick that was mentioned in one of the articles that I also really enjoy. It's the analogy of taking a ball of yarn with you, so you're always tracking where you've been. And that is the other thing that I do heavily when I'm debugging is that I always have a note-taking application that's open because I always document what I've looked at, what were my findings. And then I think through what do I want to look at next? And sometimes I'll write down a list of three or four different questions of it could be this, it could be that. And then, I will prioritize those based on what's the quickest to look at? What can I replicate the fastest? What can I remove from this list? Back to earlier, when you were talking about the process of elimination, and then I walk through each one. So that way, when I do find the bug, I also have those steps that I can look back on. So in case someone has a question about it, in case they're like, "Well, what about this?" I can say, "Well, yes, I also checked that," or there just may be extra helpful tidbits that fall out of that process. At the very least, they prevent me from checking something twice.
JOËL: I've been pairing with you, Steph, on several bugs recently, and I've really appreciated the notes that you keep. They're very thorough, and that's something that I've tried to bring into my own practice by seeing you do that.
CHRIS: That's something I've been iterating on in my own workflow of late is having a directory of notes associated with each project. And as I'm working on each, it's almost not even just bugs at this point but any new feature anything that I'm exploring. Because often, initial exploration of integrating with a library feels a little bit like debugging, sort of poking at the edges and what's true and what's not and writing up little reproducible steps of okay, run this code, then this code, then this code. And I've now just taken to keeping those forever because it turns out like, oh, I know I integrated that, but what was the step? I feel like there was something I did. And being able to go back and have that artifact now is so useful. And it's actually something that I've only really gotten in the habit of over, I'd say, the last two years, but that archive of notes is now very useful even to this day.
STEPH: So I think we've covered, or at least hinted at, a number of the wonderful topics that are included in the series, including some of the different ways that we can identify our assumptions or ways that we can get unstuck. That was one of my favorite ones, on all the different ways to get yourself unstuck when you are in the throes of debugging. There's also the idea that when you encounter a bug you can't fix right away, there are still steps you can take so that you're able to fix it in the future. I would love to know, if you don't mind sharing some spoilers, some of the upcoming topics that will be in the unpublished but soon-to-be-published blog posts.
JOËL: Yeah. So for all of The Bike Shed listeners who want the inside scoop, there are a few that I'm really interested in. So we opened the series by talking about mindset issues, how to approach a bug, how to think about assumptions, and now we're moving into some more concrete techniques and tooling. One that I'm really excited for is ways you can use Git more effectively. There are a couple of people that we interviewed who mentioned just how important it was to their workflow. And we've got some really interesting notes on that. There are also some really interesting ideas around the areas in our codebases where bugs are more likely to accrue, particularly around the nebulous concept of boundaries. This was a conversation that we had with one of our interviewees where we had our initial list of questions, and then we ended up completely throwing them away and going down this long, random tangent about boundaries; it was so good. We decided to dedicate a whole article to it because there are really interesting things around that.
STEPH: All of that sounds really exciting. I love that you mentioned Git because even in the conversation that we've been having right now, that didn't cross my mind, but yeah, that is such an incredible debugging tool. So I'm really looking forward to that and also the one that's going to dive into boundaries. All of that sounds really exciting.
JOËL: One thing that I think is really fun with digging into Git is that generally, when we think of debugging, we're trying to find where the bug is. But oftentimes, the real question we need to answer is not just where is the bug it's when is the bug? So not just debugging through space but debugging through time, and Git is the tool to do that.
STEPH: Oh, that made me laugh but also made me depressed: when is the bug? [laughs]
JOËL: And I feel like this is going to turn into a cheesy sci-fi TV series.
CHRIS: It doesn't need to be cheesy.
JOËL: True.
CHRIS: Yeah, it does. [laughter]
STEPH: For it to be good, I think it has to be a little cheesy. Sci-fi bugs coming to your application next summer.
CHRIS: Everybody hop in the Tardis. We're going to find the bug.
JOËL: I feel like there's some variation of that line that shows up in a lot of time travel series. Like not where is X, but when is X?
STEPH: On that delightful note, thank you, Joël, so much for coming onto The Bike Shed and chatting with Chris and I about debugging. For everyone that would like to follow along for the Debugging Series, where can they find those articles?
JOËL: So you can go to the thoughtbot blog. There is a tag we created specifically for that, Debugging Series 2021. And I'm sure that you'll link to that. All the articles are also going to be linked from the first article of the series, and I'm sure that will be included in the notes as well.
STEPH: Perfect. And where can people follow your work?
JOËL: People can follow me on Twitter @joelquen, J-O-E-L-Q-U-E-N. It's not the hockey coach, although I can neither confirm nor deny that the two are the same person. We've never seen them in the same room together.
STEPH: I didn't know you were a hockey coach in your spare time. Oh wait, this is the part that you can't confirm, right?
JOËL: [laughs] Well, that would be letting the secret out.
STEPH: All right. We will try to maintain your secret identity or whichever one that is.
CHRIS: It's a really terrible secret identity if it's actually the same name. Joël, you really should have put more effort into this, coach Q.
JOËL: [laughs]
STEPH: Coach Q. I'm going to start calling you Coach Q. That's wonderful. Well, with your permission.
JOËL: That's the real nickname. For those who don't know, Joël Quenneville is or formerly was the coach of the Chicago Blackhawks NHL hockey team, one of the best coaches ever in the National Hockey League. And a couple of years ago, I got to give a talk in Chicago, a conference talk. And everybody asked me, "Ooh, any connection to the coach?"
STEPH: To which you replied?
JOËL: Oh, I had a whole slide about the conspiracy that we may or may not be the same person.
STEPH: That's really fun. Well, thank you again so much for coming on our show, Coach, and walking us through the wonderful Debugging Series. The show notes for this episode can be found at bikeshed.fm.
CHRIS: This episode was produced and edited by Mandy Moore.
STEPH: If you enjoy listening, one really easy way to support the show is to leave us a quick rating or a review on iTunes as it helps other people find the show.
CHRIS: If you have feedback for this or any of our other episodes, you can reach us at @_bikeshed on Twitter. And I'm @christoomey.
STEPH: I'm @SViccari.
JOËL: And I'm @joelquen.
CHRIS: Or you can email us at [email protected].
STEPH: Thanks so much for listening to The Bike Shed, and we'll see you next week.
All: Bye.
This week Steph's taking a quick break, but while she's off, Chris is joined by a special guest - Jonathan Reinink. Jonathan is the creator of Inertia.js. Inertia.js lets you quickly build modern single-page React, Vue and Svelte apps using classic server-side routing and controllers, and listeners of the show will certainly have heard Chris rave about it on previous episodes.
Chris and Jonathan dig into what makes Inertia unique as compared to frameworks like Phoenix LiveView, Laravel Livewire, and Rails' Hotwire & Turbo. They also discuss how Inertia embraces the URL, the unique "protocol" nature of Inertia, and how to consider Inertia alongside native mobile applications. Throughout the conversation, Jonathan's consistent philosophy of wanting to build robust, performant, and delightful applications shines through.
Transcript:
CHRIS TOOMEY: I am seeing what I believe to be the relevant things.
JONATHAN REININK: Let's dance.
CHRIS: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Chris Toomey. And this week, Steph is taking a quick break, but while she's away, I'm joined by a special guest, Jonathan Reinink. Jonathan Reinink is the creator of Inertia.js. And listeners of the show will know that that is increasingly one of my favorite frameworks and, frankly, just ways to build applications on the internet. Jonathan is also the creator of the Eloquent Performance Patterns course, which teaches the Eloquent ORM, which is the ORM in Laravel but really digs into deep performance and database things, so really covering that back end as well. Jonathan also collaborated on the development of Tailwind CSS, a utility-first CSS framework which again is something that I have spoken of in very high terms on this podcast. And lastly, Jonathan currently runs his own SaaS business called Church Social. So really, Jonathan is a bit of a quadruple threat covering back end and front end design and entrepreneurship. So pretty much everything you want to see. And frankly, I've been so impressed by the breadth and the depth of Jonathan's work and just the deep way that he is thinking about building applications. So I am absolutely thrilled to have him on the show today. So without further ado, Jonathan, thank you so much for joining me.
JONATHAN: Thanks so much, Chris. That was a very kind introduction, and yeah, it's awesome to be on The Bike Shed. I've been a long-time listener, and as I've said to you, I really appreciate the support that you've given to my work over the years. So yeah, it's awesome to be here.
CHRIS: That's interesting. We're measuring it in years now, but it's a very sincere thing for me. I think with Inertia, you've built something that is both very unique and a special approach to how we build things, but it's also built from very familiar pieces and allows us to reuse the deep amounts of knowledge that we have in the Rails community or the Laravel community. But actually backing up just a little bit, because we're going to dive deep on Inertia.js today, for folks that are not as familiar or have only heard me mention it in passing, there's a wonderful episode of Full Stack Radio where Jonathan and Adam Wathan talked about Inertia.js, and I think gave a very good foundational summary. So we'll link to that in the show notes in case anyone wants to dig in a little bit more. Likewise, Jonathan has a really fantastic blog post called Server-side Apps With Client-side Rendering, which, as far as I can tell, is the manifesto that began this whole journey for you. And I really love that you have done so much of the work in public, and people can see the history of how this idea has evolved and really crystallized into what now is a very production-ready framework in sort of the way to build things. But I would love to hear right now just for anyone who is not as familiar and also just to hear how you summarize it at this point in time. What does your introductory elevator pitch for Inertia.js sound like in April 2021?
JONATHAN: That's a great question. So it's hard to answer this without unpacking a lot of different things. And you mentioned the podcast with Adam; I think that's good because it goes into a lot of the technical detail of how Inertia works and why I created it. But the elevator pitch these days, when I talk to someone about it, is generally that I explain it as a way of building modern web apps. And in particular, when I say modern, I mean web apps that have a lot of JavaScript, so frameworks like Vue or React or Svelte, so applications that are built using those tools. And the key thing that Inertia offers is for you to develop these modern applications without having to first build an API. Historically, if you ever wanted to use one of these modern web stacks like Vue or React or Svelte, you could use them within Laravel or Rails by inserting them into your views -- into your Blade templates or your ERB templates. But it was difficult if you ever wanted to turn it into a legitimate single-page application. And anytime you would ask that question, if you go out on the internet and say, “Hey, how do I build an SPA with, say, Vue and Laravel or React and Rails?” the answer was always, “Well, you need to build an API. You need to build an API.” That was always the missing piece because that's the way that everyone in the Jamstack era that we're in is building their applications -- these heavy client-side applications.
And I totally get the need for those style apps and the place for those style apps. But I really missed this way that you could build an application with Rails or with Laravel where you could just literally spin up a new app, create some routes within your server-side framework, create some controllers, create some views, and have a working application within minutes really. You could have something being displayed on the screen within minutes with these classic monolith applications. And if you wanted to do the same thing, if you wanted to get an app up and running in minutes with Vue or React as your completely client-side SPA scenario, it just wasn't working because as soon as you say, “Well, in order to do that, you're going to need to have a back end Rails or a Laravel application, and then a client-side Vue or React application. And then you're going to have to create this API that connects the two together.” There's just a lot more work that goes into that. It's not only the work of actually creating the API; I find a lot of the decisions that come along with building an API -- it's like, okay, what does the abstraction look like? Am I going to build it with REST, or am I going to build it as a GraphQL API? And all the decisions that come along with designing and architecting that, which again has its place. But there's just something awesome about saying, “Here's a new route. Here's the view that I want to render. And here's the data that I want that view to have,” and just go off and do it, and it's done.
And some people ask me, “Well, with Inertia, if you're not building an API, what happens someday if you need an API?” And they frame it like, well, this is a terrible decision. You should be starting with an API. But for me, the reality is that so many of the web applications that I was building and that I've seen other people building is they had already made the decision not to use an API because they had already made the decision that they wanted to use Laravel as a monolith app that had their controllers and the routes and their views all within that and the same thing with Rails. So if you've made that decision to build a monolith app with Laravel or Rails, you've already made the decision to not build it with an API. I was coming in from the other way. It's like, I just want to build an app the way I've always built in Laravel, and I don't want to have to build this API. Of course, there are times where you do need an API, which I think we're going to talk about maybe a little bit later if I don't ramble on too long, where it does make sense to have an API. But yeah, that's kind of the elevator pitch.
I think maybe to close off that thought is that I really, really enjoy having a tight coupling between my routes and my data layer and my views, which, again, I appreciate that. That probably sounds like blasphemy in modern web development. But for me, I think it's so empowering when you say, “Hey, I have a controller that's given me some data, and I have a view that's rendering the data, and those two know about each other, and those two depend on each other.” You can work so fast because I'm not thinking, okay, well, I have this API endpoint that returns a user, and that has their first name and their last name and their email. But I also need to think about it in the situation in the future where I might need this attribute or that attribute or some other attribute and make sure I have all that figured out ahead of time or at least have a way to add it in later. And all of that thinking that goes into designing an API, I find that that adds a lot of overhead.
And then maybe related to that is the number of times that you're rendering a view within your application that needs data from multiple different places. And to me, this is one of the huge performance benefits that you get with a tool like Inertia. With, say, a REST API -- GraphQL solves this -- but with a REST API, you're often getting too much data for what you actually need for the page, or you're often making more than one HTTP request because you say, “Well, on this particular dashboard, I need some user information. So I have to hit the user endpoint. I need maybe the latest product sales data, so I need to hit that endpoint.” And you're dealing with these performance issues that you get with a REST API that, with Inertia, you don't have because it's just going back to classic Laravel Blade views or Rails and ERB templates. Am I saying that right, ERB template?
CHRIS: Mm-hmm.
JONATHAN: In those situations, you say, “Well, if I need data from three different places, well, I'll just grab data from three different places and send it to the view, and that's fine. And I can do that in the most efficient way and get the data that I need specifically for that view.” So anyway, that's some of the thinking that drove me to build Inertia and some of the things that I was going for. And yeah, it was an evolution. It really came out of me using Turbolinks and really appreciating what Turbolinks gave me but taking it to that next step where it's like Turbolinks, except it's built with the same principles as Turbolinks but built for modern client-side frameworks like Vue and React.
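For a sense of what that here-is-a-route, here-is-the-view, here-is-the-data flow looks like from the Rails side, here is a sketch using the community inertia_rails adapter (Inertia's first-party server-side adapter is for Laravel); the models, fields, and component name are all invented, and the render inertia: call is the gem's documented shape rather than anything specific to this conversation.

    # config/routes.rb would declare the route, e.g.:
    # get "/dashboard", to: "dashboards#show"

    class DashboardsController < ApplicationController
      def show
        # One controller action pulls data from several places and hands it
        # straight to a client-side component as props -- no separate API layer,
        # and no over- or under-fetching for this particular page.
        render inertia: "Dashboard/Show", props: {
          user: current_user.slice(:id, :name, :email),
          recent_sales: Sale.order(created_at: :desc).limit(10)
                            .as_json(only: [:id, :amount_cents, :created_at]),
          unread_message_count: current_user.messages.where(read_at: nil).count
        }
      end
    end

On the client, the matching Dashboard/Show component simply receives those values as props, the same way it would from a parent component.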
CHRIS: Yeah, that all feels very familiar to me. And in my experience, I've now worked with Inertia on a handful of projects, but in particular, I have just a small personal app that I use to manage different aspects of my life. And it's been my playground for different technologies. And I've migrated it through a bunch of different versions where it used to just be a Rails app. And I was like, oh no, the thing that I need to do to be on the cutting edge is to turn this into -- it's a Rails app on the back end with an API, and then it's a React app. It's separately deployed, but those two talk to each other. And what you were talking about of the deep coupling, I think that coupling exists whether we want it to or not. And so embracing that and revisiting when I eventually migrated that application to use Inertia and the client-side stuff folded back into the core codebase. Now deployments all go out in sync. And that turns out to actually be a really nice thing and a non-trivial thing to solve otherwise.
As a developer of one on this particular project, the amount of complexity that was removed from the app when I switched it over to Inertia was amazing. I got to remove client-side routing. I got to remove client-side state management, which I think I was using Redux at the time. I got to remove some form helpers that I had. I think I might've been using Formik or React Use Former, any of those. But there are so many different little pieces that you ended up cobbling together to make an application. And it was amazing to me as I moved to Inertia, where I was like; actually, I don't need any of those, and routes suddenly are defined in one place in Rails in a familiar way. But things like redirects all work -- It feels just like a Rails app but with the extra abilities that a front-end client-side framework gives you when you want those when you need those. But otherwise, it really does feel like I'm rendering to an ERB template. It just happens to be that that template is rendering on the client-side and is written in React or Svelte or whatever the front-end framework is. But it almost feels like progressive enhancement. I'm borrowing a term, and it's not actually applicable here, but it really feels like that. It's like, oh, it's a Rails app, but I just want to make it a little bit fancier, and Inertia does that in such a fantastic way.
And actually pivoting just a bit, as far as I can tell, there seems to be an explosion of thinking in this space. There are a handful of frameworks, namely Laravel Livewire, which is often paired with Alpine.js. Elixir has Phoenix LiveView, and then Rails has the Hotwire suite, which Turbo and Stimulus are the most pointed considerations. But interestingly, I think all of those frameworks, which I think are trying to provide a very similar experience, tend to keep things on the server-side, so using the Laravel Blade templates or the ERB in Rails. But you've taken the different approach to say, “No, let's embrace this front-end technology where it makes sense.” And again, there are a lot of pieces that can fall in, and I don't need the Redux and the React Router and all of those things but still use that client-side framework to be the rendering engine. And so I'm intrigued if you can talk a little bit more about that and that trade-off because I think it really differentiates Inertia and its approach. I personally found it to be fantastic, but I'd love to hear a little bit more about your thinking on that.
JONATHAN: The thing about modern websites and web apps, in particular, is it doesn't matter how you slice it; we need JavaScript. So if you disagree with me there, then everything I say from this point on will not make sense to you. But I think we can all agree that modern web apps need JavaScript. JavaScript is the programming language of the web, of the browser, that allows us to do whatever magical things we want to do. And if you look at tools like Phoenix LiveView, Laravel Livewire, and even the new Turbo stuff from the folks over at Basecamp, they all are embracing JavaScript in the same way. It's just that they're framing it in a different way. I would say, especially with Livewire and LiveView, they're almost creating an abstraction between the server and the way you write things on the server and the client-side. And they're almost hiding the JavaScript, which is really, really cool. I think it's such an interesting thing to try to do, where somebody who's not familiar with JavaScript and not familiar with Vue, React, and Svelte can go and write server-side code that gets rendered server-side. And then there's some JavaScript that these libraries insert that allows you to do more interesting things, whatever those things might be: show a drop-down, or drag and drop, or validate a form, or submit a form without actually submitting it fully to the server but over XHR instead, all these kinds of things. But the point is they're embracing JavaScript just in a different way. And same with Turbo; Turbo gives you a way to write JavaScript for an application that mostly has server-side rendered HTML.
So I think it's important to just recognize that JavaScript is key in all these frameworks. With Inertia, it's the same idea in that Inertia wants to embrace that classic way of building applications using the classic server-side monolith application framework like Laravel or Rails. But the difference is it acknowledges or embraces these existing client-side frameworks that have really grown in popularity. And the three, again, I keep mentioning them, Vue, React, or Svelte. Svelte being an up-and-coming one that's not nearly as popular yet, but it seems to be gaining a lot of steam.
CHRIS: It's on the rise. That’s my long [inaudible 14:46]
JONATHAN: Yes, and people keep saying that. So anyway, Inertia basically said, “Hey, we want to keep building server-side apps. We want to keep building monolith apps similar to these other tools, except what we're going to do is we're going to embrace the fact that there's this really, really amazing tooling that's been developed for the client-side.” And it just doubles down on that. So for me, the reason that I ended up here was because, in my own SaaS application, it was a Laravel application that started with mostly Blade views initially. And then, over the years of building it, which has been many, many years, I've slowly added more and more Vue components within my app. And initially, the way I did it is those Vue components would just be inserted in as regular HTML tags in my server-side rendered templates. And then, when the page renders, those Vue components would boot up and do whatever they need to do. So for me, when I was building Inertia, I had already fallen in love with Vue, in particular, and having all the power of these client-side frameworks. And there is so much there. It's not just Vue, React, and Svelte; it's all the amazing tooling that's available out there that you can add on top of it.
And this is the thing I often tell people: Inertia isn't a framework -- we say it right on the homepage, “Inertia isn't a framework.” And the reason why I say that is because I don't want people to think of Inertia as an alternative to Vue, React, and Svelte. Do you know what is a better way to frame it? It's actually more of an alternative to React Router or Vue Router; that's really more what it is, where you can say, “All my routing is handled server-side,” and that has all kinds of interesting implications. But it's more of a router, and it just so happens to pass that routing control over to the server. Anyway, so that's really what differentiates Inertia from those other tools for me: it really doubles down on these client-side frameworks.
And I think the reason why Inertia has been relatively popular is because people know Vue and people know React. And when it comes to then working with Inertia, it's not some new thing that they have to learn. It's an existing set of tools that they're already super comfortable with. And in so many ways, when you're building an Inertia app, you're kind of building a classic Vue app or a classic React app or a classic Svelte app. It's just that there's a bunch of pieces missing. Like you said, a bunch of the client-side state management stuff, which nobody likes anyway, is gone. The other thing that's gone is client-side routing. You don't have this situation where back-end routing is over here, and client-side routing is over here, and I have two different routing definitions. It's like, no, that's all just server-side now in one place.
The other amazing thing you get is you mentioned redirects, and that whole HTTP layer you get along with Inertia for free because it's just part of your server-side stack. And one key aspect of that is auth. You can just use good old-fashioned session auth -- nothing is better than session auth. Like, it just works. And so whatever your typical solution for doing session auth in Laravel or Rails or whatever server-side framework you're using, all this stuff just works. So anyway, coming full circle on your question, the reason why Inertia has gone this way is because I really think that there's a huge amount of value in using these modern frameworks. And we just doubled down on using them.
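To make the session-auth point concrete, here is a minimal sketch of what this tends to look like on the Rails side. It assumes the community inertia_rails adapter's render inertia: helper and a Devise-style authenticate_user! filter; the controller, component, and prop names are illustrative rather than anything mentioned in the episode.

```ruby
# Hypothetical Rails controller: ordinary cookie/session auth plus an Inertia render.
class DashboardController < ApplicationController
  before_action :authenticate_user! # plain session auth; no API tokens involved

  def show
    # Props are just the data this one screen needs, loaded server-side.
    render inertia: "Dashboard/Show", props: {
      user: current_user.slice(:id, :name, :email),
      recent_orders: current_user.orders.order(created_at: :desc).limit(5).as_json
    }
  end
end
```

Redirects, flash messages, and the rest of the HTTP layer behave exactly as they would in a non-Inertia controller, which is the point being made above.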
CHRIS: Yeah, that resonates with my experiences using Inertia and in contrast to the other frameworks. Everyone seems to be trying to get to the same place of providing a mechanism to have more almost app-like functionality but still using the traditional server-side technologies. But I think Inertia has chosen an approach to that that is unique in that category and really has provided a fantastic outcome. I've been very frankly surprised by the fidelity of experience and how app-like I can get something to feel when building with Inertia while still using all the same technologies. And the fact that I can use just traditional server-side auth and redirects and things like that is just so nice, and everything feels right.
There's an experience that I've had on many applications that are, say, a React client-side bundle that gets sent down and then boots up, and then the layout starts to render. And as it's data fetching, it gets like a 402 response or something like that in that data fetching. And then it's like, oh no, I need to hard redirect you over here on the client-side to this other page. And there's this junk of semi-filled-out layout, and then suddenly you're on the login page. Whereas with Inertia, it behaves like a normal server-side rendered app in the ways that really matter to us, even though it isn't one. And it is one of those things where the more I played with it, the more the experience of interacting with Inertia consistently surprised me with how nice it is to work with and yet how much easier it is to maintain an application using it. I know I'm raving here, but I am really a big fan of this for everyone listening in the audience.
JONATHAN: [laughs]
CHRIS: And actually to continue on one of the things you were saying there, one of the things that stands out to me in Inertia is the way that it embraces URLs, and to a certain degree, that seems like a purposeful thing, but it also seems like it just naturally falls out of how Inertia works. Because we're no longer using a client-side state management technology, the way to manipulate state is through the URL. If you want to see a different version of the to-do list you're looking at, when you click on that link, you change the URL, and the state changes in response to that. And so everything is fundamentally kept in sync, but URLs are very much at the center of the architecture, and I really love that so much. I think URLs are often forgotten in client-side frameworks or underserved or underused. And it turns out, in my experience both as a user and as someone who has served many users, people love to command-click on links. They love to right-click open a new tab. They love to be able to reload and see the same thing on the screen when they reload the page. They love to be able to bookmark. These are all really wonderful things that come out of working on the web. And the fact that Inertia has a pit of success around having URLs and having them be the way that we drive state is just so fantastic. So I'm wondering how much of that was very purposeful on your part versus how much of that fell out of the architecture.
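As a small, hedged illustration of URL-driven state with Inertia, here is one way the server side of that to-do list example could look in Rails. The TodosController, the filter parameter, and the component name are all hypothetical; the point is only that the query string, not client-side state, decides what the screen shows.

```ruby
# Hypothetical controller: the filter lives in the URL, so reloads, bookmarks,
# and command-clicked links all reproduce the same screen.
class TodosController < ApplicationController
  def index
    todos = current_user.todos
    todos = todos.where(completed: false) if params[:filter] == "open"

    render inertia: "Todos/Index", props: {
      todos: todos.as_json(only: [:id, :title, :completed]),
      filter: params[:filter] # echoed back so the client can highlight the active tab
    }
  end
end
```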
JONATHAN: That is very much something that fell out of the architecture. I say that not to say that I don't value URLs; I absolutely do. That's the way every single one of my Laravel-built apps worked. It always starts in the route file. You hit the route file, you define a new route, and it goes from there. So I absolutely think that the URLs are critical. But the fact that it just ended up working out so nicely was, yeah, I'm going to say it was a bit of luck, a bit of coincidence. I find this is what's so interesting when you start pushing on a new way of doing things; you initially don't really know where it's going to end. It's like you have some ideas of how the tool can work and where it might go, but I think there are a lot of unknowns that you just figure out after a while. So the thing I said earlier about the fact that Inertia in a lot of ways is like a client-side router -- it's a routing library, to put it that way. I had been working on Inertia for a year and a half, and then a buddy of mine, Taylor Otwell, the creator of Laravel, he and I were chatting, and he said to me at one point, “Oh, you know what? Inertia is actually super simple. It's really just a routing library.” And it was like, bam. It was kind of that moment; it's like, oh yeah, I hadn't thought of it like that at all. But when he said that, it made a ton of sense to me. So it's just this interesting progression: the more you work on something, and the more you push on the edges, you learn what's possible and what it even is.
I had this interesting experience, remembering that Inertia came from Turbolinks. So I had my whole app built with Laravel, a ton of server-side rendered templates with Blade, with Vue mixed in. And I had the SPA mode by clicking around using Turbolinks. So when I decided to try building Inertia, I removed Turbolinks, and all these requests now happened over XHR but using this preset JSON structure that powers Inertia. I really, in my mind, had this idea that it was only for GET requests, for GET visits; it was just for that. So in the initial version of Inertia, there was no Inertia.post or Inertia.put or anything like that. It just wasn't something I even thought was possible. But then I remember, and this is often how it goes, I was out for a hike that day to get away from the computer for a little while and just let my brain drift; I'm sure you can relate to that. I was like, wait a minute; I could totally just support POST, PUT, PATCH, DELETE. And that was such an aha moment for me where I just realized that it was so much more than what I originally thought it was.
And it was a bit of a waterfall effect after that. I remember rushing home from that hike and hacking it together, and then it was like, okay, well, if I submit a form using POST -- okay, I'm on the create user page, and I submit this form using Inertia.post to the users endpoint. I'm like, well, how do I now end up back at the user index page, or whatever page, maybe the user edit page? I'm like, wait a minute, I can just return a redirect back to the user index page, and it's literally going to return an Inertia response from the user index page. And then the way Inertia works is it dynamically swaps the page component client-side. And it was just like, oh, this is way too cool. And this really drives my thinking now that it's become a little bit more clear to me: it's all based on HTTP, using headers, and normal HTTP stuff like redirects is such a critical piece of the story. But to me, that's super neat because, in a way, it's like a throwback to the fundamentals of the web and the browser, and the fact that Inertia can just use those things, and it doesn't have to be fancy in a lot of ways. It can just rely on those existing core pieces of the browser. So, yeah.
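A minimal sketch of that POST-then-redirect flow on the Rails side, assuming the inertia_rails adapter; the controller and component names are made up for illustration. The client submits via an Inertia visit, the server responds with an ordinary redirect, and Inertia follows it and swaps in the index page component.

```ruby
# Hypothetical controller showing the happy path described above.
class UsersController < ApplicationController
  def create
    User.create!(params.require(:user).permit(:name, :email))
    redirect_to users_path # Inertia follows this redirect like any other visit
  end

  def index
    render inertia: "Users/Index",
           props: { users: User.order(:name).as_json(only: [:id, :name]) }
  end
end
```

The validation-failure side of this flow is sketched a little further down, after the discussion of how errors get carried back to the form.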
CHRIS: It really is interesting to me how it feels like progressive enhancement in that way, where you're building on top of these core fundamentals of HTTP and requests and redirects and status codes and things of that nature. Particularly interesting to me was -- it took me a while, I'm going to be honest, to figure out forms and particularly validation errors in Inertia. And that is entirely my fault. You have absolutely fantastic documentation. I am so impressed by the quality and the density of the documentation that you have that really covers everything. If we're being honest, I hadn't read the page, but I was doing form posting and then the subsequent errors and how you deal with that. I was doing it in a very traditional Rails way, which, if we're being honest, is not a fundamental of how HTTP works. Rails just chose an option of: oh, if you POST but we don't create the object because there's a validation error, then we're going to stay on the URL of the POST, so the collection route, but we're going to re-render the form in line. And that's a choice that Rails made that is interesting because, at that point, if a user reloads the page, then things are weird. They're not going to see the same thing after that reload, or it's going to try to repost, et cetera, et cetera. There's a bunch of edge cases there that sort of fall out. Whereas with Inertia, you end up redirecting back, and there's this interesting handshake of the errors, but from an end-user experience, it is absolutely fantastic: you stay on the form; the URL does not change. Technically, there's a POST and a redirect back under the hood, but Inertia just handles all of that for you. And you end up with sort of in-line validation errors. But you don't clear out any fields, and there are just wonderful things that fall out of it that, again, took me a while to get to, but it was another one of those oh, wow moments -- this just naturally falls out of the architecture, but it's so nice and such a nice incremental advance on top of, frankly, the stuff that I was doing in Rails historically.
JONATHAN: So the way that Laravel works, and it's always worked this way, is when you make a request using POST or PATCH or DELETE or whatever to an endpoint, and that endpoint does its validation, in the event that that validation fails, this is just built-in, stock Laravel behavior: it automatically redirects you back to the endpoint that you were on. So if you're on the create page or the edit page, it automatically redirects. That's just Laravel behavior. And what it does is it takes those errors that come out of the validator, it flashes them to the session, and then when the form page reloads, you have those errors available to you in the session. Now, of course, if you're building a classic server-side rendered application and you redirect back to your form, you have to repopulate old form inputs, which is not a lot of fun, and you don't have any of that with Inertia because Inertia allows you to preserve your state. But anyway, that's a separate thing. But for me, it's like you build a tool a little bit in your own silo and the world that you know, and for me, that's Laravel. But there are also ideas that you get that just come from the tooling that you use, and the fact that Taylor Otwell made that decision in Laravel at one point is absolutely what now dictates the go-to way to do it in Inertia, just because it works so nicely.
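For the Rails-minded listener, a rough analogue of that redirect-back-with-errors behavior might look like the following, extending the hypothetical UsersController above. The inertia: { errors: ... } redirect option is how I understand the inertia_rails adapter exposes validation errors, so treat that option (and the helper names) as an assumption to verify against the adapter's docs rather than a definitive API.

```ruby
# Hypothetical failure path: redirect back to the form and carry the errors
# along so the same page component re-renders with them, without clearing the
# user's input.
def create
  user = User.new(params.require(:user).permit(:name, :email))

  if user.save
    redirect_to users_path
  else
    redirect_to new_user_path, inertia: { errors: user.errors.to_hash(true) }
  end
end
```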
CHRIS: I wonder if there's been any consideration in the Rails world to adopt that, because I think from an experience perspective, it feels like a better thing. It feels like it has the same robustness and guarantees that I would expect. But yeah, that's interesting. It makes sense that that was just naturally there because, again, it didn't feel like the obvious correct thing that Rails was doing. It was always a little bit odd, and so it's interesting that Laravel was already there, but then Inertia can take it that one step further. But taking a slightly higher-level view of all of this, one of the things that's really interesting about Inertia to me, especially in contrast to some of the other frameworks that we've been talking about, like the Livewires and the LiveViews, is that Inertia is almost, at its core, a protocol more than anything; it's a sum of pieces. With Inertia, you have a server-side adapter, so there's the Rails adapter and the Laravel adapter. And then, on the client-side, you have a separate adapter for either Vue or React or Svelte. So those are the officially supported ones on both sides, but there's also been a swell of community support. And so there's a Django one, which I'm not sure is currently maintained, and I just saw a Clojure one the other day. There's a Java Spring Boot one. So those are all server-side adapters. I haven't seen as much on the client-side, but I imagine there are at least a handful of them out there. And it's so interesting to me that there's this core idea that you define this protocol of communicating back and forth from the server to the client, and now this collection of things is growing around that. And I wonder, again, how much was that purposeful versus how much did that just happen? And then, to add a second question to complicate things, how are you thinking about managing that community? Because my sense is that this could allow Inertia to be so much bigger of a tent and really bring in the best ideas from all of these different communities and end up with something at the core of this Inertia thing that is the best of every community and all of that. So yeah, a lot of questions there, but I'll hand it over to you because I'm super interested.
JONATHAN: So I think when I first got going, it was Laravel and Vue; those were the tools that I worked with. And often, the best software and the best open-source software, in my mind, comes out of trying to solve something for your own needs. So that's really where Inertia came from, and specifically for Laravel and Vue. But I quickly realized early on that it didn't have to be just a Vue and Laravel thing. So intentionally, early on, I had this idea of trying to build it with multiple adapters, and I had this idea that you could build as many server-side adapters as you want and as many client-side adapters as you want, and maybe we'll officially maintain a certain number of those, which is what we do right now. We officially maintain the Vue, the React, and the Svelte adapters. And then we also officially maintain the Laravel and the Rails server-side adapters. So that was, I would say, pretty intentional. And it's crazy how many server-side adapters people have been able to put together. Somebody wrote a ColdFusion server-side adapter for Inertia. I had no idea ColdFusion was even a thing anymore; yeah, legit. There are Node ones; there are Phoenix ones; if you can believe it, there's a WordPress one, which I'm not totally sure how that even works. There's an ASP.NET one.
CHRIS: [chuckles]
JONATHAN: Like, there's a whole bunch of them. And it's actually in spite of me, not because of me, that this has happened, because I have yet to write a good “here's how to build an Inertia server-side adapter in the language and framework of your choice” guide. It's been on my to-do list. I have a bunch of things I want to do. So it's still something I want to write, but what people are doing is they're just reverse engineering what we're doing in Laravel and Rails and these other adapters, and they're figuring out how to do it in their own server-side language and framework. So that's been really, really cool.
On the flip side, on the client-side, I'm starting to realize more and more that that's actually where the most important work is for us as the maintainers of Inertia, and where we need to focus our efforts, because it's non-trivial to create these client-side adapters. And for us, we actually have four of them now because we have React and Svelte, but then we have Vue 2 and Vue 3. And those frameworks are different enough that we actually had to create a separate adapter. So that's really where all our work is. The core of Inertia is actually ridiculously short; the whole core Inertia adapter is 150 to 200 lines of code. And maybe it's a bit more than that now, but it was that for a long time. It might be 300 or 400 now. It's very short. Honestly, even the client-side adapters are pretty short too. It's just that it's more difficult to make these client-side adapters because you have to learn all the intricacies of how each one of these frameworks handles its rendering. The core behavior that Inertia uses is the fact that you can dynamically swap components. So we dynamically swap page components when you visit from one page to the next, and the details that come along with that.
Anyway, so I've realized that moving forward, my job is going to be to make sure that the client-side adapters are awesome, and then to let the community drive the server-side adapters a little bit more while providing some better guides on how to do that. But yeah, for now, it's like if we can get it working in Laravel and Rails, we should be able to get that functionality working in any server-side adapter. And because it's all, again, just based on HTTP, that's the language; that's the protocol, like you say. That's the thing that matters between all these web frameworks, which they all, of course, support since they're web frameworks.
CHRIS: I think you're not giving yourself nearly enough credit for the support that you've given to the server-side frameworks, because you do actually have a page in the documentation called “The Protocol” that does a great job of at least summarizing it at that HTTP level. But at the end of the day, again, the job of someone implementing it is to then map that into their given language and framework of choice. But yeah, the documentation is impressive in just how much you put in there and how much care you obviously put into it, and there are lots of nice, subtle details that are covered very well in it. So that, again, if you read it, unlike me, then you get to know everything; eventually, I got there. I think I've read the whole thing now. But there's a lot there, and you cover all of the details.
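To give a flavor of what that protocol page describes, here is a deliberately compressed Ruby sketch of the core branch a server-side adapter has to implement: respond to an X-Inertia request with the JSON page object, and to a first, full-page request with HTML that carries the same page object in a data-page attribute. Real adapters also handle asset versioning (X-Inertia-Version) and partial reloads; the method name and the hard-coded version here are illustrative only.

```ruby
# Minimal, hypothetical sketch of the Inertia page-object handshake in Rails terms.
class ApplicationController < ActionController::Base
  private

  def render_inertia(component, props = {})
    page = { component: component, props: props, url: request.fullpath, version: "1" }

    if request.headers["X-Inertia"].present?
      # Subsequent visits: return the page object as JSON, flagged as an Inertia response.
      response.set_header("X-Inertia", "true")
      render json: page
    else
      # First visit: render the HTML shell with the page object in a data-page
      # attribute, which the client-side adapter boots from.
      render html: helpers.tag.div(id: "app", data: { page: page.to_json }), layout: true
    end
  end
end
```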
But actually looping back to a topic that you hinted at earlier, and something that I've been pressing up against lately: I absolutely love building web apps in Inertia, but there's often the need to bring in a mobile app, and we want native mobile for various reasons. I love the idea of progressive web apps, and I want to push that envelope as much as I can. But as an example, right now, iOS does not support push notifications for PWAs. So if that's a key feature that we want, then we're dead in the water, or if there are certain GPS things. There are a bunch of true platform-native things that we just can't get. And so I'm now contemplating building out an app alongside my Inertia web stuff, but I want to build a React Native app, and I'm wondering, to a certain degree, does this invalidate some of my ideas? I know you hinted at this earlier, but I think I'm still convinced of the utility of Inertia on the web. But I think I need a different paradigm to build for a mobile app, and I'm trying to decide where that line falls. I'm also wondering if I can just get away with embedding a bunch of web views and reusing my web logic because, again, if I'm building all of this, I'm going to build it in a mobile-responsive way. I don't want to rebuild the core page functionality of my app just to put it on mobile. Maybe mobile folks would tell me I'm wrong there, but I'm interested in maybe wrapping it and getting access to those platform features. But yeah, I'm interested in what your thoughts are there.
JONATHAN: Well, embedding a web view within a native app has been proven to work -- just ask DHH, obviously. But yeah, there are definitely people who disagree with that approach and feel like you should build a legitimately native app. So let's say that we're going to legitimately build a real native app. We want to have an Android and an iOS app. So I actually ran into this myself for my own SaaS application, and I solved it by building a native app using React Native -- React Native obviously being an abstraction on top of iOS and Android and all the tooling there, which is such an amazing platform. It was just a real joy to work with. And I hardly even work with React, and I was able to get a nice, high-quality, native-feeling app built relatively easily. But I had to come to grips with this very question because, like I've been saying all along, “Inertia is great because you don't need to build an API. Yay, this is amazing. This is what you should do. Oh, crap. I need an API.” And I had those questions like, okay, well, does this invalidate everything that I've been doing? So I was thinking about it, and in the end, what I did is I just built a light API alongside my Inertia application. So what it is, is I think I have seven endpoints, and they're just REST endpoints that are designed specifically for my native app. And this works honestly so well.
And I think I've explained this to you a little bit in a previous conversation, so I'll repeat myself a little bit here for the benefit of the listeners. The reason why I think it's completely legitimate to have Inertia and build your entire web app that way and then have a companion API alongside it in the same monolith app (let's be clear: it's in the same application. It's in my Laravel app, or it would be in your Rails application) is because it just extends a core principle for me of what Inertia is. And that core principle is a tight coupling between my data layer -- so my controllers -- and my views. So if we take that thinking, we can say, well, in an Inertia web app, when we have an endpoint, we hit the controller, we load data from the database, we pass that very specific data to the view, which is Vue or React or Svelte, and it renders it. And there's a very tight coupling between the two. And I treated my native app in the exact same way. I said, “Okay, I need an API because obviously, the native app on iOS and Android has to make an HTTP request to get this data somehow. But instead of trying to create this super generalized API that could theoretically be used for anything, I'm going to use the same principle and allow myself to create an API that has a really tight coupling between the screens in my native apps and the actual data that's coming from those API endpoints.”
And this worked out really, really, really well. I don't have to deal with a lot of the issues that you run into when trying to create a more generalized API because I could just say, “Hey, I have this calendar page, and I want that calendar page in my particular app. I want it to show people's birthdays, and I want it to show wedding anniversaries, and I want it to show custom events and these things that we have called schedule reminders.” So that's data that would normally come from four different endpoints. I didn't try to say, “Well, I'm going to go and create my events endpoint, and my birthdays endpoint, and my anniversaries endpoint, and my schedule reminders endpoint,” and then have all that work to do in my native app of hitting all these different endpoints and merging it all together; it wasn't like that at all. I created a calendar endpoint that returns all the data that's needed for that screen. And I basically applied that thinking through my whole native app, and it was really a joy to work this way. So I think that approach works really well if you have an app that doesn't have complete feature parity with your web app.
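A hedged sketch of what such a screen-specific endpoint could look like in a Rails monolith: one controller action returns everything the native calendar screen needs in a single payload. The model names (Birthday, Anniversary, Event, ScheduleReminder), current_account, and the date-range logic are all hypothetical stand-ins for whatever the real app has.

```ruby
# Hypothetical screen-specific API endpoint: one request per screen, rather
# than four generalized resource endpoints stitched together on the device.
module Api
  class CalendarsController < ApplicationController
    def show
      range = Date.current.all_month

      render json: {
        birthdays: current_account.birthdays.where(date: range).as_json(only: [:id, :name, :date]),
        anniversaries: current_account.anniversaries.where(date: range).as_json(only: [:id, :name, :date]),
        events: current_account.events.where(starts_at: range).as_json(only: [:id, :title, :starts_at]),
        schedule_reminders: current_account.schedule_reminders.where(remind_on: range).as_json(only: [:id, :note, :remind_on])
      }
    end
  end
end
```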
And I think if you had a native app that needed absolute feature parity between the native app and the web app, then my thinking might be a little bit different on this. But in my experience, so often, native apps have a vastly reduced subset of the features that the web app has, if not for the core functionality of the application, then at least for the administrative side of it. There's a whole bunch of stuff that you tend to have in a web app around administration that you literally never need natively. And I mean administrative both in terms of the users' administrative functionality -- it's a multi-tenant style app, which most apps are -- and in terms of the system-level, software-owner administration. If you build your whole web app on top of an API, all that administrative stuff that really doesn't need to exist in both places now has to exist in your API because you've made the decision to build it that way. Whereas if you just stick with Inertia on the web and build it the classic monolith way, where you get data from the controllers and send it to your Blade views, or in this situation, client-side page views, then you just expose the stuff that you actually need natively. For me personally, it's worked out so well. If I look at my own web app, the number of controllers that I have for the whole web app is like 100; it's a very big app. And for my native app, I have about 10. So that was like, I'm so glad that I didn't have to create 100 of these in both places.
And then some people might be thinking, well, now I have duplication. I have duplication in some of my API endpoints and my web endpoints, and that's true. I would say first that duplication isn't always a bad thing. I think more duplication in our web apps would actually probably be fine -- I feel like we run away from duplication too quickly. I don't think duplication is as bad as software developers often think it is. But even then, if you can't live with the duplication, there are still ways to solve it. So Laravel, for instance, has this concept called API resources, which are essentially transformers. You give it a model, and it transforms that model into some other shape, some other design. So there's nothing stopping you. And I even did this myself within my server-side application in Laravel: I have an API resource, a transformer, that's used by both my Inertia controller and my API controller in a couple of situations -- and for me, only when it makes sense. I'm not going to do it all the time because I found that most often, I wanted the data in a slightly different format in my native apps than I'd want it in my web app. So quite often, that didn't happen. But I'm just saying, if you're scared of duplication, there are totally ways to solve it. And we can solve this in our existing frameworks. Laravel or Rails has ways to allow us to abstract some of that stuff and reuse it in multiple places. So, yeah, that's my long-winded answer to how I've approached doing the native app sort of thing. I think that tight coupling between the data and the screen is a really nice thing, and you can just build faster. And just like you can build faster with Inertia on the website, you can build fast [inaudible 43:19]
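In a Rails codebase, the closest everyday analogue to Laravel's API resources is probably a plain serializer object shared by the Inertia controller and the API controller when the shapes genuinely match. The class below is entirely illustrative; nothing about it comes from Inertia itself.

```ruby
# Hypothetical shared transformer, usable from both an Inertia render and a JSON render.
class EventSerializer
  def self.render(event)
    {
      id: event.id,
      title: event.title,
      starts_at: event.starts_at.iso8601,
      location: event.location
    }
  end
end

# Inertia controller:
#   render inertia: "Events/Index", props: { events: events.map { |e| EventSerializer.render(e) } }
# API controller:
#   render json: { events: events.map { |e| EventSerializer.render(e) } }
```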
CHRIS: Frankly, that answer, one, makes a ton of sense and, two, makes me feel better about the path that I'm on because, again, I'm really desperate to cling to Inertia for the web side of things. So I love what you're saying. And again, it really resonates with me and how you're thinking about building. There's also a subtle common theme that I really appreciate in a bunch of things that you've said, where you're like, let me poke at best practices a bit and see what falls out. What if we were to actually embrace the coupling between our data and our view layer? And it turns out some really nice things happen there. And actually, going back to an earlier project that you worked on, Tailwind CSS is one of those projects that when you first see it, you're like, well, that's obviously wrong. That's definitely an incorrect way to do things. But then you explore it, and you're like, well, I know there are trade-offs here, but actually, in my experience and I'm sure in your experience, Tailwind is absolutely fantastic. And on the trade-offs, you totally win in the long game, and it's maintainable, and it's understandable. And you can continue to develop on top of it in a way that I've never found with any other CSS framework. But again, at first glance, you're like, ooh, that's not right. That can't be right.
JONATHAN: 100%, exactly. I think it's fun to push back and just experiment with different things. And for me, I think a lot of my decisions, too, come back to the fact that I'm running a SaaS application as one person, and I need to be able to move fast. I don't want to have two different servers and two different repos. I want to be able to build my applications as fast as I can, as a single developer, a single founder. And so I think the things that I push against and try to experiment with come out of me trying to find the simplest ways to maintain things. So Tailwind, that's really Adam's brainchild. I came along in the first six months or so; he and I built it. I was really just helping him flesh out his idea there, and that was super fun. But yeah, I had the exact same experience as you. Adam was telling me about this, and I'm like, that sounds pretty terrible. Like, I have CSS figured out already. And then it was like, oh man, this is amazing. Fun little fact: he and I were both working on web apps at that time, so my SaaS app was one of the first Tailwind applications ever because Adam and I were literally both building our own apps while building Tailwind CSS.
But anyway, so yeah, it comes out of not me trying to say I know better than other people; it's not that at all. It's more that I'm trying to find a way to survive as a business and, at the same time, not only survive but also build awesome products. I don't want to build software that is just kind of okay. I love striving to make software that's just exceptional, that delights people, that works the way someone expects it to work. And I just think that there's so much broken software out there. There's a lot of bad software. And don't get me wrong, I've created a lot of bad software, too. But I really try to hold myself to a high standard. And really, for me, that comes down not necessarily to what some purist says -- “This is how you need to do it.” It comes more down to, okay, let me see the results. How fast does the webpage open? What's the performance? You mentioned my course earlier. I'm really, really interested in database performance and how to use databases more intelligently to deliver really fast web applications. And that matters to me because customers hate waiting. They hate it. And that was even part of what drove me to create Inertia, because I hated this. I was working for a company, and we had built it the “right way,” where we have an API and the client separate. And we went down that road. And that was a big team with 20 to 30 developers in the end. And I was just like -- I shouldn't say “I was,” but we, in general, were not happy with what happened, just because of the way that the app was built and the way that single views were hitting the API. You could probably argue that we were doing something wrong, but the paradigm didn't lend itself to doing it right, in my opinion. So we'd have pages that were hitting the REST API with sometimes 10 or 20 HTTP requests just to get the data. And you're dealing with all the loading states of all this stuff. And of course, there were probably better ways to design it, but we were trying to ship a product there too. We were trying to get it out the door and make happy customers. And I didn't feel like that way was helping us.
I think GraphQL, just as an aside, is a huge step forward where you can say, “Hey, here's all my data in an API, but I'm not going to hit the user's endpoint just to get back whatever you decide to give me.” I can be much more intentional about saying, “Hey, I want this data and then pull in this relationship for that data and this other piece of data.” And I think that's really, really cool. But I think the problem there again is you need to build that GraphQL API, and that's non-trivial, not to mention you probably have to figure out OAuth, which is pretty much always a game-stopper for me because if I never have to work with OAuth in my life [laughs] I'll be totally okay with that. I know it has its place, but yeah.
CHRIS: There's a clear passion and a desire that you're describing there to just build good things and the belief that it can be done. And then, as someone who has really benefited from your work, I thank you for carrying that torch and for pushing the envelope. And like you said, having that high standard and holding yourself to it but then hopefully bringing the rest of us along, and I really appreciate that. But I think with that, that's probably a perfect time to wrap up. If folks want to follow more of what you are working on, where can they find what you're up to on the internet?
JONATHAN: I'm on Twitter, the classic place to go for following someone in tech, so twitter.com/reinink, my last name. That's R-E-I-N-I-N-K. So that's where even if I have stuff shared elsewhere on the web, that's where it starts.
CHRIS: Perfect. We'll include links to your Twitter as well as everything else that we've mentioned in this episode in the show notes. So folks who want to keep up, investigate further, or listen to that other podcast episode that I mentioned will have all of that available. But with that, thank you so much for your time, and yeah, again, I really appreciate you joining.
JONATHAN: Thanks so much, Chris. Pleasure to be here.
CHRIS: The show notes for this episode can be found at bikeshed.fm. If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review on iTunes, as it really helps other folks find the show. If you have any feedback for this or any of our other episodes, you can reach us at _bikeshed on Twitter. Or you can reach me @christoomey, or you can e-mail [email protected]. Thanks so much for listening to The Bike Shed, and we'll see you next week.
This podcast was brought to you by thoughtbot. thoughtbot is your expert design and development partner. Let's make your product and team a success.
On this week's episode, Chris and Steph discuss testing webhooks, the challenges in replicating third-party data, and troubleshooting unexpected side effects. They also respond to a listener question about secrets management, touring popular solutions and discussing the trade-offs.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
On this week's episode, Steph and Chris tackle a pair of questions -- the first dealing with how closely we might want to map an API to the underlying database schema, and the second dealing with back-of-the-envelope math and horses (it makes more sense in context... mostly). They also discuss the subtleties of the JavaScript Date API across browsers, and a quick adventure in tuning database indexes for fun and profit.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Sponsored By:
On this week's episode, Chris and Steph discuss migrating a polymorphic relationship over to UUIDs and balancing trade-offs between data integrity vs complexity. They also touch on a new Rails feature that adds support to safely remove and add columns, GitHub Discussions, measuring team experiments, and purposeful communication.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Sponsored By:
On this week's episode, Steph shares a recent performance improvement, a Postgres delight, and testing concurrency in RSpec. Chris revisits an earlier theme of "Good Idea, Bad Idea?" as he explores ways to speed up tests builds and avoid duplicate test builds. They round things out with a listener question about managing ERB partials and Vue components.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Chris shares a rare airing of grievances regarding the importance of secure, encrypted websites and Steph shares a tale of time zone troubles and testing.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Steph and Chris tackle a listener question around the world of service objects. Where, really, should we be putting our business logic: model concerns, "service" objects, or the model files themselves? Tune in to find out. They also discuss a perilous Rails 6 upgrade deployment and the ensuing debugging session, as well as Steph's retro on her extended break from work.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Chris and Steph chat about upgrading to Rails 6, intercepting emails, and playing a few rounds of Software Terminology Trivia. They also discuss "Deep Work" by Cal Newport and share strategies for finding and maintaining focus.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Send us your question, we would love to hear about it.
Looking for your next role? thoughtbot is hiring!
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Steph and Chris discuss a listener question around managing content within an application, weighing options like an integrated CMS, headless CMS providers, proxying the content, and supporting marketing and landing pages without needing a developer for every change. They also provide an update on Dead Man's Snitch and a preview of a Rails 6 upgrade on the horizon and dreams of database switching.
This episode is brought to you by SPOTcon. Tune in to Scout APM's first conference, and join developers from around the world to meet, engage, and learn about solutions that drive leading-edge transformation in application development by registering for free today!
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Chris adds Dead Man's Snitch to a personal project and considers "what is the app doing at runtime?" as he touches on the importance of creating observable systems. Steph shares analyzing a site's traffic and using Apache Bench for simple load testing. They also respond to a listener question about creating environment-specific data for data-intensive applications.
This episode is brought to you by SPOTcon. Tune in to Scout APM's first conference, and join developers from around the world to meet, engage, and learn about solutions that drive leading-edge transformation in application development by registering for free today!
Send us your question, we would love to hear about it.
Looking for your next role? thoughtbot is hiring!
Become a Sponsor of The Bike Shed
Sponsored By:
On this week's episode, Steph and Chris tackle a listener question around switching from mostly developing to mostly communicating and the tactics they've used to balance these facets of their work. They also discuss the new error objects in Rails 6.1, the value of breakable toys, and the importance of keeping presentational concerns out of the data model.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris and Steph discuss a new tmux feature and wvim, a script that streamlines shell command edits. They also discuss the value of taking a sabbatical and protecting downtime. Steph shares some exciting news about thoughtbot, and they answer a listener question about GraphQL and whether your app really needs an API.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris discuss some of the characteristics and behaviors they've observed in high-performing teams, touching on pull request sizing and prioritizing code review, deploy cadence, error monitoring and response, and minimizing the number of themes being tackled by the team in parallel. They also touch on moving to Netlify and simplifying deploys, an odd edge case with 303 vs 302 status codes, and the quirks of the ActiveRecord or method.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
In this week's episode, Steph and Chris discuss the popular testing themes and questions that emerged during the RSpec training course, reflecting on which testing "rules" still apply and when to break the rules. They also chat about the results of the 2020 State of JS survey and repurposing email validations to be helpful vs strict.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
In this week's episode, Steph and Chris discuss some of their methods for helping out reviewers of their pull requests and keeping code review moving along smoothly. They also discuss the shift to async communication and the tools, processes, and workflows that come with it. Does standup still have a place in an async world? Tune in to find out.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris shares a new favorite tool for querying JSON and Steph revisits a previous deployment issue. They also dive into the new features in Ruby 3, ponder the idea of adding types to Ruby, revisit breaking changes, and round out the conversation with a listener question about managing tmux sessions.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris revisit their discussion about testing rack rewrite redirect logic, mystery guests, DNS configuration, and trying very hard not to be too dogmatic. Steph describes her recent work trying to debug failing deploys with Concourse, Kubernetes, and Google Cloud while touching on blue-green deployment and secrets management. Finally, Chris talks about porting a Svelte project to TypeScript and the trade-offs of adding types upfront vs. types after the fact, and the parallels to testing and TDD.
This episode is brought to you by ScoutAPM. Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris and Steph reflect on their top themes and technical picks for 2020.
This episode is brought to you by:
Sponsored By:
On this week's episode, Steph and Chris begin wrapping up 2020 with a review of their 2019 top 10 list. They share what's changed, what's stayed the same, and what they'd like to see more of in the coming year.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
In this week's episode, Chris undertakes long-running background jobs that are performing duplicate work and adding significant load on the database. Steph shares her initial take of the book "Soul of a New Machine", a non-fiction account that chronicles the development of a mini-computer in the 1980s.
They also dive into the question "how can teams turn a slow, hard to maintain test suite from a liability into an asset?" and touch on how to identify highly-functioning teams.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris describes his continued explorations with Svelte specifically bringing TypeScript into the mix. Steph discusses the first cohort for the RSpec training and some related testing questions around third party APIs. They round things out with a listener question about managing permissions and roles, with a brief detour around single table inheritance vs polymorphic associations. Oh, and Steph rented goats to mow her lawn. 🐐
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
In this week's episode, Chris and Steph discuss redirecting requests for various hostnames to one canonical host, creating student personas to improve educational content, and walking away from failing tests. They also embark on a Hollywood themed tour of RSpec mocks, stubs, and spies, when to use each approach, and discuss the types of tests they do (or do not) write.
This episode is brought to you by:
Sponsored By:
Steph's taking a quick break this week, but while she's away, Chris is joined by special guest Gary Bernhardt. Gary is the creator of Destroy All Software screencasts as well as his more recent venture, Execute Program. Between Execute Program, his screencasts, conference talks, and more Gary has consistently provided some of the highest quality and most impactful educational content around building great software and has been a huge inspiration to the hosts of this show.
In the episode, Chris and Gary discuss Gary's recent work with TypeScript and how it compares with Gary's focus on testing, they revisit some of Gary's ideas around software architecture and how they map to his current work, Gary's thoughts around the value of knowing our tools deeply, and the trade-offs between careful upfront design and shipping early and often.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph discusses the value of conducting student research when creating course content and Chris revisits a recent architecture decision to use Svelte and Inertia. They also explore the challenges developers face in acquiring their first job and share insights for those looking for their next big role.
This episode is brought to you by:
Sponsored By:
On this week's episode, Steph describes her unique new project where they're building and presenting a training course around RSpec, testing, and TDD specific to an organization's codebase. Chris then runs some architecture choices by Steph to discuss a collection of new technologies he's considering, and more generally how we think about our experimentation budget.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris and Steph share mixed feelings about the Spring preloader and how to use Spring just for tests. They also dive into troubleshooting an OpenSSL error, Postgres generated columns, and creating moments of contentment.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris chat about database transactions and job queues, building static sites with GatsbyJS and NetlifyCMS, the performance impacts of front end frameworks and static content, and lastly they catch up on Hacktoberfest and the complexities of encouraging and supporting work in open source.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode Steph and Chris discuss the ins and outs of joining teams, building trust, and working together to improve processes and communication. They also touch on some lesser used features of bundler, and revisit a discussion around Rails maintenance periods thanks to some listener feedback.
This episode is brought to you by:
Sponsored By:
Steph's taking a quick break this week, but in her absence, Chris is joined by Dave Rupert. Dave is the lead developer at Paravel, co-host of the Shop Talk Show podcast, creator of The Accessibility Project, and an all-around prolific and thoughtful maker of digital things.
Chris and Dave chat about creating and sharing content like podcasts and blogs and how to get past your inner editor. They discuss the web platform and accessibility, and finally, they round out the conversation with a chat about design systems as an intersection between design and development.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris introduces a new segment called "Good Idea, Terrible Idea?" as he considers introducing a read-only mode to avoid interrupting users during scheduled downtime. Steph has started a new project and explores the idea of merging separate, but similar, applications into one codebase.
They also dive into micro-service environments to discuss the difficulties of integration testing and potential strategies.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris tackle a listener question around composition over inheritance, especially in the context of Rails which makes regular use of inheritance. Dependency injection, OOP vs FP, frameworks vs app code -- they hit it all!
They also chat about burnout and how they've dealt with it, using jq to investigate differences between JSON responses, refactoring tests and using let, and Steph shares her recent learnings about graphviz.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Chris shares a tale of performance improvements and a recent discussion about replacing a REST API with GraphQL. Steph dives into migrating a database column to restrict input and dropping database columns safely. They also discuss when to abstract code (a topic that surprisingly, they may not agree on) and running "Unused" to identify dead code.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris tackle the thorny topic of 10X engineers. Do we think they really exist? What characteristics make an individual more effective, and more importantly, what can they do for a team?
To round out the conversation, they chat about rewrites and when they do and don't make sense, Ruby 2.7 keyword argument deprecation warnings, and a listener question revisiting Ruby popularity and what languages would we learn if we couldn't write Ruby anymore.
This episode is brought to you by:
Become a Sponsor of The Bike Shed!
Sponsored By:
On this week's episode, Steph and Chris discuss a git-blame feature that supports bypassing less helpful commits. They also revisit a discussion about Dependabot PRs and recent performance adjustments, sharing which strategies worked and which ones didn't. They also discuss the dreaded three-state boolean, designing a system for cacheability, and using Ruby's magic comment to freeze string literals.
This episode is brought to you by:
Sponsored By:
On this week's episode, Steph & Chris take a deep dive into all things technical debt. How do you know when your code has reached "good enough"? When might we purposefully knowingly take on technical debt? How do we tackle existing technical debt without halting new development? How can we tell high-interest, hair on fire debt from "ehh, it's fine" debt that we can let lie? Tune in to find out!
On this week's episode, Chris shares his recent adventures of working with a team that prioritizes async-first communication and Steph revisits a previous discussion around the use of web sockets and optimistic user interfaces. They also dive into the classically hard question "should we rewrite the app?" and share survival tips for learning to type on a split keyboard.
On this week's episode, Steph and Chris chat about the relatively new Rails view_component library from GitHub, Steph talks about her work with Storybook as part of extracting and defining a design system, and they chat about the attr_extras project with convenience helpers for Ruby & Rails apps. They round out the conversation with some keyboard updates (the ErgoDox onramp is steep!) and project rotation notes.
This episode is brought to you by ScoutAPM.
Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy!
On this week's episode, Steph celebrates passing an important test and discovers an API that returns different data than it was given, while Chris asks the important bikeshed question "What is the proper maximum line length?".
They also round up the latest listener questions and discuss establishing freelancing rates, property-based testing, and time-tracking skills that help them manage competing priorities.
This episode is brought to you by ScoutAPM.
Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy!
On this week's episode, Steph and Chris have a brief chat about Snowpack, a new and ultra-speedy bundler in the front-end world, and revisit a conversation around namespacing models in Rails. The conversation then shifts to a discussion of the ins and outs of hosting a podcast and how folks might be able to dive in if they're interested in starting one themselves -- from selecting topics, to the hardware and software they use, to the guiding philosophy in how to discuss technical concepts.
On this week's episode, Steph and Chris discuss leveraging the Unix utility sed to search files and remove unnecessary test setup, using Vim's Arglist to create a to-do list for file edits, and budgeting time for fancy command-line scripts. They then take a deep dive into the world of utility-first CSS and TailwindCSS.
On this week's episode, Steph and Chris discuss using JSONB to store survey responses and the differences between JSON and JSONB, using (or not using!) exceptions in Ruby and the fail keyword, the pros and cons of namespacing models in Rails to organize features, and a new recommendation for running tests from vim.
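A rough sketch of the JSONB approach, with hypothetical table and column names:

```ruby
class AddResponsesToSurveys < ActiveRecord::Migration[6.0]
  def change
    # jsonb is stored in a parsed binary form, so unlike a plain json column
    # it supports indexing and containment operators.
    add_column :surveys, :responses, :jsonb, null: false, default: {}
    add_index :surveys, :responses, using: :gin
  end
end

# Later, a containment query against the unstructured data:
Survey.where("responses @> ?", { favorite_editor: "vim" }.to_json)
```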
This episode is brought to you by ScoutAPM.
Give Scout a try for free today and Scout will donate $5 to the open source project of your choice when you deploy!
fail keyword
On this week's episode, Chris and Steph discuss the importance of using inclusive language, branching into new branch names, and strategies that encourage the use of inclusive terminology. Chris also shares his latest experience with merging two systems that were split apart back into one system, tackling conflicting foreign keys and competing auth libraries. Steph discusses using polling vs web sockets to monitor work being completed in a background job and communicating to the user the various states of success and failure.
On this week's episode, Steph and Chris trade some consulting and everyone comes out a winner. Steph talks about a win and a loss on the battlefield of refactoring, and Chris shares a related effort around identifying and removing unused code. Chris shares a pattern his team has been using with a special "demo" flag to provide small enhancements but otherwise keep sales demos within the product.
Steph then shares some friction related to using Dependabot on her team's project that hints at more foundational ideas at the intersection of workflow, team dynamics, testing, and deployment. And finally, Chris asks Steph for her thoughts on how best to add testing around the structure of API responses.
This episode is brought to you by Datadog. Click through to get a free 14-day trial and a free Datadog t-shirt!
On this week's episode, Steph shares a keyboard confession and interest in migrating to a split keyboard layout. Chris dives into creating static error pages that are independent of the app while still leveraging the app's CSS framework. They also respond to a listener question about Conventional Commits and discuss when automation tooling feels helpful vs harmful.
ErgoDox EZ Keyboard
Keyboardio Atreus
Tailwind CSS
PurgeCSS
CSS Used Chrome Extension
Conventional Commits
SemVer
semantic-release
husky
GitHub Issue and Pull Request Templates
On this week's episode, Steph and Chris discuss potential approaches to a complex client-side workflow, Chris shares the highs and lows of his recent adventures revising the caching in a REST API (see the sketch after the links below), Steph shares an Ember testing pro-tip and then explores the questions it brings up, and lastly, they revisit prettier-ruby and its fantastic configuration setup.
This episode is brought to you by Datadog. Click through to get a free 14-day trial and a free Datadog t-shirt!
stale? and fresh_when
etag calculation
cache method for "fragment caching"
travel_to time helpers
and_call_original
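A minimal sketch of the stale?/fresh_when helpers linked above, using a hypothetical PostsController rather than anything from the episode:

```ruby
class PostsController < ApplicationController
  def show
    @post = Post.find(params[:id])

    # Sets ETag / Last-Modified headers and responds with 304 Not Modified
    # when the client's cached copy is still current.
    fresh_when @post
  end

  def index
    @posts = Post.order(updated_at: :desc)

    if stale?(etag: @posts, last_modified: @posts.maximum(:updated_at))
      render json: @posts
    end
  end
end
```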
We are pausing our normal tech-talk this week in support of the ongoing protests and to re-share the #BlackTechTwitter episode with Pariss Athena from our sister podcast, Giant Robots.
During the past week, millions of people across the country have participated in protests in response to the killing of George Floyd and the systemic racism that plagues our nation.
For everyone fighting for equality and justice, we see you, we love you, and we support you. Black lives matter. Black culture matters. Black communities matter.
For those looking for ways to take action, we have provided a few resources in the show notes. The list is intentionally short as we ask everyone to research ways to get involved and listen to leaders in the Black community.
Fighting for equality falls on each of us, regardless of race or position, to work together to fight racism and unequal treatment.
Stay Safe.
Pariss Athena, Hiring & Product Team Member at G2i, creator of #BlackTechTwitter, and founder of Black Tech Pipeline, shares her journey from never hearing about code to viral awareness campaign creator, as well as discusses visibility, finding value on twitter, and life online with thousands of followers.
On this week's episode, Steph is joined by thoughtbotter German Velasco. German and Steph chat about remote work and the rewards and challenges of their new(ish) roles as Development Team Leads. German also shares that he is writing a book! He walks through his approach for defining an MVB (Minimum Viable Book), ideas for how to collect feedback, and plans for publishing. Lastly, they discuss a vim plugin that lives up to the hype.
This episode is brought to you by Datadog. Click through to get a free 14-day trial and a free Datadog t-shirt!
To register for the free online workshop "How to Supercharge Your Rails App with a Code Audit", visit https://thoughtbot.com/events/code-audit-workshop.
On this week's episode, Steph troubleshoots a mysterious Ember test failure that can't find a visible element, and Chris recounts an exciting three-act adventure that spans N+1 queries, caching, and SQL window functions. Steph also touches on upgrading to Ember Octane and Glimmer components and Chris shares a new helpful tool for drawing architecture diagrams.
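For context on the N+1 part of that adventure, a tiny hedged example; Post and author are placeholders, not models from the episode:

```ruby
# N+1: one query for the posts, then one query per post for its author.
Post.limit(20).each { |post| puts post.author.name }

# Eager loading collapses that into two queries.
Post.includes(:author).limit(20).each { |post| puts post.author.name }
```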
On this week's episode, Chris shares his recent explorations of railway oriented programming (hint: not what you think!) while doing his best to avoid words like "monad" and "functor" (he does not succeed in this effort). Steph gives an update on her quest for the ultimate personal note-taking app and some misadventures in DNS and networking, and they touch on their shared search for ergonomics in the home-office world we all live in these days.
This episode is brought to you by ExpressVPN. Click through to get three months for free.
On this week’s episode, Chris and Steph share their excitement for Roam Research and formatting Ruby with Prettier Ruby. They also discuss writing test coverage for an important GDPR process, embracing async communication, and share their preferred strategies for knowledge sharing within teams and the broader community.
On this week's episode, Steph and Chris dig into MVP thinking, asking how we can write as little code as possible before finding out whether any user will actually want the thing we're building.
They also tackle a listener question around Vim and the general ROI of honing our developer tools, discuss some of the subtleties of HTTP and forms as well as the difficulties when half of our UI is in React and the other half in Rails, and lastly chat a bit about their adaptation to full-time remote work.
On this week's episode, Chris and Steph discuss troubleshooting a race condition, trusting your intuition, and pessimistic locks. They also touch briefly on Tailwind CSS before diving deep into first impressions of Inertia.js.
This episode is brought to you by ExpressVPN. Click through to get three months for free.
On this week's episode, Steph and Chris discuss what it really means to make a project "open source". Is it just about making the code publicly available, or should we be considering licenses and responsibility to update?
They also discuss the need for breaks and structure now that everyone is working from home, revisit previous discussions around building functionality for admin users and the various admin systems out there, and they round out the conversation with a discussion around doubles vs spies in testing.
Note - No snakes were harmed as Steph found them a new home 😊
Enroll in our free online workshop on code audits: "How to supercharge your Rails application with a code audit"
In this week's episode, Chris shares details about his new greenfield project: implementing static pages with High Voltage, opting for just-in-time architecture decisions, and working with various admin libraries. Steph discusses various ways to advocate for change across larger engineering teams, recognizing when it's important to push for change vs letting go of strong opinions, and how to gain buy-in from your team.
Enroll in our free online workshop on going remote: "Being Human in the Absence of Humans: A Live Q&A for Product Teams"
On this week's episode, Steph and Chris discuss the pros and cons of memoization, Chris revisits the discussion around the value of React snapshot tests as well as his continued explorations with Inertia.js while Steph updates us on living in a schema-less world, and they round out the conversation with a listener question about pairing tools, setup, and approaches.
This episode is brought to you by ExpressVPN.
Click through to get three months for free.
On this week's episode, Chris and Steph discuss recent challenges associated with upgrading React Router and uploading files to Amazon S3. Steph also shares her latest reading adventure in cybersecurity and Chris reflects on his time at thoughtbot, how his approach to web development has shifted over the past seven years, and what he plans to do next.
*Correction - The Cuckoo's Egg helped pioneer cybersecurity techniques
On this week's episode, Steph and Chris dig into their shared love of refactoring: how they think about it, whether they've ever reverted a refactor, their thoughts on deferred refactoring, and more.
They also discuss some positive team habits, snapshot testing, the importance of keeping your testing as close to production as possible, and finally, Chris shares some big personal news.
On this week's episode, Chris and Steph respond to a listener question about the complex tradeoffs between craft, preferences, and business needs. They also revisit Steph's recent work with mirage factories, Chris's struggles with test failures, and discuss Steph's recent use of the acts_as_paranoid gem.
This episode is brought to you by Clubhouse. Click through to get 2 free months on any paid plan.
On this week's episode, Steph is joined by Joël Quenneville. It's the season for CFPs (calls for proposals), and Joël shares insights about his past conference talk submissions, both the accepted and the rejected. They also discuss writing habits that help increase blog post frequency and helping teams upgrade their Rails application.
On this week's episode, Chris and Steph celebrate the new Bike Shed website and logo!
Steph finds a new way to optimize her keyboard happiness and Chris dabbles with Zsh auto-suggestions. They also explore the team and technical trade-offs in the pursuit of clean code and respond to a listener question about building products that meet strict security policies.
This episode is brought to you by Clubhouse. Click through to get 2 free months on any paid plan.
On this week's episode, Steph shares more of her Ember adventures, specifically sharing some of her work with the Mirage API mocking and prototyping library, and her search for factories and more ergonomic data in tests.
Chris shares some struggles he's had recently with automation and tooling around deployment and releasing packages, and they discuss the inherent trade-offs that we have to consider when automating anything.
Lastly they touch on Twitter's alt text accessibility features, and answer a listener question about using React without having an API, and instead just using it as a more dynamic view layer.
On this week's episode, Chris and Steph revisit the long-lived feature branch Chris has been working on and chat about adventures with Yalc. They also dive into the common questions and concerns associated with coding bootcamps, thoughtbot's exciting new partnership with Resilient Coders, and what it would be like to "start over".
This episode is brought to you by Clubhouse. Click through to get 2 free months on any paid plan.
On this week's episode, Steph and Chris catch up in their first recording of 2020. They discuss git workflows and the surprisingly strong opinions often associated with them, testing at all levels of your application, Steph gives a quick summary of her Ember adventures, and they round out the discussion with some new year's systems building and Star Wars reviews.
This episode is brought to you by Clubhouse. Click through to get 2 free months on any paid plan.
On this week's episode, Steph is joined by George Brocklehurst, a Development Director in the NYC thoughtbot office. Steph and George chat about the variety of projects and technologies that caught their attention during thoughtbot's recent internal hackathon. They also dive into Gitsh, a dedicated shell for Git commands, as they chat about preferred git workflows and George shares his recent adventure in updating Gitsh to support tab completion.
On this week's episode, Chris and Steph discuss their recent holiday hackathon efforts building a game in Elm. They discuss their experiences with Elm and the broader prospects of using Elm in more production applications. They also discuss the new git subcommands "git switch" and "git restore", and round things out with a listener question concerning FactoryBot and "minimum viable factories".
On this week's episode, in celebration of the new year, Thom shares the 2019 blooper reel! Words are hard and here's the audio to prove it. Listen to all of the silly mishaps, goofs, and general nonsense captured in between the moments of "professional podcasting". Chris and Steph also reflect on their top themes of 2019 and discuss New Year Systems vs New Year Resolutions.
On this week's episode, Steph gets Chris to share his biggest developer regrets over the years. They also revisit a favorite topic of estimation and story points, and round out the conversation with some details from the world of application security.
On this week's episode, Chris catches us up on his latest keyboard adventures and Steph shares her first impression of working with Ember.
They also dive into Chris's experience triaging errors in Sentry, their love for Elm, how teams achieve a consistent velocity, and Steph's upcoming workshop on how to stay agile when building a healthcare product. To bring it home, they respond to a listener who's wondering when it's a good idea to convert a loose data structure (e.g., a hash) into a class.
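One way to think about that question: once several call sites need to know a hash's shape, a small value object usually pays for itself. A hypothetical example, not from the episode:

```ruby
# Before: every caller has to remember the keys and do its own math.
payment = { amount_cents: 1599, currency: "USD" }

# After: the concept has a name, a defined shape, and behavior in one place.
class Payment
  attr_reader :amount_cents, :currency

  def initialize(amount_cents:, currency:)
    @amount_cents = amount_cents
    @currency = currency
  end

  def to_s
    format("%.2f %s", amount_cents / 100.0, currency)
  end
end

Payment.new(**payment).to_s # => "15.99 USD"
```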
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Chris and Steph discuss identifying refactoring opportunities by highlighting overly coupled code and Chris announces that he has advanced his vim setup into the 21st century by making the switch to Neovim.
On this week's episode, Steph and Chris dive into the world of crafting pull requests for optimal code review, as well as the flip side of providing code review. How can we make it easy for reviewers, and as reviewers, how can we make it easy for our teammates to incorporate our suggestions?
They also discuss the world of testing, from integration to visual to unit testing, and some of the tools and practices they use at each level.
Lastly, they discuss Steph's continued pairing adventures and possibly finding her max on the pairing front, a quick update on mechanical keyboards, and Steph shares a teaser of an upcoming workshop she'll be hosting around how to stay agile when building health tech products.
This episode of The Bike Shed is sponsored by Honeybadger.
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Chris and Steph catch up on recent client adventures, revisit their feelings on using let in RSpec, and spend a bit of time outside their respective comfort zones. There's also some talk about nearly full-time pairing, mechanical keyboards, debugging thorny datetime issues, and how we interact with our developer tools and workflows.
This episode of The Bike Shed is sponsored by Honeybadger.
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Chris and Steph chat about their new client projects, VimScript, and ways to automate refreshing materialized views in tests. They also play the game Overrated/Underrated, created by Tyler Owen, and respond to a CS student who is feeling overwhelmed by the various technologies and looking to transition from tutorials to meaningful projects.
This episode of The Bike Shed is sponsored by Honeybadger.
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Steph catches us up on her ever-growing collection of mechanical keyboards, Chris talks about his recent purchase of an Apple Watch, and they follow up on a previous discussion around case-sensitivity (or insensitivity) in URLs and email addresses. They round out the discussion with a chat about writing blog posts and some postgres fun, and finally discuss the merits and drawbacks of monorepos.
This episode of The Bike Shed is sponsored by Honeybadger.
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Steph is joined by Brittany Martin, an avid Rubyist and the host of the Ruby on Rails Podcast. They discuss Brittany's passion for roller derby and her upcoming Ruby conference talk, "Hire Me, I'm Excellent at Quitting." They also discuss using AWS Serverless, troubleshooting Postgres connection errors, and working with Google Pay and Apple Wallet to introduce digital tickets.
On this week's episode, Steph shares an update on her mechanical keyboard adventures and provides a summary for the Ruby pipeline operator being reverted. Chris gets Steph's opinion on a possible improvement around using materialized views in tests and describes a recent debugging adventure he and Steph went on. They also discuss a listener question regarding encouraging companies to use Ruby and Rails and asking how we identify ourselves as developers. Finally, they round out the conversation with a clarification around public vs private GraphQL APIs.
:method
source_location
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Steph recounts an issue with an email client that lowercases URLs and Chris ponders the art of logging and using structured logs. They also highlight a plugin that improves TypeScript support in Vim, how the Pinterest team celebrates the "retirement" of code, and respond to a listener who is debating between refactoring their app or investing in a full rewrite.
If you're enjoying The Bike Shed, we'd love it if you could give it a rating or review on iTunes. Thanks!
On this week's episode, Steph returns from vacation and Chris makes some noise about a fantastic new button. They discuss Steph's continued adventures in search of the perfect mechanical keyboard and then dig into two listener questions on landing a first job as a developer and what frameworks and languages to focus on, as well as discussing some of the common objections to GraphQL.
On this week's episode, Matt Sumner guest stars to discuss his recent adventures on a project that uses React, TypeScript and GraphQL. Along the way, Matt and Chris discuss VS Code features, Apollo caching and reflect upon their first year as Development Directors.
On this week's episode, Steph discusses a mini design sprint she led to help validate an internal admin tool while Chris muses on the merits of net negative lines of code on a project. They dig into the idea that while code can certainly be an asset, it may also be a liability. They investigate ActiveSupport::MessageVerifier for secure time-sensitive tokens. Steph shares details about her recent visit to the Ruby on Rails Podcast and Chris shares the recording for a talk he gave on understanding technology choices. Lastly, they round out the conversation with a listener question about build times and lock files and how to organize and split up our tests.
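A small sketch of the ActiveSupport::MessageVerifier usage mentioned above; the expiry and purpose options assume Rails 5.2 or newer, and the payload here is invented:

```ruby
verifier = ActiveSupport::MessageVerifier.new(Rails.application.secret_key_base)

# Sign a payload with an expiry and a purpose baked into the token.
token = verifier.generate({ user_id: 42 }, expires_in: 15.minutes, purpose: :password_reset)

# verified returns the payload while the token is valid and untampered,
# and nil once it has expired or the signature doesn't match.
verifier.verified(token, purpose: :password_reset) # => { user_id: 42 } or nil
```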
On this week's episode, Steph and Chris share the news that The Bike Shed won the Best Dev Podcast on the Hackernoon Noonies awards! After a bit of celebration, they get back to their normal adventures with a discussion around onboarding covering the importance, approach, and pitfalls that they've seen in their time joining countless teams. They also touch on the relevance and increasing ease of SSL everywhere, and they answer a listener question about technical debt and rewriting applications.
On this week's episode, Chris and Steph discuss their preferred strategy when building an admin portal (spoiler: it's not using a client-side technology), separating our identity from our preferred technology, coding styles that require greater mental effort, and answer a listener's question about deleting migrations.
On this week's episode, Steph and Chris discuss mechanical keyboards, combating error fatigue, the joy of admin features and respond to two listener questions about typed vs dynamic languages and various ways to "speed up" third-party API calls.
On this week's episode Chris is joined by Michael Chan aka @chantastic, host of the React Podcast and prolific maker and sharer throughout the internets. They discuss Michael's work on the React Podcast and themes in open source in general, Michael's focus on communication and delivering value, and the honest take that no one has all the answers or a silver bullet.
On this week's episode, Chris and Steph weigh in on curved monitors, discuss how pairing improves productivity and team morale, and respond to two listener questions about what makes Rails successful and new-project nerves.
On this week's episode, Steph and Chris discuss a handful of utilities that help with their workflows and GitHub, and then dive into a handful of ActiveRecord, SQL, and postgres-related topics. They discuss safe vs unsafe migrations when dealing with larger volumes of data, adding an index safely in migration without downtime, and bringing postgres enums into Rails.
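For the "adding an index without downtime" part, the usual Postgres-flavored shape looks roughly like this; the table and column names are placeholders:

```ruby
class IndexOrdersOnUserId < ActiveRecord::Migration[6.0]
  # Concurrent index builds can't run inside a transaction.
  disable_ddl_transaction!

  def change
    # algorithm: :concurrently builds the index without locking writes,
    # so a large, busy table stays available while it runs.
    add_index :orders, :user_id, algorithm: :concurrently
  end
end
```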
This episode of The Bike Shed is sponsored by Indeed Prime
On this week's episode, Steph and Chris discuss working with Django and Angular, and explore the new features released in Ruby 2.7.0-preview1! They also respond to a listener's question regarding the trade-offs of using client state management tools like NgRx and Redux.
On this week's episode, Chris is joined in a live recording from RailsConf by the one and only Aaron Patterson. They discuss Aaron's many RailsConf keynotes, his recent work on Rails view rendering and his three-year-long effort to bring more advanced garbage collection to Ruby which will finally be seeing the light of day. And of course, plenty of puns.
This episode of The Bike Shed is sponsored by Indeed Prime
In this week's episode, Steph and Chris discuss ways to unplug and protect personal downtime, RESTful sorting, altering production data within a Rails migration vs a rake task, adopting Unicode characters, and respond to a listener's question about how they approach client relationships and share thoughtbot's Agile-like process.
On this week's episode, we revisit RailsConf 2019 for another live recording, this time with Eileen M. Uchitelle, GitHubber and Rails core team member. Eileen joins Chris to discuss her RailsConf talk on how GitHub maintained a custom fork of Rails for years, how they finally moved off it, and what lessons we can take away from their experience. They also discuss Eileen's recent work on automatic database switching coming in Rails 6, microservices and monoliths, and getting into working on Rails.
This episode of The Bike Shed is sponsored by Indeed Prime
In this week's episode, Chris and Steph discuss how working with typed languages influences their work with dynamic languages. They also chat about the benefits of pair programming, tracking performance events using Rails' Instrumentation API, and respond to a listener's question about how to structure code that doesn't fit neatly within the default Rails structure.
On this week's episode, Chris is joined by Kevin Deisz, CTO of CultureHQ, live from RailsConf. They discuss Kevin's RailsConf talk on pre-evaluation in Ruby, but dig further into Kevin's core philosophies that drive his work on tools like preval. They round out the discussion with Kevin's work on the prettier plugin for Ruby, an automated code formatter to finally tame the wild west of Ruby syntax, and the hopeful path to a v1.0 in the not-too-distant future.
On this very special Bike Shed, Steph and Chris celebrate reaching the 200th episode. They discuss the origins of the show and thank some of the wonderful folks who helped make it happen (thanks Derek, Sean, Amanda, Laila, and of course Thom!). They discuss Chris's recent trip to RailsConf and some strategies for making the most of conference attendance, as well as Steph's recent work hosting an intro to web development course. They wrap things up with a series of questions captured live from RailsConf at the community meetup, covering career growth, naming, GraphQL, joy, and more.
On this week's episode, Steph and Chris talk about PR sizing, load testing (the weird way), and ponder the merits and pitfalls of personal style in code. They also discuss Hertz suing Accenture for undelivered software and the belief that engineers should talk to users! This one truly has something for everyone.
On this week's episode, Chris is joined by Glenn Vanderburg, VP of Engineering at First.io, live from RailsConf. They discuss Glenn's RailsConf talk, "The 30-Month Migration", covering distributed data models, refactoring, and the wonders of postgres. They also discuss Glenn's famous talk, "Real Software Engineering", and what the term "software engineering" means within our communities.
Steph and Chris discuss Redux, integration testing strategies, scoping data for React components, and take a question from a listener about improving process and reducing bugs in a complex service-oriented system with a hint of waterfall in their workflow.
On this week's episode, Chris is joined by Lin Clark and Till Schneidereit of Mozilla to discuss all things WebAssembly. Lin and Till are helping to lead the development and advocacy around WebAssembly and in this conversation they discuss the current state of WASM, new developments like the WebAssembly System Interface (WASI), and the longer term possibilities and goals for WASM.
On this week's episode, Chris is joined by Mike Burns, developer in our New York studio, to discuss the ins and outs of application security. Mike recently added a comprehensive Application Security Guide to the thoughtbot guides, and in this chat they discuss some of the high points of the guide, some of the low points of common security holes, and some of the fantastically specific workflows and approaches Mike has for his personal information and security management.
On this week's episode, Chris is joined by Edward Loveall, former thoughtbot design apprentice and now thoughtbot developer. After a quick chat about Edward's thoughtbot origin story, podcasts, and DNS, they dig into the heart of the conversation talking about their respective "must have" developer tools on new machines.
Thank you to CircleCI for sponsoring this episode.
On this week's episode, Chris is joined by Sid Raval, developer in our New York studio. Chris and Sid chat about functional programming, strong types, and accessibility. Along the way they touch on TypeScript, Haskell, Scala, Elm, and plenty in between. They round out the conversation with a discussion around accessibility and developer tools.
Thank you to CircleCI for sponsoring this episode.
Chris is joined by Devon Zuegel who recently joined GitHub in the new Open Source Product Manager role. Devon and Chris discuss the complexities inherent to open source including funding models, managing motivation and burnout, different open source models, and end with a discussion around how we can be better open source citizens, both as consumers and maintainers.
Thank you to CircleCI for sponsoring this episode.
On this week's episode, Chris is joined by Alex Sullivan, mobile developer in our Boston office. Alex takes Chris on a tour of the mobile landscape comparing the core native platforms (Android and iOS), the languages, developer tooling and IDEs, and fundamental thinking. They also dip into a discussion around React Native highlighting some of its strengths, as well as areas where native still clearly wins. Finally they touch on Flutter, the newest entrant into the mobile space to round out the discussion.
Thank you to CircleCI for sponsoring this episode.
On this week's episode, Chris is joined by Steph Viccari to chat about Steph's recent experience working on the HubSpot API Ruby wrapper as a client project. They discuss strategies for testing third-party APIs, focusing on VCR and some of the benefits and trade-offs inherent to that style of API testing. Following that, they chat about using exceptions for control flow, digging into why this seems to be a common pattern in Ruby API wrappers, what the alternatives are, and even a quick tour to React-land where this pattern is being used for interesting effect.
On this week's episode, Chris is joined by German Velasco for a conversation
that fully lives up to the name of the show with plenty of opinions and
impressively deep dives on topics that folks outside the world of programming
would never think could warrant this much discussion.
How much duplication should we have? Is there such a thing as too DRY? Is there
ever a need for code comments, really? Lest you worry that Chris & German spend
the whole episode just volleying opinions, have no fear: the episode is balanced
out with plenty of pointed suggestions and useful anecdotes to make sure
everyone will enjoy it.
On this week's episode, Chris is joined by Matt Sumner, development director in our Boston Studio. Chris & Matt start with a quick update on Matt's crypto adventures, and then transition to the core of the conversation as Matt describes the past few weeks of starting a new project and all the decisions that come with that.
The project kicked off with a product design sprint to help determine the initial direction for MVP. From there, Matt describes some of the thinking that went into the technology choices for the app, as well as describing his experience thus far working in a novel ecosystem for him with Scala & GraphQL.
On this week's episode, Chris is joined by Daniel Colson, developer in our New York studio and current maintainer of all things FactoryBot. Chris & Daniel discuss Daniel's work as maintainer of one of thoughtbot's most popular open source projects and some of the parallels to thoughtbot's consulting work. They then discuss a bit more on the specifics of FactoryBot and what's in store for upcoming versions.
To round out the conversation Daniel and Chris also dig into some of the testing related best practices and patterns common to thoughtbot projects, linting and formatting tools, and even dip into the age old discussion around single quotes vs double quotes (just a tiny bit).
Thank you to One Month for sponsoring this episode.
On this week's episode, Chris is joined by Ruby Hero Avdi Grimm. They discuss Avdi's history of guiding the Ruby and broader programming communities, his thoughts about where we're at with object-oriented programming, and where he's looking to next for our industry.
This conversation touches on a variety of topics both technical and personal. Avdi shares some of his thinking around where we've failed with our approaches to object-oriented programming and viewing the world as transactional, and instead offers ideas around modeling our systems as processes.
Avdi & Chris also chat about some of Avdi's my recent explorations into the world of JavaScript & React, as well as the growing "resilience engineering" mindset.
Thank you to One Month for sponsoring this episode.
On this week's episode, Chris is joined by Eebs Kobeissi, a developer in our Boston studio, for a discussion encompassing the front end, back end, and everything in between. They start by discussing Eebs' recent work with both Elm & TypeScript, and the relative merits of these two strongly typed languages for the front end. From there they move on to a discussion around the different communities and rates of change in each.
Shifting gears, Chris then asks Eebs about his experience with more distributed systems and technologies like JSON Web Tokens, Elasticsearch, RabbitMQ, Kafka, and more.
They round out the conversation with a discussion around some recent security discussions in package managers and their collective surprise that things work at all.
Thank you to One Month for sponsoring this episode.
On this episode of the Bike Shed, Chris is joined by former thoughtbotter Ben Orenstein. Ben & team are currently feverishly working towards launching Tuple.app, an app for remote pair programming. The conversation covers the unique technical challenges inherent to building this sort of app (WebRTC & firewalls, oh my), as well as a discussion around the merits and value of pair programming. To round out the conversation, Ben checks in on whether Chris is still "nerding out hard on Vim".
Thank you to One Month for sponsoring this episode.
Chris is joined by Eric Bailey, thoughtbot designer and champion for all things accessibility on the web. Chris & Eric chat about how Eric approaches accessibility and works to include it throughout the design process, design systems, functional CSS, CSS in JS, and more.
On this episode of the Bike Shed, Chris is joined by thoughtbot CTO Joe Ferris. Chris & Joe start by talking about all things data. More and more we're building applications that need to manage medium to large data sets, combining data from multiple sources, and our approaches and frameworks need to evolve to match these needs. Joe provides the low down on how this can shape the way we build our applications.
As part of the discussion around data, they dig into the idea of event logs, most notably discussing Apache Kafka and its unique approach to capturing state by storing an immutable event log, and the resulting architecture that falls out of this.
Lastly they chat about the Scala language both in relation to data and streaming applications, but also more generally as an example of an approachable yet powerful strongly typed language.
On this episode of the Bike Shed, we're thrilled to welcome special guest John Resig, creator of jQuery and front-end architect at Khan Academy. The conversation begins with a discussion around John's work on jQuery, one of the most influential libraries in the history of the web. From there the discussion shifts to John's role as front-end architect at Khan Academy and how he balances feature development and paying down tech debt or exploring new technologies.
John and Chris then discuss the rate of change of front-end technologies, and John provides wonderfully pragmatic guidance distinguishing the rate of innovation from the perceived needed rate of adoption. The conversation also ventures into discussions around the trade-offs involved in open sourcing internal projects. Lastly, they touch briefly on the topic of GraphQL based on John's work at Khan Academy, as well as his in-progress book, The GraphQL Guide.
A little bit of everything with one of the most influential web developers of
the past 15 years. What more could you ask for?
On this episode of the Bike Shed, Matt Sumner returns to chat with Chris about their recent adventures. They start by discussing Matt's ongoing work building an open source Ethereum implementation in Elixir and the joys of a test suite guiding your work. From there, Matt asks Chris about Chris's recent trip to speak at GraphQL Summit and his take on the current state of affairs in the GraphQL world (hint, it's good).
Matt and Chris then discussed the progress they've made on simpler form handling in React applications and consider how far they could go with this, and then discuss the recent announcement of React Hooks.
And finally, they discuss the fact that thoughtbot is hiring, and we think you should apply! Head on over to thoughtbot.com/jobs and drop us a line :)
On this episode of the Bike Shed Chris is joined by Derek Prior, former
thoughtbotter and previous host of this very podcast. Derek has recently moved
on from thoughtbot to try out a new role as an engineering manager at GitHub.
During their conversation they talk about Derek's experience shipping the
"Suggested Changes" feature on github.com, and the MVP process Derek brought to
the planning and development of the feature. They also touch on the architecture
of GitHub and where services and monoliths fit in the world of larger systems
like GitHub. Lastly they discuss Chris & Derek's respective transitions into
more roles with a bit less code and a bit more management. As usual, this one
has a little bit of everything!
On this episode of the Bike Shed, Chris is joined by Christina Entcheva, developer from thoughtbot's New York studio who has been a product manager and designer previously in her career, but has since settled in to her role as a developer.
Chris & Christina share a conversation ranging from their shared love of "boring Rails apps", Christina's recent work with headless CMSs like Contentful & Prismic, and a discussion around Rails performance. Throughout the conversation they touch on theme's of keeping a focus on user needs throughout the work of developing applications.
On this episode of the Bike Shed Chris is joined by George Brocklehurst, development director in thoughtbot's New York studio. The conversation starts with a discussion around progressive enhancement and the state of the modern web, and then shifts to focus on George's recent explorations of machine learning. This episode is a perfect introduction to the topic of ML, and provides a great summary of why you might want to start working with it and how to go about that.
On this episode of the Bike Shed, Chris is joined by Josh Clayton, thoughtbot's managing director in our Boston studio. Chris and Josh spend the episode discussing the various patterns and trends they see in the world of web development. Specifically, they touch on server side frameworks like Ruby on Rails and Phoenix in the Elixir world. In addition, they discuss a variety of front end trends including the move towards typed languages like ReasonML, TypeScript, Elm, PureScript, and Scala.js, as well as frameworks like React, Ember, Angular, and Vue.js.
In this special crossover episode, Chris is joined by Chad Pytel, Co-founder & CEO of thoughtbot and host of Giant Robots Smashing Into Other Giant Robots podcast, to discuss the content, history, and the process of making Upcase, thoughtbot's online learning platform, FREE.
Joël Quenneville joins Chris to discuss Elm, the strongly typed functional programming language for writing reliable client side web apps. They discuss recent changes from the 0.19 release including reduced bundle size from dead code elimination, the somewhat controversial removal of custom operators. Anecdotally, Joël and team saw a reduction from 31.5K to 16.6K in bundle size going from 0.18 to 0.19 and felt no pain from the custom operators removal, so a big net win for them with this new version.
Along the way Joël and Chris detour into the complexity of managing a project and community like Elm's and discuss Joël‘s recent work with the thoughtbot apprentice program. To round things out, Joël and Chris discuss the power of using a type system like Elm's to constrain the valid states of your application and make your apps more robust and maintainable.
Steph Viccari joins Chris for a conversation starting with a discussion of some deployment and orchestration issues Chris was helping out with, followed by some of Steph's recent experiences with JSONB in postgres and the relative trade-offs of unstructured data.
The heart of the conversation revolves around the core processes we use to develop software touching on sprint planning & story points, deadlines, the place for refactoring and code review in the regular cadence of development, and the often lamented retrospective meeting.
Matt Sumner joins Chris for a discussion around Matt's recent adventures with the blockchain and Ethereum, as well as tackling the thorny issue of server-rendered vs client-side apps. They cover a bit of history, a bit of opinion, and some practical considerations to keep in mind when tackling rich client development.
Chris is joined by Paul Smith to discuss Crystal, a statically-typed and compiled language with a Ruby inspired syntax. Paul has spent much of the past few years exploring Crystal and building a new web framework called Lucky.
Paul's infectious enthusiasm for the Crystal language shines through in this discussion covering some of the unique features of Crystal & Lucky, but there is plenty to enjoy even if you're not specifically interested in Crystal.
With Lucky, Paul has done a great job of taking the best of what has been built in other frameworks and bringing it to Crystal, drawing inspiration from Ruby & Rails, Elixir & Phoenix, and even PHP and the Laravel framework. There's something in this episode for everyone!
Chris is joined by Kane Baccigalupi, development director from thoughtbot's San Francisco office to discuss Kane's history in government working for 18F and California State and how those experiences have informed Kane's work since.
Throughout the conversation Chris and Kane discuss their shared desire to hide all implementation details and their love of Ruby for how it allows us to do
that, testing vs test driven development, and approaches for refactoring large
untested systems.
Chris is joined by Rachel Mathew to discuss Rachel's recent experiences with
Scala on a handful of client and side projects. They discuss the benefits of
working within a type system, learning to work with a compiler, and the process
of getting to know a new language and paradigm.
Along the way they dip into the complexity of Twitter as a platform for discussion and making improvements to development workflows.
Chris is joined by German Velasco for a discussion ranging from German's
recent transition to remote working to the wonders of the Elixir language and
the Erlang platform, blockchain, Ethereum, TypeScript, the Language Server
Protocol, and more!
Chris & Derek discuss the world of services, exploring the various forms SOA can take, the oft stated benefits, and some of the pitfalls they commonly see in the
wild. The discussion ranges from alternative architectures, guidelines for how to think about services within your platform, and even includes an anecdote about thoughtbot's foray into the world of SOA on Upcase.
Chris & Derek talk about beginnings and ends, borrowing from their consulting mindset for a conversation spanning CI, deployment, communication, team structure, and everything in between.
After Sean confronts some breaking changes to Diesel, we discuss what we like about Visual Studio Code and how changing your tools can change your perspective.
Sam Phippen joins us to discuss the maintenance burden of supporting old Rubies, service oriented architecture, and explorations of GraphQL and graph databases.
bundle update --conservative docs
Rails performance, rebalancing coherence, and themes from career advice requests.
We're joined by Vaidehi Joshi to discuss her multimedia empire, conference talk prep, getting started with computer science, and the applicability of a computer science education in everyday development work. We wrap the episode with live Q&A from our RailsConf audience.
An ORM that's a pleasure to use with raw SQL when needed? Sean discusses how that can be. Plus, Derek shares a new and exciting way for migrations to break!
We're joined by Aaron Patterson for puns. Aaron also updates us on compacting GC for Ruby and Ruby 2.6's JIT compiler before telling us how he really feels about functional programming.
Chris Toomey joins Derek to talk about their shared experience in Elm and their excitement about GraphQL.
We speak with Olivier Lacan about KeepAChangelog.com, tooling improvements for better developer experience, and the emotional impact of shutting down CodeSchool.com.
Amanda is joined by Alex Sullivan, Android developer at thoughtbot, to discuss the state of React Native and its new competitor from Google, Flutter.
Eileen Uchitelle joins us live from RailsConf to talk about exciting improvements coming to Rails 6, problems encountered by larger Rails apps, strategies for upgrading Rails and more!
Is the bug in Postgres? Sean takes over operations of crates.io and keeps himself very busy. We also wrap up our experience at RailsConf.
We catch up with Nick Means at RailsConf and discuss storytelling, "human error", advice for job seekers, and the idea of licensing software developers.
Derek & Sean discuss their final preparations for RailsConf, the role of Diesel's schema.rs
is in comparison to schema.rb
in Rails, and how Derek took down production.
Derek and Sean discuss ethical concerns in software development and the prospect of licensing software developers.
Sean experiences a frustrating Ruby bug while building tooling to enforce module boundaries in Shopify's monolith. Derek deprecates Rails functionality instead of preparing his talk.
Amanda and Sean discuss Flutter, modeling the game of baseball, and the state of persistence and networking in Android.
Derek shares his experiences with new features in Ruby 2.5 before we turn our ire towards daylight saving time and timezones once more.
yield_self for composable ActiveRecord relations
#merge
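A rough sketch of how the two links above compose ActiveRecord relations; the models, scopes, and columns here are invented for illustration:

```ruby
# yield_self (aliased to #then in Ruby 2.6) keeps conditional scoping in one chain.
def search_posts(relation, query: nil, include_archived: false)
  relation
    .yield_self { |r| query ? r.where("title ILIKE ?", "%#{query}%") : r }
    .yield_self { |r| include_archived ? r : r.where(archived: false) }
end

# #merge folds another relation's conditions into the current one.
Post.joins(:author).merge(Author.where(verified: true))
```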
Amanda, Derek, and Sean discuss style guides, automated code formatting, and the cycle of disillusionment in development work.
Derek and Sean commiserate about the latest generation of MacBooks, Slack, and the state of the Web.
We talk about everyone's favorite Fisher-Price web framework and a small upcoming change to it before pivoting to discuss Derek's experience with his first Elm PR.
We chat about the Falcon Heavy launch before discussing a couple of issues Derek encountered when upgrading to Rails 5.2
Derek and Sean debate the value provided by database migrations written in your programming language of choice versus those written in SQL.
reversible
revert
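As a reminder of what those helpers look like, here's a hypothetical migration; reversible is part of what makes language-level migrations attractive, since even raw SQL can participate in rollback:

```ruby
class BackfillOrderStatus < ActiveRecord::Migration[5.2]
  def change
    add_column :orders, :status, :string

    reversible do |dir|
      # The up block runs on migrate, the down block on rollback.
      dir.up   { execute "UPDATE orders SET status = 'pending' WHERE status IS NULL" }
      dir.down { } # nothing to undo for the backfill itself
    end
  end
end
```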
We discuss the challenges in parallelizing development work and also take a look at what's coming soon in Rails 5.2.
Sean and Derek argue the semantics of versioning and wish for automated reporting on more structured changelogs as a feature of future package managers.
Derek is joined by coworker Sean Doyle and Codecademy’s Alex Clark to discuss the process of test-driven development and the development of a new TDD course for Codecademy.
We chat about how shared global state in tests can cause you to doubt foundational truths of the universe, some issues with Rails system tests, and recent changes in browser behavior.
Capybara.server with Rails system tests
Who should library documentation be written for? How do you, as an author, know what your users will need to know? Should you have long form guides in addition to API documentation? We ask and answer these questions in the context of Sean's work to document Diesel 1.0.
Stick around for the spoiler-filled after show about Star Wars: The Last Jedi.
Amanda joins Derek to discuss KotlinConf, powerful IDEs, our Ralphapalooza hackathon, and the React Native experience from a native mobile developer's perspective.
We discuss a possible ActiveRecord bug Derek encountered and explore the ambiguity of SQL formatting best practices.
We share our favorite talks from RubyConf and discuss how Sean has made ActiveRecord attributes allocation significantly faster with Rust.
Sean is on to a significant ActiveRecord optimization using an extension written in Rust and Derek shares an overdue thanks to an excellent manager.
We discuss patterns and anti-patterns encountered in agile retrospectives and revisit a favorite topic: form objects.
We briefly discuss the renaming of factory_girl to factory_bot before diving into the visitor pattern: what is it, and what are its inherent tradeoffs?
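A compact Ruby sketch of the visitor pattern, using an invented two-node expression tree: each node accepts a visitor (double dispatch), so adding a new operation means adding a new visitor rather than reopening every node class.

```ruby
class NumberNode
  attr_reader :value

  def initialize(value)
    @value = value
  end

  def accept(visitor)
    visitor.visit_number(self)
  end
end

class AdditionNode
  attr_reader :left, :right

  def initialize(left, right)
    @left = left
    @right = right
  end

  def accept(visitor)
    visitor.visit_addition(self)
  end
end

# One operation per visitor; the trade-off is that adding a new *node* type
# now means touching every visitor.
class Evaluator
  def visit_number(node)
    node.value
  end

  def visit_addition(node)
    node.left.accept(self) + node.right.accept(self)
  end
end

tree = AdditionNode.new(NumberNode.new(1), AdditionNode.new(NumberNode.new(2), NumberNode.new(3)))
tree.accept(Evaluator.new) # => 6
```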
Is Database Cleaner necessary anymore? Tune in for our exciting play-by-play reporting on that issue and stick around for chatter on personal defaults for new Rails applications.
We discuss an issue in the interaction between Rails, Chrome, and the HTTP referrer policy before Derek shares his love for GraphQL.
We discuss strategies for fighting back against project management overhead, refactoring workflows, and on-call rotations.
We discuss Bundler warning us to update to a prerelease version and other recent annoyances with our favorite dependency manager. We also wonder what GitHub diff stats can tell you about your contributions to a project and when they might be a smell. Stick around post credits for some spoiler-filled chatter about the first couple episodes of Star Trek: Discovery.
We discuss a major change to Diesel's insert statements in advance of its 1.0 release and reexamine Contracts.ruby after Derek spends some time with it in use.
We share and discuss some user feedback on fakes and mocks, discuss the benefits and drawbacks to FactoryGirl and share exasperation over the handling of the Equifax data breach.
We go inside the RubyConf CFP review process before turning our attention to questions about the impact of code review. Stick around post credits for some spoiler-filled, lukewarm Game of Thrones takes.
Derek and Sean discuss the troubles encountered when code reuse is a goal above all others and strategies to have your reusable cake and eat it too.
Derek and Sean discuss going from zero to code on new projects, writing tests that deal with external services, and a tricky floating point precision bug Sean encountered in ActiveRecord.
The Changelog's Jerod Santo joins the show to talk finding time for, sustaining, and funding open source development.
We do some follow-up on open source fundraising and discuss some interesting patterns in Derek's new client project.
method_added
Sean and Derek are joined by Caleb Thompson and Matthew Mongeau for our annual live episode to discuss lessons learned from past projects, and speaking at conferences.
has_many
We discuss the economics of remote work, ActionDispatch::SystemTest in RSpec, and the use of Patreon on open source projects.
We chat with Justin Searls about testing, programmer personality types, programming communities, and putting spreadsheets on the Internet.
Amanda is joined by SF thoughtbot developers Tony, Josh, & Greg to discuss learning new languages (and whether developers should do that in their free time), machine learning, the future of AR/VR, and tech that strives to make a social difference.
We talk with Cecy Correa about how to hire and get hired.
We discuss a tiny DOS caused when upgrading thoughtbot.com to Rails 5.1 and how Rails could better surface warnings that only occur in your production configuration. We also get an update on multi-table joins in Rust.
We talk to Matt Casper about contributing to Diesel, Rust's ecosystem, and the next big thing.
Amanda joins Sean to discuss all the Android news to come out of Google I/O, Kotlin as a "first class language", and features of Android "O"!
We talk with Aaron Patterson about Ruby and Rails upgrades, and the goal of making Ruby 3 three times faster than Ruby 2.
What’s the deal with green potato chips? Also: RailsConf wrap up and an AST pass refactor for Diesel.
to_sql into a standard AST pass
Follow up about Service Objects and Computer Engineering. Plus, RailsConf prep, code slide woes, and modal pop-ups.
Thank you to our sponsor this week, SparkPost
Is your operating system hosed? That might be related to Rails! We also chat about the trend towards compiled languages.
Thank you to our sponsor this week, SparkPost
Single table inheritance, polymorphic associations, state machines and service objects, oh my!
Thank you to our sponsor this week, SparkPost
Chris Toomey joins to talk about Tell Me When It Closes, Haskell, and GraphQL.
reverse_merge to with_defaults
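For the reverse_merge / with_defaults link above, the two are the same ActiveSupport Hash extension; with_defaults is the newer, more intention-revealing alias (added around Rails 5.1). A small illustration:

```ruby
options = { per_page: 50 }

# Existing keys win; missing keys are filled in from the defaults.
# (These are ActiveSupport extensions to Hash, not core Ruby.)
options.reverse_merge(page: 1, per_page: 25) # => { page: 1, per_page: 50 }
options.with_defaults(page: 1, per_page: 25) # => { page: 1, per_page: 50 }
```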
Thank you to our sponsor this week, SparkPost
Complexity vs Functionality, Validations vs Database Constraints, plus whatever a Cap'n Proto is.
Google's carrot-and-stick HTTPS policies and how playing The Legend of Zelda is like refactoring.
When a hash isn't a hash, GitHub as your Résumé, and porting Crates.io to Diesel.
Going "to" the moon, hidden type errors in our Rails apps, the process of talk prep, and the S3 outage.
Amanda and Sean discuss the evolving stages of open source projects, native apps vs web apps, and space.
Thank you to our sponsor this week, FreshBooks
Sam Phippen helps us celebrate episode 100, as we discuss Diesel bugs, REST, RPC, and more.
LEFT JOINS bug
formaction
Thank you to our sponsor this week, FreshBooks!
We go into the weeds with MySQL and discuss the virtues of database migrations written in SQL.
mysql_real_escape_string
CLIENT_IGNORE_SIGPIPE for MySQL?
reversible in migrations
Thank you to our sponsor this week, FreshBooks!
We discuss complexity and progressive disclosure, garbage collection, and the impenetrable nature of Git.
Thank you to our sponsor this week, FreshBooks!
We wonder why writing parameterized associations in Rails is not easy, and discuss the difficulty in eliminating no-op queries in ActiveRecord. Plus, we discuss how you can give a great RailsConf talk proposal that doesn't have anything to do with Rails.
Thank you to our sponsor this week, FreshBooks!
Baby Ruby, Ruby refinements, Rails discoverability, and annoying polyfills.
- "Send me onesies!"
Thank you to our sponsor this week, FreshBooks!
Amanda is joined by Morgane Santos to discuss the experiences, technology, and development of Virtual Reality.
Thank you to our sponsor this week, FreshBooks!
We discuss the pain of custom inputs in HTML, ActiveRecord bugs, and Rust's Fire Flower.
Thank you to our sponsor this week, FreshBooks!
The impact of codes of conduct on community behavior, shipping a mobile app written in Elm, and yet more to say on SemVer.
We discuss the sneaky performance differences between present?, any?, blank?, and empty? with ActiveRecord, when N+1 is a "feature", and the future of Diesel.
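A rough illustration of the differences that episode gets at, assuming a User model; exact behavior varies a little by Rails version.

    scope = User.where(active: true)

    scope.present? # loads every matching record, then asks the array if it's non-empty
    scope.blank?   # same: materializes the records first
    scope.any?     # without a block, runs a cheap SELECT 1 ... LIMIT 1 existence query
    scope.empty?   # likewise checks existence in SQL instead of loading records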
Derek briefly complains of the staleness of the asset pipeline in Rails 5, before Sean catches Derek up on Rails 5.1's support for Webpack, Yarn, and ES6. We also discuss the pain of deprecations in the upgrade to Rails 5.
We discuss adventures with shared mutable state in Elixir before turning to our thoughts on mocking HTTP interaction and how our approaches may differ depending on the language we’re using.
Ashley Williams joins the show to discuss NPM, Yarn, and the general package manager ecosystem.
Links & Show Notes: yarn install --flat, and --override for Elixir dependencies.
Amanda and Sean discuss talk prep and slide envy before diving into Kotlin 1.0.5, UTF-8 identifiers in programming, and responsive layouts in Android.
We talk about a widespread DNS outage and what steps you might take to avoid or limit your application's exposure to these things in the future.
Links & Show Notes: ALIAS records, ANAME records, and whether a CNAME record can be used at the apex (aka root) of a domain.
Derek chats with Ian Anderson about developing a mobile app for iOS and Android with React Native.
We briefly ponder the very nature of our existence before discussing edge cases and interesting bugs encountered in ActiveRecord.
Links & Show Notes: the from method, the where.not method, the WhereClause class, and deep_munge.
What do we look for when reviewing job applications, interviewing candidates, and pairing with prospective co-workers?
Sean encounters a roadblock in updating Diesel to use Rust’s new soon-to-be-stabilized procedural macros. Derek and Sean discuss the most recent CVE filed for Bundler, which is a lot like the previous CVE filed for Bundler.
We discuss the problems with getting a CVE and the new lightning fast search tool, ripgrep. Sandwiched between those topics, we dive into the colonization of Mars. Yes, that's right, Mars.
Derek and Sean talk through how to handle a security vulnerability that was reported for Clearance, a user authentication library.
What's appropriate for a web middleware stack to provide? Does Rack do too much? Plus, our thoughts on NeoVim and Vim 8.
Derek and Sean talk through some complex SQL before they examine the calluses developed from years of writing software on OS X.
Sean and Amanda discuss the state of Android Development in 2016. Java, Kotlin, Dependency Injection, and Functional Reactive Programming, oh my!
How can you get your open source pull request merged?
Links & Show Notes: "Module#prepend is the end of alias_method_chain" by Justin Weiss.
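A minimal sketch of the pattern that post describes, with made-up Retries and Job names; the prepended module's method runs first and reaches the original implementation via super, no alias_method_chain required.

    module Retries
      def perform
        attempts = 0
        begin
          super # calls Job#perform, because Retries sits ahead of Job in the ancestry
        rescue StandardError
          attempts += 1
          attempts < 3 ? retry : raise
        end
      end
    end

    class Job
      prepend Retries

      def perform
        puts "doing the work"
      end
    end

    Job.new.perform # prints "doing the work", with retries wrapped around it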
Between thoughtbot's Summer Summit and Sean's vacation, we are sadly without a new episode this week. However, we would love you all to check out thoughtbot's newest podcast, interviewing inspirational designers, developers, and other makers in tech, The Laila & Brenda Show!
Give their latest episode a listen here, and if you like it subscribe to their feed however you listen to podcasts!
Derek and Sean discuss hunting down intermittently failing tests, finding unused code in your application, and why you should never ever change your test framework.
We talk through design considerations for a user-visible custom query builder for a high volume ecommerce system.
We discuss Pokémon Go and what its success might mean for software developers before Sean lays out his case for replacing the pg gem and libpq.
Aaron Patterson joins us from RailsConf for puns, performance improvements in Ruby, and AirDropping cats.
Inspired by Nickolas Means’ fantastic RailsConf keynote, we discuss the corollaries between Lockheed Martin’s Skunk Works projects and our software development projects.
Sandi Metz joins us live from RailsConf to talk about the rules, the trouble with naming things, making the right kinds of errors, and conference speaking.
A big thanks to everyone who came out to our live show! A video version of this episode is available on the thoughtbot YouTube Page.
We discuss thoughtbot's increasing use of Elixir and Phoenix and what that means for our Rails work before diving into what's new in Elixir 1.3 and Ecto 2.0.
Sean runs through a Rails bug that sits at the intersection of several magical and confusing Rails features.
Leading Rails contributor Rafael Franca joins us from RailsConf to talk about taking over Sprockets, the future of the asset pipeline in Rails, managing Rails dependencies, and the hard work of software maintenance.
Sean said you'd all "definitely" have the final build of Rails 5 by now. Whoops!
We talk with Terence Lee of Heroku, Bundler, and mruby-cli fame about Apache Kafka and the future of mruby scripting.
While at RailsConf, we talk with Katrina Owen about finding metaphors for software development, the successes and mistakes of Exercism.io, and the benefits of providing code reviews.
Open Mic is back by popular demand, this time in San Francisco. We hear from developers in thoughtbot's San Francisco office about their recent investment time projects.
Derek and Sean discuss some recent issues with exciting language features like pattern matching, macros, and static types.
Sean celebrates Diesel reaching "faster than a SQL string" status before we chat about Rails 5 blockers and the clarity of focus and priorities that only shipping can bring.
"Send me an email every year for my birthday" is an easy thing for a human to understand but it can be deceptively tricky to do with computers. Also tricky for (some) computers: SELECT * FROM
. Wait... what?
DATE_PART
or EXTRACT
EXPLAIN
ANALYZE
VACUUM
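One hedged sketch of why the birthday request is tricky, assuming a users table with a birthdate column (my assumption, not from the show): the query has to match month and day while ignoring the year.

    today = Date.current
    User.where(
      "EXTRACT(MONTH FROM birthdate) = :month AND EXTRACT(DAY FROM birthdate) = :day",
      month: today.month, day: today.day
    )
    # Feb 29 birthdays still need an explicit policy in non-leap years.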
KF (Katherine Fellows) joins the show to chat about successful BridgeFoundry events and creating environments where remote developers, junior and otherwise, can thrive.
Derek and Sean discuss the left-pad saga, how other programming communities are reacting to it, and what you should learn from it as a library or application author.
Links & Show Notes: Is gem yank a security concern?
Thank you to Hired for sponsoring this episode!
Should you rewrite or refactor? What should you consider as you weigh this decision and what exactly constitutes a rewrite anyway?
Thank you to Hired for sponsoring this episode!
We chat with José Valim about bringing light to Elixir's dark corners, the design goals of Ecto, and the future of Elixir, Ecto, and Phoenix.
Links & Show Notes: mix deps.tree in Elixir 1.3, mix app.tree in Elixir 1.3, Ecto.Query.preload, Ecto.Changeset, and github_ecto: an Ecto adapter for the GitHub API.
Is ActiveRecord reinventing Sequel? If it is, does it matter? Derek and Sean discuss that and whether maybe we could all stand to tone down the JavaScript.
Derek and Sean talk about their experience with the Rails 5 betas, how to test against them today, and things that you might want to look out for when updating your app.
Derek shares some Elixir annoyances with Sean and they discuss how a consulting role colors their perception of languages and frameworks, both for better and for worse. Sean provides an update on SQLite and Association support in Diesel.
Laila and Derek go on a tour of the various caching mechanisms available to web applications in general, and Rails specifically. When is the right time to cache and at what level?
Links & Show Notes: Rails.cache.fetch, counter_cache.
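A small sketch of the low-level caching mentioned above; the orders association and total_cents column are assumptions for illustration.

    # Compute an expensive aggregate once, then serve it from the cache for an hour.
    def lifetime_spend(user)
      Rails.cache.fetch(["lifetime_spend", user.id], expires_in: 1.hour) do
        user.orders.sum(:total_cents)
      end
    end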
Derek and Laila discuss Derek's excitement for Elixir and Phoenix. Is Elixir as fun to write as Ruby? Is Phoenix a better Rails?
We enjoy a wide-ranging discussion with Steve Klabnik on the importance of good documentation, the sometimes cloudy definition of a breaking change, the politics of open source contributions, and work/life balance or boundaries.
It's Open Mic day at The Bike Shed. We hear from other thoughtbot designers and developers about what they're excited to be spending their investment time on lately.
Is Everyone Trying Their Best? - The Bike Shed on software quality
The Buffalo Bills' Playoff Drought - The longest current drought in sports
How can an ORM be faster than a SQL String? Laila and Sean discuss the latest happenings in Diesel and why it is that a systems language needs an ORM, anyway.
Software is broken. In this episode, Derek and Sean discuss why exactly it's broken, and what we can do to make it better.
Ruby 2.3 is out! What are we looking forward to trying and what do we think of &. and try? Stick around after the credits for spoiler-filled discussion of Star Wars: The Force Awakens.
Links & Show Notes: "#try might not be doing what you think it’s doing" by Avdi Grimm, a comment from Myron Marston about try in Rails, "&method Passes You!", Hash#dig, and did_you_mean by Yuki Nishijima.
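A quick, illustrative comparison of the Ruby 2.3 features named above, using made-up data.

    user = nil
    user&.name          # => nil; safe navigation short-circuits on nil
    # user.try(:name)   # Rails' try also returns nil here, but additionally
    #                   # swallows missing methods unless you use try!

    params = { user: { address: { city: "Boston" } } }
    params.dig(:user, :address, :city) # => "Boston"
    params.dig(:user, :phone, :area)   # => nil, no NoMethodError along the way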
We discuss the maintenance burden of ActionCable and its dependencies on Rails 5, follow-up on Scenic issues, and discuss implementing migrations anew in Diesel.
Derek shipped Scenic 1.0, which spurs a conversation about semantic versioning and the value of the 1.0 milestone. We discuss what the bar for breaking changes in a library should be and look at some specific changes on tap for Scenic and whether they will or should carry a major version bump.
Sean has shipped early versions of Diesel, an ORM for Rust! We discuss its semantic versioning, the ergonomics of use versus the complexities of implementation, early issues with the API and the road to Diesel 1.0.
We talk about lessons learned from teachable moments both in the moment and decades later.
Links & Show Notes: COPY FROM.
We speak to Grayson Wright about building Administrate, an open source Rails framework for administrative interfaces. What makes Administrate different from existing solutions, and what are the challenges in maintaining high-level dependencies?
Derek and Sean talk about Derek's exploration into Elixir and Phoenix, when and how performance matters, and ways to keep your Rails app fast from day 1.
The ActiveRecord update API is a mess of methods that confuse even ActiveRecord’s maintainer. What are the problems and is there any hope for a solution?
We talk with Yehuda Katz about how much risk is right for you and your app, the sharp tools of high level abstractions, and how our statistical intuition leads us astray on web performance.
Laila and Derek discuss how they have handled forms with complex validation requirements and how to make these forms have a smooth user experience.
Begun, the ad block wars have. Derek debugs an issue that arises from iOS ad blocking and wonders if analytics will move back to the server side. Sean fills us in on how dirty checking works in ActiveRecord and how he's making it faster and better in Rails 5.
Derek and Laila talk about learning Python and Django and discuss how thoughtbot adopts new languages, frameworks, and libraries. What factors influence adoption? How do we share what works and doesn't work?
Sean and Derek explain why you should always use a personal email address in your Git configuration before they dive into Ruby exception handling, and potential MRI proc optimizations.
Links & Show Notes: .mailmap, Kernel#raise documentation, Exception#cause documentation, "raise, but feels uneasy about it", and Proc#=== documentation.
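A minimal sketch of Exception#cause from those notes; the ConfigError name is made up. Ruby records the original exception automatically when you raise inside a rescue block.

    class ConfigError < StandardError; end

    begin
      begin
        Integer("not-a-number")          # raises ArgumentError
      rescue ArgumentError
        raise ConfigError, "bad port setting"
      end
    rescue ConfigError => error
      error.message     # => "bad port setting"
      error.cause.class # => ArgumentError, the exception we were rescuing
    end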
Derek is joined by Laila Winner to discuss Neo4j, the importance of fantastic documentation, and the different types of documentation a project requires.
Links & Show Notes: SignInGuard documentation.
Sean and Derek take some listener questions, and dig into DRY.
Thanks for sending us your questions and feedback. Got more? You can email us at [email protected] or tweet us.
Derek and Sean discuss Microsoft's interest in open source, improving the Rails development story on Windows, and Sean's progress implementing an ORM in Rust.
Are provably correct queries of interest to you? Sean gives a rundown of what a Rusty ORM might be like to build.
Derek and Sean discuss hypothetical changes to Rails routing before turning their attention toward hunting memory bloat and the proposal that strings be frozen by default in Ruby 3.
Links & Show Notes: the disable_with default on submit_tag, form_for, and derailed benchmarks to find memory leaks and bloat.
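Since that episode also touches the proposal to freeze string literals by default in Ruby 3, here is a small sketch of today's opt-in magic comment.

    # frozen_string_literal: true
    # With the magic comment above (it must be the first line of the file),
    # string literals in this file are frozen.

    greeting = "hello"
    greeting.frozen?       # => true
    # greeting << " world" # would raise FrozenError (RuntimeError on older Rubies)

    mutable = +"hello"     # unary + gives you an unfrozen copy when you need one
    mutable << " world"    # => "hello world"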
Sean is joined by Mike Burns to discuss what Ruby and Rails can learn from Python and Django.
Did you know Rails has no integration test suite? What could go wrong?
Sean and Derek circle back on HTTP before diving into unsafe rust, and finally the merits of a small standard library.
This week, Sean and Derek discuss performance and inheriting code. In a stroke of complete madness, Derek decides that turbolinks isn't that bad.
Derek is joined by Gordon Fontenot for a discussion of the JSON API specification, problems consuming it from Swift, and the future of functional programming in Swift.
Richard Schneeman joins The Bike Shed to discuss ruby memory use, horizontal scaling, and tackling open source issues big and small.
Links & Show Notes: mail gem memory use, mime-types memory use, and *_path methods in mailers.
Sean gives Derek a tour of Rust, a new systems language from Mozilla.
This episode of The Bike Shed is sponsored by:
Code School: Entertaining online learning for existing and aspiring developers. Leave a review on our iTunes page to be entered to win a free month of Code School.
Eileen Uchitelle joins the show to discuss performance improvements to ActiveRecord, speeding up integration tests, and contributing to or running open source projects.
Rails Core Team member Godfrey Chan joins the show to demystify rails bug hunting and contributing.
We're joined by Josh Clayton to discuss our differing strategies on testing view behavior, strategies for dealing with brittle feature specs, and what types of tests each of us favor.
Derek and Sean are joined by Sam Phippen from the RSpec core team to discuss RSpec mocks, testing strategies, and minitest.
Links & Show Notes: any_instance to test legacy code, the stub_const method, and how assigns and assert_template are deprecated in Rails 5.
Sean and Derek discuss rails asset dependencies before diving into an overview of animation techniques and C extensions.
We chat with Sam Saffron about performance, benchmarking, and database migration strategies.
Links & Show Notes: default_scope.
Feedback? You can tweet us, email us, or leave a comment on our website.
Grab Bag! Derek and Sean talk about math, augmented reality, RailsConf wrap up, Minimum Viable Products, Accessibility...
Sean, Derek, and Sarah Mei talk about conference speaking, refactoring, and OO vs FP problems.
This week, we're joined by DHH and discuss microservices, monoliths, shared abstractions, and the fate of Action Cable.
Live from RailsConf, Aaron Patterson joins the show to talk about Rails 5, Rack 2, Contributing to Open Source, and cats. We also field audience questions.
Derek and Sean talk about naming, debugging, and the anxiety of conference talks.
Links & Show Notes: the bundle search command.
Sean and Derek talk about the state of Android tooling, refactoring journeys, and an approach to rails form objects.
Pat Brisbin joins Derek to discuss the many advantages of Haskell programming.
note: at 27:01 Pat says "referential integrity" when he meant "referential transparency"; he's very sorry.
Sean and Derek discuss Monoliths, Service Oriented Architecture, and the new Adapter Specific Type Registry in Rails 5.
Derek and Sean discuss what the Attributes API enables, the addition of Relation#or, and paid open source.
Links & Show Notes: composed_of, load_schema (which makes sense now), DelegateClass, and &block on MRI and JRuby.
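A hypothetical example of the Relation#or addition mentioned above; the User model and its columns are assumptions.

    flagged = User.where(admin: true).or(User.where(locked: true))
    # roughly: SELECT "users".* FROM "users" WHERE (admin = TRUE OR locked = TRUE)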
Derek and Sean talk trade schools, sneaky bugs, bad method names, before_filters, and the Superbowl.
Links & Show Notes: adding accessed_fields to the ActiveRecord Attributes API, before_filter and its friends have been deprecated in Rails 5.0, read_attribute_before_typecast, and RangeErrors are hard (Sean's solution to the test placement problem was to not commit the test).
Sean and Derek discuss thoughtful deprecations, backwards compatibility, and other joys of library maintenance.
Derek and Sean discuss various ways of taking the Rails out of your Ruby on Rails application, what folder to put your files in, and the difficulties and rewards of learning new programming languages.
Derek and Sean discuss hunting Rails performance regressions and techniques for improving performance in your web applications.
Sean and Derek take a fresh look at the tradeoffs in writing CoffeeScript and whether we should be using an ES6 transpiler instead.
Links & Show Notes: map, reduce, and forEach.
Derek and Sean discuss Sean's commit access to Rails, what's coming in Rails 4.2, and how to go about making Rails code better.
Sean and Derek discuss lessons learned from following Sandi Metz' rules on a project and the overall impact of rules on code.
Links & Show Notes: the method_added method.