100 episodes • Length: 40 min • Monthly
Are you an enterprise data or product leader seeking to increase the user adoption and business value of your ML/AI and analytical data products?
While it is easier than ever to create ML and analytics from a technology perspective, do you find that getting users to use, buyers to buy, and stakeholders to make informed decisions with data remains challenging?
If you lead an enterprise data team, have you heard that a “data product” approach can help—but you’re not sure what that means, or whether software product management and UX design principles can really change consumption of ML and analytics?
My name is Brian T. O’Neill, and on Experiencing Data—one of the top 2% of podcasts in the world—I offer you a consulting product designer’s perspective on why simply creating ML models and analytics dashboards isn’t sufficient to routinely produce outcomes for your users, customers, and stakeholders. My goal is to help you design more useful, usable, and delightful data products by better understanding the needs of your users, customers, and business sponsors. After all, you can’t produce business value with data if the humans in the loop can’t or won’t use your solutions.
Every 2 weeks, I release solo episodes and interviews with chief data officers, data product management leaders, and top UX design and research professionals working at the intersection of ML/AI, analytics, design and product—and now, I’m inviting you to join the #ExperiencingData listenership.
Transcripts, 1-page summaries and quotes available at: https://designingforanalytics.com/ed
ABOUT THE HOST
Brian T. O’Neill is the Founder and Principal of Designing for Analytics, an independent consultancy helping technology leaders turn their data into valuable data products. He is also the founder of The Data Product Leadership Community. For over 25 years, he has worked with companies including DellEMC, Tripadvisor, Fidelity, NetApp, Roche, Abbvie, and several SAAS startups. He has spoken internationally, giving talks at O’Reilly Strata, Enterprise Data World, the International Institute for Analytics Symposium, Predictive Analytics World, and Boston College. Brian also hosts the highly-rated podcast Experiencing Data, advises students in MIT’s Sandbox Innovation Fund and has been published by O’Reilly Media. He is also a professional percussionist who has backed up artists like The Who and Donna Summer, and he’s graced the stages of Carnegie Hall and The Kennedy Center.
Subscribe to Brian’s Insights mailing list at https://designingforanalytics.com/list.
The podcast Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management) is created by Brian T. O’Neill from Designing for Analytics.
With GenAI and LLMs comes great potential to delight and damage customer relationships—both during the sale, and in the UI/UX. However, are B2B AI product teams actually producing real outcomes, on the business side and the UX side, such that customers find these products easy to buy, trustworthy and indispensable?
What is changing with customer problems as a result of LLM and GenAI technologies becoming more readily available to implement into B2B software? Anything?
Is your current product or feature development being driven by the fact that you might now be able to solve it with AI? An “AI-first” team sounds cutting edge, but is that really what determines what a customer will actually buy from you?
Today I want to talk to you about the interplay of GenAI, customer trust (both user and buyer trust), and the role of UX in products using probabilistic technology.
These thoughts are based on my own perceptions as a “user” of AI “solutions,” (quotes intentional!), conversations with prospects and clients at my company (Designing for Analytics), as well as the bright minds I mentor over at the MIT Sandbox innovation fund. I also wrote an article about this subject if you’d rather read an abridged version of my thoughts.
Highlights/ Skip to:
Links
Today, I’m chatting with Adam Berke, the Chief Product Officer at The Predictive Index. For 70 years, The Predictive Index has helped customers hire the right employees, and after the merger with Charma, their products now nurture the employee/manager relationship. This is something right up Adam’s alley, as he co-founded Charma, the employee and workflow performance management software company, before the two organizations merged back in 2023.
You’ll hear Adam talk about the first-time challenges (and successes) that come with integrating two products and two product teams, and why squashing ambiguity by overindexing on preparation (i.e., coming prepared with new org charts ASAP) is essential during the process.
Integrating behavioral science into the world of data is what has allowed The Predictive Index to thrive since the 1950s. While this is the company’s main selling point, Adam explains how the science-forward approach can still create some disagreements (and learning opportunities) with The Predictive Index’s legacy customers.
Highlights/ Skip to:
Links Referenced
Today, I’m talking to Andy Sutton, GM of Data and AI at Endeavour Group, Australia's largest liquor and hospitality company. In this episode, Andy—who is also a member of the Data Product Leadership Community (DPLC)—shares his journey from traditional, functional analytics to a product-led approach that drives their mission to leverage data and personalization to build the “Spotify for wines.” This shift has greatly transformed how Endeavour’s digital and data teams work together, and Andy explains how their advanced analytics work has paid off in terms of the company’s value and profitability.
You’ll learn about the often overlooked importance of relationships in a data-driven world, and how Andy sees the importance of understanding how users do their job in the wild (with and without your product(s) in hand). Earlier this year, Andy also gave the DPLC community a deeper look at how they brew data products at EDG, and that recording is available to our members in the archive.
We covered:
Quotes from Today’s Episode
Links Referenced
After getting started in construction management, Anna Jacobson traded in the hard hat for the world of data products and operations at a VC company. Anna, who has a structural engineering undergrad and a master’s in data science, is also a Founding Member of the Data Product Leadership Community (DPLC). However, her work with data products is more “accidental” and is just part of her responsibility at Operator Collective. Nonetheless, Anna had a lot to share about building data products, dashboards, and insights for users—including resistant ones!
That resistance is precisely what I wanted to talk to her about in this episode: how does Anna get somebody to adopt a data product to which they may be apathetic, if not completely resistant?
At the end of the episode, Anna gives us a sneak peek at what she’s planning to talk about in our final 2024 live DPLC group discussion coming up on 12/18/2024.
We covered:
Links Referenced
LinkedIn: https://www.linkedin.com/in/anna-ching-jacobson/
DPLC (Data Product Leadership Community): https://designingforanalytics.com/community
R&D for materials-based products can be expensive, because improving a product’s materials takes a lot of experimentation that historically has been slow to execute. In traditional labs, you might change one variable, re-run your experiment, and see if the data shows improvements in your desired attributes (e.g. strength, shininess, texture/feel, power retention, temperature, stability, etc.). However, today, there is a way to leverage machine learning and AI to reduce the number of experiments a material scientist needs to run to gain the improvements they seek. Materials scientists spend a lot of time in the lab—away from a computer screen—so how do you design a desirable informatics SAAS that actually works, and fits into the workflow of these end users?
As the Chief Product Officer at MaterialsZone, Ori Yudilevich came on Experiencing Data with me to talk about this challenge and how his PM, UX, and data science teams work together to produce a SAAS product that makes the benefits of materials informatics so valuable that materials scientists depend on their solution to be time and cost-efficient with their R&D efforts.
We covered:
Quotes from Today’s Episode
Links Referenced
Website: https://www.materials.zone
LinkedIn: https://www.linkedin.com/in/oriyudilevich/
Email: [email protected]
Jeremy Forman joins us to open up about the hurdles (and successes) that come with building data products for pharmaceutical companies. Although he’s new to Pfizer, Jeremy has years of experience leading data teams at organizations like Seagen and the Bill and Melinda Gates Foundation. He currently serves in a specialized role in Pfizer’s R&D department, building AI and analytical data products for scientists and researchers.
Jeremy gave us a good look at his team makeup, and in particular, how his data product analysts and UX designers work with pharmaceutical scientists and domain experts to build data-driven solutions. We talked a good deal about how and when UX design plays a role in Pfizer’s data products, including a GenAI-based application they recently launched internally.
Highlights/ Skip to:
Quotes from Today’s Episode
Links Referenced
LinkedIn: https://www.linkedin.com/in/jeremy-forman-6b982710/
The relationship between AI and ethics is both developing and delicate. On one hand, the GenAI advancements to date are impressive. On the other, extreme care needs to be taken as this tech continues to quickly become more commonplace in our lives. In today’s episode, Ovetta Sampson and I examine the crossroads ahead for designing AI and GenAI user experiences.
While professionals and the general public are eager to embrace new products, recent breakthroughs, and more, we still need to have some guardrails in place. If we don’t, data can easily get mishandled, and people could get hurt. Ovetta possesses firsthand experience working on these issues as they sprout up. We look at who should be on a team designing an AI UX, and explore the risks associated with GenAI, ethics, and what we need to be thinking about going forward.
Highlights/ Skip to:
Quotes from Today’s Episode
Sometimes DIY UI/UX design only gets you so far—and you know it’s time for outside help. One thing prospects from SAAS analytics and data-related product companies often ask me is what things are like in the other guy/gal’s backyard. They want to compare their situation to others like them. So, today, I want to share some of the common “themes” I see that usually are the root causes of what leads to a phone call with me.
By the time I am on the phone with most prospects who already have a product in market, they’re usually having significant problems with one or more of the following: sales friction (product value is opaque); low adoption/renewal worries (user apathy); customer complaints about the UI/UX being hard to use; velocity (the team is doing tons of work, but leadership isn’t seeing progress); and the like.
I’m hoping today’s episode will explain some of the root causes that may lead to these issues — so you can avoid them in your data product building work!
Highlights/ Skip to:
Quotes from Today’s Episode
In today’s episode, I’m joined by John Felushko, a product manager at LabStats who impressed me after we recently had a 1x1 call together. John and his team have developed a successful product that helps universities track and optimize their software and hardware usage so schools make smart investments. However, John also shares how culture and value are closely tied together—and why their product isn’t a fit for every school or every country. John shares how important customer relationships are, how his team designs great analytics user experiences, how they do user research, and what he learned from making high-end winter sports products that’s relevant to leading a SAAS analytics product. Combined with John’s background in history and the political economy of finance, John paints some very colorful stories about what they’re getting right—and how they’ve course corrected over the years at LabStats.
Highlights/ Skip to:
Quotes from Today’s Episode
In today’s episode, I’m going to perhaps work myself out of some consulting engagements, but hey, that’s ok! True consulting is about service—not PPT decks with strategies and tiers of people attached to rate cards. Specifically today, I decided to reframe a topic and approach it from the opposite/negative side. So, instead of telling you when the right time is to get UX design help for your enterprise SAAS analytics or AI product(s), today I’m going to tell you when you should NOT get help!
Reframing this was really fun and made me think a lot as I recorded the episode. Some of these reasons aren’t necessarily representative of what I believe, but rather what I’ve heard from clients and prospects over 25 years—what they believe. For each of these, I’m also giving a counterargument, so hopefully, you get both sides of the coin.
Finally, analytical thinkers, especially data product managers it seems, often want to quantify all forms of value they produce in hard monetary units—and so in this episode, I’m also going to talk about other forms of value that products can create that are worth paying for—and how mushy things like “feelings” might just come into play ;-) Ready?
Highlights/ Skip to:
Quotes from Today’s Episode
Due to a technical glitch that ended up unpublishing this episode right after it was originally released, Episode 151 is a replay of my conversation with Zalak Trivedi from this past March. Please enjoy our chat if you missed it the first time around!
Thanks,
Brian
Links
Sigma Computing: https://sigmacomputing.com
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/trivedizalak/
Sigma Computing Embedded: https://sigmacomputing.com/embedded
About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted
“Last week was a great year in GenAI,” jokes Mark Ramsey—and it’s a great philosophy to have as LLM tools especially continue to evolve at such a rapid rate. This week, you’ll get to hear my fun and insightful chat with Mark from Ramsey International about the world of large language models (LLMs) and how we make useful UXs out of them in the enterprise.
Mark shared some fascinating insights about using a company’s website information (data) as a place to pilot a LLM project, avoiding privacy landmines, and how re-ranking of models leads to better LLM response accuracy. We also talked about the importance of real human testing to ensure LLM chatbots and AI tools truly delight users. From amusing anecdotes about the spinning beach ball on macOS to envisioning a future where AI-driven chat interfaces outshine traditional BI tools, this episode is packed with forward-looking ideas and a touch of humor.
Highlights/ Skip to:
Quotes from Today’s Episode
Guess what? Data science and AI initiatives are still failing here in 2024—despite widespread awareness. Is that news? Candidly, you’ll hear me share with Evan Shellshear—author of the new book Why Data Science Projects Fail: The Harsh Realities of Implementing AI and Analytics—about how much I actually didn’t want to talk about this story originally on my podcast—because it’s not news! However, what is news is what the data says behind Evan’s findings—and guess what? It’s not the technology.
In our chat, Evan shares why he wanted to leverage a human approach to understand the root cause of multiple organizations’ failures and how this approach highlighted the disconnect between data scientists and decision-makers. He explains the human factors at play, such as poor problem surfacing and organizational culture challenges—and how these human-centered design skills are rarely taught or offered to data scientists. The conversation delves into why these failures are more prevalent in data science compared to other fields, attributing it to the complexity and scale of data-related problems. We also discuss how analytically mature companies can mitigate these issues through strategic approaches and stakeholder buy-in. Join us as we delve into these critical insights for improving data science project outcomes.
Highlights/ Skip to:
Quotes from Today’s Episode
Links
Ready for more ideas about UX for AI and LLM applications in enterprise environments? In part 2 of my topic on UX considerations for LLMs, I explore how an LLM might be used for a fictitious use case at an insurance company—specifically, to help internal tools teams get rapid access to primary qualitative user research. (Yes, it’s a little “meta”, and I’m also trying to nudge you with this hypothetical example—no secret!) ;-) My goal with these episodes is to share questions you might want to ask yourself such that any use of an LLM is actually contributing to a positive UX outcome. Join me as I cover the implications for design, the importance of foundational data quality, the balance between creative inspiration and factual accuracy, and the never-ending discussion of how we might handle hallucinations and errors posing as “facts”—all with a UX angle. At the end, I also share a personal story where I used an LLM to help me do some shopping for my favorite product: TRIP INSURANCE! (NOT!)
Highlights/ Skip to:
Quotes from Today’s Episode
Links
Let’s talk about design for AI (which more and more, I’m agreeing means GenAI to those outside the data space). The hype around GenAI and LLMs—particularly as it relates to dropping these in as features into a software application or product—seems to me, at this time, to largely be driven by FOMO rather than real value. In this “part 1” episode, I look at the importance of solid user experience design and outcome-oriented thinking when deploying LLMs into enterprise products. Challenges with immature AI UIs, the role of context, the constant game of understanding what accuracy means (and how much this matters), and the potential impact on human workers are also examined. Through a hypothetical scenario, I illustrate the complexities of using LLMs in practical applications, stressing the need for careful consideration of benchmarks and the acceptance of GenAI's risks.
I also want to note that LLMs are a very immature space in terms of UI/UX design—even if the foundation models continue to mature at a rapid pace. As such, this episode is more about the questions and mindset I would be considering when integrating LLMs into enterprise software more than a suggestion of “best practices.”
Highlights/ Skip to:
Quotes from Today’s Episode
Links
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI). Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
Resources and Links:
Quotes from Today’s Episode
The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)
Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through each step—this is like Amazon’s seven-step checkout process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)
Wait, I’m talking to a head of data management at a tech company? Why!? Well, today I'm joined by Malcolm Hawker to get his perspective around data products and what he’s seeing out in the wild as Head of Data Management at Profisee. Why Malcolm? Malcolm was a head of product in prior roles, and for several years, I’ve enjoyed his musings on LinkedIn about the value of a product-oriented approach to ML and analytics. We had a chance to meet at CDOIQ in 2023 as well, and he went on my “need to do an episode” list!
According to Malcolm, empathy is the secret to addressing key UX questions that ensure adoption and business value. He also emphasizes the need for data experts to develop business skills so that they're seen as equals by their customers. During our chat, Malcolm stresses the benefits of a product- and customer-centric approach to data products and what data professionals can learn by approaching problem-solving with a product orientation.
Highlights/ Skip to:
Welcome to another curated, Promoted Episode of Experiencing Data!
In episode 144, Shashank Garg, Co-Founder and CEO of Infocepts, joins me to explore whether all this discussion of data products out on the web actually has substance and is worth the perceived extra effort. Do we always need to take a product approach for ML and analytics initiatives? Shashank dives into how Infocepts approaches the creation of data solutions that are designed to be actionable within specific business workflows—and as I often do, I started out by asking Shashank how he and Infocepts define the term “data product.” We discuss a few real-world applications Infocepts has built, and the measurable impact of these data products—as well as some of the challenges they’ve faced that your team might face as well. Skill sets also came up; who does design? Who takes ownership of the product/value side? And of course, we touch a bit on GenAI.
Highlights/ Skip to
Quotes from Today’s Episode
Links
Welcome back! In today's solo episode, I share the top five struggles that enterprise SAAS leaders have in the analytics/insight/decision support space that most frequently leads them to think they have a UI/UX design problem that has to be addressed. A lot of today's episode will talk about "slow creep," unaddressed design problems that gradually build up over time and begin to impact both UX and your revenue negatively. I will also share 20 UI and UX design problems I often see (even if clients do not!) that, when left unaddressed, may create sales friction, adoption problems, churn, or unhappy end users. If you work at a software company or are directly monetizing an ML or analytical data product, this episode is for you!
Highlights/ Skip to
Welcome to a special edition of Experiencing Data. This episode is the audio capture from a live Crowdcast video webinar I gave on April 26th, 2024, where I conducted a mini UI/UX design audit of a new podcast analytics service that Chris Hill, CEO of Humblepod, is working on to help podcast hosts grow their shows. Humblepod is also the team behind the scenes of Experiencing Data, and Chris had asked me to take a look at his new “Listener Lifecycle” tool to see if we could find ways to improve the UX and visualizations in the tool, how we might productize this MVP in the future, and how improving the tool’s design might help Chris show his prospective podcast clients how their listener data could help them grow their listenership and “true fans.”
On a personal note, it was fun to talk to Chris on the show given we speak every week: Humblepod has been my trusted resource for audio mixing, transcription, and show note summarizing for probably over 100 of the most recent episodes of Experiencing Data. It was also fun to do a “live recording” with an audience—and we did answer questions in the full video version. (If you missed the invite, join my Insights mailing list to get notified of future free webinars).
To watch the full audio and video recording on Crowdcast, free, head over to: https://www.crowdcast.io/c/podcast-analytics-ui-ux-design
Highlights/ Skip to:
In this week's episode of Experiencing Data, I'm joined by Duncan Milne, Director of Data Investment & Product Management at the Royal Bank of Canada (RBC). Today, Duncan (who is also a member of the DPLC) gives a preview of his upcoming webinar on April 24, 2024 entitled, “Is that Data Product Worth Building? Estimating Economic Value…Before You Build It!” Duncan shares his experience of implementing a product mindset within RBC's Chief Data Office, and he explains some of the challenges, successes, and insights gained along the way. He emphasizes the critical role of understanding user needs and evaluating the economic impact of data products—before they are built. Duncan was gracious enough to let us peek inside and see a transformation that is currently in progress, and I’m excited to check out his webinar this month!
Highlights/ Skip to:
This week on Experiencing Data, I chat with a new kindred spirit! Recently, I connected with Thabata Romanowski—better known as "T from Data Rocks NZ"—to discuss her experience applying UX design principles to modern analytical data products and dashboards. T walks us through her experience working as a data analyst in the mining sector, sharing the journey of how these experiences laid the foundation for her transition to data visualization. Now, she specializes in transforming complex, industry-specific data sets into intuitive, user-friendly visual representations, and addresses the challenges faced by the analytics teams she supports through her design business. T and I tackle common misconceptions about design in the analytics field, discuss how we communicate and educate non-designers on applying UX design principles to their dashboard and application design work, and address the problem with "pretty charts." We also explore some of the core ideas in T's Design Manifesto, including principles like being purposeful, context-sensitive, collaborative, and humanistic—all aimed at increasing user adoption and business value by improving UX.
Highlights/ Skip to:
This week on Experiencing Data, something new as promised at the beginning of the year. Today, I’m exploring the world of embedded analytics with Zalak Trivedi from Sigma Computing—and this is also the first approved Promoted Episode on the podcast. In today’s episode, Zalak shares his journey as the product lead for Sigma’s embedded analytics and reporting solution, which seeks to accelerate and simplify the deployment of decision support dashboards to SAAS companies’ customers. Right there, we have the first challenge that Zalak was willing to dig into with me: designing a platform UX when we have multiple stakeholder and user types. In Sigma’s case, this means Sigma’s buyers, the developers at these SAAS companies who integrate Sigma into their products, and then the actual customers of these SAAS companies who will be the final end users of the resulting dashboards. We also discuss the challenges of creating products that serve both beginners and experts, and how AI is being used in the BI industry.
Highlights/ Skip to:
Sigma Computing: https://sigmacomputing.com
Email: [email protected]
LinkedIn: https://www.linkedin.com/in/trivedizalak/
Sigma Computing Embedded: https://sigmacomputing.com/embedded
About Promoted Episodes on Experiencing Data: https://designingforanalytics.com/promoted
In this episode of Experiencing Data, I speak with Ellen Chisa, Partner at BoldStart Ventures, about what she’s seeing in the venture capital space around AI-driven products and companies—particularly with all the new GenAI capabilities that have emerged in the last year. Ellen and I first met when we were both engaged in travel tech startups in Boston over a decade ago, so it was great to get her current perspective being on the “other side” of products and companies working as a VC. Ellen draws on her experience in product management and design to discuss how AI could democratize software creation and streamline backend coding, design integration, and analytics. We also delve into her work at Dark and the future prospects for developer tools and SaaS platforms. Given Ellen’s background in product management, human-centered design, and now VC, I thought she would have a lot to share—and she did!
This week, I'm chatting with Karen Meppen, a founding member of the Data Product Leadership Community and a Data Product Architect and Client Services Director at Hakkoda. Today, we're tackling the difficult topic of developing data products in situations where a product-oriented culture and data infrastructures may still be emerging or “at odds” with a human-centered approach. Karen brings extensive experience and a strong belief in how to effectively negotiate the early stages of data maturity. Together we look at the major hurdles that businesses encounter when trying to properly exploit data products, as well as the necessity of leadership support and strategy alignment in these initiatives. Karen's insights offer a roadmap for those seeking to adopt a product and UX-driven methodology when significant tech or cultural hurdles may exist.
Highlights/ Skip to:
This week I’m chatting with Caroline Zimmerman, Director of Data Products and Strategy at Profusion. Caroline shares her journey through the school of hard knocks that led to her discovery that incorporating more extensive UX research into the data product design process improves outcomes. We explore the complicated nature of discovering and building a better design process, how to engage end users so they actually make time for research, and why understanding how to navigate interdepartmental politics is necessary in the world of data and product design. Caroline reveals the pivotal moment that changed her approach to data product design, as well as her learnings from evolving data products with the users as their needs and business strategies change. Lastly, Caroline and I explore what the future of data product leadership looks like and Caroline shares why there's never been a better time to work in data.
Highlights/ Skip to:
This week, I’m chatting with Steve Portigal, who is the Principal of Portigal Consulting and the author of Interviewing Users. We discuss the changes that prompted him to release a second edition of his book 10 years after its initial release, and dive into the best practices that any team can implement to start unlocking the value of data product UX research. Steve explains that the key to making time for user research is knowing what business value you’re after, not simply having a list of research questions. We then role-play through some in-depth examples of real-life experiences we’ve seen from both end users and leadership when it comes to implementing a user research strategy. Throughout our conversation, we come back to the idea that even taking imperfect action towards doing user research can lead to increased data product adoption and business value.
Highlights/ Skip to:
In this episode, I’m chatting with former Gartner analyst Sanjeev Mohan who is the Co-Author of Data Products for Dummies. Throughout our conversation, Sanjeev shares his expertise on the evolution of data products, and what he’s seen as a result of implementing practices that prioritize solving for use cases and business value. Sanjeev also shares a new approach of structuring organizations to best implement ownership and accountability of data product outcomes. Sanjeev and I also explore the common challenges of product adoption and who is responsible for user experience. I purposefully had Sanjeev on the show because I think we have pretty different perspectives from which we see the data product space.
Highlights/ Skip to:
Today I am sharing some highlights for 2023 from the podcast, and also letting you all know I’ll be taking a break from the podcast for the rest of December, but I’ll be back with a new episode on January 9th, 2024. I’ve also got two links to share with you—details inside!
Transcript
Greetings everyone - I’m taking a little break from Experiencing Data over December of 2023, but I’ll be back in January with more interviews and insights on leveraging UX design and product management to create indispensable data products, machine learning apps, and decision support tools.
Experiencing Data turned five years old this past November, with over 130 episodes to date! I still can’t believe it’s been going that long and how far we’ve come.
Some highlights for me in 2023 included launching the Data Product Leadership Community, finding out that the show is now in the top 2% of all podcasts worldwide according to ListenNotes, and most of all, hearing from you that the podcast, and my writing, and the guests that I have brought on are having an impact on your work, your careers, and hopefully the lives of your customers, users, and stakeholders as well!
So, for now, I’ve got just two links for you:
If you’re wondering how to either:
…just head over to designingforanalytics.com/podcast and you’ll get links to all those things there.
And secondly, if you need help increasing the customer adoption, delight, business value, or usability of your analytics and machine learning applications in 2024, I invite you to set up a free 1-on-1 discovery call with me.
You bring the questions, I’ll bring my ears, and by the end of the call, I’ll give you my best advice on how to move forward with your situation – whether it’s working with me or not. To schedule one of those free discovery calls, visit designingforanalytics.com/go
And finally, there will be some news coming out next year about the show, as well as my business, so I hope you’ll hop on the mailing list; that’s probably the best place to stay tuned. And if you celebrate holidays in December and January, I hope they’re safe, enjoyable, and rejuvenating. Until 2024, stay tuned right here - and in the words of the great Arnold Schwarzenegger, I’ll be back.
In this conversation with Klara Lindner, Service Designer at diconium data, we explore how behavioral science and UX can be used to increase adoption of data products. Klara describes how she went from having a highly technical career as an electrical engineer and being the founder of a solar startup to her current role in service design for data products. Klara shares powerful insights into the value of user research and human-centered design, including one which stopped me in my tracks during this episode: how the people making data products and evangelizing data-driven decision making aren’t actually following their own advice when it comes to designing their data products. Klara and I also explore some easy user research techniques that data professionals can use, and discuss who should ultimately be responsible for user adoption of data products. Lastly, Klara gives us a peek at her upcoming December 19th, 2023 webinar with The Data Product Leadership Community (DPLC), where she will be going deeper on two frameworks from psychology and behavioral science that teams can use to increase adoption of data products. Klara is also a founding member of the DPLC and was one of—if not the very first—design/UX professionals to join.
Highlights/ Skip to:
Links
This week I’m covering Part 1 of the 15 Ways to Increase User Adoption of Data Products, which is based on an article I wrote for subscribers of my mailing list. Throughout this episode, I describe why focusing on empathy, outcomes, and user experience leads to not only better data products, but also better business outcomes. The focus of this episode is to show you that it’s completely possible to take a human-centered approach to data product development without mandating behavioral changes, and to show how this approach benefits not just end users, but also the businesses and employees creating these data products.
Highlights/ Skip to:
Links:
Today I’m joined by Nick Zervoudis, Data Product Manager at CKDelta. As we dive into his career and background, Nick shares insights into his approach when it comes to developing both internal and external data products. Nick explains why he feels that a software engineering approach is the best way to develop a product that could have multiple applications, as well as the unique way his team is structured to best handle the needs of both internal and external customers. He also talks about the UX design course he took, how that affected his data product work and research with users, and his thoughts on dashboard design. We discuss common themes he’s observed when data product teams get it wrong, and how he manages feelings of imposter syndrome in his career as a DPM.
Highlights/ Skip to:
Today I’m joined by Marnix van de Stolpe, Product Owner at Coolblue in the area of data science. Throughout our conversation, Marnix shares the story of how he joined a data science team that was developing a solution that was too focused on the delivery of a data-science metric that was not on track to solve a clear customer problem. We discuss how Marnix came to the difficult decision to throw out 18 months of data science work, what it was like to switch to a human-centered, product approach, and the challenges that came with it. Marnix shares the impact this decision had on his team and the stakeholders involved, as well as the impact on his personal career and the advice he would give to others who find themselves in the same position. Marnix is also a Founding Member of the Data Product Leadership Community and will be going much more into the details and his experience live on Zoom on November 16 @ 2pm ET for members.
Highlights/ Skip to:
Today I’m joined by Vishal Singh, Head of Data Products at Starburst and co-author of the newly published e-book, Data Products for Dummies. Throughout our conversation, Vishal explains how the variations in definitions for a data product actually led to the creation of the e-book, and we discuss the differences between our two definitions. Vishal gives a detailed description of how he believes Data Product Managers should be conducting their discovery and gathering feedback from end users, and how his team evaluates whether their data products are truly successful and user-friendly.
Highlights/ Skip to:
Today I’m joined by Jonathan Cairns-Terry, who is the Head of Insight Products at the Care Quality Commission, the regulator of health and social care in England. Jonathan recently joined their data team and is working to transform their approach to be more product-led and user-centric. Throughout our conversation, Jonathan shares valuable insights into what the first year of that type of shift looks like, why it’s important to focus on outcomes, and how he measures progress. Jonathan and I explore the signals that told Jonathan it was time for his team to invest in a designer, the benefits he’s gotten from UX research on his team, and the recent successes his team is seeing as a result of implementing this approach. Jonathan is also a Founding Member of the Data Product Leadership Community, and we discuss his upcoming webinar for the group on Oct 12, 2023.
Highlights/ Skip to:
Links
Today I’m joined by Anthony Deighton, General Manager of Data Products at Tamr. Throughout our conversation, Anthony unpacks his definition of a data product and we discuss whether or not he feels that Tamr itself is actually a data product. Anthony shares his views on why it’s so critical to focus on solving for customer needs and not simply the newest and shiniest technology. We also discuss the challenges that come with building a product that’s designed to facilitate the creation of better internal data products, as well as where we are in this new wave of data product management, and the evolution of the role.
Highlights/ Skip to:
Today I’m joined by Vera Liao, Principal Researcher at Microsoft. Vera is a part of the FATE (Fairness, Accountability, Transparency, and Ethics of AI) group, and her research centers around the ethics, explainability, and interpretability of AI products. She is particularly focused on how designers design for explainability. Throughout our conversation, we focus on the importance of taking a human-centered approach to rendering model explainability within a UI, and why incorporating users during the design process informs the data science work and leads to better outcomes. Vera also shares some research on why example-based explanations tend to outperform [model] feature-based explanations, and why traditional XAI methods like LIME and SHAP aren’t the solution to every explainability problem a user may have.
Highlights/ Skip to:
Links
In this episode, I give an overview of my PiCAA Framework, which is a framework I shared at my keynote talk for Netguru’s annual conference, Burning Minds. This framework helps with brainstorming machine learning use cases or reverse engineering them, starting with the tactic. Throughout the episode, I give context to the preliminary types of work and preparation you and your team would want to do before implementing PiCAA, as well as the process and potential pitfalls you may run into, and the end results that make it a beneficial tool to experiment with.
Highlights/ Skip to:
Today I’m wrapping up my observations from the CDOIQ Symposium and sharing what’s new in the world of data. I was only able to attend a handful of sessions, but they were primarily ones tied to the topic of data products, which, of course, brings us to “What’s a data product?” During this episode, I cover some of what I’ve been hearing about the definition of this word, and I also share my revised v2 definition. I also walk through some of the questions that CDOs and fellow attendees were asking at the sessions I went to and a few reactions to those questions. Finally, I announce an exciting development on the launch of the Data Product Leadership Community.
Highlights/ Skip to:
Links
Today I’m answering a question that was submitted to the show by listener Will Angel, who asks how he can prioritize and scale effective discovery throughout the data product development process. Throughout this episode, I explain why discovery work is a process that should be taking place throughout the lifecycle of a project, rather than a defined period at the start of the project. I also emphasize the value of understanding the benefit users will see from the product as the main goal, and how to streamline the effectiveness of the discovery process.
Highlights/ Skip to:
Today I’m chatting with Peter Everill, who is the Head of Data Products for Analytics and ML Designs at the UK grocery brand, Sainsbury’s. Peter is also a founding member of the Data Product Leadership Community. Peter shares insights on why his team spends so much time conducting discovery work with users, and how that leads to higher adoption and in turn, business value. Peter also gives us his in-depth definition of a data product, including the three components of a data product and the four types of data products he’s encountered. He also shares the 8-step product management methodology that his team uses to develop data products that truly deliver value to end users. Pete also shares the #1 resource he would invest in right now to make things better for his team and their work.
Highlights/ Skip to:
Today I’m continuing my conversation with Nadiem von Heydebrand, CEO of Mindfuel. In the conclusion of this special 2-part episode, Nadiem and I discuss the role of a Data Product Manager in depth. Nadiem reveals which fields data product managers are currently coming from, and how a new data product manager with a non-technical background can set themselves up for success in this new role. He also walks through his portfolio approach to data product management, and how to prioritize use cases when taking on a data product management role. Toward the end, Nadiem also shares personal examples of how he’s employed these strategies, why he feels it’s so important for engineers to be able to see and understand the impact of their work, and best practices around developing a data product team.
Highlights / Skip to:
The conversation with my next guest was going so deep and so well…it became a two-part episode! Today I’m chatting with Nadiem von Heydebrand, CEO of Mindfuel. Nadiem’s career journey led him from data science to data product management, and in this first part, we focus on the skills of data product management (DPM), including design. In part 2, we jump more into Nadiem’s take on the role of the DPM. Nadiem gives actionable insights into the realities of data product management, from the challenges of actually being able to talk to your end users, to focusing on the problems and unarticulated needs of your users rather than solutions. Nadiem and I also discuss how data product managers oversee a portfolio of initiatives, and why it’s important to view that portfolio as a series of investments. Nadiem also emphasizes the value of having designers on a data team, and why he hopes we see more designers in the industry.
Highlights/ Skip to:
Today I’m chatting with Kyle Winterbottom, who is the owner of Orbition Group and an advisor/recruiter for companies hiring top talent in the data industry. Kyle and I discuss whether the concept of data products has meaningful value to companies, or if it’s in a hype cycle of sorts. Kyle then shares his views on what sets the idea of data products apart from other trends, the well-paid opportunities he sees opening up for product leaders in the data industry, and why he feels that being able to increase user adoption and quantify the business impact of your work strengthens a candidate’s ability to negotiate higher pay. Kyle and I also discuss the strange tendency for companies to mistakenly prioritize technical skills for these roles, the overall job market for data product leaders, average compensation numbers, and what companies can do to attract this talent.
Highlights/ Skip to:
Today I’m chatting with Phil Harvey, co-author of Data: A Guide to Humans and a technology professional with 23 years of experience working with AI and startups. In his book, Phil describes his philosophy of how empathy leads to more successful outcomes in data product development and the journey he took to arrive at this perspective. But what does empathy mean, and how do you measure its success? Phil and I dig into those questions, and Phil explains why he feels cognitive empathy is a learnable skill that one can develop and apply. Phil describes some leading indicators that empathy is needed on a data team, as well as leading indicators that a more empathetic approach to product development is working. While I use the term “design” or “UX” to describe a lot of what Phil is talking about, Phil actually has some strong opinions about UX and shares those on this episode. Phil also reveals why he decided to write Data: A Guide to Humans and some of the experiences that helped shape the book’s philosophy.
Highlights/ Skip to:
Do you ever find it hard to get the requirements, problems, or needs out of your customers, stakeholders, or users when creating a data product? This week I’m coming to you solo to share reasons your stakeholders, users, or customers may not be making time for your discovery efforts. I’ve outlined 10 reasons, and delve into those in the first part of this episode.
In part two, I am going to share a big update about the Data Product Leadership Community (DPLC) I’m hoping to launch in June 2023. I have created a Google Doc outlining how v1 of the community will work as well as 6 specific benefits that I hope you’ll be able to achieve in the first year of participating. However, I need your feedback to know if this is shaping up into the community you want to join. As such, at the end of this episode, I’ll ask you to head over to the Google Doc and leave a comment. To get the document link, just add your email address to the DPLC announcement list at http://designingforanalytics.com/community and you’ll get a confirmation email back with the link.
Links
Today I’m chatting with Osian Jones, Head of Product for the Data Platform at Stuart. Osian describes how impact and ROI can be difficult metrics to measure in a data platform, and how the team at Stuart has sought to answer this challenge. He also reveals how user experience is intrinsically linked to adoption and the technical problems that data platforms seek to solve. Throughout our conversation, Osian shares a holistic overview of what it was like to design a data platform from scratch, the lessons he’s learned along the way, and the advice he’d give to other data product managers taking on similar projects.
Highlights/ Skip to:
Today I’m chatting with Josh Noble, Principal User Researcher at TruEra. TruEra is working to improve AI quality by developing products that help data scientists and machine learning engineers improve their AI/ML models by combatting things like bias and improving explainability. Throughout our conversation, Josh—who also used to work as a Design Lead at IDEO.org—explains the unique challenges and importance of doing design and user research, even for technical users such as data scientists. He also shares tangible insights on what informs his product design strategy, the importance of measuring product success accurately, and the importance of understanding the current state of a solution when trying to improve it.
Highlights/ Skip to:
Today I’m chatting with Cole Swain, VP of Product at Tomorrow.io. Tomorrow.io is an untraditional weather company that creates data products to deliver relevant business insights to their customers. Together, Cole and I explore the challenges and opportunities that come with building an untraditional data product. Cole describes some of the practical strategies he’s developed for collecting and implementing qualitative data from customers, as well as why he feels rapport-building with users is a critical skill for product managers. Cole also reveals how scientists are part of the fold when developing products at Tomorrow.io, and the impact that their product has on decision-making across multiple industries.
Highlights/ Skip to:
Today I’m chatting with Samir Sharma, CEO of datazuum. Samir is passionate about developing data strategies that drive business outcomes, and shares valuable insights into how problem framing and research can be done effectively from both the data and business side. Samir also provides his definition of a data strategy, and why it can be complicated to uncover whose job it is to create one. Throughout the conversation, Samir and I uncover the value of including different perspectives when implementing a data strategy and discuss solutions to various communication barriers. Of course, dashboards and data products also popped up in this episode as well!
Highlights/ Skip to:
Today I’m chatting with Yuval Gonczarowski, Founder & CEO of the startup, Akooda. Yuval is a self-described “socially capable nerd” who has learned how to understand and meet the needs of his customers outside of a purely data-driven lens. Yuval describes how Akooda is able to solve a universal data challenge for leaders who don’t have complete visibility into how their teams are working, and also explains why it’s important that Akooda provide those data insights without bias. Yuval and I also explore why it’s so challenging to find great product leaders and his rule for getting useful feedback from customers and stakeholders.
Highlights/ Skip to:
Today I’m chatting with Dr. Sebastian Klapdor, Chief Data Officer for Vista. Sebastian has developed and grown a successful Data Product Management team at Vista, and it all began with selling his vision to the rest of the executive leadership. In this episode, Sebastian explains what that process was like and what he learned. Sebastian shares valuable insights on how he implemented a data product orientation at Vista, what makes a good data product manager, and why technology usage isn’t the only metric that matters when measuring success. He also shares what he would do differently if he had to do it all over again.
Highlights/ Skip to:
Today I’m chatting with Bob Mason, Managing Partner at Argon Ventures. Bob is a VC who seeks out early-stage founders in the ML/AI space and helps them inform their go-to-market, product, and design strategies. In this episode, Bob reveals what he looks for in early-stage data and intelligence startups who are trying to leverage ML/AI. He goes on to explain why it’s important to identify what your strengths are and what you enjoy doing so you can surround yourself with the right team. Bob also shares valuable insight into how to earn trust with potential customers as an early-stage startup, how design impacts a product’s success, and his strategy for differentiating yourself and creating a valuable product outside of the ubiquitous “platform play.”
Highlights/ Skip to:
Today I’m chatting with Bruno Aziza, Head of Data & Analytics at Google Cloud. Bruno leads a team of outbound product managers in charge of BigQuery, Dataproc, Dataflow and Looker and we dive deep on what Bruno looks for in terms of skills for these leaders. Bruno describes the three patterns of operational alignment he’s observed in data product management, as well as why he feels ownership and customer obsession are two of the most important qualities a good product manager can have. Bruno and I also dive into how to effectively abstract the core problem you’re solving, as well as how to determine whether a problem might be solved in a better way.
Highlights / Skip to:
Quotes from Today’s Episode
Links
Today I’m chatting with returning guest Tom Davenport, who is a Distinguished Professor at Babson College, a Visiting Professor at Oxford, a Research Fellow at MIT, and a Senior Advisor to Deloitte’s AI practice. He is also the author of three new books (!) on AI and in this episode, we’re discussing the role of product orientation in enterprise data science teams, the skills required, what he’s seeing in the wild in terms of teams adopting this approach, and the value it can create. Back in episode 26, Tom was a guest on my show and he gave the data science/analytics industry an approximate “2 out of 10” rating in terms of its ability to generate value with data. So, naturally, I asked him for an update on that rating, and he kindly obliged. How are you all doing? Listen in to find out!
Highlights / Skip to:
Today I’m chatting with former-analyst-turned-design-educator Jeremy Utley of the Stanford d.school and co-author of Ideaflow. Jeremy reveals the psychology behind great innovation, and the importance of creating psychological safety for a team to generate what they may view as bad ideas. Jeremy speaks to the critical collision of unrelated frames of reference when problem-solving, as well as why creativity is actually more of a numbers game than awaiting that singular stroke of genius. Listen as Jeremy gives real-world examples of how to practice and measure (!) your innovation efforts and apply them to data products.
Highlights/ Skip to:
Links
Today I’m discussing something we’ve been talking about a lot on the podcast recently - the definition of a “data product.” While my definition is still a work in progress, I think it’s worth putting out into the world at this point to get more feedback. In addition to sharing my definition of data products (as defined the “producty” way), in today’s episode I also discuss some of the non-technical skills that data product managers (DPMs) in the ML and AI space need if they want to achieve good user adoption of their solutions. I’ll also share my thoughts on whether data scientists can make good data product managers, what a DPM can do to better understand your users and stakeholders, and how product and UX design factor into this role.
Highlights/ Skip to:
Today I’m chatting with Indi Young, independent qualitative data scientist and author of Time to Listen. Indi explains how it is possible to gather and analyze qualitative data in a way that is meaningful to the desired future state of your users, and that learning how to listen and not just interview users is much like learning to ride a bicycle. Listen (!) to find out why pushing back is a necessary part of the design research process, how to build an internal sensor that allows you to truly uncover the nuggets of information that are critical to your projects, and the importance of understanding thought processes to prevent harmful outcomes.
Highlights/ Skip to:
Today I’m chatting with Eugenio Zuccarelli, Research Scientist at MIT Media Lab and Manager of Data Science at CVS. Eugenio explains how he has created multiple algorithms designed to help shape decisions made in life-or-death situations, such as pediatric cardiac surgery and during the COVID-19 pandemic. Eugenio shares the lessons he’s learned about how to build trust in data when the stakes are life and death. Listen and learn how culture can affect adoption of decision support and ML tools, the impact the delivery of information has on a user's ability to understand and use data, and why Eugenio feels that design is more important than the inner workings of ML algorithms.
Highlights/ Skip to:
Today I’m chatting with Iván Herrero Bartolomé, Chief Data Officer at Grupo Intercorp. Iván describes how he was prompted to write his new article in CDO Magazine, “CDOs, Let’s Get Out of Our Comfort Zone” as he recognized the importance of driving cultural change within organizations in order to optimize the use of data. Listen in to find out how Iván is leveraging the role of the analytics translator to drive this cultural shift, as well as the challenges and benefits he sees data leaders encounter as they move from tactical to strategic objectives. Iván also reveals the number one piece of advice he’d give CDOs who are struggling with adoption.
Highlights / Skip to:
Today I’m chatting with Katy Pusch, Senior Director of Product and Integration for Cox2M. Katy describes the lessons she’s learned around making sure that the “juice is always worth the squeeze” for new users to adopt data solutions into their workflow. She also explains the methodologies she’d recommend to data & analytics professionals to ensure their IoT and data products are widely adopted. Listen in to find out why this former analyst turned data product leader feels it’s crucial to focus on more than just delivering data or AI solutions, and how spending more time upfront performing qualitative research on users can wind up being more efficient in the long run than jumping straight into development.
Highlights/ Skip to:
Quotes from Today’s Episode
Today I’m chatting with Vin Vashishta, Founder of V Squared. Vin believes that with methodical strategic planning, companies can prepare for continuous transformation by removing the silos that exist between leadership, data, AI, and product teams. How can these barriers be overcome, and what is the impact of doing so? Vin answers those questions and more, explaining why process disruption is necessary for long-term success and gives real-world examples of companies who are adopting these strategies.
Highlights/ Skip to:
Today I’m sitting down with Jon Cooke, founder and CTO of Dataception, to learn his definition of a data product and his views on generating business value with your data products. In our conversation, Jon explains his philosophy on data products and where design and UX fit in. We also review his conceptual model for data products (which he calls the data product pyramid), and discuss how together, these concepts allow teams to ship working solutions that actually produce value, faster.
Highlights/ Skip to:
Quotes from Today’s Episode
Today I’m chatting with Emilie Schario, a Data Strategist in Residence at Amplify Partners. Emilie thinks data teams should operate like product teams. But what led her to that conclusion, and how has she put the idea into practice? Emilie answers those questions and more, delving into what kind of pushback and hiccups someone can expect when switching from being data-driven to product-driven and sharing advice for data scientists and analytics leaders.
Highlights / Skip to:
Quotes from Today’s Episode
Today, I chat with Manav Misra, Chief Data and Analytics Officer at Regions Bank. I begin by asking Manav what it was like to come in and implement a user-focused mentality at Regions, driven by his experience in the software industry. Manav details his approach, which included developing a new data product partner role and using effective communication to gradually gain trust and cooperation from all the players on his team.
Manav then talks about how, over time, he solidified a formal framework for his team to be trained to use this approach and how his hiring is influenced by a product orientation. We also discuss his definition of data product at Regions, which I find to be one of the best I’ve heard to date. Today, Regions Bank’s data products are delivering tens of millions of dollars in additional revenue to the bank. Given those results, I also dig into the role of design and designers to better understand who is actually doing the designing of Regions’ data products to make them so successful. Later, I ask Manav what it’s like when designers and data professionals work on the same team and how UX and data visualization design are handled at the bank.
Towards the end, Manav shares what he has learned from his time at Regions and what he would implement in a new organization if starting over. He also expounds on the importance of empowering his team to ask customers the right questions and how a true client/stakeholder partnership has led to Manav’s most successful data products.
Highlights / Skip to:
Quotes from Today’s Episode
Today I chat with Chad Sanderson, Head of Product for Convoy’s data platform. I begin by having Chad explain why he calls himself a “data UX champion” and what inspired his interest in UX. Coming from a non-UX background, Chad explains how he came to develop a strategy for addressing the UX pain points at Convoy—a digital freight network. They “use technology to make freight more efficient, reducing costs for some of the nation’s largest brands, increasing earnings for carriers, and eliminating carbon emissions from our planet.” We also get into the metrics of success that Convoy uses to measure UX and why Chad is so heavily focused on user workflow when making the platform user-centered.
Later, Chad shares his definition of a data product, and how his experience with building software products has overlapped with data products. He also shares what he thinks is different about creating data products vs. traditional software products. Chad then explains Convoy’s approach to prototyping and the value of partnering with users in the design process. We wrap up by discussing how UX work gets accomplished on Chad’s team, given it doesn’t include any titled UX professionals.
Highlights:
Re: user research: "The best content that you get from people is when they are really thinking about what to say next; you sort of get into a free-flowing exchange of ideas. So it’s important to find the topic where someone can just talk at length without really filtering themselves. And I find a good place to start with that is to just talk about their problems. What are the painful things that you’ve experienced in data in the last month or in the last week?" - Chad
Re: UX research: "I often recommend asking users to show you something they were working on recently, particularly when they were having a problem accomplishing their goal. It’s a really good way to surface UX issues because the frustration is probably fresh." - Brian
Re: user feedback, “One of the really great pieces of advice that I got is, if you’re getting a lot of negative feedback, this is actually a sign that people care. And if people care about what you’ve built, then it’s better than overbuilding from the beginning.” - Chad
“What we found [in our research around workflow], though, sometimes counterintuitively, is that the steps that are the easiest and simplest for a customer to do that I think most people would look at and say, ‘Okay, it’s pretty low ROI to invest in some automated solution or a product in this space,’ are sometimes the most important things that you can [address in your data product] because of the impacts that it has downstream.” - Chad
Re: user feedback, “The amazing thing about building data products, and I guess any internal products is that 100% of your customers sit ten feet away from you. [...] When you can talk to 100% of [your users], you are truly going to understand [...] every single persona. And that is tremendously effective for creating compelling narratives about why we need to build a particular thing.” - Chad
“If we can get people to really believe that this data product is going to solve the problem, then usually, we like to turn those people into advocates and evangelists within the company, and part of their job is to go out and convince other people about why this thing can solve the problem.” - Chad
Links:
Today I am bringing you a recording of a live interview I did at the TDWI Munich conference for data leaders, and this episode is a bit unique as I’m in the “guest” seat being interviewed by the VP of TDWI Europe, Christoph Kreutz.
Christoph wanted me to explain the new workshop I was giving later that day, which focuses on helping leaders increase user adoption of data products through design. In our chat, I explained the three main areas I pulled out of my full 4-week seminar to create this new ½-day workshop as well as the hands-on practice that participants would be engaging in. The three focal points for the workshop were: measuring usability via usability studies, identifying the unarticulated needs of stakeholders and users, and sketching in low fidelity to avoid overcommitting to solutions that users won’t value.
Christoph also asks about the format of the workshop, and I explain how I believe data leaders will best learn design by doing it. As such, the new workshop was designed to use small group activities, role-playing scenarios, peer review…and minimal lecture! After discussing the differences between the abbreviated workshop and my full 4-week seminar, we talk about my consulting and training business “Designing for Analytics,” and conclude with a fun conversation about music and my other career as a professional musician.
In a hurry? Skip to:
Today I sit down with Vijay Yadav, head of the data science team at Merck Manufacturing Division. Vijay begins by relating his own path to adopting a data product and UX-driven approach to applied data science, and our chat quickly turns to the ever-present challenge of user adoption. Vijay discusses his process of designing data products with customers, as well as the impact that building user trust has on delivering business value. We go on to talk about what metrics can be used to quantify adoption and downstream value, and then Vijay discusses the financial impact he has seen at Merck using this user-oriented perspective. While we didn’t see eye to eye on everything, Vijay was able to show how focusing on the last mile UX has had a multi-million dollar impact on Merck. The conversation concludes with Vijay’s words of advice for other data science directors looking to get started with a design and user-centered approach to building data products that achieve adoption and have measurable impact.
In our chat, we covered Vijay’s design process, metrics, business value, and more:
LinkedIn: https://www.linkedin.com/in/vijyadav/
In one of my past memos to my list subscribers, I addressed some questions about agile and data products. Today, I expound on each of these and share some observations from my consulting work. In some enterprise orgs, mostly outside of the software industry, agile is still new and perceived as a panacea. In reality, it can just become a factory for shipping features and outputs faster, with positive outcomes and business value mostly absent. To increase the adoption of enterprise data products that have humans in the loop, it’s great to have agility in mind, but poor technology shipped faster isn’t going to serve your customers any better than what you’re doing now.
Here are the 10 reflections I’ll dive into on this episode:
Quotes from Today’s Episode
Today I’m talking about how to measure data product value from a user experience and business lens, and where leaders sometimes get it wrong. Today’s first question comes from my recent talk at the Data Summit conference, where an attendee asked how UX design fits into agile data product development. Additionally, I recently had a subscriber to my Insights mailing list ask about how to measure adoption, utilization, and satisfaction of data products. So, we’ll jump into that juicy topic as well.
Answering these inquiries also got me on a related tangent about the UX challenges associated with abstracting your platform to support multiple, but often theoretical, user needs—and the importance of collaboration to ensure your whole team is operating from the same set of assumptions or definitions about success. I conclude the episode with the concept of “game framing” as a way to conceptualize these ideas at a high level.
Key topics and cues in this episode include:
Today I talked with João Critis from Oi. Oi is a Brazilian telecommunications company that is a pioneer in convergent broadband services, pay TV, and local and long-distance voice transmission. They operate the largest fiber optics network in Brazil which reaches remote areas to promote digital inclusion of the population. João manages a design team at Oi that is responsible for the front end of data products including dashboards, reports, and all things data visualization.
We begin by discussing João’s role leading a team of data designers. João then explains what data products actually are, and who makes up his team’s users and customers. João goes on to discuss user adoption challenges at Oi and the methods they use to uncover what users need in the last mile. He then explains the specific challenges his team has faced, particularly with middle management, and how his team builds credibility with senior leadership. In conclusion, João reflects on the value of empathy in the design process.
In this episode, João shares:
Quotes from Today’s Episode
Michelle Carney began her career in the worlds of neuroscience and machine learning where she worked on the original Python Notebooks. As she fine-tuned ML models and started to notice discrepancies in the human experience of using these models, her interest turned towards UX. Michelle discusses how her work today as a UX researcher at Google impacts her work with teams leveraging ML in their applications. She explains how her interest in the crossover of ML and UX led her to start MLUX, a collection of meet-up events where professionals from both data science and design can connect and share methods and ideas. MLUX now hosts meet-ups in several locations as well as virtually.
Our conversation begins with Michelle’s explanation of how she teaches data scientists to integrate UX into the development of their products. As a teacher, Michelle utilizes the IDEO Design Kit with her students at the Stanford School of Design (d.school). In her course, Designing Machine Learning, she also shares some of the unlearning that data scientists need to do when trying to approach their work from a UX perspective.
Finally, we discuss what UX designers need to know about designing for ML/AI. Michelle also talks about how model interpretability is a facet of UX design and why model accuracy isn’t always the most important element of an ML application. Michelle ends the conversation with an emphasis on the need for more interdisciplinary voices in the fields of ML and AI.
Skip to a topic here:
Quotes from Today’s Episode
Dashboards are at the forefront of today’s episode, and I will be responding to some questions from readers who wrote in to one of my weekly mailing list missives about this topic. I’ve not talked much about dashboards despite their frequent appearance in data product UIs, and in this episode, I’ll explain why. Here are some of the key points and the original questions asked in this episode:
Quotes from Today’s Episode
Links Referenced:
Mike Oren, Head of Design Research at Klaviyo, joins today’s episode to discuss how we do UX research for data products—and why qualitative research matters. Mike and I recently met in Lou Rosenfeld’s Quant vs. Qual group, which is for people interested in both qualitative and quantitative methods for conducting user research. Mike goes into the details on how Klaviyo and his teams are identifying what customers need through research, how they use data to get to that point, what data scientists and non-UX professionals need to know about conducting UX research, and some tips for getting started quickly. He also explains how Klaviyo’s data scientists—not just the UX team—are directly involved in talking to users to develop an understanding of their problem space.
Klaviyo is a communications platform that allows customers to personalize email and text messages powered by data. In this episode, Mike talks about how to ask research questions to get at what customers actually need. Mike also offers some excellent “getting started” techniques for conducting interviews (qualitative research), the kinds of things to be aware of and avoid when interviewing users, and some examples of the types of findings you might learn. He also gives us some examples of how these research insights become features or solutions in the product, and how they interpret whether their design choices are actually useful and usable once a customer interacts with them. I really enjoyed Mike’s take on designing data-driven solutions, his ideas on data literacy (for both designers, and users), and hearing about the types of dinner conversations he has with his wife who is an economist ;-) . Check out our conversation for Mike’s take on the relevance of research for data products and user experience.
In this episode, we cover:
Quotes from Today’s Episode
For Danielle Crop, the Chief Data Officer of Albertsons, drawing distinctions between “digital” and “data” only limits the ability of an organization to create useful products. One of the reasons I asked Danielle on the show is due to her background as a CDO and former SVP of digital at AMEX, where she also managed product and design groups. My theory is that data leaders who have been exposed to the worlds of software product and UX design are prone to approach their data product work differently, and so that’s what we dug into in this episode. It didn’t take long for Danielle to share how she pushes her data science team to collaborate with business product managers for a “cross-functional, collaborative” end result. This also means getting the team to understand what their models are personalizing, and how customers experience the data products they use. In short, for her, it is about getting the data team to focus on “outcomes” vs “outputs.”
Scaling some of the data science and ML modeling work at Albertsons is a big challenge, and we talked about one of the big use cases she is trying to enable for customers, as well as one “real-life” non-digital experience that her team’s data science efforts are behind.
The big takeaway for me here was hearing how a CDO like Danielle is really putting customer experience and the company’s brand at the center of their data product work, as opposed to solely focusing on ML model development, dashboard/BI creation, and seeing data as a raw ingredient that lives in a vacuum isolated from people.
In this episode, we cover:
Quotes from Today’s Episode
Today, I’m flying solo in order to introduce you to CED: my three-part UX framework for designing your ML / predictive / prescriptive analytics UI around trust, engagement, and indispensability. Why this, why now? I have had several people tell me that this has been incredibly helpful to them in designing useful, usable analytics tools and decision support applications.
I have written about the CED framework before at the following link:
https://designingforanalytics.com/ced
There you will find an example of the framework put into a real-world context. In this episode, I wanted to add some extra color to what is discussed in the article. If you’re an individual contributor, the best part is that you don’t have to be a professional designer to begin applying this to your own data products. And for leaders of teams, you can use the ideas in CED as a “checklist” when trying to audit your team’s solutions in the design phase—before it’s too late or expensive to make meaningful changes to the solutions.
CED is definitely easier to implement if you understand the basics of human-centered design, including research, problem finding and definition, journey mapping, consulting, and facilitation etc. If you need a step-by-step method to develop these foundational skills, my training program, Designing Human-Centered Data Products, might help. It comes in two formats: a Self-Guided Video Course and a bi-annual Instructor-Led Seminar.
Why design matters in data products is a question that some may not be able to answer until they see users struggle to use ML models and analytics to make decisions. For Bill Báez, a data scientist and VP of Strategy at Ascend Innovations, the realization that design and UX matter in this context grew over the course of a few years. Bill’s origins in the Air Force, and his transition to Ascend Innovations, instilled lessons about the importance of using design thinking with both clients and users.
After observing solutions built in total isolation with zero empathy and knowledge of how they were being perceived in the wild, Bill realized the critical need to bring developers “upstairs” to actually observe the people using the solutions that were being built.
Currently, Ascend Innovations’ consulting is primarily rooted in healthcare and community services, and in this episode, Bill provides some real-world examples where their machine learning and analytics solutions were informed by approaching the problems from a human-centered design perspective. Bill also dives into where he is on his journey to integrate his UX and data science teams at Ascend so they can create better value for their clients and their clients’ constituents.
Highlights in this episode include:
Building a SaaS business that focuses on building a research tool, more than building a data product, is how Jonathan Kay, CEO and Co-Founder of Apptopia, frames his company’s work. Jonathan and I worked together when Apptopia pivoted from its prior business into a mobile intelligence platform for brands. Part of the reason I wanted to have Jonathan talk to you all is because I knew that he would strip away all the easy-to-see shine and varnish from their success and get really candid about what worked…and what hasn’t…during their journey to turn a data product into a successful SaaS business. So get ready: Jonathan is going to reveal the very curvy line that Apptopia has taken to get where they are today.
In this episode, Jonathan also describes one of the core product design frameworks that Apptopia is currently using to help deliver actionable insights to their customers. For Jonathan, Apptopia’s research-centric approach changes the ways in which their customers can interact with data and is helping eliminate the lull between “the why” and “the actioning” with data.
Here are some of the key parts of the interview:
Design takes many forms and shapes. It is an art, a science, and a method for problem solving. For Bob Goodman, a product management and design executive, the way to view design is as a story and a narrative that conveys the solution to the customer. As a former journalist with 20 years of experience in consumer and enterprise software, Bob has a unique perspective on enabling end-user decision making with data.
Having worked in both product management and UX, Bob shapes the narrative on approaching product management and product design as parts of a whole, and we talked about how data products fit into this model. Bob also shares why he believes design and product need to be under the same umbrella to prevent organizational failures. We also discussed the challenges and complexities that come with delivering data-driven insights to end users when ML and analytics are behind the scenes.
As the conversation around AI continues, Professor Cynthia Rudin, Computer Scientist and Director at the Prediction Analysis Lab at Duke University, is here to discuss interpretable machine learning and her incredible work in this complex and evolving field. To begin, she is the most recent (2021) recipient of the $1M Squirrel AI Award for her work on making machine learning more interpretable to users and ultimately more beneficial to humanity.
In this episode, we explore the distinction between explainable and interpretable machine learning and how black boxes aren’t necessarily “better” than more interpretable models. Cynthia offers up real-world examples to illustrate her perspective on the role of humans and AI, and shares takeaways from her previous work, which ranges from predicting criminal recidivism to predicting manhole cover explosions in NYC (yes!). I loved this chat with her because, for one, Cynthia has strong, heavily informed opinions from her concentrated work in this area, and secondly, because Cynthia is thinking about both the end users of ML applications as well as the humans who are “out of the loop,” but nonetheless impacted by the decisions made by the users of these AI systems.
In this episode, we cover:
The relationship between humans and artificial intelligence has been an intricate topic of conversation across many industries. François Candelon, Global Director at Boston Consulting Group Henderson Institute, has been a significant contributor to that conversation, most notably through an annual research initiative that BCG and MIT Sloan Management Review have been conducting about AI in the enterprise. In this episode, we’re digging particularly into the findings of the 2020 and 2021 studies that were just published at the time of this recording.
Through these yearly findings, the study has shown that organizations with the most competitive advantage are the ones that are focused on effectively designing AI-driven applications around the humans in the loop. As these organizations continue to generate value with AI, the gap between them and companies that do not embrace AI has only increased. To close this gap, companies will have to learn to design trustworthy AI applications that actually get used, produce value, and are designed around mutual learning between the technology and users. François claims that a “human plus AI” approach—what former Experiencing Data guest Ben Shneiderman calls HCAI (see Ep. 062)—can create organizational learning, trust, and improved productivity.
In this episode, we cover:
Finding it hard to know the value of your data products on the business or your end users? Do you struggle to understand the impact your data science, analytics, or product team is having on the people they serve?
Many times, the challenge comes down to figuring out WHAT to measure, and HOW. Clients, users, and customers often don’t even know what the right success or progress metrics are, let alone how to quantify them. Learning how to measure what might seem impossible is a highly valuable skill for leaders who want to track their progress with data—but it’s not all black and white. It’s not always about “more data,” and measurement is also not about “the finite, right answer.” Analytical minds, get ready to embrace subjectivity and uncertainty in this episode!
In this insightful chat, Doug and I explore examples from his book, How to Measure Anything, and we discuss its applicability to the world of data and data products. From defining trust to identifying cognitive biases in qualitative research, Doug shares how he views the world in ways that we can actually measure. We also discuss the relationship between data and uncertainty, forecasting, and why people who are trying to measure something usually believe they have a lot less data than they really do.
“Often things like trustworthiness or collaboration, or innovation, or any—all the squishy stuff, they sound hard to measure because they’re actually an umbrella term that bundles a bunch of different things together, and you have to unpack it to figure out what it is you’re talking about. It’s the beginning of all scientific inquiry is to figure out what your terms mean; what question are you even asking?” - Doug Hubbard (@hdr_frm) (02:33)
“Another interesting phenomenon about measurement in general and uncertainty, is that it’s in the cases where you have a lot of uncertainty when you don’t need many data points to greatly reduce it. [People] might assume that if [they] have a lot of uncertainty about something, that [they are] going to need a lot of data to offset that uncertainty. Mathematically speaking, just the opposite is true. The more uncertainty you have, the bigger uncertainty reduction you get from the first observation. In other words, if, you know almost nothing, almost anything will tell you something. That’s the way to think of it.” - Doug Hubbard (@hdr_frm) (07:05)
“I think one of the big takeaways there that I want my audience to hear is that if we start thinking about when we’re building these solutions, particularly analytics and decision support applications, instead of thinking about it as we’re trying to give the perfect answer here, or the model needs to be as accurate as possible, changing the framing to be, ‘if we went from something like a wild-ass guess, to maybe my experience and my intuition, to some level of data, what we’re doing here is we’re chipping away at the uncertainty, right?’ We’re not trying to go from zero to 100. Zero to 20 may be a substantial improvement if we can just get rid of some of that uncertainty, because no solution will ever predict the future perfectly, so let’s just try to reduce some of that uncertainty.” - Brian T. O’Neill (@rhythmspice) (08:40)
“I think people routinely believe they have a lot less data than they really do. They tend to believe that each situation is more unique than it really is [to the point] that you can’t extrapolate anything from prior observations. If that were really true, your experience means nothing.” - Doug Hubbard (@hdr_frm) (29:42)
Berit Hoffmann, Chief Product Officer at Sisu, tackles design from a customer-centric perspective with a focus on finding problems at their source and enabling decision making. However, she had to learn some lessons the hard way, and in this episode, we dig into those experiences and what she’s now doing differently in her current role as a CPO.
In particular, Berit reflects on her “ivory tower design” experience at a past startup called Bebop. During that time, she quickly realized the importance of engaging with customer needs and building intuitive and simple solutions for complex problems. Berit also discusses the Double Diamond process and how it shapes her decision-making and the various ways she carries out her work at Sisu.
In this episode, we also cover:
Eric Weber, Head of Data Product at Yelp, has spent his career developing a product-minded approach to producing data-driven solutions that actually deliver value. For Eric, the data product mindset is still quite new, and today we’re digging into all things “data product management” and why thinking of data with a product mindset matters.
In our conversation, Eric defines what data products are and explains the value that data product managers can bring to their companies. Eric’s ethos of leading with empathy, balanced with technical credibility, is central to his perspective on data product management. We also discuss how Eric is bringing all of this to bear at Yelp and the various ways they’re tackling their customers’ data product needs.
In this episode, we also cover:
Even in the performing arts world, data and analytics serve a purpose. Jordan Gross Richmond is the Chief Product Officer at AMS Analytics, where they provide benchmarking and performance reporting to performing arts organizations. As many of you know, I’m also a musician who tours and performs in the performing arts market, so I was curious to hear how data plays a role “off the stage” within these organizations. In particular, I wanted to know how Jordan designed the interfaces for AMS Analytics’s product, and what’s unique (or not!) about using data to manage arts organizations.
Jordan also talks about the beginnings of AMS and their relationship with leaders in the performing arts industry and the “birth of benchmarking” in this space. From an almost manual process in the beginning, AMS now has a SaaS platform that allows performing arts centers to see the data that helps drive their organizations. Given that many performing arts centers are non-profit organizations, I also asked Jordan about how these organizations balance their artistic mission against the colder, harder facts of data such as ticket sales, revenue, and “the competition.”
In this episode, we also cover:
Why do we need or care about design in the work of data science? Jesús Templado, Managing Director at Bedrock, is here to tell us about how Bedrock executes their mantra, “data by design.”
Bedrock has found ways to bring their clients a design-driven, human-centered approach by utilizing a “hybrid model” to synthesize technical possibilities with human needs. In this episode, we explore Bedrock’s vision for how to achieve this synthesis as part of the firm’s DNA, and how Bedrock applies that vision to make data more approachable, with the client central to their design efforts. Jesús also discusses a time when he championed making “data by design” a successful strategy with a large hotel chain, and he offers insight on how making clients feel validated and heard plays a part.
In our chat, we also covered:
“Many people in corporations don’t have the technical background to understand the possibilities when it comes to analyzing or using data. So, bringing a design-based framework, such as design thinking, is really important for all of the work that we do for our clients.” - Jesús Templado (2:33)
“We’ve mentioned “data by design” before as our mantra; we very much prefer building long-lasting relationships based on [understanding] our clients' business and their strategic goals. We then design and ideate an implementation roadmap with them and then based on that, we tackle different periods for building different models. But we build the models because we understand what’s going to bring us an outcome for the business—not because the business brings us in to deliver only a model for the sake of predicting what the weather is going to be in two weeks.” - Jesús Templado (14:07)
“I think as consultants and people in service, it’s always nice to make friends. And, I like when I can call a client a friend, but I feel like I’m really here to help them deliver a better future state [...] And the road may be bumpy, especially if design is a new thing. And it is often new in the context of data science and analytics projects.” - Brian T. O’Neill (@rhythmspice) (26:49)
“When we do data science [...] that’s a means to an end. We do believe it’s important that the client understands the reasoning behind everything that we do and build, but at the end of the day, it’s about understanding that business problem, understanding the challenge that the company is facing, knowing what the expected outcome is, and knowing how you will deliver or predict that outcome to be used for something meaningful and relevant for the business.” - Jesús Templado (33:06)
“The appetite for innovation is high, but a lot of the companies that want to do it are more concerned about risk. Risk and innovation are at opposite ends of the spectrum. And so, if you want to be innovative, by definition—you’re signing up for failure on the way to success. [...] It’s about embracing an iterative process, it’s about getting feedback along the way, it’s about knowing that we don’t know everything, and we’re signing up for that ambiguity along the way to something better.” - Brian T. O’Neill (@rhythmspice) (38:20)
Links Referenced
How do we get the most breadth out of design and designers when building data products? One way is to have designers at the front, leading the charge in creating products that must be useful, usable, and valuable.
For this episode, Prasad Vadlamani, CDW’s Director of Data Science and Advanced Analytics, joins us for a chat about how they are making design a larger focus of how they create useful, usable data products. Prasad talks about the importance of making technology—including AI-driven solutions—human-centered, and how CDW tries to keep the end user in mind.
Prasad and I also discuss his perspectives on how to build designers into a data product team and how to successfully navigate the grey areas between different domains of expertise. When this is done well, the entire team can work with each other's strengths to create a more robust product. We also discuss the role a UI-free user experience plays in some data products, some differences between external and internally facing solutions, and some of Prasad’s valuable takeaways that have helped shape the way he thinks design, data science, and analytics can collaborate.
In our chat, we covered:
The challenges of design and AI are exciting ones to face. Success in that space depends on many things, but one of the most important is instituting the right design language.
For Abhay Agarwal, Founder of Polytopal, the necessity of a design language for AI became clear during his time at Microsoft working on systems to help the visually impaired. Stepping away from that experience, he leaned into creating a new methodology of design centered around human needs. His efforts have helped shift the lens of design toward how people solve problems.
In this episode, Abhay and I go into detail on a snippet from his course page for the Stanford d.school, where he claimed that “the foreseeable future would not be well designed, given the difficulty of collaboration between disciplines.” Abhay breaks down how he thinks his design language for AI should work and how to build it out so that everyone in an organization can come to a more robust understanding of AI. We also discuss the future of designers and AI and the ebb and flow of changing, learning, and moving forward with the AI narrative.
In our chat, we covered:
“I certainly don’t think that one needs to hit the books on design thinking or listen to a design thinker describe their process in order to get the fundamentals of a human-centered design process. I personally think it’s something that one can describe to you within the span of a single conversation, and someone who is listening to that can then interpret that and say, ‘Okay well, what am I doing that could be more human-centered?’ In the AI space, I think this is the perennial question.” - Abhay Agarwal (@Denizen_Kane) (6:30)
“Show me a company where designers feel at an equivalent level to AI engineers when brainstorming technology? It just doesn’t happen. There’s a future state that I want us to get to that I think is along those lines. And so, I personally see this as, kind of, a community-wide discussion, engagement, and multi-strategy approach.” - Abhay Agarwal (@Denizen_Kane) (18:25)
“[Discussing ML data labeling for music recommenders] I was just watching a video about drum and bass production, and they were talking about, “Or you can write your bass lines like this”—and they call it reggaeton. And it’s not really reggaeton at all, which was really born in Puerto Rico. And Brazil does the same thing with their versions of reggae. It’s not the one-drop reggae we think of with Bob Marley and Jamaica. So already, we’ve got labeling issues—and they’re not even wrong; it’s just that that’s the way one person might interpret what these musical terms mean.” - Brian O’Neill (@rhythmspice) (25:45)
“There is a new kind of hybrid role that is emerging that we play into...which is an AI designer, someone who is very proficient with understanding the dynamics of AI systems. The same way that we have digital UX designers, app designers—there had to be apps before they could be app designers—there is now AI, and then there can thus be AI designers.” - Abhay Agarwal (@Denizen_Kane) (33:47)
Links Referenced
Simply put, data products help users make better decisions and solve problems with information. But how effective can data products be if designers don’t take the time to explore the complete needs of users?
To Param Venkataraman, Chief Design Officer at Fractal Analytics, having an understanding of the “human dimension” of a problem is crucial to creating data solutions that create impact.
On this episode of Experiencing Data, Param and I talk more about his concept of ‘attractive non-conscious design,’ the core skills of a professional designer, and why Fractal has a c-suite design officer and is making large investments in UX.
In our chat, we covered:
“When you look at analytics and the AI space … there is so much that is about how do you use ... machine learning … [or] any other analytics technology or solutions — and how do you make better effective decisions? That’s at the heart of it, which is how do we make better decisions?” - Param Venkataraman (@onwardparam) (6:23)
“[When it comes to business software,] most of it should be invisible; you shouldn’t really notice it. And if you’re starting to notice it, you’re probably drawing attention to the wrong thing because you’re taking people out of flow.” - Brian O’Neill (@rhythmspice) (8:57)
“Design is kind of messy … there’s sort of a process ... but it’s not always linear, and we don’t always start at step zero. … You might come into something that’s halfway done and the first thing we do is run a usability study on a competitor’s thing, or on what we have now, and then we go back to step two, and then we go to five. It’s not serial, and it’s kind of messy, and that’s normal.” - Brian O’Neill (@rhythmspice) (16:18)
“Just like design is iterative, data science also is very iterative. There’s the idea of hypothesis, and there’s an idea of building and experimenting, and then you sort of learn and your algorithm learns, and then you get better and better at it.” - Param Venkataraman (@onwardparam) (18:05)
“The world of data science is not used to thinking in terms of emotion, experience, and the so-called softer aspects of things, which in my opinion, is not actually the softer; it’s actually the hardest part. It’s harder to dimensionalize emotion, experience, and behavior, which is … extremely complex, extremely layered, [and] extremely unpredictable. … I think the more we can bring those two worlds together, the world of evidence, the world of data, the world of quantitative information with the qualitative, emotional, and experiential, I think that’s where the magic is.” - Param Venkataraman (@onwardparam) (21:02)
“I think the coming together of design and data is... a new thing. It’s unprecedented. It’s a bit like how the internet was a new thing back in the mid ’90s. We were all astounded by it, we didn’t know what to do with it, and everybody was just fascinated with it. And we just knew that it’s going to change the world in some way. … Design and data will take some time to mature, and what’s more important is to go into it with an open mind and experiment. And I’m saying this for both designers as well as data scientists, to try and see how the right model might evolve as we experiment and learn.” - Param Venkataraman (@onwardparam) (33:58)
Links Referenced
How do you extract the real, unarticulated needs from a stakeholder or user who comes to you asking for AI, a specific app feature, or a dashboard?
On this episode of Experiencing Data, Cindy Dishmey Montgomery, Head of Data Strategy for Global Real Assets at Morgan Stanley, was gracious enough to let me put her on the spot and simulate a conversation between a data product leader and customer.
I played the customer, and she did a great job helping me think differently about what I was asking her to produce for me — so that I would be getting an outcome in the end, and not just an output. We didn’t practice or plan this exercise; it just happened — and she handled it like a pro! I wasn’t surprised; her product and user-first approach told me that she had a lot to share with you, and indeed she did!
A computer scientist by training, Cindy has worked in data, analytics and BI roles at other major companies, such as Revantage, a Blackstone real estate portfolio company, and Goldman Sachs. Cindy was also named one of the 2021 Notable Women on Wall Street by Crain’s New York Business.
Cindy and I also talked about the “T” framework she uses to achieve high-level business goals, as well as the importance for data teams to build trust with end-users.
In our chat, we covered:
“There’s just so many good constructs in the product management world that we have not yet really brought very close to the data world. We tend to start with the skill sets, and the tools, and the ML/AI … all the buzzwords. [...] But brass tacks: when you have a happy set of consumers of your data products, you’re creating real value.” - Cindy Dishmey Montgomery (1:55)
“The path to value lies through adoption and adoption lies through giving people something that actually helps them do their work, which means you need to understand what the problem space is, and that may not be written down anywhere because they’re voicing the need as a solution.” - Brian O’Neill (@rhythmspice) (4:07)
“I think our data community tends to over-promise and under-deliver as a way to get the interest, and it’s actually quite successful when you have this notion of, ‘If you build AI, profit will come.’ But that is a really, really hard promise to make and keep.” - Cindy Dishmey Montgomery (12:14)
“[Creating a data product for a stakeholder is] definitely something where you have to be close to the business problem and design it together. … The struggle is making sure organizations know when the right time and what the right first hire is to start that process.” - Cindy Dishmey Montgomery (23:58)
“The temporal aspect of design is something that’s often missing. We talk a lot about the artifacts: the Excel sheet, the dashboard, the thing, and not always about when the thing is used.” - Brian O’Neill (@rhythmspice) (27:27)
“Everyone should understand product. And even just creating the language of product is very helpful in creating a center of gravity for everyone. It’s where we invest time, it’s how it’s meant to connect to a certain piece of value in the business strategy. It’s a really great forcing mechanism to create an environment where everyone thinks in terms of value. And the thing that helps us get to value, that’s the data product.” - Cindy Dishmey Montgomery (34:22)
Links Referenced
There are many benefits in talking with end users and stakeholders about their needs and pain points before designing a data product.
Just take it from Bill Albert, executive director of the Bentley University User Experience Center, author of Measuring the User Experience, and my guest for this week’s episode of Experiencing Data. With a career spanning more than 20 years in user experience research, design, and strategy, Bill has some great insights on how UX research is pivotal to designing a useful data product, the different types of customer research, and how many users you need to talk to to get useful info.
In our chat, we covered:
“Teams are constantly putting out dashboards and analytics applications — and now it’s machine learning and AI— and a whole lot of it never gets used because it hits all kinds of human walls in the deployment part.” - Brian (3:39)
“Dare to be simple. It’s important to understand giving [people exactly what they] want, and nothing more. That’s largely a reflection of organizational maturity; making those tough decisions and not throwing out every single possible feature [and] function that somebody might want at some point.” - Bill (7:50)
“As researchers, we need to more deeply understand the user needs and see what we’re not observing in the lab [and what] we can’t see through our analytics. There’s so much more out there that we can be doing to help move the experience forward and improve that in a substantial way.” - Bill (10:15)
“You need to do the upfront research; you need to talk to stakeholders and the end users as early as possible. And we’ve known about this for decades, that you will get way more value and come up with a better design, better product, the earlier you talk to people.” - Bill (13:25)
“Our research methods don’t change because what we’re trying to understand is technology-agnostic. It doesn’t matter whether it’s a toaster or a mobile phone — the questions that we’re trying to understand of how people are using this, how can we make this a better experience, those are constant.” - Bill (30:11)
“I think, what’s called model interpretability sometimes or explainable AI, I am seeing a change in the market in terms of more focus on explainability, less on model accuracy at all costs, which often likes to use advanced techniques like deep learning, which are essentially black box techniques right now. And the cost associated with black box is, ‘I don’t know how you came up with this and I’m really leery to trust it.’” - Brian (31:56)
As much as AI has the ability to change the world in very positive ways, it also can be incredibly destructive. Sean McGregor knows this well, as he is currently developing the Partnership on AI’s AI Incident Database, a searchable collection of news articles that covers questionable use, failures, and other incidents that affect people when AI solutions are poorly designed.
On this episode of Experiencing Data, Sean takes us through his notable work around using machine learning in the domain of fire suppression, and how human-centered design is critical to ensuring these decision support solutions are actually used and trusted by the users. We also covered the social implications of new decision-making tools leveraging AI, and:
“As soon as you enter into the decision-making space, you’re really tearing at the social fabric in a way that hasn’t been done before. And that’s where analytics and the systems we’re talking about right now are really critical because that is the middle point that we have to meet in and to find those points of compromise.” - Sean (12:28)
“I think that a lot of times, unfortunately, the assumption [in data science is], ‘Well if you don’t understand it, that’s not my problem. That’s your problem, and you need to learn it.’ But my feeling is, ‘Well, do you want your work to matter or not? Because if no one’s using it, then it effectively doesn’t exist.’” - Brian (17:41)
“[The AI Incident Database is] a collection of largely news articles [about] bad things that have happened from AI [so we can] try and prevent history from repeating itself, and [understand] more of [the] unintended and bad consequences from AI....” - Sean (19:44)
“Human-centered design will prevent a great many of the incidents [of AI deployment harm] that have and are being ingested in the database. It’s not a hundred percent thing. Even in human-centered design, there’s going to be an absence of imagination, or at least an inadequacy of imagination for how these things go wrong because intelligent systems — as they are currently constituted — are just tremendously bad at the open-world, open-set problem.” - Sean (22:21)
“It’s worth the time and effort to work with the people that are going to be the proponents of the system in the organization — the ones that assure adoption — to kind of move them through the wireframes and examples and things that at the end of the engineering effort you believe are going to be possible. … Sometimes you have to know the nature of the data and what inferences can be delivered on the basis of it, but really not jumping into the principal engineering effort until you adopt and agree to what the target is. [This] is incredibly important and very often overlooked.” - Sean (31:36)
“The things that we’re working on in these technological spaces are incredibly impactful, and you are incredibly powerful in the way that you’re influencing the world in a way that has never, on an individual basis, been so true. And please take that responsibility seriously and make the world a better place through your efforts in the development of these systems. This is right at the crucible for that whole process.” - Sean (33:09)
Links Referenced
Twitter: https://twitter.com/seanmcgregor
Doug Laney is the preeminent expert in the field of infonomics — and it’s not just because he literally wrote the book on it.
As the Data & Analytics Strategy Innovation Fellow at consulting firm West Monroe, Doug helps businesses use infonomics to measure the economic value of their data and monetize it. He also is a visiting professor at the University of Illinois at Urbana-Champaign where he teaches classes on analytics and infonomics.
On this episode of Experiencing Data, Doug and I talk about his book Infonomics, the many different ways that businesses can monetize data, the role of creativity and product management in producing innovative data products, and the ever-evolving role of the Chief Data Officer.
In our chat, we covered:
“Many organizations [endure] a vicious cycle of not measuring [their data], and therefore not managing, and therefore not monetizing their data as well as they can. The idea behind my book Infonomics is, flip that. I’ll just start with measuring your data, understanding what you have, its quality characteristics, and its value potential. But vision is important as well, and so that’s where we start with monetization, and thinking more broadly about the ways to generate measurable economic benefits from data.” - Doug (4:13)
“A lot of people will compare data to oil and say that ‘Data is the new oil.’ But you can only use a drop of oil one way at a time. When you consume a drop of oil, it creates heat and energy and pollution, and when you use a drop of oil, it doesn’t generate more oil. Data is very different. It has unique economic qualities that economists would call a non-rivalrous, non-depleting, and regenerative asset.” - Doug (7:52)
“The Chief Data Officer (CDO) role has come on strong in organizations that really want to manage their data as an actual asset, ensure that it is accounted for as generating value and is being managed and controlled effectively. Most CDOs play both offense and defense in controlling and governing data on one side and in enabling it on the other side to drive more business value.”- Doug (14:17)
“The more successful teams that I read about and I see tend to be of a mixed skill set, they’re cross-functional; there’s a space for creativity and learning, there’s a concept of experimentation that’s happening there.” - Brian (19:10)
“Companies that become more data-driven have a market-to-book value that’s nearly two times higher than the market average. And companies that make the bulk of their revenue by selling data products or derivative data have a market-to-book value that’s nearly three times the market average. So, there's a really compelling reason to do this. It’s just that not a lot of executives are really comfortable with it. Data continues to be something that’s really amorphous that they don’t really have their heads around.” - Doug (21:38)
“There’s got to be a benefit to the consumer in the way that you use their data. And that benefit has to be clear, and defined, and ideally measured for them, that we’re able to reduce the price of this product that you use because we’re able to share your data, even if it’s anonymously; this reduces the price of your product.” - Doug (28:24)
Links referenced
Drew Smith knows how much value data analytics can add to a business when done right.
Having worked at the IKEA Group for 17 years, Drew helped the company become more data-driven, implementing successful strategies for data analytics and governance across multiple areas of the company.
Now, Drew serves as the Executive Vice President of the Analytics Leadership Consortium at the International Institute for Analytics, where he helps Fortune 1000 companies successfully leverage analytics and data science.
On this episode of Experiencing Data, Drew and I talk a lot about the factors contributing to low adoption rates of data products, how product and design-thinking approaches can help, and the value of proper one-on-one research with customers.
In our chat, we covered:
Quotes from Today’s Episode
“I think as we [decentralize analytics back to functional areas] — as firms keep all of the good parts of centralizing, and pitch out the stuff that doesn’t work — I think we’ll start to see some changes [when it comes to the adoption of data products.]” - Drew (10:07)
“I think data people need to accept that the outcome is not the model — the outcome is a business performance which is measurable, material, and worth the change.” - Drew (21:52)
“We talk about the concept of outcomes over outputs a lot on this podcast, and it’s really about understanding what is the downstream [effect] that emerges from the thing I made. Nobody really wants the thing you made; they just want the result of the thing you made. We have to explore what that is earlier in the process — and asking, “Why?” is very important.” - Brian (22:21)
“I have often said that my favorite people in the room, wherever I am, aren’t the smartest, it’s the most curious.” - Drew (23:55)
“For engineers and people that make things, it’s a lot more fun to make stuff that gets used. Just at the simplest level, the fact that someone cared and it didn’t just get shelved, and especially when you spent half your year on this thing, and your performance review is tied to it, it’s just more enjoyable to work on it when someone’s happy with the outcome.” - Brian (33:04)
“Product thinking starts with the assumption that ‘this is a good product,’ it’s usable and it’s making our business better, but it’s not finished. It’s a continuous loop. It’s feeding back in data through its exhaust. The user is using it — maybe even in ways I didn’t imagine — and those ways are better than I imagined, or worse than I imagined, or different than I imagined, but they inform the product.” - Drew (36:35)
Links Referenced
Analytics Leadership Consortium: https://iianalytics.com/services/analytics-leadership-consortium
On today’s episode of Experiencing Data, I’m so excited to have Omar Khawaja on to talk about how his team is integrating user-centered design into data science, BI and analytics at Roche Diagnostics.
In this episode, Omar and I have a great discussion about techniques for creating more user-centered data products that produce value — as well as how taking such an approach can lead to needed change management on how data is used and interpreted.
In our chat, we covered:
Resources and Links:
“I’ve been in the area of data and analytics since two decades ago, and out of my own learning — and I’ve learned it the hard way — at the end of the day, whether we are doing these projects or products, they have to be used by the people. The human factor naturally comes in.” - Omar (2:27)
“Especially when we’re talking about enterprise software, and some of these more complex solutions, we don’t really want people noticing the design to begin with. We just want it to feel valuable, and intuitive, and useful right out of the box, right from the start.” - Brian (4:08)
“When we are doing interviews with [end-users] as part of the whole user experience [process], you learn to understand what’s being said in between the lines, and then you learn how to ask the right questions. Those exploratory questions really help you understand: What is the real need?” - Omar (8:46)
“People are talking about data-driven [cultures], data-informed [cultures] — but at the end of the day, it has to start by demonstrating what change we want. ... Can we practice what we are trying to preach? Am I demonstrating that with my team when I’m making decisions in my day-to-day life? How do I use the data? IT is very good at asking our business colleagues and sometimes fellow IT colleagues to use various enterprise IT and business tools. Are we using, ourselves, those tools nicely?” - Omar (11:33)
“We focus a lot on what’s technically possible, but to me, there’s often a gap between the human need and what the data can actually support. And the bigger that gap is, the less chance things get used. The more we can try to close that gap when we get into the implementation stage, the more successful we probably will be with getting people to care and to actually use these solutions.” - Brian (22:20)
“When we are working in the area of data and analytics, I think it’s super important to know how this data and insights will be used — which requires an element of putting yourself in the user’s shoes. In the case of an enterprise setup, it’s important for me to understand the end-user in different roles and personas: What they are doing and how their job is. [This involves] sitting with them, visiting them, visiting the labs, visiting the factory floors, sitting with the finance team, and learning what they do in the system. These are the places where you have your learning.” - Omar (29:09)
Earlier this year, the always informative Women in Analytics Conference took place online. I didn’t go — but a blog post about one of the conference’s presentations on the International Institute for Analytics’ website caught my attention.
The post highlighted key points from a talk called Design Thinking in Analytics that was given at the conference by Alison Magyari, an IT Manager at Eaton. In her presentation, Alison explains the four design steps she utilizes when starting a new project — as well as what “design thinking” means to her.
Human-centered design is one of the main themes of Experiencing Data, so given Alison’s talk about tapping into the emotional state of customers to create better designed data products, I knew she would be a great guest. In this episode, Alison and I have a great discussion about building a “design thinking mindset” — as well as the importance of keeping the design process flexible.
In our chat, we covered:
Resources and Links:
Quotes from Today’s Episode
“In IT, it’s really interesting how sometimes we get caught up in just looking at the technology for what it is, and we forget that the technology is there to serve our business partners.” - Alison (2:00)
“You can give people exactly what they asked for, but if you’re designing solutions and data-driven products with someone, and they’re really for somebody else, you actually have to dig in to figure out the unarticulated needs. And they may not know how to invite you in to ask for that. They may not even know how they’re going to make a decision with data about something. So, you could say, ‘Well, you’re not prepared to talk to us yet,’ or you can be part of helping them work it out: How will you make a decision with this information? Let us be part of that problem-finding exercise with you, not just the solution part. Because you can fail if you just give people what they asked for, it’s best to be part of the problem finding, not just the solving.” - Brian (8:42)
“During our design process, we noted down what the sentiment of our users was while they were using our data product. … Our users so appreciated when we would mirror back to them our observations about what they were feeling, and we were right about it. I mean, they were much more open to talking to us. They were much more open and they shared exactly what they were feeling.” - Alison (12:51)
“In our case, we did have the business analyst team really own the design process. Towards the end, we were the champions for it, but then our business users really took ownership, which I was proud of. They realized that if they didn’t embrace this, that they were going to have to deal with the same pain points for years to come. They didn’t want to deal with that, so they were really good partners in taking ownership at the end of the day.” - Alison (16:56)
“The way you learn how to do design is by doing it. … the second thing is that you don’t have to do, “All of it,” to get some value out of it. You could just do prototyping, you could do usability evaluation, you could do ‘what if’ analyses. You can do a little of one thing and probably get some value out of that fairly early, and it’s fairly safe. And then over time, you can learn other techniques. Eventually, you will have a library of techniques that you can apply. It’s a mindset, it’s really about changing the mind. It’s heads not hands, as I sometimes say: It’s not really about hands. It’s about how we think and approach problem-solving.” - Brian (20:16)
“I think everybody can do design, but I think the ones that have been incredibly successful at it have a natural curiosity. They don’t just stop with the first answer that they get. They want to know, ‘If I were doing this job, would I be satisfied with compiling a 50-column spreadsheet every single day of my life? Probably not.’ It’s curiosity and empathy — if you have those traits naturally, then design is just kind of a better fit.” - Alison (23:15)
I once saw a discussion on LinkedIn about a fraud detection model that had been built but never used. The model worked — and it was expensive — but it simply didn’t get used because the humans in the loop were not incentivized to use it.
It was on this very thread that I first met Salesforce Director of Product Management Pavan Tuvu, who chimed in about a similar experience he went through. When I heard about his experience, I asked him if he would share it with you, and he agreed. So, today on the Experiencing Data podcast, I’m excited to have Pavan on to talk about some lessons he learned while designing ad-spend software that utilized advanced analytics — and the role of the humans in the loop. We discussed:
Reed Sturtevant sees a lot of untapped potential in “tough tech.”
As a General Partner at The Engine, a venture capital firm launched by MIT, Reed and his colleagues invest in companies with breakthrough technology that, if successful, could positively transform the world.
It’s been about 15 years since I’ve last caught up to Reed—who was CTO at a startup we worked at together—so I’m so excited to welcome him on this episode of Experiencing Data! Reed and I talked about AI and how some of the portfolio companies in his fund are using data to produce better products, solutions, and inventions to tackle some of the world’s toughest challenges.
In our chat, we covered:
There have been a couple of times while working at The Engine when I’ve taken it as a sign of maturity when a team self-examines whether their invention is actually the right way to build a business. - Reed (5:59)
For some of the data scientists I know, particularly with AI, executive teams can mandate AI without really understanding the problem they want to solve. It actually pushes the problem discovery onto the solution people — but they’re not always the ones trained to go find the problems. - Brian (19:42)
You can keep hitting people over the head with a product, or you can go figure out what people care about and determine how you can slide your solution into something they care about. ... You don’t know that until you go out and talk to them, listen, and get into their world. And I think that’s still something that’s not happening a lot with data teams. - Brian (24:45)
I think there really is a maturity among even the early stage teams now, where they can have a shelf full of techniques that they can just pick and choose from in terms of how to build a product, how to put it in front of people, and how to have the [user] experience be a gentle on-ramp. - Reed, on startups (27:29)
Debbie Reynolds is known as “The Data Diva” — and for good reason.
In addition to being founder, CEO, and chief data privacy officer of her own successful consulting firm, Debbie was named to the Global Top 20 CyberRisk Communicators by The European Risk Policy Institute in 2020. She has also written several books, including The GDPR Challenge: Privacy, Technology, and Compliance in an Age of Accelerating Change, as well as articles for other publications.
If you are building data products, especially customer-facing software, you’ll want to tune into this episode. Debbie and I had an awesome discussion about data privacy from the lens of user experience instead of the typical angle we are all used to: legal compliance. While collecting user data can enable better user experiences, we can also break a customer’s trust if we don’t request access properly.
In our chat, we covered:
Resources and Links:
Quotes from Today’s Episode
When it comes to your product, humans are using it. Regardless of whether the users are internal or external — what I tell people is to put themselves in the shoes of someone who’s using this and think about what you would want to have done with your information or with your rights. Putting it in that context, I think, helps people think and get out of their head about it. Obviously there’s a lot of skill and a lot of experience that it takes to build these products and think about them in technical ways. But I also try to tell people that when you’re dealing with data and you’re building products, you’re a data steward. The data belongs to someone else, and you’re holding it for them, or you’re allowing them to either have access to it or leverage it in some way. So, think about yourself and what you would think you would want done with your information. - Debbie (3:28)
Privacy by design is looking at the fundamental levels of how people are creating things, and having them think about privacy as they’re doing that creation. When that happens, then privacy is not a difficult thing at the end. Privacy really isn’t something you could tack on at the end of something; it’s something that becomes harder if it’s not baked in. So, being able to think about those things throughout the process makes it easier. We’re seeing situations now where consumers are starting to vote with their feet — if they feel like a tool or a process isn’t respecting their privacy rights, they want to be able to choose other things. So, I think that’s just the way of the world. … It may be a situation where you’re going to lose customers or market share if you’re not thinking about the rights of individuals. - Debbie (5:20)
I think diversity at all levels is important when it comes to data privacy, such as diversity in skill sets, points of view, and regional differences. … I think people in the EU — because privacy is a fundamental human right — feel about it differently than we do here in the US where our privacy rights don’t really kick in unless it’s a transaction. ... The parallel I say is that people in Europe feel about privacy like we feel about freedom of speech here — it’s just very deeply ingrained in the way that they do things. And a lot of the time, when we’re building products, we don’t want to be collecting data or doing something in ways that would harm the way people feel about your product. So, you definitely have to be respectful of those different kinds of regimes and the way they handle data. … I’ll give you an example of bias that someone showed me, which was really interesting. There was a soap dispenser that was created where you put your hand underneath and then the soap comes out. It’s supposed to be a motion detection thing. And this particular one would not work on people of color. I guess whatever sensor they created, it didn’t have that color in the spectrum of what they thought would be used for detection or whatever. And so those are problems that happen a lot if you don’t have diverse people looking at these products. Because you — as a person that is creating products — you really want the most people possible to be able to use your products. I think there is an imperative on the economic side to make sure these products can work for everyone. - Debbie (17:31)
Transparency is the wave of the future, I think, because so many privacy laws have it. Almost any privacy law you think of has transparency in it, some way, shape, or form. So, if you’re not trying to be transparent with the people that you’re dealing with, or potential customers, you’re going to end up in trouble. - Debbie (24:35)
In my experience, while I worked with lawyers in the digital product design space — and it was heaviest when I worked at a financial institution — I watched how the legal and risk department basically crippled stuff constantly. And I say “cripple” because the feeling that I got was there’s a line between adhering to the law and then also—some of this is a gray area, like disclosure. Or, if we show this chart that has this information, is that construed as advice? I understand there’s a lot of legal regulation there. My feeling was, there’s got to be a better way for compliance departments and lawyers that genuinely want to do the right thing in their work to understand how to work with product design, digital design teams, especially ones using data in interesting ways. How do you work with compliance and legal when we’re designing digital products that use data so that it’s a team effort, and it’s not just like, “I’m going to cover every last edge because that’s what I’m here to do is to stop anything that could potentially get us sued.” There is a cost to that. There’s an innovation cost to that. It’s easier, though, to look at the lawyer and say, “Well, I guess they know the law better, so they’re always going to win that argument.” I think there’s a potential risk there. - Brian (25:01)
Trust is so important. A lot of times in our space, we think about it with machine learning, and AI, and trusting the model predictions and all this kind of stuff, but trust is a brand attribute as well and it’s part of the reason I think design is important because the designers tend to be the most empathetic and user-centered of the bunch. That’s what we’re often there to do is to keep that part in check because we can do almost anything these days with the tech and the data, and some of it’s like, “Should we do this?” And if we do do it, how do we do it so we’re on brand, and the trust is built, and all these other factors go into that user experience. - Brian (34:21)
Ben Shneiderman is a leading figure in the field of human-computer interaction (HCI).
Having founded one of the oldest HCI research centers in the country at the University of Maryland in 1983, Shneiderman has been intently studying the design of computer technology and its use by humans. Currently, Ben is a Distinguished University Professor in the Department of Computer Science at the University of Maryland and is working on a new book on human-centered artificial intelligence.
I’m so excited to welcome this expert from the field of UX and design to today’s episode of Experiencing Data! Ben and I talked a lot about the complex intersection of human-centered design and AI systems.
In our chat, we covered:
Quotes from Today’s Episode
The world of AI has certainly grown and blossomed — it’s the hot topic everywhere you go. It’s the hot topic among businesses around the world — governments are launching agencies to monitor AI and are also making regulatory moves and rules. … People want explainable AI; they want responsible AI; they want safe, reliable, and trustworthy AI. They want a lot of things, but they’re not always sure how to get them. The world of human-computer interaction has a long history of giving people what they want, and what they need. That blending seems like a natural way for AI to grow and to accommodate the needs of real people who have real problems. And not only the methods for studying the users, but the rules, the principles, the guidelines for making it happen. So, that’s where the action is. Of course, what we really want from AI is to make our world a better place, and that’s a tall order, but we start by talking about the things that matter — the human values: human rights, access to justice, and the dignity of every person. We want to support individual goals, a person’s sense of self-efficacy — they can do what they need to in the world, their creativity, their responsibility, and their social connections; they want to reach out to people. So, those are the sort of high aspirational goals that become the hard work of figuring out how to build it. And that’s where we want to go. - Ben (2:05)
The software engineering teams creating AI systems have got real work to do. They need the right kind of workflows, engineering patterns, and Agile development methods that will work for AI. The AI world is different because it’s not just programming, but it also involves the use of data that’s used for training. The key distinction is that the data that drives the AI has to be the appropriate data, it has to be unbiased, it has to be fair, it has to be appropriate to the task at hand. And many people and many companies are coming to grips with how to manage that. This has become controversial, let’s say, in issues like granting parole, or mortgages, or hiring people. There was a controversy that Amazon ran into when its hiring algorithm favored men rather than women. There’s been bias in facial recognition algorithms, which were less accurate with people of color. That’s led to some real problems in the real world. And that’s where we have to make sure we do a much better job and the tools of human-computer interaction are very effective in building these better systems in testing and evaluating. - Ben (6:10)
Every company will tell you, “We do a really good job in checking out our AI systems.” That’s great. We want every company to do a really good job. But we also want independent oversight of somebody who’s outside the company — someone who knows the field, who’s looked at systems at other companies, and who can bring ideas and bring understanding of the dangers as well. These systems operate in an adversarial environment — there are malicious actors out there who are causing trouble. You need to understand what the dangers and threats are to the use of your system. You need to understand where the biases come from, what dangers are there, and where the software has failed in other places. You may know what happens in your company, but you can benefit by learning what happens outside your company, and that’s where independent oversight from accounting companies, from governmental regulators, and from other independent groups is so valuable. - Ben (15:04)
There’s no such thing as an autonomous device. Someone owns it; somebody’s responsible for it; someone starts it; someone stops it; someone fixes it; someone notices when it’s performing poorly. … Responsibility is a pretty key factor here. So, if there’s something going on, if a manager is deciding to use some AI system, what they need is a control panel, let them know: what’s happening? What’s it doing? What’s going wrong and what’s going right? That kind of supervisory autonomy is what I talk about, not full machine autonomy that’s hidden away and you never see it because that’s just head-in-the-sand thinking. What you want to do is expose the operation of a system, and where possible, give the stakeholders who are responsible for performance the right kind of control panel and the right kind of data. … Feedback is the breakfast of champions. And companies know that. They want to be able to measure the success stories, and they want to know their failures, so they can reduce them. The continuous improvement mantra is alive and well. We do want to keep tracking what’s going on and make sure it gets better. Every quarter. - Ben (19:41)
Google has had some issues regarding hiring in the AI research area, and so has Facebook with elections and the way that algorithms tend to become echo chambers. These companies — and this is not through heavy research — probably have the heaviest investment of user experience professionals within data science organizations. They have UX, ML-UX people, UX for AI people, they’re at the cutting edge. I see a lot more generalist designers in most other companies. Most of them are rather unfamiliar with any of this or what the ramifications are on the design work that they’re doing. But even these largest companies that have, probably, the biggest penetration into the most number of people out there are getting some of this really important stuff wrong. - Brian (26:36)
Explainability is a competitive advantage for an AI system. People will gravitate towards systems that they understand, that they feel in control of, that are predictable. So, the big discussion about explainable AI focuses on what’s usually called post-hoc explanations, and the Shapley, and LIME, and other methods are usually tied to the post-hoc approach. That is, you use an AI model, you get a result and you say, “What happened?” Why was I denied a parole, or a mortgage, or a job? At that point, you want to get an explanation. Now, that idea is appealing, but I’m afraid I haven’t seen too many success stories of that working. … I’ve been diving through this for years now, and I’ve been looking for examples of good user interfaces of post-hoc explanations. It took me a long time till I found one. The culture of AI model-building would be much bolstered by an infusion of thinking about what the user interface will be for these explanations. And even DARPA’s XAI—Explainable AI—project, which has 11 projects within it, has not really grappled with this in a good way about designing what it’s going to look like. Show it to me. … There is another way. And the strategy is basically prevention. Let’s prevent the user from getting confused so they don’t have to request an explanation. We walk them along, let the user walk through the steps—this is like Amazon’s seven-step checkout process—and you know what’s happened in each step, you can go back, you can explore, you can change things in each part of it. It’s also what TurboTax does so well, in really complicated situations, and walks you through it. … You want to have a comprehensible, predictable, and controllable user interface that makes sense as you walk through each step. - Ben (31:13)