D2DO257: Love is in the Firmware

Ned Bellavance, co-host of the Day Two DevOps podcast.


Kyler Middleton, co-host of the Day Two DevOps podcast.



With the current cultural emphasis on AI and how it is changing our world, we often forget the human element of individuals and teams when building an effective software development team. On today’s Day Two DevOps, we explore the psychology behind software teams, psychological safety for those teams, and the role the advent of AI plays in developer culture. Dr. Cat Hicks and Dr. Ashley Juavinett delve into the conversation surrounding trust, inclusivity, and open communication to enhance team dynamics and productivity.

Episode Guests: Dr. Cat Hicks and Dr. Ashley Juavinett

Dr. Cat Hicks, Psychologist for Software Teams, Pluralsight

Dr. Cat Hicks is a psychologist for technology teams. She is a leader in applied behavioral science, helping real people in their workplaces. She has studied and understands how technical teams work together, change, and achieve goals, and how to measure those successes scientifically. That data feeds the inferences and the informed, insightful actions that lead to industry and team success.

 

Dr. Ashley Juavinett, Associate Teaching Professor, Neuroscientist, UC San Diego

Dr. Ashley Juavinett is a neuroscientist, educator, and writer, currently working as an Associate Teaching Professor at UC San Diego. She cares deeply about teaching and about learning about the brain and, ultimately, ourselves.

AdSpot Sponsor:  1Password

Bring all your unmanaged devices, apps, and identities under control. 1Password Extended Access Management ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible.  1Password Extended Access Management is available now to companies with Okta, and coming later this year to Google Workspace and Microsoft Entra.  Check it out at 1password.com/daytwocloud.

Episode Links:

Dr. Cat Hicks’ website 

Dr. Ashley Juavinett’s website

Developer Success Lab – Pluralsight

Change, Technically Podcast – https://www.changetechnically.fyi

Fostering Psychological Safety for Tech Teams – Technically Leadership Podcast Episode 004

Episode Transcript:

This episode was transcribed by AI and lightly formatted. We make these transcripts available to help with content accessibility and searchability but we can’t guarantee accuracy. There are likely to be errors and inaccuracies in the transcription. 

 

Automatically Transcribed With Podsqueeze

 

Drew Conry-Murray 00:00:01  Quick question. Do your end users always and I mean always, without exception, work on company owned devices and IT approved apps? I didn’t think so. So how do you keep your company’s data safe when it’s sitting on all those unmanaged apps and devices? 1Password has an answer to this question:   Extended Access Management.  1Password Extended Access Management helps you secure every sign in for every app on every device, because it solves the problems traditional IAM and MDM can’t touch. Check it out at 1password.com/xam. That’s 1password.com/xam.

 

Kyler Middleton 00:00:42  Welcome to Day Two DevOps, where the devooops is in the details. I’m Kyler Middleton, and on Day Two DevOps we explore those details and how you can learn from the mistakes of others and build a solid DevOps practice on day one and beyond. With me on this journey of learning and doing is my co-host, Ned Bellavance. Hi, Ned.

 

Ned Bellavance 00:01:01  Hi, Kyler. And, you know, we say the devooops is in the details. And I think it’s so important to feel safe and empowered to make mistakes, sometimes in public.  And that’s part of what we’re going to be talking about today. We’re going to talk about how love is in the firmware, how it’s about teams and individuals when you’re trying to build an effective software development team, and how you can unlock people’s potential by creating a feeling of safety within the group.

 

Kyler Middleton 00:01:26  Guiding us through this topic: our guests, Dr. Cat Hicks, a self-described psychologist for software teams, and Dr. Ashley Juavinett, a neuroscience professor. Without further ado, because this podcast is amazing, let’s get this started, let’s dive right in. Thank you so much, Ashley and Cat, for joining us on the show. I’ve been reading from both of you online, and Ned and I both read some of your articles and some of your research, and it is so exciting to have you two on the show. And before we get into too much about the specifics of your articles, because so many of them could be the entire podcast, let’s just talk about what do you each research individually. And then I’m curious where the interplay is too. So whoever wants to go.

 

Cat Hicks 00:02:06  I’ll go, I’ll go because Ashley’s pointing at me. So I think of myself as a social scientist in the world, trying to amplify our human experience and help us answer kind of big questions about how do I learn and how do I thrive and how do I innovate in a different kind of environment. I’ve focused a lot on the environments that help people achieve and sustain high performance. And sustaining high performance is not always the same thing as actually achieving it for a short amount of time, which is something that I really, really like to focus on. How do we help people do this in a healthy, humane, over time kind of way? The way our minds really want to work and learn together? I’m also an applied scientist at heart, so I’m really, really interested in how do we use the evidence tools that social science gives us and use those in all kinds of scrappy different environments.  So for instance, I’ve helped non-profits develop plans to justify the work that they’re doing in communities.  I’ve helped large tech companies talk about how they’re going to measure things with their engineering orgs. I’ve brought those sort of social science lenses to all kinds of problems in lots of different applied spaces.

 

Ashley Juavinett 00:03:14  Yeah, and I’m a neuroscientist, so you might wonder, like where Cat and I overlap. And it’s actually a little bit in sort of this like upskilling and teaching people how to code area. Because even though I’m a neuroscientist, I had to learn how to code on the job. And that’s true for so many biologists, so many neuroscientists. And some of my work these days is trying to understand the choices people make around, like learning how to code or not in biology, because, believe it or not, even in biology, there’s disparities between who learns how to code and who doesn’t. And so as a researcher, I’m asking why?  And as an educator, I’m asking how do we change that?

 

Cat Hicks 00:03:51  Yeah, I think that Ashley and I often end up feeling like we’re working on really similar questions, but at like slightly different levels.  Right. And so I might come to Ashley in our kitchen and be like, hey, I have a brain question, like, what mechanism could, is this plausible? Or hey, people are saying this thing about dopamine and tech, you know, like, do you think that checks out? But then she comes to me with questions too, about, hey, what do we know about behavior change? What do we know about learning? So there is actually more overlap than you might think.

 

Ashley Juavinett 00:04:18  Yeah. Or I’m like, should this be a five point Likert scale or a six point Likert scale? Like what do you, do you have opinions and Cat’s like, oh my God, how much time do you have?

 

Cat Hicks 00:04:26  Right, right, right. We have a lot of statistics discussions, which I think surprises people. But math is math. And so sometimes the kind of math you need to make predictions about brains or about behavior, we have a lot of like similar problems to solve.

 

Ned Bellavance 00:04:39  Yeah. Cat, I was reading through one of your papers that was talking about AI anxiety with programmers or the perceived threat of AI, and that was a really interesting paper to begin with.  But the thing that struck me was the amount of statistics that goes into that, and it reminded me that like half of every research paper is just a whole big old pile of statistics, and that’s not easy to do. I don’t think I got past like the 101 statistics class that I had to take in business school. 

 

Cat Hicks 00:05:05  You just won my heart like completely by being a person who actually went into the paper and even like took a look at the stats instead of just sort of saying, ah, we’re not going to even look at this. So I think that at the end of the day, we have a right to try to understand our data, and different people have to play at different levels of that. Psychologists actually have spent so many years asking ourselves, wow, you know, we’re trying to make predictions about incredibly multivariate, changing-over-time, complicated things. And so as a psychologist, you count yourself lucky if you can explain like 25% of the variance in, you know, some huge thing in the world. Like that’s an amazing effect in psychology. So yeah, we have pretty complicated statistics sometimes.

 

Kyler Middleton 00:05:46  The amount of time I’ve spent with my partner, who’s doing a social sciences PhD, trying to help her with R programming and iteration and arrays and data frames. I had no idea that so much programming was embedded in what I would think of as the opposite of computer science, which is, you know, social sciences, the squishy parts of, you know, psychology. But it’s not; it’s so much statistics, which is also now so much computer programming, that the world is getting smaller and smaller as we go along.

 

Cat Hicks 00:06:14  I’m having these meta light bulbs go off right now because programming in R was my moment of thinking to myself, I could actually be someone who programs, you know, and I could actually access those skills. And I’ve shared before, like on our podcast, that that was not an easy journey for me. And so it’s actually I just want to shout out, so validating and affirming to sit here with the two of you and have you validate like the skills of social science.  You know that those are technical.  I love that.

 

Ned Bellavance 00:06:39  Another meta light bulb that kind of went off for me, as you two were talking, is, I think of this sort of the separation of software and hardware or maybe dev and ops, and Cat, you kind of remind me of the things you’re talking about are a little more squishy. They’re a little more like software abstracted from the hardware. Whereas Ashley, would it be correct to say that since you’re a neuroscientist, you’re looking at almost like the hardware level of the brain?

 

Ashley Juavinett 00:07:04  I love that, and actually we have like many discussions to this end about biological reductionism and how much we can really just talk about the stuff of the brain versus like what the brain does. And Cat comes in with a much more phenomenological sort of worldview about, oh, it’s like the system at play. And yeah, so this is very much the heart of it.

 

Cat Hicks 00:07:22  Let’s call it what it is, because we had a lot of arguments about this early in our relationship. One of the biggest arguments we ever had that maybe was totally one sided because it was just a crisis I was having.   It was like, can I marry someone who’s a biological reductionist? And Ashley had to convince me that she still believed in love and human feelings.

 

Ashley Juavinett 00:07:45  Yeah. In the reality show version of our life, this was like the big crisis before the wedding, you know, of, like, I’m marrying a reductionist. Like, what does that mean?

 

Kyler Middleton 00:07:56  It’s in the firmware. I don’t know where that falls, but the love is in the firmware, I’m pretty sure.

 

Ashley Juavinett 00:08:00  I think that’s it. Yeah. Experience dependent plasticity, that’s what we’d call it.

 

Cat Hicks 00:08:05  Y’all are arguing that. But I’ll tell you what, everybody wants to come to me and ask about the behavior. At the end of the day, that’s people’s entry point. But what is actually kind of amazing about this is I’m kind of like a synthesis person. I love to break dualities, so I’ll just challenge this, because Ashley also is a systems neuroscientist and works a little bit more on these, would you say, like emergent properties of the brain, the things that happen when you think about the brain as being dynamic and moving over time.

 

Cat Hicks 00:08:33  And I actually feel like that is a lot like how we think about human behavior and, you know, psychology often we sort of think, oh, the psychology stuff is like the soft stuff. It’s the people skills. It’s the stuff that comes after you’ve solved all your difficult problems, and then you have like the time to invest in culture. My position on this as a psychologist for software teams is these mechanisms are actually how you’re solving your technical problems. They’re like deeply entangled. So it’s just important to not fall into that little pitfall of thinking. Just because it’s a little harder to get to we can put it off forever.

 

Ashley Juavinett 00:09:06  I love that I had never actually thought about that parallel between our work, because yeah, I am in this field of neuroscience where we’re moving away from thinking about single neurons. And I think in your world that’s like moving away from thinking about single people as like the main drivers. But actually it’s like the population activity and like the shape of everybody working together that matters.  So that’s a lovely parallel.

 

Ned Bellavance 00:09:28  You specifically called out software teams and not software developers. And is that sort of what you’re alluding to, that what’s actually important to measure is sort of the productivity and the efficacy of the team and not the individual in the team.

 

Cat Hicks 00:09:43  So hot button topic. Right. And software as we’re all pretty well aware, like if you Google developer productivity you’ll get a thousand million hits. Yes, I think that there are lots of things that happen in the world to create innovation, and some of those things are individual, like on the micro level, and some of them are group level and some of them are org level, and some of them are huge and cultural. And so to me, any one of those areas is a place where you could do your work and you could say, hey, there’s some interesting, important things we could measure. So for instance, we did a study on code review anxiety, and we measured that on the level of individual software developers and went really deep into what’s the individual experience of somebody and the choices they’re making.

 

Cat Hicks 00:10:27  And can we help them by intervening on the individual level. So sometimes that is actually the approach that I bring. But even when we do that, we have to acknowledge like the enormous role of context. So one thing that psychologists actually say about, you know, these experiences like anxiety, is it’s important to distinguish between whether we’re calling something a trait or a state. So this is just a nice little piece of psych jargon. Ask yourself, do I think about developers as like productive people or people who could be in a state of productivity? And I am so far on the side of being a state person. I love to think about how can we help somebody access their potential, get to the most optimal spot of their range? And that’s how I approach a lot of this. I also think it’s actually the most accessible thing for a lot of leaders to try to change. So to try to ask yourself, am I creating the best environment in which people can be productive rather than saying, I’m in here trying to figure out who is productive, like it’s some sort of innate quality that you’re born with, which I don’t think it is.  I think it’s very complicated. Does that answer your question?

 

Ned Bellavance 00:11:38  I don’t know.  What was the question?

 

Cat Hicks 00:11:39  So yeah, I’m more on the side of teams than individuals.

 

Ned Bellavance 00:11:45  Ah, yeah, that totally makes sense to me. It’s almost like the nature versus nurture discussion. And it turns out it is probably some of both. I’m sure some people innately tend to be more productive, but you can create an environment that really impedes that productivity by making it really hard to get good feedback. You’re terrified to get feedback because you think everybody else on the team is out to get you.

 

Cat Hicks 00:12:08  Yeah, and in our research, we found really big effects from changing that environment. And that’s what’s really exciting to me, you know? So for instance, you called out the AI skill threat study. And, you know, in that study we looked at where are there teams that have already cultivated a really good culture around learning. So as a team, we learn. That’s one of the ways we measure that question. We share things together as a team.  Right. It’s kind of a team level metric. And also teams that have cultivated a very high sense of belonging. So we are a team where we reinforce and validate for each other that all kinds of different people belong here. And when we looked at those teams confronting like a very anxiety provoking experience, like the introduction of AI and something that not everyone on the team agreed with, and a place where developers are feeling just a lot of uncertainty and sometimes big, big disagreements about how these tools should be used. Those teams that had invested in those environments almost had like a protective buffer against the anxiety and the bad experience. So I think of it like, yes, we’re all like individuals, you know, making a lot of individual choices with our own individual capacities. But the team environment can kind of be like the armor you put on, you know, or the superpower or the tool that you get to like better navigate that environment.

 

Kyler Middleton 00:13:24  I love that.  I like both of those framings, both the catalyst that unlocks developer potential framing and also the defense against instability framing.  I think those are very distinct, but also both very important problems that managers have to solve, and companies have to solve for people to succeed.

 

Ned Bellavance 00:13:41  You have to be able to measure things before you can make change. So how do I measure the culture of my software development team? How do I figure out where I’m at before I can figure out where I’m going?

 

Cat Hicks 00:13:52  Yeah, this is like an immediate rubber hits the road question. Right? Part of what my research lab has tried to do is work on measures for this stuff. So you might start with surveys, but, you know, there’s a world of difference between the terrible marketing survey that you might get from some vendor that you will never in your wildest dreams, like, give an honest response to versus like a survey that comes to you from a team of scientists where people have done the work to say, hey, I’d like to invite you to please engage with this project with me and share in a really authentic way. You know, your experiences here.

 

Cat Hicks 00:14:28  So I hope it’s my aspiration that the second category is what my team has tried to put out. So you can do a lot of great scientific work to connect software teams to better methods of measurement. For instance, in the area of belonging, which I mentioned, sense of belonging is a well studied measure out there in psychology. So people have tested different inventories, different ways of asking about it across many, many groups. My research team has also extended that work. We developed a software team specific measure. So we have been experimenting with testing, validating, doing all those fancy stats on how these different surveys perform. That’s work I’m really proud of and you can actually access it. It’s all open. But other ways of doing it exist. I mean, I think that you might consider qualitative methods like interviewing folks and then developing measures based on that. I also think trace measures are an important part of the social science work that I do. So we might operationalize sometimes, hey, does this team look like it has time for learning? Are they engaging in learning opportunities at the org?

 

Cat Hicks 00:15:29  So depending on your question there’s like many ways to tap into and measure team culture. I think a lot of it depends on your orientation, you’re even allowing yourself to say, what I’m going to assess is the team and the culture. Ashley, what would you say about measurement? Because that’s a big hot topic between the two of us as well. 

 

Ashley Juavinett 00:15:46  Yeah. I mean, it’s been interesting because I feel like I walk into assumptions where I’m like, well, we’re recording brain activity. That’s like a more real thing than recording human behavior or something. But it’s troubled with the same problems of noisy measurement and proxies for things that, you know, aren’t the same thing as the thing you’re trying to measure. And so it’s been really interesting. We have these discussions about what are we actually measuring and what does that tell us. And I think those same things cut across disciplines.

 

Cat Hicks 00:16:15  We’re both measurement theory nerds. So we’re kind of falling down that rabbit hole now. But like one good example is we often measure blood flow in the brain. Right. We use that as a proxy for different kinds of brain activity, but it’s not actually directly measuring some of the things. It’s similar in psychology. Sometimes you might find really good proxies. If everybody feels really good about their code review process, that taps into probably a larger cultural thing. You might ask yourself, where are the big signals in my environment? And another thing we do in my lab is we bring people in sometimes and do empirical methods where we can directly look at their behavior. So the code review study is a good example of that too, because we actually gave people a workshop intervention to help them manage their anxiety, and we could directly measure what they were experiencing during our experiment. But then also they left the experiment, they went back to their workplace and they reported back to us: how did I feel in the next real code review? So that was kind of a pretty direct measure of behavior.

 

Ashley Juavinett 00:17:09  I know something that you’ve dealt with is people being really skeptical about, like self-report, like, oh, people can’t tell you how they feel.  Right?  What do you say to those people who are like, people are wrong about their own mental state?

 

Cat Hicks 00:17:20  Yeah, well, self-report is what we’ve got a lot of the time. And so I’m really curious about how to make it better. I think my answer is, even though I face a lot of skepticism in tech as a psychologist, and a lot of people who don’t see the value of studying human things, I try to come from a place of compassion and non-defensiveness and say you are right to always ask yourself, is this a good measure? So we know things about how to make surveys better. We know we can use measures from scientists who have tested measures over and over again. We know we can do things in writing good surveys to reduce social desirability. A big pet peeve of mine is that many applied research teams in tech don’t recruit very broadly. They don’t try to go recruit, you know, many, many different people from different groups and then ask, do these items actually work for lots of people: developers outside the US, you know, women in tech, all kinds of groups.

 

Cat Hicks 00:18:16  We do that, and I’m really proud of that. Another thing that I like to tell people is like even a kind of a bad measure, but implemented in the right way that people are able to answer consistently can start to tell you a lot, or if you kind of are making really careful, thoughtful comparisons. So one example is like we asked people about how they feel about the visibility of software work inside of their organization, and we could probably sit here forever and say, there’s a lot to talk about. Like how visible should the work be? How should we measure it? What should we talk about? But I often tell people, if you look at the difference between how managers answer that question and how developers answer that question, you might learn a lot about your org, even if we haven’t quite figured out the perfect question.

 

Ashley Juavinett 00:18:58  It always comes back to experimental design, right?

 

Ned Bellavance 00:19:01  Yeah, absolutely. When I read that the one study I was looking at was based off of surveys, you know, immediately the light bulb went off in my head. I’m like, well, this is self-reporting; it’s surveys. Like, how much can we trust the information that was gathered? But if you have a large enough sample size and you can control for some of it, it does seem to give you some useful information back, especially if you’ve designed it well. So I appreciate that you’re like ten steps ahead of me. I don’t even need to worry about that.

 

Kyler Middleton 00:19:26  I think my fear would be like, if the culture is so bad and psychologically unsafe that folks will not accurately reflect it. Because we all know that anonymous surveys are not really anonymous, I’m doing quote fingers here on video, because we want to have traceability to different organizational structures. So, like, we’re tracking who’s responding to the anonymous surveys. And I don’t feel very safe being truthful on those kinds of surveys.

 

Cat Hicks 00:19:49  Yeah, I think that in that case, you probably cannot learn what’s happening from a survey and you probably, maybe even shouldn’t be running a survey that’s coming from your boss. You know what I mean, right? I think that that’s a fair thing to acknowledge that we don’t acknowledge enough.

 

Cat Hicks 00:20:03  When I put out this research on developer thriving, one of the factors that we called out, by the way, this was survey based research. But one thing that I love about it is that you can actually take measures that were developed by people who did things that were not surveys, you know. So that’s one thing that gives me a lot of confidence is not just our surveys, but also the things that we are measuring have been tested in experiments, have been tested in like longitudinal methods. So you can actually be in dialogue, you know, with all these other folks who have worked on this topic. But to get back to your point, we measured agency. And that is a really key one in that model, because that taps into whether people feel like they can voice disagreement, you know, whether they feel like they have a hand in how their success is being measured. Some of these parts of institutional fairness probably would come up when you start to ask about this sort of thing. And, you know, I think a lot of that is on leaders to make sure that that kind of culture exists and then to say, well, because, you know, you can trust me because, you know, I’m going to let you voice disagreement because we’re going to create a safe space.  Now, we can do this work of asking like really deep questions about our org. But yeah, I don’t think that this can always come from HR, honestly.

 

Drew Conry-Murray 00:21:17  Let’s pause for a message from today’s sponsor, 1Password. Imagine your company’s security is like the quad of a college campus. There’s a nice brick path between the buildings. Those are the company-owned devices, IT-approved apps, and managed employee identities. And then there are the paths people actually use: the shortcuts worn through the grass, the actual straightest line from point A to B. Those are the unmanaged devices, shadow IT apps, and non-employee identities like contractors. And most security tools only work on those happy brick paths, but lots of security problems take place on the shortcuts. 1Password Extended Access Management is the first security solution that brings all these unmanaged devices, apps, and identities under your control. It ensures that every user credential is strong and protected, every device is known and healthy, and every app is visible. 1Password Extended Access Management solves the problems traditional IAM and MDM can’t. It’s security for the way we work today, and it’s now generally available to companies with Okta and Microsoft Entra and in beta for Google Workspace customers. Check it out at 1password.com/xam. That’s 1password.com/xam. And now back to the podcast.

 

Ned Bellavance 00:22:22  I like the way that you’re saying that it’s up to the leader to help foster this kind of culture. What sort of things should an effective leader be doing with their software team to foster that sort of culture where people do feel agency and the ability to disagree with decisions that have been made.

 

Cat Hicks 00:22:41  Yeah.  So one thing that I’ve seen in my last couple of years, kind of being deeply embedded in the psychology of software teams, is there’s a real lack of empowering teams to actually contribute to the design of their culture. And so, first of all, we have a lot of messages going on for people all the time. And some of those messages are like the poster on the wall, like the time when your boss is like, of course we value learning here and everyone should learn.

 

Cat Hicks 00:23:09  But then there are the moments where you have something come up that’s unexpected and scary, and maybe a junior developer tries to raise their voice at a meeting, or you go to your boss and you say, I need more time on this project. You know, those are moments where we are deeply scanning our environments to ask, what is the truth here? Like what are the rules that govern whether I’m going to be successful here? If people do not take those moments really seriously and try to make them really, really safe, that can undo a million thousand other, you know, initiatives. So I think that it’s really, really important to take that psychological safety seriously. It’s important for the senior people on software teams to know that they are modeling whether or not they’re going to have that attitude of like, there’s no dumb questions. You know, we’re going to adjust the plan if your insight is valuable. We call them microinclusions sometimes in psychology. And it’s this idea that you can validate that someone belongs there, that their work is valued almost the opposite of a microaggression.

 

Cat Hicks 00:24:11  Right? A microinclusion kind of gives people this deep message that this is how we’re going to be as a culture. I have some more tactical things too, you know, stuff. I’ll just throw a few things out. One thing is like metrics are applied too broadly in engineering organizations, and there’s always large groups of people who are like, Because I’m on a backend team, they don’t get me. You know, there’s always this sense of like, find those people and talk to them and sort of address it and maybe even acknowledge, like we have some gaps in our visibility. I think that also rigidly defining success from the top down without ever having a period where you’re saying, we’re going to pilot test a set of measures for engineering, we’re going to ask you for your real, honest feedback. And that’s a time when leadership can actually prove to the engineers that they do care about their opinions. Right. I think a lot of leaders probably just feel a lot of fear, and they want to, like, have the answer all the time.  And so they want to like come really top down and be like, here’s the plan and it’s super standardized. So leaders need to do their own work honestly to get some of that stuff out of their heads.

 

Kyler Middleton 00:25:19  There’s this scene in Ted Lasso, which is an incredible show if people have not seen it. He comes in and this team hates him initially; he’s the coach of a soccer team in England, or football if you’re listening overseas, and he implements a feedback box. And people mostly write explicit, like, terrible, mean things in there. But one person says, well, the showerheads don’t work, and no one expects him to do anything about it at all. And he fixes the showerheads, and he doesn’t even tell them he does it. And I think he just proved that he is listening and that their concerns matter. And I relate that to everything that you just said, Cat, that you’re proving that you are listening and that their opinions and experiences matter. And that is huge.

 

Ashley Juavinett 00:25:59  I love Ted Lasso. I love that you used that example. And I feel like these things are true across humans in any organization. So much of what you just said, Cat, is the exact sort of stuff we talk about when we talk about inclusivity in the classroom: what happens when someone raises their hand? How do you react to what is maybe not the best question you’ve ever heard? In that moment, do you treat that person with respect? Do you give them the space to grow? What does that look like? And yeah, these things just really apply across humans.

 

Cat Hicks 00:26:30  There was a story from a developer who I interviewed, and he told me about a time when he had a manager, and he went to this manager, and he was really afraid, and he said, hey, I have this idea for this other solution. It’s an area I’ve never worked in and I don’t know what’s going to happen. Honestly, I can’t tell you why, but I’m obsessed with it. Please, can you give me the space to do it? And the manager said, absolutely. Like, I’m excited, this is what engineering is all about. Sometimes we’ll make the room. And it was only a difference of a week or something like that on the project timeline. The kicker is this was like 25 years ago in this person’s career. The person talking to me was a heavyweight staff member of a technical team. And he remembered this moment from his career, so far back, when someone had just given him one week, and he said, I credit that moment and that manager with so much of continuing in this career. So I think about the trade-offs of this and the amazing wins we can get. I mean, a week of time to let a developer explore, and then 25 years of amazing technological work later. That’s the stuff I’m obsessed with.

 

Kyler Middleton 00:27:37  They’re so formative. Let’s pivot a little bit, but not very far, because we’re talking a lot about the stress our teams are under, how to mitigate it, and how to help the people. And one of those stressors that’s changing the whole world is AI. In 2024 we’re legally required to ask about AI during each of our podcasts. Whenever we talk to engineers, and I personally feel it too, Ned and I are both actively working as engineers, it’s a little scary that it might one day replace us or make us not as valuable as we were before. That can be really scary. So how do you two see AI impacting developer culture, and what can engineers do about it? What can managers do about it? How do we mitigate all this scary, cool stuff?

 

Ashley Juavinett 00:28:19  Yeah, I can start by just saying I have accepted the reality of AI for our future, especially for technical things like writing code. It seems silly to me to ever go back to working without AI assistance. And it directly impacts the way I teach and the way I coach students through using it, because to me, that’s the important thing: my students are going into your developer teams, and I want them to be people who know how to use AI in an effective way, but also in an ethical way, and who know what the implications are. So, yeah, I think about that all the time these days.

 

Cat Hicks 00:28:56  What do you think? Do your students get things wrong with AI, and what do you do to help them navigate into it?

 

Ashley Juavinett 00:29:02  Yeah, totally. There’s a lot of discussion in computer science education right now about what it is, then, that students need to learn. And I feel like one of the biggest things is catching mistakes, right? And testing your code, and really making sure your code is robust to a million different scenarios. So that’s one of the biggest things I do now: rather than having them write code de novo in my intro programming class, they edit code to fix it, or identify what it does in an edge case, things like that. Those are the skills we need now, I think.
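A hypothetical exercise in the spirit Ashley describes, editing code to handle an edge case rather than writing it from scratch, might look like this (the function and its bug are invented for illustration, not taken from her course):

```python
# Exercise: this function works for typical input, but the original version
# crashed with ZeroDivisionError on an empty list. Students are asked to
# find the edge case and make the failure explicit and intentional.

def mean(values):
    """Return the arithmetic mean of a list of numbers."""
    if not values:  # the fix: fail with a clear message instead of crashing
        raise ValueError("mean() requires at least one value")
    return sum(values) / len(values)
```

The teaching point is less the fix itself than the habit: before trusting code (your own or an assistant's), ask what it does on the inputs nobody mentioned.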

 

Cat Hicks 00:29:31  Yeah, I have a little bit of a more complicated relationship with AI as a phenomenon in the world. I think we mean a million different things when we say AI, so it’s hard to give one answer. I’m someone who believes in the power of using data to predict things, and I think that power is amazing. Psychologists used to hand calculate their statistics, and it’s actually quite well known that even when we could do stats on computers, psychology professors were replicating the same statistics by hand for a long time, because they did not trust the computer. So a little bit of computer adoption history there. But I think it’s important to not see every adoption of AI as inevitable, and to really say, well, we have a very high bar for what we introduce into knowledge work, because knowledge work is so precious and what we’re doing is so valuable. That doesn’t mean I want developers to have to do a lot of mundane tasks that AI could probably start to replace, or that we could automate; that’s a cool possibility. But I do worry that we don’t always value the human problem solving that developers do, and we don’t always put that first and say, well, no matter what we do with this technology, we’ve got to understand the human problem solving and protect it. One way we’ve tried to do this in my lab, as a little tiny experiment, is we created a pre-mortem about AI adoption. A pre-mortem, you folks have probably heard of this, but if you haven’t, is when you bring people together and ask, let’s sit down and go into an imaginary place where we ask what could go wrong with this thing we’re trying to do. Usually teams do this about technical decisions and projects. We like to challenge teams to do this about their own cultures.
So we’ve had software teams come in, sit down, and do a pre-mortem about the risks of AI, and you would be amazed at the stuff that comes up. It always starts with this really deep gloom, this kind of fear, and people are really afraid to speak out and say, well, I actually have these fears about AI. Does that mean I don’t understand AI, and I’m not technical enough? I’m afraid to share them. So we get everybody to start surfacing fears, and then suddenly you get all of this amazing brainstorming about the wildest possible security breaches. Or someone said, I’m worried my team’s going to adopt AI, and in two weeks I’m going to be out of a job, you know? And then, of course, everyone in the room could be like, oh, that’s not going to happen.

 

Cat Hicks 00:32:00  So we actually have seen teams actively problem solve and emerge in this one hour, you know, moving from a place of paralysis and anxiety and avoidance through some actual decisions where they say, well, you know, actually, I know how to do a lot of things. I know how to vet a tool. I know how to set up a test. How are we going to do this and take this same beautiful engineering mindset to how we adopt these tools? So that’s what I’d love for more teams to get to do.

 

Kyler Middleton 00:32:30  I love that: listening to your employees’ framing of the thing that’s scaring them, and then helping them get to the goal, connecting the fears to the goal and how we’re going to fix it. I think that’s very important.

 

Cat Hicks 00:32:41  Yeah. People have insight about what’s happening to them. You know.

 

Ned Bellavance 00:32:44  Imagine that, people are thinking about what’s going on in their lives.

 

Cat Hicks 00:32:48  That’s my contention. That’s my bet that I’m making across my career.

 

Ashley Juavinett 00:32:52  Are there people like even in tech that you feel like are super resistant to the inclusion of AI in this kind of work?

 

Cat Hicks 00:32:59  You know, yes, there are big group differences. We found that in our study, actually, and I don’t think tech’s talking about this enough. I would not always frame this as resistance. I think that’s a bit of a leading framing, like saying, well, we’ve got to move all these people. I think it’s very complicated. There is probably resistance, like, hey, I just don’t want it no matter what. And I think there are also forms of insight that are not being bubbled up to leaders. One big signal of this that I’m pretty concerned about is that minoritized folks in tech, specifically racially minoritized people in tech, are reporting far more awareness of low quality AI output. We found that in our research, which I thought was really interesting. There are many possible explanations. One is that they’re encountering a lot more times when this technology fails, because of the embedded biases in some of the training of these models. And that’s a really important source of insight for us to listen to. Like, hey, if someone has access to a certain form of experience, there’s a difference in how they see the tool.

 

Cat Hicks 00:34:07  They say, I trust it less, the quality is lower. We found this with experienced engineers; this was not a difference of experience, it wasn’t a difference of career. Having those diverse samples let us ask those questions statistically. This actually shows up very specifically for this group. There are also some differences, like, women on software teams are more likely to be the only person not using an AI coding tool while the whole rest of their team is, and I’m not sure why that is. We do see that women on software teams can get less access to things, can be a little less able to advocate for themselves, maybe feel less included in certain technical spaces. Of course, that’s a widely known thing, but it’s showing up with AI, and I think we don’t know enough about that.

 

Ashley Juavinett 00:34:52  I think it makes sense to me at some level, because the folks who are maybe most vulnerable are less likely to dive into this new, uncertain, kind of risky thing. We don’t know how it’s going to be assessed. If you write half your code with an AI assistant, does that count? There are all these open questions, right, that we don’t have answers to. And so the folks who are most vulnerable are going to be the people thinking about that; even if they’re not consciously thinking about it, it still feels kind of risky. I feel like I see that with students too: the students who are really worried about being called out for cheating are less likely to use an AI assistant, even though there are some really great uses of AI in the classroom to help you learn. So yeah.

 

Cat Hicks 00:35:33  I think the point you’re making is so important, because it’s not just about AI, but how are other people going to think about me and my skills? Are other people going to come at it assuming I’m competent, or assuming I’m not competent? And we know that’s super different for a lot of people. I will counterpoint one thing you said, Ashley, which is in our AI skill threat study, and I think this is very important: folks who were racially minoritized in tech, those developers were actually more likely to say, I’m upskilling in AI, I’m trying to learn about it. At least for professional software developers, it might not be true for your students, they are full of coding skills, full of self-efficacy, full of motivation. It’s not necessarily an empowerment difference, in my opinion, but perhaps, again, they have access to certain kinds of quality questions, and they’re really concerned about those.

 

Ashley Juavinett 00:36:22  Yeah. Thanks for pointing that out. I mean, it’s a more sophisticated argument than just like, oh, we’re afraid of AI, it’s like, no, no, we see the power in AI, but we have concerns.

 

Ned Bellavance 00:36:30  It’s important that they voiced those concerns. Just from using Copilot on my own, I’ve found several code quality issues with it, and I do have to check what it actually gives me every time. Sometimes it works perfectly well, but sometimes it’s just absolute nonsense. And maybe part of that is how well I phrased the question when I ask it to produce something. But part of it is just, yeah, that’s where we’re at with those tools right now, and it needs that feedback from people who are skeptical and are not getting the right answers for it to improve.

 

Kyler Middleton 00:36:58  It’s interestingly human in how it solves problems. If you say, write an array that does this, it will do so and do a pretty good job. But like, is it secure? Well, you didn’t ask it to be secure. You asked it to write an array.
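Kyler's "you didn't ask it to be secure" point can be sketched with a small, invented example (the table and function names are hypothetical, not from the episode): a literal "just write the query" request might yield string interpolation, which does the thing but is open to SQL injection, whereas a prompt that also asks for safety yields a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # What a literal "write a query that finds this user" request might
    # produce: string interpolation. It works, but attacker-controlled
    # input can rewrite the query (SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(conn, name):
    # What you get when you also ask for security: a parameterized query,
    # so the input is treated strictly as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()
```

With a payload like `' OR '1'='1`, the unsafe version returns every row in the table while the safe version correctly returns nothing, which is exactly the "it did the thing, but is it secure?" gap.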

 

Ned Bellavance 00:37:11  It did the thing.

 

Cat Hicks 00:37:12  The more we understand ourselves, too, the better we might be at using these things. We are making assumptions about what it can do, because that’s just our human problem solving. And I’m sure you see this in your students, that they think, oh, it will just think about these things, and you’re on this journey often, right, Ashley, of teaching them what computational thinking is.

 

Ashley Juavinett 00:37:32  Yeah, I think when you’re like a novice to anything, you don’t see the entire solution space.

 

Ashley Juavinett 00:37:36  So a student might say, please plot this data, and Copilot is like, cool, I made a histogram. The student’s like, I didn’t want a histogram, I wanted a box plot. You don’t even understand that there are other possibilities, other ways of thinking about the problem you’re posing.
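Ashley's example can be made concrete with a small matplotlib sketch (the data here is made up; the point is only that "plot this" is ambiguous and an assistant has to guess):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt

data = [2.1, 2.5, 3.0, 3.2, 3.8, 4.1, 4.4, 5.0]

fig, (ax1, ax2) = plt.subplots(1, 2)

# "Please plot this data" is underspecified: an assistant might pick a histogram...
ax1.hist(data, bins=4)
ax1.set_title("What the assistant guessed")

# ...when the student actually wanted a distribution summary as a box plot.
ax2.boxplot(data)
ax2.set_title("What the student wanted")
```

Both are valid plots of the same numbers; knowing the solution space well enough to ask for the right one is the skill the prompt alone doesn't teach.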

 

Cat Hicks 00:37:51  Worth shouting out, though: I’m not a computer scientist, and I have deeply benefited from asking some of these tools, what could I do to make this code I already wrote faster? I would consider myself programming adjacent, you know, I program for stats, I program for science, but I did not have access to all kinds of things that you might have access to if you had courses in a major in this. And it’s been wonderful for me to get to say, I don’t want to get tripped up in the syntax. I already know that I want this kind of plot; can you just help me figure out how to do it in this framework? So the routes it might open up, to people who never thought they could get entry into some of these technical skills, I think that’s a beautiful possibility also.

 

Ned Bellavance 00:38:30  I guess part of it’s just going back to what’s the true value that a developer brings to the table? Is it being able to churn out thousands of lines of code, or is it that ability to problem solve, to think logically through a solution, break it into component parts and create abstractions and then implement those abstractions? Well, the front end of that seems much more important than the implementation. Not that the implementation isn’t important, but you can oversee an implementation. Coming up with the core concepts first seems to be where the real value is. That’s my perception.

 

Cat Hicks 00:39:00  There’s a question I like to ask my audiences of engineering leaders sometimes when I present about developer science. I say, imagine you’re a software developer, and raise your hand if you would like to just magically churn out so much more code, and everybody raises their hand. And I’m like, okay, stay in that place. Now imagine every single developer around you can also churn out ten times more code. Do you feel good? Do you feel the same? And it’s a beautiful moment, because I think for a long time we’ve been caught up on this idea that software developers can grind, and take pride in grinding, and you’re not going to win a race grinding against a machine. So we have to start defining ourselves as something else, something different, something more human maybe.

 

Kyler Middleton 00:39:48  That is a wonderful outro. So let’s wrap this. I want to talk for about 3 or 4 more hours, but we are limited in how long we should make this. So where can folks find both of you on the big bad internet?

 

Ashley Juavinett 00:40:03  You can find me on LinkedIn. I’m on all of whatever current social media platforms there are at analog underscore ashley and I have a website ashleyjuavinett.com.

 

Cat Hicks 00:40:14  You can find me at drcathicks.com. I have a blog there. I’m also working on a book project, The Psychology of Software Teams, so hit me up if you want to talk about that project. You can find us at changetechnically.fyi, where we have our own podcast where we talk about STEM psychology, neuroscience, inclusion, pathways into being technical, who even gets to be technical, and get really meta with it. And you can also find me on way too many social media platforms. My handle is usually grimalkina.

 

Kyler Middleton 00:40:46  And I’ve listened to some of Change, Technically, I think two of the three episodes so far, and they are incredible. This episode will be released in a couple of months, so there will certainly be more by then. Go listen to it, it’s fantastic. Thank you to Cat and Ashley for appearing on Day Two DevOps, and virtual high fives to you for tuning in. If you have suggestions for future shows, we would love to hear them. You can find our podcast on LinkedIn or use the feedback form at packetpushers.net/FU. The FU is for follow-up. You can find me, Kyler Middleton, blogging over on letsdodevops.com, and my stupendous co-host Ned Bellavance at nedinthecloud.com. We’re both very active on LinkedIn.

 

Kyler Middleton 00:41:24  Stop by and say hello. Do you know that you don’t have to scream into the technology void alone? The Packet Pushers podcast network has a free Slack group open to everyone, so come on by. Visit packetpushers.net/slack and join in. It’s a marketing free zone for engineers to chat, compare notes, tell war stories, and solve problems together. packetpushers.net/slack. Until next time, just remember that DevOps is awesome and so are you.


