Expect Only Failure If You Have Poor Feedback Loops
Getting stuck in your work, never finishing things, and feeling the pain of never really “getting there” can all be signs of a poor feedback loop.
A feedback loop is, unscientifically speaking, the process by which you are informed of an action’s outcome. The faster and more exactly you know the outcome, the faster you can learn. The faster you learn, the faster you can improve. If this seems vague, it’s really quite simple. Say you are redecorating the apartment with your partner; you’ll probably have a relatively quick and easy feedback loop. When you hang the photo, you ask “Does it look good here?” or “Is it straight?” and you’ll probably get an answer quickly (an efficient loop) that is hopefully informative and to the point (good feedback).
For the obligatory Wikipedia article, see:
Feedback - Wikipedia
The more feedback (or another party’s communication) is delayed, the more anxiety it creates, and the more demoralized we become. We humans are good at imagining (scary) things where darkness and ignorance prevail. That has kept us safe, but dumb, for a long time.
The worse the feedback is, the harder it is to know how to adjust in the next interaction. “Quality” here is not about whether what we learn is pleasant, but whether it tells us how to improve (in an overall sense).
Both of these aspects work together: We want fast, rich feedback.
An assortment of scenarios that prove my point
Let me give a few examples from my life to illustrate bad feedback cycles and how these can be improved.
The colleague who won’t reply
I’ve already alluded to this story in Don’t Leave Things Hanging, but it bears mentioning in this context.
I asked for important advice on a question at work that ended up on the table of a new colleague. The colleague, while pretty easy to deal with in conversation, was hard to deal with in asynchronous communication such as email. Replies took an indefinite time (weeks?) to arrive, and the communication was often guarded, neither customer-oriented nor solution-oriented.
This situation is a mix of poor feedback quality and a dysfunctional loop. I’m pretty sure that in certain work cultures, or with certain individuals, such behavior could quickly make things take an ugly turn.
How to solve this?
Dealing with situations like this will require some clear-headed and mindful tactics.
Depending on your relationship with the person, I could recommend either a book like The Manager’s Path: A Guide for Tech Leaders Navigating Growth and Change (if you’re a manager) or How To Deal With Difficult People: Smart Tactics for Overcoming the Problem People in Your Life (in general) for ideas. Overall, being clear about the problem as you experience it, non-aggressive in your approach, and “listening in” to whether the other party senses a similar communicative dissonance will help you reach a better state.
I solved my situation by doing precisely that, taking the opportunity to sit with the discomfort and be frank about my negative experience and my confusion over the delayed responses. This also helped me understand my own part in the situation. All in all, that minor discomfort helped facilitate a better working relationship.
The corporate assignment where I learned to hate manual testing and poor tooling
The reason I test code (software being my line of work) is that in my first corporate job, when I was much more junior, our coding tools gave no feedback while I was working. Issues like typos, bugs, and styling problems weren’t surfaced to me in real time, something that can quite easily be achieved. Alas, that was not my joy to experience in this job.
Today (and even then) this could have been solved with an adequate smattering of static code analysis running in my development environment. However, at that place and at that time, this didn’t happen. Instead, every time I wrote code I’d (as best as I could) try to ensure it worked functionally on my laptop.
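To make “feedback while working” concrete, here is a minimal sketch of the idea: flagging a problem the moment code is written, without running it. Real setups wire a linter (pyflakes, ESLint, and the like) into the editor; this illustration uses only the Python standard library, and the snippet being checked is invented.

```python
# Minimal sketch of fast feedback: catch a syntax error without executing
# the code. A real linter would also flag undefined names, style, etc.
def quick_check(source: str) -> list[str]:
    """Return problems found in `source` without running it."""
    problems = []
    try:
        compile(source, "<snippet>", "exec")
    except SyntaxError as err:
        problems.append(f"line {err.lineno}: {err.msg}")
    return problems

bad = "def greet(name)\n    return name"  # missing colon
print(quick_check(bad))
```

An editor plugin runs checks like this on every keystroke or save, which is exactly the loop I was missing.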
Also, no tests were written, thanks to this poor youngster’s overconfidence and inexperience. So there was essentially no way of strictly proving the code worked other than my own word.
As you’d expect, it would typically take several hours, up to a day, to get feedback on the code I wrote, coming from someone doing manual testing somewhere else. Many of the issues raised were trivial matters that should never have reached the tester in the first place (notwithstanding the question of having dedicated testers at all, though here they clearly filled a role). I was equal parts embarrassed by the nature of the mistakes and angry at the sub-par tools.
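A single automated test, running in seconds, would have caught most of those trivial issues before a tester ever saw them. A hypothetical illustration (the function and its edge case are invented for the example):

```python
# Hypothetical example: the kind of trivial mistake a manual tester would
# report hours later, but a unit test flags in seconds.
def full_name(first: str, last: str) -> str:
    # Joining and stripping avoids a stray space when one part is empty.
    return f"{first} {last}".strip()

# Minimal assert-style checks; in practice these would live in a pytest
# suite and run automatically on every change.
assert full_name("Ada", "Lovelace") == "Ada Lovelace"
assert full_name("", "Lovelace") == "Lovelace"
print("all checks passed")
```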
This was a turning point in my career, toward the automation and feedback loops I care a lot about these days. There are two key parts to improving the situation:
- Move manual processes to automated ones; make them repeatable and independent of individuals
- Implement better tooling to drive down feedback time
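A sketch of the first bullet: a manual checklist turned into a script that anyone, or a CI server, can run identically every time. The specific commands below are placeholders; a real project would run its own linter and test suite.

```python
# Sketch: replace a manual checklist with a repeatable script. The commands
# are placeholders standing in for a real linter and test run.
import subprocess
import sys
import time

CHECKS = [
    [sys.executable, "-c", "print('lint ok')"],   # stand-in for a linter
    [sys.executable, "-c", "print('tests ok')"],  # stand-in for a test run
]

def run_checks(checks) -> bool:
    """Run every check, report elapsed time, and return overall success."""
    start = time.monotonic()
    ok = all(subprocess.run(cmd).returncode == 0 for cmd in checks)
    print(f"{'PASS' if ok else 'FAIL'} in {time.monotonic() - start:.1f}s")
    return ok

if __name__ == "__main__":
    run_checks(CHECKS)
```

Printing the elapsed time keeps the second bullet honest: once the loop is measured, it can be driven down.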
I could spend a long time discussing this—but I’m not going to, not this time. There is still a great need to learn (and teach) about the practices around continuous delivery and what they mean for modern, good software engineering. While these ideas have existed for 20+ years, it’s clear to me that they are not always well-understood.
For resources to start with, in short, there are excellent books like Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation and Modern Software Engineering: Doing What Works to Build Better Software Faster, but if you prefer a quicker web version with the same essential facts, look no further than MinimumCD (CD for continuous delivery).
At the end of my article Software Delivery That Makes Sense you’ll also get some more guidance.
Getting to know the tooling and how to use CI/CD is a great first step, one you don’t have to wait for anyone to grant you: just do it! Go learn. On the organizational side, however, expect creaking and discomfort; that’s where the shoe truly starts to chafe once roles become redundant (or people need to be retrained).
To end this story: you will never appreciate the above more than if you’ve lived through the pain of manual testing, poor tooling, and slow feedback cycles.
When feedback hurts too much
A few years ago, our enablement team at Polestar procured Codescene, an excellent tool aimed specifically at uncovering “deep details” from source code: who is working on what, how often they touch it, the amount of technical debt, and similar things. As a tool, it’s not really ideal for junior, or perhaps even intermediate, teams, not because of any UX faults, but because of the nature of what it intends to clarify. Junior and intermediate developers are often fully fixated on other, humbler, and more tangible issues.
A few teams started using it, but unfortunately, less than a year after adopting it, we stopped. Why? It’s a really good tool, and the pricing wasn’t that bad…?
Well… A key learning we took from the tool, or rather its usage, was that our teams were roughly evenly split in their affinity for something like it. On one hand, some teams found it very exciting to know such details; they felt it could improve their code as well as the way they work. On the other side of the coin, some teams found it equally troubling to have static analysis with an “opinion” on what they did and how.
Having built developer tools for years, and having tried to spin up a startup earlier this year, I decided against building something that could meet the same fate. Tools like Codescene can’t easily be integrated in contexts where acceptance for them is low, or where conditions are too slippery for something that addresses psycho-social aspects and ways of working. In effect, many junior teams aren’t set up to handle such a tool; you grow into wanting it. This is deeply unfortunate, as such tools could make it easier to grow. Instead, they get relegated to being nice, semi-expensive jewelry on teams that have less “real” need for them.
The conditions for providing feedback are thus incredibly important. It’s not just the nature of the feedback, as some “objective fact”, but also who (or what) delivers it, and how. People, and especially groups, will be primed to be defensive toward certain feedback, certain “deliverers” of it, and so on. Sometimes teams are defensive because they share a problem (“trauma”) they have fought against, or because they are constantly under pressure from outside parties.
Change is hard because people just aren’t very open to it. This is hard stuff, and God knows I’ve messed up giving feedback a fair number of times too. Before giving feedback, consider whether the recipient is receptive to it, or whether there is a better “target” to give feedback on. They might be uneasy discussing the code, but happier to discuss the overall solution.
My take is that, in my case, the teams who enjoyed the feedback were generally doing better than those who had a hard time accepting the new tool and what it told them about their code.
What can you do?
If we use the DORA Core model as a reference, it’s clear that well-being, and by extension psychological safety, is a common effect of high-functioning teams with competent technical practices in place, a clear change process, and a positive organizational culture around them. The hypothesis is that the struggling teams lacked the more foundational practices, and the feeling of safety, needed to productively use advanced automated tools for code analysis.
Google has a great list of these technical (and other) practices that I highly recommend reading and implementing where possible.
DevOps capabilities | Google Cloud
Improving the velocity, reliability, and security of your software delivery capability
Running surveys or interviews with the teams “fighting the tool”, to better understand their current practices, could help clarify where a tool like this makes sense, and in which teams other tools or practices would help more. Gradually introducing practices that typically correlate with software that is deployed more frequently, is more stable, and is of higher quality will ease the transition into a feedback culture.
Rolling out a capability that presents teams’ DORA metrics is another way to visualize how they are doing at continuously deploying stable software, based on fact rather than opinion.
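As a rough sketch of what “fact rather than opinion” can look like, here is how two of the DORA metrics could be computed from a list of deploy records. The record format and data are invented for illustration; real pipelines pull this from deployment and incident systems.

```python
# Hypothetical sketch: two DORA metrics from a list of deploy records.
# The record format is an assumption, not any team's real data model.
from datetime import date

deploys = [
    {"day": date(2024, 5, 1), "caused_incident": False},
    {"day": date(2024, 5, 3), "caused_incident": True},
    {"day": date(2024, 5, 8), "caused_incident": False},
    {"day": date(2024, 5, 9), "caused_incident": False},
]

def deployment_frequency(deploys, period_days: int) -> float:
    """Deploys per day over the observed window."""
    return len(deploys) / period_days

def change_failure_rate(deploys) -> float:
    """Share of deploys that led to an incident."""
    return sum(d["caused_incident"] for d in deploys) / len(deploys)

print(deployment_frequency(deploys, 30))  # 4 deploys over a 30-day window
print(change_failure_rate(deploys))
```

Trends in numbers like these give a team something concrete to discuss, instead of arguing about impressions.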
Look at where the pain is
Pain is a great teacher, and I honestly think an inventor who comes armed only with rose-colored glasses is worse than one who arrives bitter from experience. Many of the pains we have in the software industry, however, have been felt already. We need less reinvention of wheels and models, and more reading, understanding, and applying of them. But we also need to understand the profoundly human nature of making software once the scale grows beyond your single team. Even with just one team it’s not easy, but the effects compound a great deal as you scale a software organization.
Optimize where the pain is being felt, not where you are already doing well. It’s almost trite to put that in writing, but reflect on, and discuss, why there might be resistance to dealing with the actual problem rather than an auxiliary one. It’s a morbid situation in which the problem and its solution are equally feared. Why?
There are no easy answers because we are dealing with humans.
Pain is discomfort, but so is change. One cannot change (others) if one does not change (oneself).
Go fix those feedback loops, don’t wait!