FWDThinking Episode 4: The Tyranny of Analytics

For the latest episode of FWDThinking, I had a chance to dive into a subject that’s near to my heart: Analytics. Measurement is how you know it’s working—and government needs to work in the open. Getting your metrics right is critical in any sector, but in an era of rapid change and increased scrutiny of the public service, analytics are an essential part of digital transformation.

My guests for this episode were:

  • Cori Zarek, a lawyer and journalist who’s worked on civic tech and public policy, and today serves as the Director of the Digital Service Collaborative at the Beeck Center for Social Impact + Innovation. When it comes to metrics, “I think we can achieve the goals that we want in our major institutions better, faster, and cheaper than the ways they’re currently being carried out,” she said. “Those seem like pretty good metrics.”
  • Kate Tarling, a user experience designer with a background in digital strategy and product in both the public and private sector. She works with “leadership and delivery teams to help them understand what the services are that they’re offering to people, to understand how well they’re performing and to bring real clarity to what we actually want to happen as a result of that service existing.”

We touched on a wide range of topics: How to make measurements properly; how to tie policies to outcomes; and what the perverse and sometimes unexpected results of policies can be when we overanalyze things. Perhaps most importantly, we discussed the need to ensure that the analytics we use to track progress in the applications we build dovetail with the original intent of the laws and legislation that drove those analytics.

In analytics, vanity metrics are numbers that celebrate meaningless achievements. In the private sector, “number of followers” is a vanity metric—until those followers are willing to do something that has a material impact on the business model. 
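To make that concrete, here’s a minimal sketch with invented numbers: the raw follower count is the vanity metric, and the conversion rate—the share of followers who actually do something material—is the number that speaks to the business model.

```python
# Invented numbers for illustration: a large follower count looks
# impressive, but only the fraction who act on it ties back to outcomes.
followers = 50_000  # vanity metric: big, but says nothing by itself
acted = 400         # followers who did something material (e.g. signed up)

conversion_rate = acted / followers  # actionable metric

print(f"Followers: {followers:,}")
print(f"Conversion rate: {conversion_rate:.2%}")  # 0.80%
```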

In the public sector, Kate says, vanity metrics tend to be more about launches and dates and delivery. “It gives a story, it demonstrates productivity, demonstrates progress—it’s a sort of firm story to get behind. But as we know, loads of these things don’t necessarily impact outcomes in the way that we might think of them.”

Instead, she suggests pushing back against this tendency: “give a name to the service, say what the users are doing and the key user need that [it] is fulfilling, describe the intent of the policy in a short statement.” Just using simple sentences such as “ensure that the right payments [are] successfully made to people who are vulnerable and eligible” can overcome the tendency towards vanity metrics that don’t actually speak to the efficiency or effectiveness of the service.

Cori says finding these stories can be hard. “We can’t possibly imagine that a large government that has been operating under a patchwork of policies and legislation and directives pieced together and overlapped over a very, very long time horizon, is going to be able to have a very quick succinct business case for every single thing it must do, every single service it must deliver, every single aspect of mission it must execute.” The pursuit of metrics can be tyrannical in this situation. 

One of the big reasons for measurement is improvement—but that requires experimentation. “We can’t just stop everything and iterate and test and experiment with something new,” said Cori. “We also have to keep delivering all of the services that people rely on and keep all of those mission-critical balls in the air.” There’s less room for taking risks and testing things out.

Kate does see a big shift in service delivery as we move to digital, though. “That shift is in mindset from ‘Well, I’m government. I set the policy. I tell you what to do. You do the right thing, otherwise you’re going to get in trouble’ to ‘Okay. You need to drive safely. Here’s a way to get a license to be able to do that.’ That is a service, rather than me controlling you from doing something.”

We touched on the tension between politics and metrics; trust in government; the need for better data education; and the importance of multidisciplinary teams. There’s plenty more in this conversation, but I wanted to close with one thing I learned from some of Kate’s past work in defining metrics: Starting with services.

Start with services

Analytics is the analysis of metrics: Did it get better, or worse? How does it compare to similar things? Is it exceeding our expectations? Kate has a great model for thinking about choosing good metrics:

  1. Define the service, and what “good” looks like
  2. Split it into stages
  3. Collect metrics on each stage

Here’s an example, drawing from Kate’s post Types and stages of services.

Government services, Kate says, fall into several broad categories: get permission to do something, start something, stop something, move something, claim something, or become something; learn, share, or check something; provide information; and so on. So first, pick the service in question. What does a successful use of that service look like?

Each service can be split into stages. Getting permission, for example, involves discovery, routing, eligibility, suitability, issuing, and meeting rules, as shown in this table from the 2015 GDS-led ‘Government as a Platform — enabling strategy’ project.

Then ask, “what measurement of this stage indicates whether it’s delivering on its objective?” The result is a set of metrics that measure the effectiveness of the service.
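Kate’s three steps can be sketched as a simple data structure. This is a hypothetical illustration, not drawn from her work: the stage names come from the “get permission” example above, but the specific metrics and observed values are invented.

```python
# Step 1: define the service and what "good" looks like.
SERVICE = "Get permission to drive"
GOOD = "Eligible people get a valid licence quickly, with few errors"

# Step 2: split the service into stages; Step 3: pick one measurable
# indicator per stage that speaks to that stage's objective.
# (Illustrative metrics only.)
stage_metrics = {
    "discovery":     "% of applicants who find the right starting point",
    "routing":       "% of applications sent to the correct channel",
    "eligibility":   "% of applications with eligibility decided first time",
    "suitability":   "% of suitability checks completed without rework",
    "issuing":       "median days from approval to licence issued",
    "meeting rules": "% of licence holders compliant at first review",
}

def report(observed: dict) -> None:
    """Print each stage alongside its metric and any observed value."""
    for stage, metric in stage_metrics.items():
        value = observed.get(stage, "not yet measured")
        print(f"{stage:>13}: {metric} -> {value}")

report({"discovery": "72%", "issuing": "9 days"})
```

Laid out this way, the set of stage metrics becomes a single view of whether the whole service is delivering on its objective, rather than a launch announcement.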

All opinions expressed in these episodes are personal and do not reflect the opinions of the organizations for which our guests work.

Alistair Croll: [00:00:00] Hi, and welcome to another episode of FWDThinking. FWDThinking is a joint effort by the FWD50 Digital Government Conference and the Canada School of Public Service’s Digital Academy. Each episode, we talk to people in digital government who are leading the charge on transformation and implementing technology within government.

I’m thrilled today to talk about a topic that’s near and dear to my heart: analytics. I’ve got two guests that we’re going to hear from in a minute, and we’re going to dig into topics like how to make measurements properly, how to tie policies to outcomes, and what the perverse and sometimes unexpected results of policies can be when we overanalyze things. I’m thrilled in this session to have two real experts on the subject, Cori Zarek and Kate Tarling, both of whom have spent their lives working in the [00:01:00] public and private sector on a variety of government jobs, from US digital government to UK digital government, as well as with organizations that try to increase data and transparency for everybody.

In this session, you’re going to hear about some great examples of good and bad metrics. You’re going to hear about what organizations should do to break down the sort of folklore and ask the right questions and ensure that the analytics we use to track progress in the applications we build dovetail with the original intent of the laws and legislation that drove those analytics.

Please join me in giving a very warm FWDThinking welcome to Cori and Kate. Hi Cori! Hi Kate!

Cori Zarek: [00:01:42] Hello! 

Kate Tarling: [00:01:43] Hi!

Cori Zarek: [00:01:44] Thanks for having us. 

Alistair Croll: [00:01:46] Why don’t you tell us first a little bit about who you are and why analytics matters to you before we get into the details. 

Kate Tarling: [00:01:54] So I’m Kate Tarling and I work a lot with the public sector. [00:02:00] I see a lot of time and money spent on effort that doesn’t result in great products and services or outcomes for that matter. And I work with leadership and delivery teams to help them understand what the services are that they’re offering to people, to understand how well they’re performing and to bring real clarity to what we actually want to happen as a result of that service existing. And then to design the work and shape it to improve those services. Often I’m working in areas where there’s existing services. And I’ve worked with some of the largest government departments here in the UK, as well as the UK Government Digital Service. 

And I think why the subject of measurement really matters is because I think it’s really important to have clarity in what it is we are trying to do as a result of the work we’re doing. And one of the ways to do that is to be really clear about the change we’re trying to make happen. And therefore how we would know if that change has happened or not. [00:03:00] And not everything is particularly easy to measure when it comes to general policy intent and outcomes, but I think having at least clear statements and ideas about what it is that you would want to know, and then trying to get the numbers where possible, shapes so much about how well we know what we’re doing, as well as the kind of ideas we can get rid of before wasting time and effort on doing those if we can clearly see they wouldn’t add up to the particular outcome we’re after.

Cori Zarek: [00:03:28] So hi there. I am Cori Zarek and I am with Georgetown University, the Beeck Center for Social Impact and Innovation. We’re an experiential learning hub on campus that brings together students, and pairs them with practitioners to do great work. At the Beeck Center we ultimately want to help people live their best lives and thrive in their communities and have equitable access to the resources in our society. We approach this by re-imagining the institutions that play a pivotal role in our society and helping them better serve people. 

The [00:04:00] portfolio of work that I lead leverages the tools of data and analytics, as well as technology, design, innovation to rethink how governments and other major institutions do things like deliver social safety net benefits or help foster families become licensed or establish high level executive positions like Chief Data Officers, Chief Analytics Officers, and others who can help us really think about how to measure what matters. 

You know, by using these tools that a lot of us are accustomed to in our daily lives, you know, things like data and modern technology, et cetera, service design, and all of that, I think we can achieve the goals that we want in our major institutions better, faster, and cheaper than the ways they’re currently being carried out. Those seem like pretty good metrics. And yet they seem really in practice quite hard to measure. We know that we can’t build upon and improve what we don’t measure. And I think one of the things I’m [00:05:00] looking forward to in our conversation today is to think about how we can get creative about analytics and measurement around some of the more process oriented needs that aren’t as exciting and don’t necessarily boil down into nice clean data points when we’re trying to kind of change hearts and minds and how we work and not just what we produce.

I should also mention my background. Prior to joining Georgetown, I spent eight years in the US government, working on open data and access to information efforts as well as our digital government work at the US National Archives and in the White House as President Obama’s Deputy US Chief Technology Officer.

So, I’ve had a chance to experience this work mostly from inside of government until now. I’m thrown into a university which is obsessed, rightly so, with research and qualitative and quantitative information. And so our center, which brings a lot of practitioners into the mix, causes great fun with all [00:06:00] of our more academic research colleagues who are helping us think more smartly about these issues.

Alistair Croll: [00:06:07] So Kate, I know you’ve worked on the consulting side of things with Public Digital, so we really have a view of all the different positions here. There’s the private sector, there’s the public sector, there’s academia, and it feels like all those groups love the idea of a metric they can share that will show they’re doing the right thing. But it feels to me like so many of those metrics are vanity metrics. A vanity metric, I should qualify, is something that makes you feel good but doesn’t actually tell you about your business outcomes. So number of followers would be a vanity metric. It only becomes a real metric in the private sector if it’s the percentage of those followers that do a thing when you tell them to. Like that’s when it actually starts to mean something.

So in your experience, what’s the trick to convince an executive or a person who wants a simple message for [00:07:00] the evening news to find something that is a meaningful metric, rather than a vanity metric?

Kate why don’t you tackle that first since you’ve had your finger in many of those worlds.

Kate Tarling: [00:07:11] So, in my experience the kinds of vanity metrics, if you like, tend to come from a launch of a thing. So as long as we can announce that we have a website or an app that’s launched, or such and such a program is going to be completed in three months and therefore the project will have been completed, those are the kinds of success metrics. It’s not success as we would think of it, perhaps, but something people can talk about with certainty. So it gives a story, it demonstrates productivity, demonstrates progress; it’s a sort of firm story to get behind. But obviously as we know, loads of these things don’t necessarily impact outcomes in the way that we might think of them.

Something that I do when working on, you know, a particular policy [00:08:00] area, and particularly through the lens of some kind of end-to-end service, is work with the team to come up with a service description, which gives a name to the service, says what the users are doing and the key user need that it is fulfilling, describes the intent of the policy in a short statement, and then the outcome of the actual service.

So, for example, in an area where, you know, “help more people into work while supporting people to avoid poverty” might be the intent, a service might be to ensure that the right payments are successfully made to people who are vulnerable and eligible. So there’s a relationship there. And just starting off with words, and then coming up with what numbers or how might we know whether we have done this right, what kind of ratios might we look at, how do we split things into the effectiveness of that as well as the efficiency, and not just efficiency.

And how we’ve kind of tackled that push back against “Well, we’ve launched the system. Therefore that’s good” [00:09:00] is actually kind of demonstrating or pulling together indicators right across the service that allude to how well the different stages are kind of all working in order to achieve those overall outcomes. And just make something and start putting it in front of people. So getting somebody to bring it to a board meeting or a committee meeting, or a steering group meeting or something where they’re normally talking about the kind of vanity metrics you speak about. Well, that conversation might still happen, saying “Ah but look, how can we say- the work we’re discussing- how does that relate to these numbers that we are meant to be changing?”

 The other thing is if people don’t have good stories, if we haven’t come up with a story about some change we’re making and the way that we think that people should be talking about it, it’s really easy for people to revert back to what they know. So I think creating what we think people should be looking at, putting it into the forums where the other conversations are happening, and then coming up with good stories about something we’ve been able to do that replaced the “Well we shipped a thing. Therefore it is [00:10:00] good.” Have helped a lot.

Alistair Croll: [00:10:03] We in lean analytics, we talk about business model diagrams. And business model is not necessarily a government term, but the idea is that you draw the user’s experience from the moment they are aware of you, through when they sort of adopt the thing and try it out. And then eventually give you money, tell other people, or, you know, renew and up their use of the product or service. And we draw these diagrams and I have literally gone on consulting engagements where, in fact one of them was in Washington, DC, where I went to a company, spent the day with them, tried to get them to draw this business model, like just a diagram that says this is the life cycle of a customer. At the end of the day, they said “Can you come back next week?” This is the executive team; they spent the rest of the week working on the diagram. I flew back down to Washington a week later, and now we had this diagram and we sort of labeled all the arrows. So this arrow is your viral coefficient, this arrow is your conversion rate. And then there was [00:11:00] so much disagreement that they said “Come back a week later.” I flew back home, flew back to Washington the third time. And then we went through and we said “Okay, so what should these numbers be at? So conversion should be at 12% or churn should be at 4%” or whatever. And that, in turn, led to another week of vanishing. And then only after three weeks did the organization actually have a good understanding of what its business model was in the first place and why that business model made sense and how to understand whether it was working.

This is a fairly small organization that’s supposed to know what it’s doing. And it took three weeks and, you know, three cross-border visits, to get them to agree on what their business model was, what the names of the metrics were and where those metrics needed to be for it to be successful. It feels like we have a lot of handholding to do to help people get to a point where they even understand what they’re trying to build.

Cori Zarek: [00:11:55] That feels right. 

Alistair Croll: [00:11:57] Is analytics just a red herring, Cori? I mean, should we just spend [00:12:00] our time telling stories first? 

Cori Zarek: [00:12:02] That’s an interesting provocation. I’ll come back to it. But I think that’s right. And it’s also fair. I mean, your example, maybe a smallish company, is very different from the behemoth bureaucracies that Kate and I have spent a lot of time working in and around. And if a smallish organization can’t sort that out easily and in short order, we can’t possibly imagine that a large government that has been operating under a patchwork of policies and legislation and directives pieced together and overlapped over a very, very long time horizon, is going to be able to have a very quick succinct business case for every single thing it must do, every single service it must deliver, every single aspect of mission it must execute. It is almost an impossible feat, and it’s understandable that it takes some time to unpack that and sit with it. And I think what our current sort of push for metrics and analytics, things we love and are critically important, doesn’t always allow for is that time and space to fully unpack what it is we’re in the middle of, and clean it up and set ourselves on a path for something where we do have agreed upon standards and metrics that we can work toward, and we can come up with numbers we can live with. Which, you know, in your example, maybe a 12% churn rate, or whatever it was, is acceptable. Like 0% is what’s acceptable in government, right?

And so we also have these impossible standards in this really messy system, set of systems, that we’ve set up. And it’s just a very different comparison. It’s hard- it’s not perhaps a fair comparison.  

Alistair Croll: [00:13:44] It does seem that we’ve changed the goalpost a bit though, because you know, humans are terrible at taking notes. Machines have no choice but to do so. And so like every click and tap and scroll is something that’s grist for the analytical mill, which wasn’t there before. So now it’s very [00:14:00] hard to say “Oh, we don’t know” or “We think it’s going well”. 

And I remember, touching on the tyranny aspect of this, I remember a case study a few years ago where a government employee had initiated a program, and the program was unsuccessful according to the goals they had set up. But it turns out that this program had a side effect. I think it was something to do with like lowering sugar consumption in schools and it wasn’t doing it, but then it was actually leading to much better student outcomes in terms of testing. And so the students like body shaped it, like the health metrics that they were targeting didn’t happen, but their test scores went through the roof because now they were being properly fed and stuff. And their funding was shut down because like “Hey, we found this really good thing, but it turns out to be, you know, an academic benefit, not a nutritional benefit. So we’re killing the program.” And it seems to me like there’s a risk here. If we tell a story too well, and we’re not willing to rewrite the story based on, you know, happy unexpected outcomes, [00:15:00] that because technology is so good at taking notes, it becomes an unavoidable conclusion that we can kill the thing cause it didn’t know exactly what we wanted, even if it was good. 

How do we allow ourselves to rewrite the story while maintaining accountability that, you know, we’re still trying to do the things we set out to do? 

Kate Tarling: [00:15:19] I think there’s a difference between- certainly in my work- the shorter term or the pilots or the studies or the kind of research driven new stuff to see if X has an effect on Y. And that comes, you know, you kind of hear about theories of change as a kind of classic way to map out your sequence of assumptions and things you need to hold true for some change in the world. 

And then there’s a giant part of government that I come across where the things already exist; they’re not going away anywhere. It’s immigration and passports and education and big kind of things like that. The things are just constantly running and it [00:16:00] partly comes down to how, like, almost you might take the work of the operations, and then the work of the IT department, and the work of the policy department, and then kind of help everybody see their role in what is essentially the same thing. And you’re kind of identifying, kind of redefining what is meant to be happening, to look at, then, what are the roles of individual people to kind of improve that over time. And that’s the kind of space that I experience. That kind of continuous improvement stuff is already happening, it’s just not known by that name. There’s a huge amount of effort going into operating the thing, as opposed to trying to fix it as it goes.

But this is the sort of distinction I see between this sort of short term “Do we see an impact?”, and then the ownership of an entire area, where you’re reducing the separateness of teams. Like, as you say, like the academic schools versus the health schools. That kind of thing being owned more end-to-end by somebody, where it’s an existing area of work.

Cori Zarek: [00:16:58] I think one of the things that feels [00:17:00] so critical- and loads of people have been talking about this for years, it’s nothing new- but we really have our work cut out for us to help government, all of government, get more accustomed to risk and to allowing for a little bit of it. And it’s a tricky situation, right? Because we can’t just stop everything and iterate and test and experiment with something new. We also have to keep delivering all of the services that people rely on and keep all of those mission-critical balls in the air, so to say.

And there’s such little incentivization, certainly, and very, very little tolerance and acceptance for risk and testing things out. And so we’re sort of continually forced in a cycle of doing things in older ways and really pushing things, [00:18:00] pushing the ball uphill to try and work in new and modern ways. And it’s been happening in exciting spurts for years in the UK- Kate you’ve had a front row seat to that, you’ve been active in making it happen yourself- in Canada, in the US, and lots of other places around the world not represented on this call. But it’s still such small pockets. Our government in the United States is massive. We have a hundred departments and agencies, and we have kind of new practices and experimental teams in a handful of them, right? And so it is just by and large not reaching most of what we do. And we have amazing innovative units that go back decades- like DARPA. 

And yet still, you know, I came from the US National Archives, which is an amazing agency, intentionally rooted in history. And agencies like that could really benefit from thinking about how to apply modern practices and to test new [00:19:00] scenarios. And yet we’re still really focused on, in that example, the National Archives has something like 13 billion paper records they need to digitize, and that is very much a striking metric. And here we are really working in more traditional methods to get that done. So it’s this challenge, I think, that we need to move toward, like a walk-and-chew-gum version of metrics, where we can continue to measure and keep moving some of these critical services, and then also consider how to test and experiment with new cutting-edge practices that we all see happening.

And then the key piece that you both touched on, which is how do we tie this to real human stories? I think what we experience a lot on my team at Georgetown is we have all these wonky, nerdy practitioners who come in, who are excited about fixing the backend systems and, you know, overhauling government procurement, and while people care, nobody cares, right? Those are not the stories [00:20:00] that get folks excited at the end of the day. And while we do have lots of great stories of the people we seek to help, tying that together can actually be really challenging. And rooting both in something measurable is really something we struggle with. Would love to hear what you recommend on that.

Alistair Croll: [00:20:17] Yeah, I mean, I think one of the challenges of analytics in general is that before we had analytics, before we were marinating in data, the person in charge was the person who could convince others to act in the absence of information. And it feels like today the person in charge is the one who can ask the right question. And so that’s a fundamental repositioning of who’s in charge. And in departments where there are a lot of career employees, the person who’s always been able to convince others to act in the absence of information- which was a great skill, right? That’s how you got people to take the hill if they didn’t know they were gonna win the fight. And now someone shows up [00:21:00] and says “I don’t know anything, but I know how to find out what to ask.” That person has a very different personality. You know, the culture of a meeting needs to be much more inclusive, much more, you know, let’s ask questions and find consensus. And I think there’s so much pressure to be seen as decisive, and act, and take that first step.

How do you think government departments need to change their culture to move from a world where we have this bias for action, to a world where we have an abundance of information and we need to ask better questions?

Cori Zarek: [00:21:36] I think that’s exactly what it is, that culture change, right? Where we don’t push to continue doing things as they have been and showing up to meetings with our bottom line up front memos, directing this course of action based on these data and figures. And instead, actually take the time to go and walk the halls and ask the questions and talk to the experts who know. 

I think one of the [00:22:00] things that we found a lot in my experience in government is- and we know this or should know this- the folks who are there, know the answers. They know what needs to get done. They’ve been knowing, right? They’ve been trying to get this done for a very long time, and often they can’t get a seat at the table. They can’t get their voice heard. They can’t get their expertise inserted into these memos. And so when you can find these on the ground experts and actually empower them and listen to them and help them move a solution forward or, you know, address a problem in the ways that only they know how, you’re in a much better position to actually try and implement something successfully, rather than just get a decision made.

Alistair Croll: [00:22:42] And Kate in your work with Public Digital and others, was it useful to be the sort of outside third party who could- there was an expectation that something was going to be new or different when you came in, or did you still have to shepherd people away from trying to be decisive rather than listening?

Kate Tarling: [00:22:58] I think [00:23:00] I usually have seen my position as working together on the inside with the group of colleagues that I find there. I think there can be some sort of reticence about the idea of a separate person coming in and telling me how I should have been doing my job when, frankly, I’ve been here for 20 years, you know, I know what I’m doing. I have all the domain expertise. You might know a little bit more about one area, but don’t come and tell me a better way of doing things; we’ve probably already tried it. So working, kind of coming in on the inside and being part of that organization as much as possible, I think, is the only way I have found for things to be effective. Yeah.

Alistair Croll: [00:23:45] Well, Kate, when we had this session with Katherine Benjamin and Kathy Pham and Ayushi Roy, one of the things that came up was this idea of folklore. In Canada, you’re not supposed to talk to citizens during an electoral period. And that sort of [00:24:00] became this rule of you’re not allowed to ask citizens questions. And then there was this folklore within Canadian government: you’re not allowed to talk to any users. And when they actually went and checked, it turns out there are no laws about that, and you’re perfectly able to go talk to users, but it became folklore.

So I think there’s always this challenge if the people who’ve been there 20 years and know everything, also have an understanding of folklore that may or may not be true and sometimes gets in the way of things. So I think it was Kathy who said the first thing you gotta do is try to open the doors that are closed because maybe they open, and you don’t know.

I agree with you that this sort of organic knowledge is wonderful and they have this sort of innate, the known unknowns, right? Sorry, the unknown unknowns that are fascinating to discover, but there are a lot of people who know things, they just don’t realize they know them until you go and ask. 

How do you separate the wisdom that can [00:25:00] set you on the right course from the folklore that needs to be overturned because the world has changed? You know, 20 years ago, we didn’t have a device in our pocket that kept records of everything we do and say, gave us perfect locational awareness, and access to the sum of all human knowledge in two seconds. Certain things have changed. How do you reconcile those two, Kate? 

Kate Tarling: [00:25:23] I think I had some advice fairly early on, which is, you know, when somebody was saying something’s not possible, or this is the way that we have to do things, or it doesn’t matter if this is completely archaic and stupid, like it has to be done like this, and, bless you, you’re going to take it through primary policy and, you know, several years’ worth of change later, you might go to change it. So that kind of pushback of like, we cannot change the way things are. We didn’t like it either. You know, we have to follow these processes because that’s what’s set out in the operating mandate, and so on and so forth. And I think one thing there is that you have to put in the effort to go back to the original words of the policy yourself and read all the [00:26:00] operating mandates.

It’s also true to say that of the people who’ve been there 20 years, some of them are, quite understandably, used to how things are. Some know so much about it and are really challenging, and frustrated that they haven’t yet managed to overcome some change or other. So it’s partly about understanding for yourself exactly what is written down, what is law, what is someone’s assumption, and then finding the people to work with who are the right blend of knowledgeable, influential, and seeking better ways of doing things. And in a position to actually help you then change things, too.

And there’s a distinction when you have an existing area of policy that you’re trying to improve- improving the products and services within it, while the policy itself isn’t particularly changing. You still have quite a lot of leeway if you can do that work to figure out what’s actually written and what isn’t. Compared to new policy being laid down, or brand new initiatives, [00:27:00] in which case you have a different set of opportunities, I suppose.

Alistair Croll: [00:27:08] Cori, I noticed that you- in stalking you to get ready for this- have a background with MuckRock, and we’ve had other organizations, Sunlight Foundation and others, whose mandate is really to shine a light on things and work in the open and be transparent. It seems to me like, as we get into a world where we have analytics and every step of every process is visible, we are close to this place where governments can just publish their work in the open. We could say to people, you know, in Canada we have this thing called Service Canada where you go in and you ask for help from the government. And unlike a business- which would say “Okay, here’s the one thing you get” and try to offer the least service for the most money while keeping you satisfied- Service Canada’s job is to go “Oh, while you’re here, there are these five other services you should be benefiting from”, which is already very counterintuitive to the private sector. But you know, the goal [00:28:00] here is maximizing human outcomes.

When we take metrics like that, we could post the number of people served at Service Canada, for example. We could post the amount of money given to citizens through Service Canada that day. This is absolutely within the realm of possibility. And we could publish that data, and on the one hand we would say “Isn’t that wonderful. Look how transparent we’re being”. But the next morning, politicians on the other side of the aisle are going to go: “Look at the reckless spending from that service”.

There is a tension between working in the open and giving your detractors or critics ammunition to stop the process. And I struggle with this, because in the private sector, companies keep things private because they can- they have intellectual property, they have competitors, and so on. But it does seem like we need to come up with rules, in the same way that we elect a politician to a term: they can do things during [00:29:00] that term we may not like, but we elected them to do things in that term. It does seem like yes, we’d like you to work in the open and be transparent and share your metrics- but then we’re opening ourselves up to criticism that may interfere with our ability to actually deliver those services. How should people think about this?

And Cori, the reason I bring this up is organizations like MuckRock and Sunlight and so on are very much like “We want transparency so citizens can be informed”. But there’s a certain amount of processing required to get context for those metrics. How do we share metrics without undermining the intended benefits of the service we’re trying to build? I know that’s a big question. 

Cori Zarek: [00:29:41] It is. It’s also, I mean, it’s democracy, right? And so if we believe that ultimately the power is in the hands of the people, and that, you know, there’s no need to underestimate the intelligence of the people- the citizenry, right?- then putting that [00:30:00] information out into the open can really allow for informed decision-making.

I’ve been proud to serve as the President of the board for the MuckRock Foundation, which is an organization that promotes transparency for an informed democracy through a number of tools, including the work that surrounds freedom of information requests in the United States- a centralized service where they help users, requesters, get access to government records, track the progress of those requests, and make those records available- and also other products and tools like DocumentCloud, which allows us to more easily access government records.

So the work that they do, and organizations like MuckRock or the Sunlight Foundation, as you mentioned, or another organization where I started my career, the Reporters Committee for Freedom of the Press- all these organizations serve as great advocates for the public’s right to know. And they will actually, you know, go to bat legally for our right to [00:31:00] know, and to protect those rights in the US and around the world. Those are incredibly important components of our democracy, and yet, as you point out, transparency could potentially uncover information that might lead to outcomes we don’t necessarily want. And that’s what happens in a democracy, right? We need to set those standards and set them high, and hold ourselves and our governments to them, so that whether or not elected officials whose policies and processes we support are in office, those standards and those expectations are the same. And we hold our systems, our institutions, to account for open and transparent process and governance, no matter who is running those systems.

Alistair Croll: [00:31:46] It does seem like education plays a big role there. That you need an educated electorate. You know, I think if 2019 was a masterclass in civics, then 2020 is a masterclass in statistics for most people. But it does feel like [00:32:00] simply understanding basic statistics and the work that government is doing is required. Like an informed electorate needs to be able to parse and understand the constraints and limitations of data in a way that maybe we are not as equipped to do as we’d like.

Cori Zarek: [00:32:16] Well that’s the role of journalists and experts and think tanks and others- academic institutions, you know, who can actually dig into some of the data or records that are released and help us make sense of them. And those are critically important roles to play. I think it might be a different episode to get into how well those institutions are working in our society at all times. 

Alistair Croll: [00:32:38] Do you think it’d be good- I’m just thinking out loud here- to bring a journalist or an academic or a communicator into the design process?

And Kate, I want to get back to your types-of-services thing, because I found that table- your list of what makes a government service, you know? I think you have a list of discovery, routing, eligibility, [00:33:00] sustainability, and so on. There’s a list of tasks that government as a platform does. And when you’re designing, when you’re sitting down to say “Okay, what is the process to start a business? What is the process to claim universal credit? What is the process to register to vote?”- you start with that very early. You know, the platform has this task it wants to accomplish. Once you’ve decided on that story, then you can decide what metrics are going to be used to track it. And then it almost seems like, to Cori’s point, you bring in a journalist or a communicator or someone who can say “Here’s how to explain that easily”.

I’ve always said that metrics should never be more complicated than a golf score, right? I’m supposed to get in the hole in five. Four is good. Six is bad. Bowling scores are right out- that whole strikes thing is way too confusing. But it feels like you need to bring in people who can communicate what success looks like and what the right metrics should be earlier in the design phase, if part of the [00:34:00] success of a project is that people understand that the metrics are a reflection of the design goals in the first place.

Kate Tarling: [00:34:06] Yeah, that’s true. There’s a really big missing role- it feels like it’s a communicator across a whole service. Someone who- setting aside the public understanding for a second, just internally- someone who can explain what’s happening between operations, between policy, between all the various bits that are operating a service, and help everybody understand their role, what’s actually happening, what the impact of that is, how well it’s doing, and so on.

I think in terms of those kinds of patterns- so, you know, getting some kind of license is a really common and obvious thing- you can fairly easily identify the metrics if you think about a funnel. It’s not that you want everybody to have a license; you want the right people to have a license, and the right people to rule themselves out if they’re not going to be eligible. You want the people who are most suitable to be going forward. So you can anticipate those metrics [00:35:00] really straightforwardly. That tells you how effectively the service is doing its job. It doesn’t tell you about the overall outcome, or whether even having a service like that is the best way of achieving the overall policy intent.

So there’s a whole class of metrics missing. And those metrics are hardly ever available, because very few policy teams- or anyone, really- have the time to go back and evaluate the success of their big policy areas. It’s sort of: get the thing up and running, and there it is. So that’s a key missing area. The metrics are often missing because half the time government hasn’t seen itself as a service provider. It sees itself as making policy and then carrying out initiatives.

Alistair Croll: [00:35:42] Is it a new thing that government sees itself in the role of service rather than legislator? 

Cori Zarek: [00:35:47] I think so. 

Alistair Croll: [00:35:48] I bet. Cause that’s news to me- I’ve only ever thought of it as a platform for delivering services, but maybe that’s because I grew up in Canada. You know, I was born in England, but I’ve always seen government’s role as [00:36:00] managing the safety net, right? Implementing these things, and making it possible to comply with laws or be aware of laws- not just passing laws. That’s the first time someone said that to me, but you both went “Yes”.

Cori Zarek: [00:36:14] Well, I think all of those options that you listed are also not mutually exclusive. If we think about aspects of government- for example, in the United States our Department of Defense is massive, and I suppose one might argue it delivers services, but I would say what feels more apt for describing that unit of government is that it carries out a mission. And sometimes, you know, the Venn diagram there could be quite overlapping. But an agency like Health and Human Services, or maybe the Department of Education, might be a little bit more akin to service delivery, or providing services. But we have lots of parts of our government that are more about regulating, or carrying out other [00:37:00] important aspects of our society, that aren’t necessarily about delivering services in the traditional way we might think of.

But then also, you know, in the US we’ve got multiple levels of government. So if you think about the average everyday American, they most often come into contact with their local government, in their community, in their city. And that’s very service-heavy, right? You’ve just moved into a new house- we had to call the local government to get trash cans and set up our water and sewer, and all of those things you do with your local government. And that’s the way, I think, that a lot of folks really come into contact with government- in that local service-delivery way. And yet there’s just so much more to it.

Kate Tarling: [00:37:42] I think there’s a big shift from governments thinking they are the makers of policy, and then they carry out policy so that humans do the right things, to seeing that actually the reason why we control and regulate and so on is because of [00:38:00] democracy, for one- but also to facilitate humans being able to do and get the things they need. So the shift in mindset is from “Well, I’m government. I set the policy. I tell you what to do. You do the right thing, otherwise you’re going to get in trouble” to “Okay, you need to drive safely. Here’s a way to get a license to be able to do that”. That is a service, rather than me controlling you. So, yeah, it’s a mindset shift towards, you know, human-centered design- good service design for humans to achieve policy outcomes- as opposed to “I set the policy and then enforce it.”

Alistair Croll: [00:38:36] That’s a fascinating way to look at it and I’d never thought of it that way. 

We are seeing a lot of discussion of metrics. I heard- and I’m going to throw a little shade here- that the US spends more than any other country on healthcare, but only 2.5% of its healthcare spending goes to public health. And it feels like COVID is the first time in most people’s recent memory where the health of the person next to me [00:39:00] has a direct impact on my health. If the person next to me has cancer, I’m probably not going to catch it on the subway. If the person next to me has COVID, I’m at risk.

And it seems like when you have a for-profit healthcare system, it’s very hard to bill the general public, so you don’t spend a lot of money on public services like public healthcare. It feels like this is one of the reasons the pandemic is causing both a fight over metrics and a renegotiation of where money is best spent at the individual versus the collective level- which has people in the streets complaining about justice and healthcare and, you know, teacher safety and myriad other things. It does feel like this is a masterclass in metrics, because we’re seeing people debate the numbers around testing versus deaths versus positivity rates and all these other things. And every month, somebody says there’s a different metric people should look at.

Do you think [00:40:00] that the population is going to get better at understanding how to parse metrics like this? Or will politics continue to argue about which metrics represent the outcome we want, rather than about the outcome we want?

Cori Zarek: [00:40:18] That’s an interesting question to ask about two months before a major election here in the United States. Will politics overcome metrics? I think this goes back to something you mentioned earlier, Alistair, around what might be a bit of a gap in how folks are educated or prepared to receive this type of information. We have lots of different data being thrown at folks in all sorts of formats- via platforms like social media, or traditional news media, or in message boards and private conversations- that are [00:41:00] providing lots of different and disparate data points that a lot of folks don’t really know how to make sense of. Because those are not tools we’ve been provided- or if we were provided them, it was a long time ago in school. And, you know, we’ve seen civic education and other education sort of wane in recent years.

And so, back to that point, it is almost impossible for average folks- who are very busy living their lives, trying to figure out how to work from home and educate their children and go to the grocery store without coming into, you know, dangerous contact with the virus- to also be expected to reeducate themselves to process this barrage of information coming at them from a fire hose.

So you have an unrealistic information-processing [00:42:00] burden put on the average human brain in this moment, right? And will analytics and metrics win the day over politics and policy? Who knows. I think that’s a much bigger question than we can answer in this discussion, and a much longer-term timeline to look down. But it does strike me, back to that education point: it’s critically important that we understand what these figures are meant to measure or show, and how we can ensure that we are not grouping apples with oranges as we do that.

Kate Tarling: [00:42:34] We had daily updates about COVID-19, particularly when we were in lockdown. So, you know, every news broadcast would start with a minute or two of updates from experts on what’s going on, how we’re doing, what’s leading to what. And then those stopped. But while it was happening it was so useful and so informative, and it was their job as the experts to figure out the right thing to say, the right level of information for the broadest [00:43:00] audience. Maybe they didn’t do such a good job of translating it into various languages, but it gets me thinking: why aren’t we doing that for key policy areas?

Alistair Croll: [00:43:08] Yeah! I was going to say- the flatten-the-curve chart. Everybody saw that spike, and then the flattened curve. That chart changed billions of people’s perception of the problem; that one chart had a huge impact. Someone was going to draw it at some point, but, you know, everybody saw the flatten-the-curve chart. And eventually everyone realized that percent-positive tests is a good number, right?

Kate Tarling: [00:43:32] Yeah. 

Alistair Croll: [00:43:33] One of the things we talk about in Lean Analytics is that a good metric is either a ratio or a rate- like kilometers per hour, or return on investment- because it explores the tension between two things. But it’s really easy to communicate, and you can act on it. And it seems to me like we should have, you know, daily briefings on the policies we think are important. What’s the climate change report today? What’s the healthcare report today? What’s the education report today? If that’s an area of focus for [00:44:00] governments.
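As a minimal sketch of that ratio-or-rate idea, here is the percent-positive figure mentioned above expressed as code; all the test counts are invented for illustration, not real figures:

```python
# Sketch of the Lean Analytics point that good metrics are ratios or rates.
# The percent-positive rate divides one count by another, so it stays
# comparable even when testing volume changes. All numbers are made up.

def percent_positive(positive_tests: int, total_tests: int) -> float:
    """Share of administered tests that came back positive, in percent."""
    if total_tests == 0:
        raise ValueError("no tests administered")
    return 100.0 * positive_tests / total_tests

# Raw counts mislead: 500 positives sounds worse than 400...
week_1 = percent_positive(positive_tests=400, total_tests=5_000)   # 8.0
week_2 = percent_positive(positive_tests=500, total_tests=10_000)  # 5.0

# ...but the rate shows the situation actually improved.
assert week_2 < week_1
```

The same shape would apply to any of the daily reports imagined here: pick a numerator and a denominator that are in tension, and publish the ratio rather than either raw count.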

Kate Tarling: [00:44:00] Yeah.

Alistair Croll: [00:44:01] And you’re right. Maybe we need journalists to start doing, like, five minutes on the most important thing, and we work out a way to talk about that thing and a set of metrics that works for the world. Because with COVID, when I started hearing the numbers, what was never clear was: are we talking about how to save lives, or how to keep the economy going? Obviously there’s a tension between the two, but people were arguing over the metrics without mentioning the implicit goal behind them- we want to reopen the economy, or we want to keep everyone safe- and where they came down on that. And it does feel like journalism could do a much better job of teaching people how to think about metrics correctly.

Cori Zarek: [00:44:43] I was just going to say, I think that takes us full circle back to the transparency issues. We are struggling right now in the United States to have access to real-time, accurate data for some of these figures that we’d like to see in these daily briefings that aren’t really happening anymore. [00:45:00] We’ve got loads of folks who run organizations, or who are journalists, or are just interested, concerned citizens, filing requests for this data because it’s no longer being proactively provided. And when you see trust in government plummet to, you know, historic lows in different parts of our world- I think all of this that you laid out, Alistair, is exactly what we should demand happen. And yet we also have to have trust in government, and access, to the data we’d like to see in this daily climate report, daily COVID report, daily education report, whatever it is. And we are really struggling with that right now on some aspects of information in the United States. And that’s not great.

Kate Tarling: [00:45:54] And I was just thinking, on sort of leaving the job to journalists: [00:46:00] there’s a role for really strong service leadership in government. So taking a policy area and looking at all the ways in which that policy intent is enacted through services and other things, and having clear ownership for communicating- for being transparent- about how well it’s performing as well. So, yeah, journalists definitely have a role to play. But there’s also that role within government, which is often missing and lacking: accountability to the people for the way that money’s being spent, and for important outcomes for us all.

Cori Zarek: [00:46:34] Kate, I’m curious how that’s going in the UK these days. Is there a high level of trust in the information that’s coming out of the UK government- to put you on the spot? Is there, you know, completely inaccurate data, or a perception of completely inaccurate data, being released? Do folks feel like they have high confidence? What are you seeing and learning?

[00:47:00] Kate Tarling: [00:47:00] So I think when it’s coming from, you know, the civil service, the government- it’s on gov.uk- I assume there’s a fairly high level of trust there. I think what does change is how well things are described in the press more broadly, and the news more broadly, and how well that’s designed to reach important parts of our population as well. Just putting something on one radio station, and on the main broadcast news, in English, at a certain time- that’s not going to be enough to get really important public health messages across, for example. That takes, you know, different languages and communication, different mediums, the involvement of the community, and so on.

But I think, yeah, in terms of trust- there are numbers, but there’s not a consistency. Messages change, the news agenda changes, and then people forget. They get busy, they do, with their daily lives. I think that area is more problematic.

[00:48:00] Cori Zarek: [00:48:00] Yeah. 

Alistair Croll: [00:48:02] One of the things that came up when we were talking about what to talk about is this idea of intent. Every policy, as it was written, has an intent in it. That policy may be part of a bill that had other riders in it, but somebody, somewhere, believed that their policy was good.

How do we keep that intent, rather than focusing so much on quantitative metrics that we lose the subjective, qualitative questions: did this make the world a better place? Did this, you know, do the things that I, as an elected leader, promised to deliver to my citizens? And how do we avoid absolutes, leaving room for “Hey, we did quite well. We didn’t quite meet the goal, but we did these other things that were good”, or “Hey, you know, we didn’t properly state the benefits we were hoping for, but it turns out they were good benefits”?

How do we keep intent in our decision making as we become increasingly metrics-driven and analytical? 

[00:49:00] Cori Zarek: [00:49:00] One thing I think we need to do is ensure that implementers are at the policymaking table- whoever that may be, whether it’s a technologist or a data scientist or someone else; subject matter experts, obviously. In the US, what we see a lot is that when there’s a new policy initiative being explored, Congress will call hearings and bring in a handful of experts, or even invite written testimony from others. And that’s kind of the extent of getting implementation a seat at the table- which is not nothing, but it certainly doesn’t allow for a co-design process. The success or failure of a policy initiative is going to ride on the shoulders of the implementers. And if they’re not bought in, first of all, and second, if they’re not able to provide input from the start, you’re just not going to see a strong policy effort. I think we lose the intent of a policy when we design it [00:50:00] all up front and then hand our little packet over to the implementers and ask them to go make it real. You know, we have to do much more co-design, and we’re seeing a lot of great examples of this- but again, in a giant, giant government, there are pockets, there are handfuls of examples, as opposed to this becoming the way we work.

Alistair Croll: [00:50:23] Yeah, Jen Pahlka had some great lines when we talked last week or a couple weeks ago. She said “We talk a lot about how digital government needs to have legislative links”. So for example, we put out a COVID tracing app in Canada, which was very well done, and everyone thought the privacy was great. It was good, but it was not clear how it tied back to policies- like, when you got a positive, what would you do? But she said it’s not just that we need to build things that are tied back to legislation and policy; we need to build policies and legislation that are implementable. And in many cases we have such a Byzantine entanglement of rules and regulations that could [00:51:00] simply be replaced by, like, a dropdown list. You know, there may be pages and pages of laws that say: if a person has this and not this, and then this, then they qualify for this. That’s two lines of code.

And so I think both parties here need to do their part: the policymakers and legislators need to legislate for implementability, and the digital creators need to build for transparency and a sort of accountability that ties back to the initial intent of the policy. It seems like both sides need to show their work, you know?
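To make the “two lines of code” point concrete, here is a hypothetical sketch; the eligibility criteria below (residency, income threshold, other benefits) are invented for illustration and do not come from any real statute:

```python
# Hypothetical eligibility rule: pages of "if a person has this and not
# this, and then this, they qualify" can collapse to a single predicate.
# These criteria are invented, not taken from any real law.

def qualifies(resident: bool, income_below_threshold: bool,
              receiving_other_benefit: bool) -> bool:
    # "Has this, and not this, and then this" as one boolean expression:
    return resident and not receiving_other_benefit and income_below_threshold

assert qualifies(resident=True, income_below_threshold=True,
                 receiving_other_benefit=False)
assert not qualifies(resident=True, income_below_threshold=True,
                     receiving_other_benefit=True)
```

The point isn’t the particular rule; it’s that once the criteria are stated this plainly, the service can check them instantly, and the legislation that produced them can be audited against the same logic.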

Kate Tarling: [00:51:37] I think that’s true. It’s also true that even if you have the lucky situation- as I’ve observed a few times- of both being done hand in hand, a genuine multidisciplinary team including policymakers, it’s still really easy for a group of humans focused on getting a thing done to lose the broader intent. It helps, literally, to [00:52:00] spend time iterating a sensible statement of intent and then sticking it in really big text up on a wall- or, you know, behind you in someone’s living room, where so many teams are working now. Just every so often, like, we’re talking, we’re talking, and we can get back to saying “Hang on, hang on! If it’s an app on an Android, how is that bit of the statement going to hold true?”

And just literally as mechanical as that. It’s also true that with 30-year-old policy, it’s really hard to remember there was an original intent. So when we come to radically redesign how something works in that policy area, we don’t have to follow the existing process. The intent of that policy was not that team A would hand off to team B, who would hand off to team C, and then X has to be done. So really challenge it fundamentally, have a service- which you can often still do within an existing policy area, if you agree with the policymakers on the intent, rather than just on how it happens to be enacted at the moment.

Alistair Croll: [00:52:55] So I’ve got a couple of quick questions before we wrap up. First of all: practical [00:53:00] takeaways. Both of you have been involved in implementing and deploying policies and technologies in lots of places. For someone who is trying to do this well- trying to put the right metrics into place, trying to build analytics into a service they deliver- what would you suggest they spend their first day doing?

Cori, why don’t we start with you? No pressure.

Cori Zarek: [00:53:22] Yeah, no! Understanding the questions to ask. I think that is the most critical thing to do.

I spoke earlier about how, for example, when you’re headed into a new government agency- whether as a fellow government employee, or as a consultant, or someone else- finding the people who are on the ground and who know what’s going on, and asking them what questions you should be asking: I mean, that is thing number one. Understand what you need to know and what questions to ask, especially of the decision makers. Chances are they may not have the full knowledge themselves to provide those answers, but it really can illuminate, you know, the start of a dialogue around what you need to go and sort out together.

[00:54:00] And then the second thing, I’ll mention back to something you also spoke about, is user testing, asking the people who are meant to receive those services, what they want and what they need. It’s not illegal, it’s something we absolutely should and must be doing. And it can be done. It’s a little more complicated in the US because we do actually have some laws that can make it tricky, but it is not illegal. It is very much necessary. And the more we start to get our policymakers and even civil service leaders accustomed to that as one of the first questions, you know, what did the users say? What did the users want? That is going to really help us just reimagine a lot of this and measure very differently.

Alistair Croll: [00:54:42] Kate, what would you do on your first day? 

Kate Tarling: [00:54:44] Those sound excellent, Cori. I agree with those. 

The thing that I would do is figure out what that organization does for the people outside of it. So whether it’s creating a list of services provided to external users- it’s so weird how that often doesn’t exist. Like if you say “Hey, what does [00:55:00] this organization do?” You might get an organization chart, you might get a set of IT systems, you might get anything that describes it from an internal perspective. So just either finding someone who’s saying “Hey, tell me from a end user subjective, in verbs, what is it that we are helping people to do here?” would be one like, just coming at it from outside in. And then to be like: okay, what do we already know about what makes that difficult, awful, completely failed to meet outcomes, make it really expensive for us to run… Like what do we already know? What user research already exists? Those are the first two things I would look for in day one.

Cori Zarek: [00:55:39] Alistair, you wrote the book. You tell us! 

Alistair Croll: [00:55:43] I wrote a book. I think people should spend their first day drawing what we call a business model diagram- and I will try to find one so we can stick it in this video- but basically just a diagram of the journey. The phrase “user journey” always sounds sort of like, you know, some kind of [00:56:00] Lord of the Rings quest- but people don’t generally want to interact with government unless they have to, because government, if it’s working right, is quietly in the background, letting them go about their lives.

So when there is an interaction with government, that interaction is either push- the government says “Hey, here’s something you need to do or know”, a behavior needs to change or whatever- or it’s pull: “Hey, I’d like access to this service. I want a driver’s license.” And being able to map out those journeys and say: this is the service that we offer, as you said, Kate, and what are the verbs? And then, what’s the navigation, what’s the journey you move through? And when that journey is mapped out and we’ve gone through it, label all the little arrows and say: what is the thing that takes someone from this box to this box? So, I know I need a driver’s license- where does that person go to find out about it? Word of mouth, a Google search, whatever. Okay, now they’re on the webpage. How do they know they’re on the right webpage? How do they find the information they need?

And when you design that at an almost pedantic level of specificity, you can then label each of the steps and say, [00:57:00] well, what percentage of people who come to the website should successfully click on the “I need a driver’s license” process? What percentage of people who go to the page to upload their documents should already have their documents with them, and not come back two days later because they didn’t know they were supposed to bring something? You can get down to these steps of the process, but you have to maintain an understanding of the overarching - and I’m going to use the word business model, which is the wrong word - the overarching goal here, which is, you know, we want citizens to be able to drive safely. Maybe that’s the high-level goal. And then below that there’s: people should not be unduly restricted from quickly getting a driver’s license, without defrauding the government, or whatever that thing is.
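The step-by-step labeling Alistair describes is essentially a conversion funnel. As an editorial illustration, it could be computed like this (the step names and counts are entirely hypothetical, made up for this sketch):

```python
# Hypothetical driver's-license journey funnel: count how many people
# reach each step, then compute the step-to-step conversion rate so
# each "arrow" in the diagram gets its own percentage.
funnel = [
    ("visited website", 10000),
    ("clicked 'I need a driver's license'", 6500),
    ("started document upload", 4000),
    ("had documents ready on first visit", 2800),
    ("completed application", 2500),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step} -> {next_step}: {rate:.0%}")
```

Each printed percentage is one labeled arrow between two boxes; a surprisingly low rate points at the step worth investigating.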

And it always seems like that comes back to the constitution, the platform of the party, the reason they were elected. So there has to be a way to tie the business model - which in the public sector tends to be [00:58:00] the law of the land, the constitution, the culture, or whatever - to the eventual outcome. The way you would deliver a service in Northern Europe is probably very different from the way you would deliver a service in South America, because those cultures are different and their priorities are different. So there is going to be a reflection of national identity in the delivery of those services, which I think leads to a tremendous amount of disagreement over what kind of country you want. And then we get wrapped around the axle of what kind of metrics you’re tracking in delivering that service.

So I don’t know if I could tackle that on my first day, but I think drawing out this sort of business model diagram to surface the implicit assumptions that nobody’s talking about is a really good start.

Okay, we’re almost out of time. We’re probably actually out of time, but I want to ask you quickly: what is the best and worst metric you’ve seen? You may not have an answer, in which case we can just delete all this, but in your long experience, what is the best, clearest metric you’ve seen, and what is the daftest?

Kate Tarling: [00:58:59] I can go. I think the worst was probably a metric supposedly to do with outcomes, but that really meant the number of things we’d managed to ship by such-and-such a date. So it was very much supporting product and project management practice, rather than the actual outcomes that would have made a difference to the business or its customers. That’s probably the worst.

I think the best is an example from thinking about people moving through passport control, or immigration generally. Like you said, it’s a ratio: when somebody gets stopped or held up for whatever reason, did they subsequently enter, and what was the reason for the hold-up? You would want it to be as close as possible to: if they’re stopped for any reason, they don’t come through. If they’re stopped because something wasn’t clear - something mysterious, a piece of data missing - and then subsequently came through, that speaks to effectiveness and efficiency in one ratio, so [01:00:00] it’s pretty helpful.
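Kate’s ratio can be sketched as a simple calculation. This is an editorial illustration only; the stop reasons and numbers below are invented, not real border data:

```python
# Hypothetical stop records: each stop notes whether the traveller was
# subsequently admitted. A stop that ends in admission suggests the stop
# was avoidable (unclear documents, missing data) rather than a true hit.
stops = [
    {"reason": "missing data", "admitted": True},
    {"reason": "watchlist hit", "admitted": False},
    {"reason": "unclear documents", "admitted": True},
    {"reason": "fraud suspicion", "admitted": False},
]

avoidable = sum(1 for s in stops if s["admitted"])
avoidable_stop_rate = avoidable / len(stops)
print(f"Avoidable-stop rate: {avoidable_stop_rate:.0%}")  # lower is better
```

A single number like this captures both effectiveness (stops that were warranted) and efficiency (stops that only happened because information was missing).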

Alistair Croll: [01:00:00] Yeah, the false negative rate is really clever.

Kate Tarling: [01:00:03] Yeah, yeah. 

Alistair Croll: [01:00:04] I think the worst one I’ve seen was a company that decided to drive all of its sales for the quarter by compensating salespeople on the number of new leads, not closed deals. So all the salespeople went out and signed up everyone they knew, and of course you now have a CRM with thousands and thousands of people who were never going to buy the product or service. The salespeople all got compensated, and then half of them were fired the next quarter because they couldn’t close any deals. It’s a private sector example, but it’s a good example of how you have a process of finding customers and selling them things, and when you attach yourself to one step of that process - get as many people into the pipeline of potential customers as possible - without the overarching idea of making sure they’re qualified prospects, you create very perverse, coin-operated behaviors that break the overarching system.

Kate Tarling: [01:00:55] Yeah. 

Alistair Croll: [01:00:55] How about you, Cori? 

Cori Zarek: [01:00:57] Yeah. There’s one that plagues [01:01:00] me, which is about “improving the lives of X million people” - which is both impossible to measure and, I’m sure, was declared achieved, right? Things like that are not great.

This probably isn’t a great example of a good metric, but something I’m increasingly seeing governments measure is citizen or resident satisfaction with an experience or an interaction. Even if it’s, you know, clunky, or takes longer than they had hoped, having some transparency around what is happening, why it is taking so long, and where they are in a process or a queue is really critical. And giving them an opportunity for feedback - sometimes at airports you’ll see the buttons from the smiley face all the way to the super frowny face, right? Something as simple as that: being able to really measure the experience individuals are having as they interact with government. I think [01:02:00] that’s a really important set of data to begin collecting.

Alistair Croll: [01:02:04] Well, I’m afraid that’s all the time we’ve got. But I have really enjoyed talking about analytics and metrics in general, and their sometimes strange perversity. It does feel like it skirts perilously close to politics sometimes, which I think is one of the challenges: in the public sector, we want so much to be nonpartisan and focus on service delivery and execution, but the intents we’re supposed to be modeling are often driven by the platforms on which people were elected.

So let’s hear it for five minutes at the opening of every day’s news when journalists teach us data science about a pressing social and public issue. I think that would make the world a much better place. And telling stories about what a service should be like, rather than just assuming that the way the world has always been is how it should be in the future. I think those are my big takeaways from this.

Kate and Cori, it’s been a real pleasure having you here. Thank you so much for joining us. We’ll make sure you’re able to join FWD50 if you want to jump in on that. We are starting the conference on November 3rd. [01:03:00] The November 9th morning lineup is actually about resilient democracy, and believe it or not, we have someone from the Facebook Supreme Court and one of the founders of TikTok talking about that subject. So we figured we’d take that bull by the horns, and we’d love to have you listening in on some of those sessions.

And it’s been really interesting getting to know you here. Thank you so much for spending time with us. Stay safe and we look forward to seeing you online somewhere else.

Cori Zarek: [01:03:23] Thanks for having us. 

Kate Tarling: [01:03:24] Thanks.