[ BETTER TECH LEADERSHIP ]

Robert O’Farrell: Breaking Down Complex Problems - Strategies for Success

[ THE SPEAKERS ]

Meet our hosts & guests

Matt Warcholinski
CO-FOUNDER, BRAINHUB

Co-founder of Brainhub, Matt describes himself as a “serial entrepreneur”. Throughout his career, Matt has developed several startups in Germany, wearing many hats, from marketer to IT engineer to customer support specialist. As a host of the Better Tech Leadership podcast, Matt talks about growing successful businesses and the challenges of being a startup founder and investor.

Robert O'Farrell
Chief Technology Officer and Founder

Robert O’Farrell is a seasoned entrepreneur and technology leader with over 20 years of expertise in identity verification, KYC, and information security. As Founder and CTO of ID-Pal, he has led the company to win multiple industry awards, including Best RegTech Solution and Best KYC Solution. He also founded Perfutil Technologies, delivering advanced software solutions for global enterprises. Robert excels in aligning IT strategies with business goals, focusing on Self-Sovereign Identity, agile methodologies, and scalable software architectures.

Transcript

Disclaimer: This transcription of the podcast is AI-generated and may contain errors or inaccuracies.

Matt

My name's Matt and I will be talking with Robert O'Farrell about the intersection of data and technology, cultural integration, and security. Hi, Rob. Good morning. How are you doing?

Robert O’Farrell

Very good, Matt. And you?

Matt

I'm. I'm just great. I'm just great. Thank you.

Rob, we talked two or three weeks ago, and you have, I would say, huge experience in energy software, the energy sector, and rebuilding a lot of legacy applications, being like a solution architect planning to rebuild the software behind the energy sector in the Netherlands, and working with Gazprom too. So those are huge organizations with huge feedback. And I think you are the first guest with whom I talk in this particular area. And when we last talked, we were talking about how from the outside you see a big, established enterprise and you think everything works perfectly fine there. But you told me a few stories about Excel files, and huge systems depending on those Excel files, and dependency on one person, which caught my attention. Maybe you could elaborate on that, because you've seen a lot of different examples.

Robert O’Farrell

Yeah, sure thing. Now, working in the energy sector in a roundabout way is what led to me working in the identity verification space, in that in both cases there are huge, huge amounts of data to be processed, albeit in the energy sector it's on a whole other scale. I mean, as you say, given the kind of infrastructure people are working with, you would expect everything to be really robust in there. On some of the solutions I was dealing with, we were processing over a trillion data points per day, because you're dealing with the measurement of power and electricity at every measurement point in entire countries, sometimes multiple countries in one system. So you would expect a huge amount of robustness there. And I was fortunate enough to work in almost every aspect of it, from sourcing, which is where you actually buy the coal, the oil, whatever else is involved in fueling your power stations, to power station instructions, to forecasting.

How do you figure out all of the costs, how do you translate that to a price, and then how do you deliver it all on the day? So, having seen it all, as I mentioned a few weeks ago, I've seen some interesting scenarios over time. I've turned up in more than one organization, although I won't name names, where you could be looking at the forecasting system for a national grid being based on an Excel spreadsheet that one very, very clever person had put together. And I'm sure it was better than whatever they had before. But it did mean that there were times where: okay, we're building the forecasting part of the system now, can I speak to whoever put together that spreadsheet?

They're not available, so who can explain it? No one. That's not very comforting. And as I said, that happened more than once.

Matt

And it reminds me of the type of project which I used to call rescue projects. So I assume there were plenty of them on your end. And I'm just wondering, since you have seen so many gaps in the software, and there is various software responsible for different functions inside organizations, maybe you have your own approach to those rescue projects?

How do you approach it? How do you decide what to build first?

Robert O’Farrell

I know exactly what you mean. And yeah, that is, unfortunately or fortunately, depending on how you look at it, something that I ended up being known for for about 10 years, where, when projects were going wrong, I'd be called in to try and figure out how to get them back on track. So I can tell you a little bit about my own approach to that through an example where, again, I won't name names, but a large project where I was brought in, where 3 million euros had already been spent on this system and there was literally nothing active yet. It was meant to be replacing all of the systems I referred to earlier: sourcing, costing, forecasting, pricing, power station management. Nothing built for 3 million. And the very first thing I saw when I arrived there was that it would never work, because they had picked very good software, but it was all wrong for how they liked to work.

So this was an organization that liked everything to be very tailored to their own processes. They'd spent a lot of time figuring out how to get market advantage by doing things slightly differently from others. And then they bought a platform that works in a very particular way and does not vary from that way. So it basically has the business processes built in. So trying to fit something that works with only one process to an organization that wants everything to be flexible, it was never going to work. But of course, 3 million euros had already been spent. So how do you change minds at that point?

There's a lot of people whose jobs are riding on the decisions that have been taken to this point. And so this was where some of my general principles of dealing with rescue projects come in. The first one being you have to live in the gray. If you turn up and start telling everybody black and white, this is right, that is wrong, and just enforcing your point of view, well, then you've lost half of the room the moment you start. So you've got no chance of rescuing the situation. If that was going to work, the people who were already there would have done that. The second thing then is coming down to more strategy and communication.

If you know what the big decision to be made is, don't force people to make that. Get them to make a series of smaller decisions that will step by step move them in the right direction. And little by little, people can move away from their original positions without you forcing them into a massive climb down. So to get there, it comes back to something that I'm always coaching my own team on. So we talk about solutions a lot and a lot of people recognize, oh, the definition of the problem is not the same thing as the definition of the solution. But there's another step that I think people often miss, which is once you've defined the solution, that's not the same thing as how you get there. And you can get there in much, much smaller steps.

You don't just have to be impatient to get straight to the end of it. So for me, that's sort of living in the gray. Being patient and going step by step is the only way to rescue a difficult situation.

Matt

Really interesting approach. I hadn't heard of the living in the gray approach, but it's really easy to remember, and I think it makes a lot of sense to describe this kind of approach in really simple wording, let's say. Thanks. And Rob, I know that you are passionate about security in general; this is something that you are really into. And now you are acting as the CTO of ID-Pal, and you have had an impact on, let's say, how things are being done in the organization. From the software engineering perspective, how would you compare the previous roles or previous organizations to what you are doing now?

Robert O’Farrell

Yeah, I know what you mean. So there are a few aspects to that, and probably the first and most important of them is cultural: information security is not something that another team does. It's not something that you think of as an afterthought, or that you can rely on other people to look over everybody's shoulder and fix. It doesn't work that way. You can't see everything everyone is doing all of the time. So you need to get everybody culturally bought in to how this is done.

Now, the unfortunate side effect there is, a lot of the time, if you just take the predefined processes and throw them on top of everybody and say, do this, this, this and this as well as your job, everybody's going to look at it and go: oh, look, I've got enough on my plate already, this is too much. So you have to have the patience to go that step further and say, no, we're not adding additional requirements onto what people do for their job; we're taking the guidance that they already have for their job and tweaking it. Taking the time to sit with people and figure out: well, if you were to do what you normally do this way instead of that way, we'd achieve what we wanted to achieve over here. And so, as a result, you can start to achieve your information security goals as a side effect of people just doing their ordinary everyday job.

So that's a process that we took the time to go through in ID-Pal, creating an integrated management system where it's not only what you do for your daily job, which makes it so much easier onboarding new people and gives great comfort to everybody. You know you're not trying to remember a thousand mistakes you made before; instead you have a procedure or a standard that tells you: this is what you need to do. So that's very positive and freeing in and of itself. But not only did we build the information security of ISO 27001 into that, but then we started doing the same thing for the quality management standard, ISO 9001, and the same thing for the, this is a bit of a mouthful, UK DIATF, the United Kingdom Digital Identity and Attributes Trust Framework, which is the first national standard for identity verification.
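The integrated-management-system idea Robert describes can be sketched in a few lines: one internal procedure mapped to all the frameworks it satisfies, so staff follow a single document. The procedure name and clause descriptions below are made up for illustration; they are not ID-Pal's actual documents.

```python
# Illustrative sketch of an integrated management system: one internal
# procedure, tagged with the frameworks it satisfies as a side effect.
# All names and clause descriptions here are hypothetical examples.
procedure = {
    "name": "Peer review before merge",
    "satisfies": {
        "ISO 27001": ["Secure development controls"],
        "ISO 9001": ["Design and development controls"],
        "UK DIATF": ["Change control guidance"],
    },
}

def frameworks_covered(proc):
    """List every framework this single procedure ticks a box for."""
    return sorted(proc["satisfies"])

print(frameworks_covered(procedure))  # ['ISO 27001', 'ISO 9001', 'UK DIATF']
```

The point is the shape of the data, not the content: each audit then becomes a filtered view over the same procedures rather than a separate rulebook.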

So all of that is built in. So when people are doing their job, they're not sitting there going: this is what I do, and I do this for information security, and I do this for quality, and I do this for UK DIATF. They just have one standard that says: this is what you do, and everything else is satisfied as a side effect. So I tend to refer to that as living information security, if you will. But there's the second, more technological aspect, which is: when we started building ID-Pal, GDPR had come in, but it wasn't yet being applied. And a lot of people at the time looked at it as something that you just had to comply with after the fact.

I've actually gone completely the other way on that, and instead I thought: well, if the idea is to give everybody control of their own data, why not look at that as the start point of your design? Which is actually the core of one of my pet projects in identity verification, self-sovereign identity: the idea that you can be the data controller of all of your own identity data, rather than some company that you submitted it to to get a bank account owning it. Why should they own the data? It is your data. Now, they have reasons to keep some of it, and there are subtleties to that. But in general, why should you not own your own data? So, yeah, for me those are the two key aspects of information security, both technological and cultural, and how we have applied them in ID-Pal and how you can apply them in general.

Matt

And last time when we talked, you described to me more how ID-Pal works, how this identity verification works. And it was really shocking to me, because the last time I did an identity verification with my ID, it was on Airbnb maybe six years ago, or the same I did for my bank account: you show the ID to a camera, you show the picture from both sides. And you described how this kind of verification can be tricked really easily nowadays, right? And how do you want to prevent it? This is really challenging from the computer vision perspective, I mean. So maybe you could give some examples, you know, how do you approach it?

Robert O’Farrell

No, absolutely. So, yeah, in ID-Pal our general principle is to provide real-time verification of an identity using technology, not people. Because, for example, a lot of our competitors will use centers in India of a few thousand people who will see images and click yes or no. Oftentimes, by the time you've reached that stage, it's too late to detect the fraud, because, as we discussed the last day, the fraud can occur right up front. So, to give a few examples of that, we need to cover one basic principle in identity verification: how do you prove who you are? Something you have, something you know, something you are. Something you have is an ID; something you know is the more traditional stuff, like the name of your first pet; something you are is a biometric, like your face, your thumbprint, et cetera. So let's stick to the case of an ID and matching your face to the ID.

So I'll give you a few examples of the fraud that can occur. The very first, and I'm covering it first because it's probably the most cutting-edge one at the moment, is called a camera injection attack. It's where your phone or PC is fooled into thinking that it's getting an input from a camera when it's actually a virtual camera. So, for example, you can stick a USB stick into the side of a laptop and have that connect to another laptop. That laptop tells the first one: oh, by the way, I'm a camera. And then you can just give it any files you like.

And the first device believes it's getting a real stream from the real world. So any photographs, any videos you have, it thinks it's getting that live. Now that we have such incredible deepfakes out there, with all of the gen AI and LLMs that have come through in the last year, it's not hard to see how that's a very easy path in for a fraudster. So that's why our first layer of protection is injection attack detection: detecting when one of those things is in place. Then, and I won't go exhaustively through everything because there's a lot, but at a high level you'd think of presentation attacks, where somebody is wearing a high-resolution mask, or taking a video of a video on a screen, which is called a screen recapture; or photo substitution, where somebody's got a real ID and they're just laying another picture over it, whether that be physically or digitally.
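To make injection detection concrete, here is one toy heuristic, purely illustrative and not ID-Pal's actual technique: real camera pipelines show natural jitter in frame timing, while a file replayed through a virtual camera driver can arrive with suspiciously uniform inter-frame gaps.

```python
import statistics

def looks_injected(timestamps_ms, jitter_floor_ms=0.05):
    """Hypothetical heuristic (not a real product's detection logic):
    flag a stream whose inter-frame gaps are implausibly uniform."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    if len(gaps) < 2:
        return False
    return statistics.pstdev(gaps) < jitter_floor_ms

frame_ms = 1000 / 30  # nominal 30 fps

# A machine-perfect stream (suspicious) vs. one with natural timing jitter
perfect = [i * frame_ms for i in range(90)]
jitter_pattern = [0.0, 0.4, -0.3, 0.2, -0.5]
natural = [t + jitter_pattern[i % 5] for i, t in enumerate(perfect)]

print(looks_injected(perfect), looks_injected(natural))  # True False
```

A production system would combine many such signals (driver provenance, sensor noise characteristics, challenge-response liveness) rather than rely on any single timing check.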

I will go into the physical one for a second, because it actually explains some of the limitations of the different kinds of AI used. So you would think, with a passport photo laid over an ID: I look at a photograph of that, I know immediately what's going on, that can't possibly be real. But the problem is that computers don't see the way we do. They go through a process of examining the pixels, and that's typically done through a type of AI called a convolutional neural network. The key word I'm going to focus on out of that is the first one: convolutional. The first thing it does is the tech equivalent of blurring everything.

There is too much data in a photograph for computers to process all of it. So they come up with, using different algorithms, an averaging of the values in the different pixels. So something like a really clear border of a picture being laid over a picture, a lot of platforms can't see that; a lot of them think, I'm expecting there to be a border between the image of the person and the rest of the ID, and so they just don't detect that happening. Which is why, in general, one solution alone is not going to do you in actually identifying: is this a real, legitimate document that's physically present at the time? You need a dozen solutions. And of course, all of this is before you even touch on the actual capturing of the person, where you need liveness detection to prove you're dealing with someone who's actually present at the time.

Or finally, the one where, let's say, you've done the most thorough due diligence process in the world, you're happy, you've got solutions that test for everything. What about user experience? I'll give you one simple example there. About seven or eight years ago, we used to refer to that step where you take a picture of yourself as the selfie step. And we did a quick user experience test where we changed that to liveness test. And everybody was telling us: don't change it to liveness. Everybody knows what a selfie is; nobody knows what a liveness test is.

But that was the point of changing it to liveness. People on selfie were making duck faces at the camera, taking downward angles, whatever else, and all of that was interfering with the ability to get a clean first-time facial recognition. You weren't getting false positives, but it was just getting more difficult for people to get through. We saw a 20% uplift in people passing facial recognition first time the moment they thought it was something technical, because they said: oh yeah, this is a technical thing, I'm just going to look straight at the camera now. So yeah, there is a hell of a lot going on in identity verification to make something really, really complex look really simple.

Matt

That's really interesting, the last example that you gave. I think it's like when you give some kind of rules, people always try to trick them, right? When you give some kind of system. It's the same with website verification, like CAPTCHA, right? They change it from time to time because people figured out how to trick it. And I think it's a great example that you gave here. But I could imagine that you're using AI a lot, like some LLMs, some tools around AI. Maybe you could elaborate on that.

How do you approach it?

Robert O’Farrell

So yeah, AI is heavily involved in the real-time processing on the platform itself. We use a few different kinds of AI for different kinds of detection. So if I was to explain it a little bit more deeply on, say, document verification: there's the more traditional convolutional neural network approach, where you will first off classify a document. So I can tell from certain landmarks on this that it's an Irish passport or an American Alabama driver's license. Second off, you have authentication, where, going through that neural network, you say: okay, well, I know that that particular ID has these particular security measures on it.

For example, continuation patterns are one, where if you look closely at your IDs you'll see little dots going through them, or wavy lines, and they go through all of the letters and all of the pictures on there, so that if somebody tries to doctor something on the ID, the idea is that they will accidentally break one of those continuation patterns, or they'll use a font that's slightly different. So you're checking for all of these things: the right kind of font, all of the different security measures. But what that kind of AI can't stand up to now is what LLMs and GenAI have brought in: some of these high-fidelity fakes.
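A toy version of that continuation-pattern check, purely illustrative and far simpler than a real document-authentication model: if a guilloche-style line is supposed to cross every column of a strip, columns where it vanishes flag a possible overlay.

```python
import numpy as np

def pattern_break_columns(strip):
    """Toy continuation-pattern check: the wavy line should cross every
    column of the strip; columns where it vanishes suggest something
    was doctored or laid over the document there."""
    return [c for c in range(strip.shape[1]) if not strip[:, c].any()]

# Draw a sine-like guilloche line across an 8 x 40 strip
strip = np.zeros((8, 40), dtype=bool)
for c in range(40):
    strip[int(3.5 + 3 * np.sin(c / 4)), c] = True

assert pattern_break_columns(strip) == []  # intact document

strip[:, 18:22] = False  # a pasted-on photo erases the pattern here
print(pattern_break_columns(strip))  # [18, 19, 20, 21]
```

Real security features are two-dimensional, printed at high resolution, and checked alongside font and hologram analysis, but the principle is the same: a doctored region betrays itself by interrupting a pattern designed to be continuous.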

It's not hard to imagine: if we can create something that knows about all of these security measures and can look for them, then surely you can create something that knows all of these security measures and can create them. And that's what LLMs have allowed to happen. So now you're using a few different categories of deep learning models, and variations therein, to detect when something has actually been generated by AI, or when there are artifacts in there to show that it's been modified from an original. All of which is layers and layers that you use on top of the camera injection detection. So there are a few different kinds of AI in the testing. And then, of course, if you want to keep pace at the moment, you've got to be looking out for opportunities to use AI in your own processes. I mean, there are things that used to take a couple of weeks that you can do in a few hours with the assistance of LLM models.

Now, it's just quite tricky making sure that you back the right horses there. Everybody thinks AI can do everything, and in the vast majority of use cases it's not reliable. If you find the right use case for the right problem, it's brilliant. But how do you identify which ones to go after? You can't be full-time checking every single possibility. And then, finally, of course, that is leading up to the challenge that everybody's going to face over the next few years, which is the new AI Act in the EU, where if you are using AI to process the data of any European citizen, it doesn't matter whether you're operating in Europe or not, you're going to have to comply with this. And I can tell you that I've dealt with multiple companies where their policy is: we don't share our training data, we don't share insights into how we came to the decisions we came to. And that's not going to be good enough under the AI Act; people are facing a lot of challenges to comply with that over the next couple of years.

Matt

Speaking of this AI Act, on Monday I was watching the Apple conference where they were releasing new iPhones, and they put a lot of emphasis on AI. And I think this is something big on the Apple side, and finally I see something new in the iPhone, because the latest versions were not so disruptive or game-changing, to be honest. But with this AI applicable to images, applicable to text, and helping you to work on the phone, I think there are a lot of brilliant applications. But the problem that we are having in the EU is something that you have mentioned, this AI Act. So this AI Act, and in fact all the regulations, they are blocking even companies like Apple from releasing features that are game-changers for the phones. So even those huge companies have to deal with that, right?

Robert O’Farrell

Oh, they do, they do. But obviously, not everybody listening to this is going to be as boring as me and have sat down to read the AI Act, so I'll just call out a couple of the more specific challenges you're referring to. So one of the things that they've been trying to deal with is actually one of the most popular research areas in AI at the moment, which is explainability. It's one thing to come to a decision, but can you explain how the AI came to that decision? And it sounds simple; everybody is so used to programs the way we've written them for so many years that everyone assumes the answer to that is yes, but the truth is no.

If you look at GPT-2, there is a massive, massive leap from GPT-2 to GPT-3, where in two it could regurgitate information that it had received, while in three it started refactoring that information into other uses. That came purely as a result of processing huge amounts of data, probably 10,000 times as much data. But even the people working on GPT were surprised at that jump in the intuitiveness of the responses, and nobody can fully explain how it delivers the information that it does. Which is why we've all heard about hallucinations and whatnot in GPT, where they start to do crazy things like asking journalists to leave their wives because they love them. But explainability is one aspect; the other is openness of the training data. Because if you are being asked to rely on an AI for something critical, how do you know that what it's been trained with is an unbiased base of data?

And if people are able to choose whatever data they want going into it, then you've got a major problem. But here's where the balancing act comes in. Just today, or it was either today or yesterday, there was a speech in the European Commission pointing out that maybe Europe does need to look at what we're doing on regulation. There was one statistic that stood out to me: in the last 10 years the US has produced multiple trillion-dollar companies, I think it's five or six, while Europe has produced none in that time, and in fact has struggled to produce $100 billion companies. So I totally understand what the EU is doing in trying to make sure that we're all properly protected and move forward in the right way, but we're losing out on competitiveness as a side effect of that. And I don't know exactly what the answer is to find the balance between those two.

I'm very excited about what we'll be able to do with Apple Intelligence, a great bit of marketing, that, to also go with AI, but I can't wait to see what we can do with it, and I hope I have it in my hand soon.

Matt

And speaking of the regulations, that's a really interesting statistic that you have mentioned, and I have a feeling that in the EU they need to start to revise the plans that they have so as not to stay behind the whole world. And I think they already did with the electric cars, because companies like Audi or Mercedes-Benz said they would have to; the initial plan, a few years ago, was that by 2030 we will only be manufacturing electric vehicles. But they said: hey, this is not possible, we need to revise it. And I think Volvo did the same.

And with AI, I had a really interesting talk with my friend who invested a lot last year in a product that has some AI features around it. And he said that he worked for 12 months and invested, I think, 700k or something like this, to deliver a feature which was crucial for him, because it was so expensive; I mean, one query to ChatGPT was so expensive. And he was working 10 or 12 months, spent a lot of money, and right when he's almost finished, the queries are 60 times cheaper. So the speed of the disruption in this area is tremendous, right? And I feel like if in the EU we don't play the game and, I don't know, maybe loosen the legislation a bit, we'll be far behind. And this is like the new Internet, this whole AI thing.

Robert O’Farrell

No, I know exactly what you mean. And it'll be really interesting to see how the patterns are replicated across industries. But in that field of regulation versus competition: I was at the European Anti-Financial Crime Summit a few months ago, and, as I'm sure you know, there's the new European Anti-Money Laundering Agency being brought in to unify how money laundering is dealt with across the EU. The new head of the agency gave a speech at the summit, and one thing really stood out for me as greatly insightful about a potential way forward, which is that the way he wants to enforce AML legislation is to not be strictly rules-based. He wants to provide guidance where people have flexibility within that to decide how exactly they can achieve what everybody wants to achieve in terms of protecting ourselves, protecting consumers, but not being so rules-based, because the moment you're rules-based, suddenly you're cutting off all options for any kind of innovation that you haven't thought of. And that was the real point he was driving at: we need to create an environment where there's safety, but leaving room for that innovation.

Don't be too rules driven. Unfortunately, you were covering a few examples there where it sounds like things were just too rules driven and people who've achieved amazing things either were blocked from doing that or they didn't achieve it.

Matt

Within Europe, I think we are missing something like the Silicon Valley kind of vibe. I cannot name a country or a place or a zone, like an industrial zone, that supports startups, connecting them with investors and encouraging people to be brave and try new things, because when you start fresh you need to break some rules. And there was a really interesting lecture from Eric Schmidt, the ex-CEO of Google, who was there for many, many years. He did a talk at Stanford, and it was so controversial that they took it off the website. It was so controversial because, you know, one thing that he said, and he said it out loud, was that in Silicon Valley it works this way: if you are a startup, and a lot of startups will be around AI, the core is to have really good data to train the algorithms. And he said the first thing you should do is feed the algorithm, or your software, with the data you want, and don't care about the violations, about stealing or the copyrights, because either you will fail, and then this doesn't mean anything to anybody, or you will win, and then you will have the lawyers from Silicon Valley to help you out. So it was really controversial.

But to be honest, a lot of companies work this way. And I'm not saying that we have to do the same here, right? But my thing is, I think we are missing some kind of zone to encourage people to be entrepreneurs and not to fight with legislation, but to fight with new ideas and innovation, right? To figure out the solutions.

Robert O’Farrell

Yeah, I very much lean more towards the supports approach to encouraging innovation. Like, for example, you've just covered the question of: oh, should you care about the ownership of the data? Well, if you know that there's a load of AI startups that you would like to encourage, and you'd like to make Europe a safe space for AI startups, why not create a space, as you say, which is putting in place everything needed for that? Like, for example, gathering huge amounts of data and saying: oh, you want to train an AI? Here's a load of data that, using the resources of 100 companies, we've managed to source, and you can use it freely. Go train your models. So it doesn't have to be one or the other. I mean, the extreme at the other side is, I don't know if you saw the judgment from the Dutch regulator in the last week on Clearview AI, and that one's been going for a few years. But basically, that is a facial recognition startup that specializes in one-to-many facial recognition. They were targeting law enforcement agencies, so that, you know: oh, we saw a crime committed, how can we identify this person?

The thing is, the way they trained it was to scrape every social media platform on the planet. So, unsurprisingly, that was deemed to be incompatible with EU law. And Clearview have spent several years just saying: we're a US company, EU authorities have no jurisdiction, not engaging with any of these processes. So it's judgment after judgment against them, but they just say: yeah, it means nothing, we're not in Europe. So that's an interesting one to keep an eye on, in terms of what you were talking about, of just train it and figure out the lawyer part later.

Matt

Let's jump to another topic, to more of your lessons learned, because last time when we talked I really liked the story about Vodafone. We were talking about performance tuning, one of your key interests. So maybe you could elaborate and, you know, repeat the story about Vodafone. I really enjoyed it.

Robert O’Farell

Absolutely, yeah. I was lucky to have several great architects as mentors coming through who all really cared about performance tuning. As well, my background in college was computational chemistry, so I used to get an hour a go on the supercomputer up in Queen's. If you only got an hour once a month, you had to make sure that your program was going to get as much work done in that hour as humanly possible. So from those days I was always obsessed with it. But the one you're referring to: at one point I was moved over to the UK, again to rescue a problem project. In this particular case, there were contractual disputes and whatnot going on. And one of the things at the core of this was that the project was supposed to deliver a competitive advantage in calculating the Vodafone deal.

So this will mean nothing to most people, but this was a benchmark commonly used in the utilities industry in the UK at the time. The two biggest potential customers you could get in the UK were the mining industry and Vodafone. The mining industry had about 15,000 sites.

Vodafone had about 8,000 sites, so you're essentially getting 8,000 customers in one go. So really, really important to get that right. When I turned up on the project, the calculation took five days.

Now why did it take five days? Well, the first thing you have to do is forecast how much energy is going to be needed in every half-hour period for the next five years at all 8,000 of those sites. Accounting for holidays, and for things like the fact that the peak energy usage in the UK every year is the Christmas special of Coronation Street, or at least that's what it was then, because everybody turns on the kettle. So you have to predict all of these things to the half-hour level for five years. Then cost it: okay, what's it going to cost us to buy the coal or the oil or the gas? What's it going to cost us to run the network,

to run the power stations? So then how much do we price it up? Well, we need to build some risk into this. So there's a lot of things that go from how much energy is needed all the way to: here you go, here's a contract document, because it had to go that far. Here are your automatically generated contract documents with all your special terms. Everything. The fastest in the market at the time was a competitor of this company, who were able to do it in three days.

But I had great fun with my team bringing that from five days down to 48 minutes. So that was my favorite bit of performance tuning ever. I know that sounds very sad, to get excited like that about it, but I absolutely loved it. The one thing I'd add for anybody who is ever interested in doing performance tuning: don't just start tuning. It is unbelievable how often I'll talk to developers who will just sit down and start trying to make everything faster. You could spend a year making a bit of the code faster, but if that only took up 1% of the total time, the most you could ever save is 1%.
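The upper bound Robert describes here is Amdahl's law: the speedup from improving one part of a program is capped by the fraction of total time that part accounts for. A minimal sketch, with illustrative numbers only (the 70% and 50-slot figures come from the story; everything else is hypothetical):

```python
def max_speedup(improved_fraction: float, factor: float) -> float:
    """Amdahl's law: overall speedup when only `improved_fraction` of the
    runtime is sped up by `factor` (the rest stays serial/unchanged)."""
    unchanged = 1.0 - improved_fraction
    return 1.0 / (unchanged + improved_fraction / factor)

# Tuning a region that is only 1% of the runtime caps the gain at ~1%,
# even with an infinite speedup of that region:
print(max_speedup(0.01, 10**9))  # ≈ 1.0101 — barely faster overall

# Whereas running the 70% parallelizable portion across 50 slots gives
# a far bigger overall win:
print(max_speedup(0.70, 50))     # ≈ 3.18x
```

This is why "measure first" matters: the profile tells you which fraction is even worth attacking.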

So it amazes me that it still needs to be said, but it is the most important thing: measure first. Where is the time being spent? Then still don't go straight to tuning. Ask what can be done in parallel, because what happens if 90% of that work can actually be broken down into a thousand tiny pieces that can be done in parallel? In this particular case, I found that 70% of the work could be done with 8,000 parallel processes. Now, it wasn't actually all in parallel, but we worked out various ways of saying: right, we can be doing up to 50 of them at once.

Some of them will be done in seconds, some of them will take five minutes. But every time there's another slot to do another one, keep filling those gaps. So the vast majority of the saving wasn't any particular clever tuning exercise. It was just figuring out what could be done in parallel.
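The slot-filling pattern Robert describes — fifty slots, and whenever one per-site job finishes, the next site immediately takes the freed slot — is what a bounded worker pool gives you. A minimal sketch, assuming nothing about the actual system (site counts, timings, and the `forecast_site` function are all stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import random
import time

def forecast_site(site_id: int) -> float:
    """Hypothetical stand-in for the per-site forecast; the real jobs
    took anywhere from seconds to five minutes."""
    time.sleep(random.uniform(0.001, 0.01))  # simulate uneven work
    return float(site_id)  # dummy result

sites = range(200)  # stand-in for the ~8,000 sites
results = {}

# A pool of 50 slots: as each forecast completes, the executor pulls
# the next pending site into the freed slot, so no slot sits idle
# even though individual jobs take wildly different amounts of time.
with ThreadPoolExecutor(max_workers=50) as pool:
    futures = {pool.submit(forecast_site, s): s for s in sites}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()

print(len(results))  # 200 — every site forecast completed
```

The point of the pattern is exactly what the story says: the big saving came from keeping all the slots busy, not from making any single job faster.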

Matt

Amazing story, and I think it's a great lesson learned, especially from such a huge example, such a huge case study. And the last question that I have, which I ask all of my guests: are there any books, resources, or maybe conferences that have been particularly helpful to you as a tech leader, that were eye-opening for you?

Robert O’Farell

Yeah, for sure. And I'm just thinking to myself, any of my team that listen to this will probably be rolling their eyes to hear the names of these books yet again, but I still love them. Of the ones I go to, there's only one that's actually technical, and I go to it because it's really, really short and it'll give anybody a great starting point on how to view architecture.

It's called A Software Architecture Primer, by Reekie and McAdam. It's a tiny, tiny little book. I love going to that all the time. But for the rest, I think we need to bear in mind that technology is not about the technology itself when it comes to delivery. How do you make that solution meet the real world? How do you get from an idea to the real world?

So my real favorites are Thinking Strategically by Dixit and Nalebuff, if you ever want to laugh. I know that sounds funny, but it's brilliant. They're two game theory guys from Princeton and Yale, and they decided to write down all of the key game theory strategies and then try them in the real world. So for example, one of them got into a taxi in Tokyo, agreed a price with the driver, got to their destination, and then tried to pay the driver half what they agreed. And their theory was based on something called the ice cream principle: even if the deal is unfair, if the alternative is you get nothing because it'll all melt away, then you'll accept the deal. Except if you reach an intransigent negotiator, which this guy was.

He locked the door, drove back to the airport, threw your man's bags out, and told the whole taxi queue not to take him. That one is brilliant. And then the final one I'll leave you with is Persuasion by James Borg, which I love because it goes through all of the theories of persuasion, from Aristotle in 400 BC up to modern theories of how you can measure when people are in a heightened emotional state, to recognize the point at which to ask somebody to make a decision. So the combination of those things: know what you're doing, understand how you're going to approach it, and then know the right time to ask for a decision. That's why they're my favorites.

Matt

No, thank you so much, Rob, for all the recommendations. I need to read that book from the two game theory guys. That sounds really interesting. And thank you so much for all the interesting stories.

I really appreciate it. I think it will be a really interesting episode for our listeners.

Robert O’Farell

Oh, thank you very much for having me on. I really enjoyed it.

Follow Matt on LinkedIn and subscribe to the Better Tech Leadership newsletter.

Explore similar episodes

From Code to Leadership: Mastering the Software Development Lifecycle

Leszek Knoll delves into the practical and theoretical aspects of software development, interdepartmental communication complexities, customer feedback utilization, effective problem-solving, one-on-one alignment, and the value of documents for decision-making with Francisco Trindade, Director Of Engineering at Braze.

Growth Challenges: The Right Funding and Team Dynamics for Startups

Matt Warcholinski delves into innovative hybrid work models, balancing sales and product management, the significance of funding, and the importance of human-centric leadership in startups with Dennis Teichmann, Startup Investor and Advisor.

Leading with Purpose: Tech Leadership Insights from Ian Forrester

Matt Warcholinski engages with Ian Forrester, Senior "Firestarter" at BBC, discussing innovation and adaptive media technologies, emphasizing public service in tech, offering practical remote work advice through the "personal user manual" concept, and sharing influential books and podcasts on the multifaceted role of technology in society.

Leading Through Change: From DevOps to AI-driven Futures

Matt Warcholinski explores DevOps philosophy, human-centric tech leadership, the purposeful use of technology, the uncertain future of tech, and recommends a valuable resource with Jan Hegewald, a software product development leader at DevNetwork and SumUp.
