Matt, co-founder of Brainhub, describes himself as a "serial entrepreneur". Over the course of his career, Matt has built several startups in Germany, wearing many hats along the way: from marketer to IT engineer to account manager. As host of the Better Tech Leadership podcast, Matt talks about scaling successful companies and the challenges of being a startup founder and investor.
Frank Schmid is a globally recognized insurance executive and CTO at Gen Re, blending deep expertise in economics, risk, and technology to lead strategic innovation across the reinsurance industry. With leadership roles at AIG, NCCI, and the Federal Reserve Bank of St. Louis, Frank brings decades of experience driving billion-dollar deals, building data-driven businesses, and navigating change with analytical rigor. A frequent academic contributor at institutions like Wharton and Goethe University, he combines academic insight with real-world execution to shape the future of insurance and financial risk management.
This transcription is AI-generated and may contain errors or inaccuracies.
Matt
My name is Matt and I will be talking with Frank Schmid about adopting low code to modernize workflows, integrating AI smartly, and applying economic ideas to make better tech and business decisions. So Gen Re is a company that has been on the market for more than 100 years, I think, which is quite some time in the insurance business. But maybe you could introduce yourself and tell the listeners a few words about yourself and your experience.
Frank Schmid
Yeah. Thank you, Matt.
It's a pleasure to be with you. My name is Frank Schmid. I'm actually a trained economist. I received my education in Germany, and my first position was at the University of Vienna, Austria, where I was an assistant professor for financial services. Then I moved to the United States to spend time at the Wharton School of the University of Pennsylvania, in finance. My first job in the United States was a research economist position at the St. Louis Fed.
And at some point I decided to take my skills to the private sector. I joined the insurance industry and spent a number of years at AIG in the financial district of New York City. And since August 2020, I have been with Gen Re leading the IT department, which, you know, I had not done before. And we can talk about that. Gen Re is a reinsurer, as you said, Matt; we have a long history. Actually, our German subsidiary is even older.
That's the old Cologne Re, which we acquired at some point. And we are a member of the Berkshire Hathaway family of insurers and reinsurers. And again, I'm happy to be with you, Matt.
Matt
Great, Frank, thank you for the introduction. And you already mentioned a topic that I wanted to discuss with you, I mean the transition to the role of the CTO, right? You have a background in economics, and you worked as a leader at AIG, for instance, but more on, let's say, the economic side. So why did you choose to step into the CTO position?
Frank Schmid
Yeah, thank you, Matt. That's a good question. Now, I had become increasingly interested in technology in the years prior to taking the position, and that's because there's an interesting confluence happening between information technology and quantification. And given my background as a quantitative economist, as an econometrician, that's a fascinating development. So on the information technology side, we have observed the rise of software over the past couple of decades. In the past, when you thought information technology, it was about hardware, right?
It was the mainframe, then client-server computing. And when you looked at the IT budget, it was largely fixed costs, because hardware generates fixed costs. Then you had the rise of software, the arrival of the Internet and the cloud, and suddenly an increasing share of your IT budget is actually variable cost. So this transition from hardware to software I find fascinating. And then on the quantification side, there was the arrival of machine learning. I'm a trained econometrician, so I was trained to hypothesize social and economic relations, cast them into a set of equations, and then estimate the parameters and generate predictions. And then machine learning arrived. Machine learning does not know about social and economic relations; it makes predictions purely based on relations in the data, and these predictions tend to outperform the predictions of structural models.
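To make the contrast concrete, here is a minimal sketch (ours, not from the episode) of the two approaches Frank contrasts: a hypothesized linear structural model versus a machine-learning model that learns relations purely from the data. It assumes NumPy and scikit-learn are available; the data-generating process is invented for illustration.

```python
# Illustrative only: a "structural" linear model vs. a machine-learning model
# on synthetic data whose true relation is nonlinear and unknown to the analyst.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(2000, 2))
y = np.sin(X[:, 0]) * X[:, 1] + rng.normal(scale=0.1, size=2000)  # true relation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Structural approach: hypothesize a (linear) relation, estimate its parameters.
structural = LinearRegression().fit(X_tr, y_tr)
# ML approach: learn the relation purely from patterns in the data.
ml = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("linear model MSE: ", mean_squared_error(y_te, structural.predict(X_te)))
print("boosted trees MSE:", mean_squared_error(y_te, ml.predict(X_te)))
# The learned model typically wins here because the hypothesized form is wrong.
```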
So that is a fascinating development. And when you think of AI, which we will talk about in a bit, you see that confluence of software, hardware, and quantification techniques. From that perspective, 2012, when AlexNet won the ImageNet competition, was a watershed moment: the arrival of deep learning, which takes us beyond the manual feature engineering in machine learning that I also used to do, because deep learning comes with feature learning. And AlexNet was the first model in the ImageNet competition that actually ran on Nvidia chips and their CUDA software. So you have this confluence of quantification techniques, meaning deep learning; of hardware, the Nvidia chip; and of software, the CUDA layer on the Nvidia chip.
So these are clearly exciting times for anybody who is interested in quantification.
Matt
Speaking of your role as a CTO: because of your background, you don't have a purely technical background. And I'm just wondering, how do you approach your job? Because at the same time, I think you can have the advantage of this out-of-the-box perspective, right? But in some cases, of course, there are disadvantages.
So I'm interested, how do you approach it?
Frank Schmid
Yeah, thank you, Matt. Good question. It's clearly a challenge to take over an IT department without having worked in IT before, not being an engineer, and there is a certain way of thinking in engineering. So it's clearly helpful that I have a theoretical background in economics and that I have spent time on the business side at AIG. Without theory, one cannot abstract and conceptualize, and in my four years in technology, I have found that the ability to conceptualize and to abstract is the scarcest skill.
And having spent time on the business side does help in understanding how an organization works and what the priorities of the organization are. Now, as a leader in general, there are two tasks one has to perform when assuming a leadership position. One is to establish principles of decision making for the team, and clearly, not being an engineer myself, that was even more important. The second is to quickly learn where to augment and complement the skills of your individual team members. So I established those principles, and I lead by these principles with a fair amount of attention, frankly, to detail and execution. Now, attention to detail is not the same as micromanaging, right?
Because it's not interventionist. But not being an engineer, I have to rely heavily on those principles. And I can talk a little bit about the five principles that I have established. First, openness. We prefer open systems: systems that integrate or provide integration, an example being open file formats, which are important in our data architecture; systems that scale, the cloud being an example; and systems that are exposed to competition and benefit from the contributions of the many, and that's open source code.
So just to give you those three as examples. Openness creates optionality, which figures prominently in our thinking, and it also ensures continuing progress through exposure to external forces, because an open system is exposed to external forces by definition. Another concept that we favor is modularity. We prefer systems that allow us to break down a complex problem into a set of small, simple problems. And interestingly, you see that in a bunch of spaces. For instance, machine learning is established in this way: you have a complex problem, and you just break it down into a large set of small, simple problems that you can solve.
And the architecture of the Nvidia chip is like that. So you see the concept of modularity pop up in information technology a great deal. We like it because, in terms of architecture, it allows for parallel processing: you can address modernization or workflow redesign in a modular way. But modularity also gives you optionality in sequencing, so you can break the work up on the timeline as well, not only architecture-wise in parallel processing. And that clearly reduces execution risk.
And so it gives us this option of local optimization, if you think mathematically: you can optimize locally, and you can incorporate technological progress in individual modules, in individual workflow components, independent of other components, of other modules. The third of the five is that we like platforms. We do prefer platforms over best-of-breed tools. That matters a great deal in security, for instance, but also in other areas.
We like platforms because they are a manifestation of openness, whereas best-of-breed tools frequently are just point solutions. Platforms offer the opportunity for standardization and for the creation of libraries of reusable components. But most importantly, platforms generate network effects: if you add a module to a platform, it increases the value of every module that's already on the platform; that's the definition of a network effect. So I actually mentioned three embodiments of scale here in the context of platforms, and scale is arguably the most powerful force in technology. And then fourth is composability.
If you take openness and modularity and put them together, you have composability. Every existing firm, and Matt, you mentioned we have been around for a while, has a fairly diverse technology stack, so composability matters a great deal. It matters anyway, but when you have legacy technology, it matters even more. So we like the orchestration capabilities of our low-code platform, which generates a fair amount of composability. And again, you have the network effect here, meaning every component you add actually adds value to every existing component.
And finally, there is reversibility. Now, that's a term you don't hear a lot in technology. It actually originates in real options valuation, which was a fairly prominent concept in the 1990s. It has fallen out of favor maybe a little bit, but it matters a great deal for us, and I can speak to why it matters a great deal in the AI adoption process. Real options, as opposed to financial options, exist where you have learning and irreversibility both. And irreversibility is highly related to the concepts of stranded assets and sunk costs.
So when you have real options, and they always exist, the question is how material they are. They originate in learning and irreversibility both, and they mean that the net present value rule, which is your traditional decision rule for projects, is incomplete. Now, learning always exists: the future is uncertain, there is technological progress, and frequently it's not predictable. And your environment may change as well. Macroeconomic conditions may change, inflation, recession; an organization's market environment may change.
So there's always that learning: as time progresses, we learn. And then there's irreversibility, and that comes in two flavors, interestingly. For one, the decision not to execute a project is always irreversible, because the project may have created options at a future point in time, for future technology decisions, that are not available if you do not get going. Follow-on projects are an example. So that generates what is called the cost of waiting.
And the cost of waiting adds to the traditional net present value rule, so it actually incentivizes you to move faster than the traditional net present value rule would suggest. Then again, executing today could also be a regret: if you get going on a project, it could turn out to be a poor decision in hindsight. And then organizations may want to change course, or terminate a project, or alter a project.
And that may create what are called stranded assets: assets that are not portable to an alternative project or an altered project. So there are costs that cannot be recovered, and that generates the value of waiting. There's actually value in not moving forward because of that risk, and so it subtracts from the net present value in the net present value rule. It actually incentivizes you to move slower than the net present value rule would suggest.
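One way to write down the adjusted decision rule Frank describes here, in illustrative notation of our own rather than anything he states:

```latex
% Illustrative notation, not Frank's:
%   NPV     = traditional net present value of the project
%   C_wait  = cost of waiting (follow-on options foregone by delay)
%   V_wait  = value of waiting (option value of avoiding stranded assets)
\[ \text{Invest now if} \quad NPV + C_{\text{wait}} - V_{\text{wait}} > 0 \]
```

A large cost of waiting pushes you to move faster than the plain NPV rule would suggest; a large value of waiting pushes you to move slower.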
So now, how do we deal with that?
Matt
Right.
Frank Schmid
You could do the math, but typically we don't actually write down the numbers of the real options. What we do instead is establish two principles to address those two aspects, the cost of waiting and the value of waiting. The cost of waiting we address by favoring technology choices that increase the option set available for the future. And the value of waiting we address by favoring technology that's highly reversible, because if you had no irreversibility, meaning complete reversibility, real options wouldn't exist. That's the challenge we address.
So if you make technology choices that are highly reversible, then you address an important aspect here. That concept really does figure prominently, and it plays a significant role in the AI adoption process, because that's an iterative process. Organizations are on a discovery journey: we don't know exactly how the technology will evolve, and we all have to figure out how to put it to productive use. Now, there's an interesting concept in machine learning called the greedy choice property. Ideally, your tech stack would satisfy this property, but of course it does not.
But it's good to keep in mind, because the ideal tech stack essentially would be one that satisfies the greedy choice property. That means you can arrive at a globally optimal solution via locally optimal choices. If at every step you can optimize without having to go back to the previous step and revise that decision, then your optimization problem satisfies the greedy choice property. I mention it because, talking about the connection between technology and machine learning, the greedy choice property figures prominently there.
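As a toy illustration (ours, not Frank's) of the greedy choice property, consider making change in a canonical coin system, where locally optimal choices are never revisited and still reach the global optimum:

```python
# Greedy change-making: each local choice (largest coin that fits) is final,
# yet for canonical systems like (25, 10, 5, 1) the result is globally optimal.
def greedy_change(amount: int, coins=(25, 10, 5, 1)) -> list[int]:
    picked = []
    for coin in coins:  # coins in descending order
        while amount >= coin:
            picked.append(coin)  # local choice, never revised
            amount -= coin
    return picked

print(greedy_change(68))  # [25, 25, 10, 5, 1, 1, 1]: 7 coins, optimal here

# The property can fail: with coins (4, 3, 1) and amount 6, greedy returns
# [4, 1, 1] although [3, 3] is better. That failure mode mirrors Frank's
# point that a real tech stack rarely satisfies the greedy choice property.
```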
Matt
You mentioned the state of the economy and how this impacts decision making, and I think you are one of the best people I could talk to about this. Tell me more about your views, your assumptions. How do you see the economic outlook? The last year, the last two years were really turbulent, I would say, and now the S&P is going crazy, especially when you look at all the AI companies, right? How do you see it?
Frank Schmid
Yeah, Matt, they say that economists are people who have predicted five of the past three recessions. I'm actually in the camp arguing that there is fairly limited predictability in human existence and in the economy overall. Now, I'm always optimistic about the future, because optimism is a manifestation of positivity, and positive thinking is an important quality of a leader. Optimism isn't blind to challenges, though. So as a baseline, it's good to think of the economy as a random walk.
And I'm a big fan of the random walk. The random walk means tomorrow will look like today, plus an innovation. An innovation is a change that is permanent, one that puts you on a new level. And we know from the normal distribution that small innovations are more likely than large innovations. The random walk is its own forecasting model: tomorrow will look like today. And frankly, many economic processes, be it economic growth or inflation, are very close to a random walk.
So there is just limited predictability, I would say, in changes, and the best forecast many times is that tomorrow will look like today. Now, there is also mean reversion, especially following shocks. For instance, economic activity recovers from a recession, the stock market recovers from a crash, and central banks do have the power to bring down the rate of inflation.
So these are mean-reverting processes. But unless you have experienced the shock and you know that you are on a mean-reverting process, the random walk is a good predictor. Now, returning to mean reversion, the challenge is, for instance, that the time to recovery from an economic recession can vary greatly. So there is a fair amount of uncertainty even if you know there is a mean-reverting process playing out. And secondly, the shock itself that generates a mean-reverting process, a recession, for instance, or an inflationary shock like the surprise inflation we just experienced, is not predictable.
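A small simulation, our sketch rather than anything from the episode, of the two processes Frank contrasts; it assumes NumPy:

```python
# Random walk vs. mean-reverting AR(1): shocks to the walk are permanent,
# shocks to the AR(1) decay, so only the latter is pulled back to its mean.
import numpy as np

rng = np.random.default_rng(42)
T = 200

# Random walk: y[t] = y[t-1] + innovation. Best forecast of tomorrow: today.
walk = np.cumsum(rng.normal(size=T))

# Mean-reverting AR(1): x[t] = phi * x[t-1] + innovation, with |phi| < 1.
phi = 0.8
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = phi * ar1[t - 1] + rng.normal()

print("random-walk forecast of t+1:", walk[-1])      # today's level
print("AR(1) forecast of t+1:     ", phi * ar1[-1])  # shrunk toward the mean
```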
So, returning to the random walk: frankly, unless you are in such a post-shock environment, the random walk is a good predictor. This being said, and not that I want to qualify it, there are two economic variables that I am paying attention to. One is clearly the rate of inflation. Matt, you mentioned how the world is changing. There is research showing that the rate of inflation is a function of central bank independence. Central banks have done quite a good job anchoring inflation expectations around the world with the concept of inflation targeting, first implemented in New Zealand. Now, the Fed was fairly late, right?
Formally, inflation targeting was adopted in 2012. Inflation expectations are well anchored.
Should they become unmoored, then we may return to the higher and more volatile inflation rates that we experienced before the concept was adopted. And then there is the yield of the ten-year Treasury security. The United States is the safe asset provider to the world, and the ten-year Treasury security is a benchmark for the global financial system; the Treasury market is the bedrock of the global financial system.
Now, we have seen a sharp rise in federal debt held by the public in the United States and in other countries, Germany being an exception. And we know that the fiscal path of the United States is unsustainable, if you believe the Congressional Budget Office, which is nonpartisan. So I think that's worth watching. There will be changes here, given the unsustainability of the fiscal path, and the ten-year Treasury security plays a very important role in the soundness and safety of the world financial system.
Matt
And speaking of the insurance industry, the state of the insurance industry from the perspective of technical challenges: what are some of the biggest technical challenges facing the insurance industry right now?
Frank Schmid
Well, from a technological perspective, Matt, I'm not sure there's anything unique to the insurance industry. Outside of technology, you see climate change, for instance, play a role in the predictability of insurance losses, wildfires, convective storms, et cetera. So there are clearly those challenges. But from a technological perspective, I don't see anything unique to the insurance industry. It's really the AI adoption process that we are all looking at. Now, clearly some use cases in generative AI are more relevant to insurers and reinsurers than to other industries.
And the ingestion of unstructured data in underwriting and in claims is clearly an example of a key use case for the insurance industry. At the conceptual level, yes, we deal with technical debt associated with legacy technology. Technical debt is called technical debt because it's a liability: it constrains the option set of technology decisions available to you. But again, this is not unique to the industry. It's interesting that I feel technology is increasingly lexicographic, meaning you cannot have B without A, and you cannot have C without B. The way I think about it: without the cloud, you cannot have a modern data architecture, and without a cloud-based architecture, there is no deployment of AI at scale in your organization.
And if you have data that's locked up in legacy applications, that imposes a binding constraint on the productive use of generative AI. Now, technical debt also manifests itself in operational risk. Technologists tend not to talk too much about security, but it does matter. Legacy technology figures prominently in security risks, especially when it comes to edge infrastructure and external-facing applications. So I'd like to point out the security aspect of legacy technology, which again is not unique to insurance, but it does matter, especially for consumer-facing insurers.
Matt
And last time we had a really interesting conversation about low-code platforms, and you mentioned that you have built a lot of the IT systems and applications inside the organization with low-code solutions, which is quite a unique and interesting topic. Maybe you could tell us more about the use cases and how it has been working for you, using the low-code platforms.
Frank Schmid
Yeah, thank you, Matt. Well, we came to low code similarly to others: we said, we have legacy technology, and low code helps us remediate legacy technology challenges. That brought us to low code, and it does play that role, but we realized that the opportunities associated with low code are much greater than that. So yes, we use it for the elimination of legacy technology and workflow components, and we also build workflow components using low code. But overall, the big picture really is to use low code for workflow orchestration.
Now, when I say workflow, I mean business workflow, not software development workflow. We are also building our first complex, data-intensive, external-facing application using low code. But our approach is not so much application-oriented; it's much more workflow-oriented, which means business-driven. Given our size and the way we are set up, we have a single dedicated team for low-code development, and it resides within IT. So we don't pursue the citizen development model, and we keep this team lean.
And the reason is that we want to further the interaction with external parties, for diversity of thought but also for knowledge transfer. It ties into the concept of openness. So we draw on the low-code vendor itself when it comes to resources, and also on the partners within the ecosystem of the low-code vendor, especially in matters of conceptualization and architecture. And then we leverage the ecosystem for staff augmentation as well, clearly.
So the bigger the project, the greater the ratio of external to internal resources. But it's important to see the setup: there's the internal team, which we keep small and force to interact with external parties; there are the resources made available by the low-code vendor itself; and then there's the ecosystem. Diversity of thought and knowledge transfer, these two concepts do play an important role. Now, our customers are the business units and corporate functions. We are a reinsurer, so it's business to business; we are not consumer-facing. And from an IT perspective, it's really the business and corporate functions that we serve.
Over the past three years we have built a fair number of workflow solutions, especially for the legal and finance functions. Either we did a lift and shift away from a legacy technology to low code; sometimes lift and shift is okay, and you can iterate later. Or we consolidated legacy components into single low-code workflow solutions. Or, the third one, we developed low-code solutions for workflow that was previously manual. And manual means there are handoffs, there are potential sources of error, et cetera. Now, our organization has a fairly high ratio of complexity to size, in part because the reinsurance business itself is complex, and secondly because it's actually a small industry. We offer reinsurance globally, we operate in many countries on all continents, and we offer both property and casualty and life and health in house. And as you can imagine, in life and health,
you have personally identifiable information, and you have data aspects that you wouldn't have in property and casualty. And given that the industry is small, there isn't a great deal of off-the-shelf solutions. So there's a fair amount of building, as opposed to off-the-shelf usage, in reinsurance, and low code is clearly ideal for that.
Matt
And then let's say we have low code, and we have so-called pro code, right, the solutions where you build custom software, where you write the code yourself. Can you share a bit of your perspective on the balance between the pro code that you have and the low-code solutions? Maybe some trade-offs. How do you decide whether to go in the low-code direction with a new application or to use more of a custom software development approach?
Frank Schmid
Yeah, internally we don't have pro-code development talent. When I started, we had actually outsourced all software development, and so we decided to rebuild development capabilities, but only in low code. Now, we have in the past built an application in the business using pro code, and of course there was an external party building that application. It's, as you say, custom built, in this case for reinsurance. It has its justification. At the same time, another project in the same space we are now doing using low code.
That's the one I mentioned: fairly complex, external-facing, data-intensive. Now, interestingly, low code is the younger technology, and when you have a new technology, there's an adoption process, and this adoption process takes long. So the question is: has this adoption process really come to an end, and are we in a steady state as far as that balance you mentioned, Matt, between low code and pro code is concerned, or are we still in the low-code adoption process?
Because a new technology requires different thinking, a different skill set. It's a discovery process, but also a skill-building process. So that's a question. There are a couple of things that speak for low code. First of all, it reduces the scarcity of development talent; that's an obvious one.
Secondly, it does generate network effects; I already spoke about the reusable components, the libraries. And then there is another benefit that's not mentioned a lot, actually on the security side. With pro code, you have to do a fair amount of code review, more than you have to do with low code. With low code, you can take a platform approach from a security perspective. So that's another reason why we prefer low code over pro code. But ultimately, there is a balance here.
It's just that the question is how far into the adoption process we are: is low code already fully leveraged, or do we still have some way to go? Now, of course, the question comes up, now that we are told that English is the hottest new programming language: does this benefit low code more, or pro code? I do get this question, and I hear it at conferences, so I can speak to it.
Honestly, I don't know. It probably benefits both, pro code and low code. But how it will tip the scales, I have no answer to.
Matt
And I can clearly see that you are passionate about AI and the impact of AI on the technological side of the industry. I'm wondering, what is the impact of generative AI on the insurance industry? Do you already see some disruptions, or have you perhaps introduced some solutions around it yourself?
Frank Schmid
Yeah, that's a good point. Thank you for bringing that up. Yes, I'm passionate about it. I like to compare it to electricity. We look at generative AI as a general purpose technology, and a general purpose technology has three properties. One is that it is pervasive, so it shows up everywhere.
Look at electricity, and the same you will see with generative AI. And it's really the generative aspect of AI. Think of November 30, 2022, as the arrival date of generative AI: that's the release of ChatGPT, right? The arrival of generative AI in what is called the application sector. And that's us, right?
We apply this technology. That brings me to the second property, which is that a general purpose technology is capable of ongoing technological improvement. When you think of electricity, the first battery was built around 1800. It took a long time for the light bulb to arrive, and then the alternating current electric motor around 1890. So it took decades for the technology to advance. But that's the hallmark of a general purpose technology: it is capable of ongoing improvement for decades. And thirdly, a general purpose technology spawns co-invention and co-investment in the application sector.
Be it insurance or any other industry. Now, if you take those two things together, ongoing technological improvement for decades and the co-invention process in the application sector, so at the level of the insurer, then you will conclude that there will be a feedback loop playing out over decades, and that the adoption process will take a long time. I think we should keep that in mind. It brings us back to what we discussed before: reversibility, modularity, and things like that. Now, of course, when such a new general purpose technology arrives, and again, there were not many: following electricity and the electric motor there was the semiconductor, for instance, and before electricity there was the steam engine. So there are not that many, and they are clearly highly transformative.
Now, of course, when such a general purpose technology arises, some automation anxiety takes hold in society, and there is this fear of widespread job losses. Clearly, a general purpose technology gives rise to automation and hence to automation anxiety. But automation has two aspects. One is the substitution of labor, and the other is what is called the augmentation of labor, the enhancing and complementing of human skills. And substitution of labor is one-off; it's a static concept.
You substitute labor, you have a productivity gain, and that's it. Whereas augmentation of labor, the enhancing and complementing of human skills, is an ongoing, dynamic process; it plays out during the entire adoption cycle. And historically, if you look at the productivity gains that a general purpose technology has delivered, substitution of labor is absolutely dwarfed by augmentation of labor. The result is that human labor is much more valuable today than it was at the dawn of the industrial revolution, 250 years ago. So first, we still all have jobs, and secondly, these jobs pay well.
So all the technological progress that we have experienced over the past 250 years has actually made human labor more valuable, not less valuable. That is to be kept in mind. So what will happen in this generative AI adoption process? Well, what happens here will resemble what happens in the economy at large, in other industries. The way I look at it, there are three levels of adoption. One is task-level improvement; think of all the copilots that we now have.
Task-level improvement means a given task can be done in a better way, with some productivity gain. The co-invention effort is not that great, it's not that hard to do, but the productivity gains are also not that great. The next level of adoption is workflow redesign, and this is where you will see meaningful productivity gains. That takes greater co-invention effort, and you heard me talk about workflow quite a bit in this session, but the productivity gains will also be much greater.
So: greater co-invention effort, greater productivity gains. And the third level, which may take some time to arrive, is organizational redesign. So we have task-level improvement, workflow redesign, and organizational redesign. Now, the poster child of organizational redesign is really Amazon. When the Internet came along, Amazon reinvented the bookstore. That was not task-level improvement, it was not workflow redesign, it was really organizational redesign. It took some time to arrive.
A great co-invention effort, but of course also a tremendous productivity gain. So that's the journey, and there's clearly a timeline here as well: workflow redesign takes longer, and organizational redesign takes longer still. And organizational redesign may come from the outside, or it may not. It could be a change in the business model, the way we do insurance; we don't know. But it's interesting to see that, clearly,
Amazon, or Tesla with the electric car, these were innovations that came from the outside; they were not born in the legacy industry.
Matt
And let's maybe connect the two things together now. We talked about low code, we talked about AI, and I think today there are a lot of proofs of concept in the area of AI.
This is a fresh approach, maybe not a fresh topic, but a fresh approach: we have a completely different approach than we had a few years ago, and it's much faster to create some kind of proof of concept. Tools are released on the market very quickly; disruption is extremely fast nowadays thanks to that. And I'm just wondering, do you use low-code solutions to build the AI proofs of concept, or do you take more of a custom-built approach with pro code, as we call it?
Frank Schmid
Yeah, we rely heavily on functionalities provided by our hyperscaler, our cloud service provider, and these functionalities are quite powerful. Building an AI system is, for us, essentially about orchestration, orchestrating those functionalities. So we have a fair amount of AI in our organization, and we are quite disciplined: it all comes from our hyperscaler, and we try to be really disciplined here. It's all about orchestration.
If you want to get something going in generative AI, it takes a fair amount of engineering. It sits in your data architecture: you have to have your foundations in place, and then comes all the AI and cloud engineering that goes with that. So for us it's a matter of orchestration, that is how we look at it, and of making it part of the workflow. Now, we use low code in this context as well. I'll give you a small example.
If you take Azure OpenAI Studio and the chat capabilities there, there is no safety prompt, meaning that from a governance perspective you would have to manually put in the safety prompt that guides the behavior of the model. So we used our low-code platform to build a chat functionality, an interface and a wrapper, if you will, around Azure OpenAI Studio. The safety prompt is hard-coded: it's visible to the user, but it cannot be changed. That's a matter of governance, and it's just a small example of orchestration and the role low code can play. But overall, again, it's really about orchestration, inserting AI into existing workflows, or maybe redesigning workflows, and doing this in a modular way.
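Gen Re built this wrapper on its low-code platform; purely to illustrate the governance pattern Frank describes, here is a hedged Python sketch against the Azure OpenAI chat API. The endpoint, deployment name, and safety prompt text are placeholder assumptions, not Gen Re's actual values:

```python
# Sketch of a governed chat wrapper: the safety prompt is a hard-coded
# constant that every request is prefixed with; users cannot alter it.
import os
from openai import AzureOpenAI  # official openai package, v1+

SAFETY_PROMPT = (  # placeholder wording; visible to users, not editable
    "You are an internal assistant. Do not reveal confidential data. "
    "Decline requests outside approved business use."
)

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
)

def governed_chat(user_message: str) -> str:
    """Send a user message with the fixed safety prompt prepended."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder Azure deployment name
        messages=[
            {"role": "system", "content": SAFETY_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(governed_chat("Summarize this treaty submission."))
```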
Matt
And last but not least, the question that I ask all of my guests: Frank, can you recommend any books or resources that have had a major influence on you as a tech leader?
Frank Schmid
Yeah. Thank you, Matt. That's actually easy to answer. There are two books that have heavily influenced me, over decades in fact. One is Hagakure, the Code of the Samurai; that's the most influential book for me when it comes to leadership. But also Leo Tolstoy's War and Peace.
You know, I appreciate this book as a study of the role of randomness in human experience; it is set, of course, around the war. And I also value the essay by Isaiah Berlin called The Hedgehog and the Fox, which gives a valuable introduction to Tolstoy's view of history. It's a fascinating space: what is the role of individuals versus the historical process that plays out overall? And then, on the lighter side, but I think quite useful when it comes to leadership, there is Robert Aldrich's 1965 movie The Flight of the Phoenix, which is just a great movie and a great study of human behavior in exigent circumstances, of how you can think about such a situation and the decisions that you derive in such an environment.
Matt
Frank, thank you so much for a really insightful discussion here today. I really appreciate your time and your views, and I wish you all the best in reinventing and disrupting the insurance industry.
Frank Schmid
Thank you very much, Matt. It was a pleasure to be with you. Thank you.